
Search Results for Sagemaker:

Closing Panel | Generative AI: Riding the Wave | AWS Startup Showcase S3 E1


 

(mellow music) >> Hello everyone, welcome to theCUBE's coverage of AWS Startup Showcase. This is the closing panel session on AI machine learning, the top startups generating generative AI on AWS. It's a great panel. This is going to be the experts talking about riding the wave in generative AI. We got Ankur Mehrotra, who's the director and general manager of AI and machine learning at AWS, and Clem Delangue, co-founder and CEO of Hugging Face, and Ori Goshen, who's the co-founder and CEO of AI21 Labs. Ori from Tel Aviv dialing in, and rest coming in here on theCUBE. Appreciate you coming on for this closing session for the Startup Showcase. >> Thanks for having us. >> Thank you for having us. >> Thank you. >> I'm super excited to have you all on. Hugging Face was recently in the news with the AWS relationship, so congratulations. Open source, open science, really driving the machine learning. And we got the AI21 Labs access to the LLMs, generating huge scale live applications, commercial applications, coming to the market, all powered by AWS. So everyone, congratulations on all your success, and thank you for headlining this panel. Let's get right into it. AWS is powering this wave here. We're seeing a lot of push here from applications. Ankur, set the table for us on the AI machine learning. It's not new, it's been goin' on for a while. Past three years have been significant advancements, but there's been a lot of work done in AI machine learning. Now it's released to the public. Everybody's super excited and now says, "Oh, the future's here!" It's kind of been going on for a while and baking. Now it's kind of coming out. What's your view here? Let's get it started. >> Yes, thank you. So, yeah, as you may be aware, Amazon has been in investing in machine learning research and development since quite some time now. And we've used machine learning to innovate and improve user experiences across different Amazon products, whether it's Alexa or Amazon.com. But we've also brought in our expertise to extend what we are doing in the space and add more generative AI technology to our AWS products and services, starting with CodeWhisperer, which is an AWS service that we announced a few months ago, which is, you can think of it as a coding companion as a service, which uses generative AI models underneath. And so this is a service that customers who have no machine learning expertise can just use. And we also are talking to customers, and we see a lot of excitement about generative AI, and customers who want to build these models themselves, who have the talent and the expertise and resources. For them, AWS has a number of different options and capabilities they can leverage, such as our custom silicon, such as Trainium and Inferentia, as well as distributed machine learning capabilities that we offer as part of SageMaker, which is an end-to-end machine learning development service. At the same time, many of our customers tell us that they're interested in not training and building these generative AI models from scratch, given they can be expensive and can require specialized talent and skills to build. And so for those customers, we are also making it super easy to bring in existing generative AI models into their machine learning development environment within SageMaker for them to use. So we recently announced our partnership with Hugging Face, where we are making it super easy for customers to bring in those models into their SageMaker development environment for fine tuning and deployment. 
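As a rough illustration of the path Ankur describes, bringing an existing Hugging Face Hub model into SageMaker and deploying it as an endpoint, here is a minimal sketch using the SageMaker Python SDK's Hugging Face integration. The model ID, framework versions, and instance type are illustrative assumptions rather than anything recommended by the panel, and the snippet assumes it runs in an AWS environment where a SageMaker execution role is available.

```python
# Minimal sketch: pull a Hugging Face Hub model into SageMaker and deploy it
# behind a real-time endpoint. Model ID, versions, and instance type are
# illustrative; adjust them to whatever your account and use case support.
import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()  # assumes a SageMaker execution role is available

hub_config = {
    "HF_MODEL_ID": "google/flan-t5-base",  # model pulled from the Hugging Face Hub
    "HF_TASK": "text2text-generation",     # task the inference container should serve
}

model = HuggingFaceModel(
    env=hub_config,
    role=role,
    transformers_version="4.26",
    pytorch_version="1.13",
    py_version="py39",
)

predictor = model.deploy(initial_instance_count=1, instance_type="ml.g5.xlarge")
print(predictor.predict({"inputs": "Summarize: teams can fine-tune and deploy foundation models in SageMaker."}))
```

Fine-tuning, which Ankur also mentions, follows a broadly similar pattern through the SDK's Hugging Face training estimator rather than HuggingFaceModel.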
And then we are also partnering with other proprietary model providers such as AI21 and others, where we are making these generative AI models available within SageMaker for our customers to use. So our approach here is to really provide customers options and choices and help them accelerate their generative AI journey. >> Ankur, thank you for setting the table there. Clem and Ori, I want to get your take, because riding the wave is the theme of this session, and to me, being in California, I imagine the big surf, the big waves, the big talent out there. This is like alpha geeks, alpha coders, developers are really leaning into this. You're seeing massive uptake from the smartest people. Whether they're young or have been around, they're coming in with their kind of surfboards, (chuckles) if you will. These early adopters, they've been on this for a while; now the waves are hitting. This is a big wave, everyone sees it. What are some of those early adopter devs doing? What are some of the use cases you're seeing right out of the gate? And what does this mean for the folks that are going to come in and get on this wave? Can you guys share your perspective on this? Because you're seeing the best talent now leaning into this. >> Yeah, absolutely. I mean, from Hugging Face's vantage point, it's not even a wave, it's a tidal wave, or maybe even the tide itself. Because actually what we are seeing is that AI and machine learning is not something that you add to your products. It's very much a new paradigm for building all technology. For the past 15, 20 years we had one way to build software and to build technology, which was writing a million lines of code, very rule-based, and then you get your product. Now what we are seeing is that every single product, every single feature, every single company is starting to adopt AI to build the next generation of technology. And that works both to make the existing use cases better, if you think of search, if you think of social networks, if you think of SaaS, but also it's creating completely new capabilities that weren't possible with the previous paradigm. Now AI can generate text, it can generate images, it can describe your image, it can do so many new things that weren't possible before. >> It's going to really make the developers really productive, right? I mean, you're seeing the developer uptake strong, right? >> Yes, we have over 15,000 companies using Hugging Face now, and it keeps accelerating. I really think that maybe in like three, five years, there's not going to be any company not using AI. It's going to be really kind of the default to build all technology. >> Ori, weigh in on this. APIs, the cloud. Now I'm a developer, I want to have live applications, I want the commercial applications on this. What's your take? Weigh in here. >> Yeah, first, I absolutely agree. I mean, we're in the midst of a technology shift here. I think not a lot of people realize how big this is going to be. Just the number of possibilities is endless, and I think hard to imagine. And I don't think it's just the use cases. I think we can think of it as two separate categories. We'll see companies and products enhancing their offerings with these new AI capabilities, but we'll also see new companies that are AI first, that kind of reimagine certain experiences. They build something that wasn't possible before. And that's why I think it's actually extremely exciting times.
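To make Clem's point above about new capabilities concrete, generating text and describing an image are each a few lines with open models from the Hugging Face Hub. This is a hedged sketch: the model choices and the local image path are illustrative assumptions, not anything the panel endorsed.

```python
# Sketch of the "new capabilities" described above: text generation and image
# captioning with open models via the transformers pipeline API.
from transformers import pipeline

# Generate text from a prompt (model choice is illustrative)
generator = pipeline("text-generation", model="gpt2")
print(generator("Generative AI lets every product", max_new_tokens=30)[0]["generated_text"])

# Describe an image (path and model are illustrative)
captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")
print(captioner("product_screenshot.png")[0]["generated_text"])
```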
And maybe more philosophically, I think now these large language models and large transformer-based models are helping us as people to express our thoughts, making the bridge from our thinking to a creative digital asset at a speed we've never imagined before. I can write something down and get a piece of text, or an image, or code. So I'll start by saying it's hard to imagine all the possibilities right now, but it's certainly big. And if I had to bet, I would say it's probably at least as big as the mobile revolution we've seen in the last 20 years. >> Yeah, this is the biggest. I mean, it's been compared to the Enlightenment Age. I saw the Wall Street Journal had a recent story on this. We've been saying that this is probably going to be bigger than all inflection points combined in the tech industry, given what transformation is coming. I guess I want to ask you guys, on the early adopters, we've been hearing on these interviews and throughout the industry that there's already a set of big companies, a set of companies out there that have a lot of data and they're already there, they're kind of tinkering. Kind of reminds me of the old hyperscaler days where they were building their own scale, and they're eatin' glass, spittin' nails out, you know, they're hardcore. Then you got everybody else at the board level kind of saying, "Hey team, how do I leverage this?" How do you see those two things coming together? You got the fast followers coming in behind the early adopters. What's it like for the second wave coming in? What are those conversations for those developers like? >> I mean, I think for me, the important switch for companies is to change their mindset from being kind of like a traditional software company to being an AI or machine learning company. And that means investing, hiring machine learning engineers, machine learning scientists, infrastructure, team members who are working on how to put these models in production, team members who are able to optimize models, specialized models, customized models for the company's specific use cases. So it's really changing this mindset of how you build technology and optimizing your company building around that. Things are moving so fast that I think now it's kind of like too late for low-hanging fruit or small adjustments. I think it's important to realize that if you want to be good at that, and if you really want to surf this wave, you need massive investments. If there are some surfers listening, with this analogy of the wave, right, when there are waves, it's not enough just to stand and make a little bit of adjustments. You need to position yourself aggressively, paddle like crazy, and that's how you get into the waves. So that's what companies, in my opinion, need to do right now. >> Ori, what's your take on the generative models out there? We hear a lot about foundation models. What's your experience running end-to-end applications for large foundation models? Any insights you can share with the app developers out there who are looking to get in? >> Yeah, I think first of all, it starts to create an economy where it probably doesn't make sense for every company to create their own foundation models. You can basically start by using an existing foundation model, either open source or a proprietary one, and start deploying it for your needs. And then comes the second round when you are starting the optimization process.
You bootstrap, whether it's a demo, or a small feature, or introducing a new capability within your product, and then start collecting data. That data, and particularly the human feedback data, helps you to constantly improve the model, so you create this data flywheel. And I think we're now entering an era where customers have a lot of different choices for how they want to start their generative AI endeavor. And it's a good thing that there's a variety of choices. And the really amazing thing here is that it's every industry, any company you speak with; it could be something very traditional like industrial or financial, medical, really any company. I think people are now starting to imagine what the possibilities are, and seriously think about what their strategy is for adopting this generative AI technology. And I think in that sense, foundation models actually enabled this to become scalable. So the barrier to entry became lower; now adoption can actually accelerate. >> There's a lot of integration aspects here in this new wave that's a little bit different. Before it was like very monolithic, hardcore, very brittle. A lot more integration, you see a lot more data coming together. I have to ask you guys, as developers come in and grow, I mean, when I went to college and you were a software engineer, I mean, I got a degree in computer science and software engineering, all you did was code, (chuckles) you coded. Now, isn't it like everyone's a machine learning engineer at this point? Because that will ultimately be the science. So, (chuckles) you got open source, you got open software, you got the communities. Swami called you guys the GitHub of machine learning, Hugging Face is the GitHub of machine learning, mainly because that's where people are going to code. So this is essentially, machine learning is computer science. What's your reaction to that? >> Yes, my co-founder Julien at Hugging Face has been saying this for quite a while now, for over three years: that actually software engineering as we know it today is a subset of machine learning, instead of the other way around. People would call us crazy a few years ago when we were saying that. But now we are realizing that you can actually code with machine learning. So machine learning is generating code. And we are starting to see that every software engineer can leverage machine learning through open models, through APIs, through different technology stacks. So yeah, it's not crazy anymore to think that maybe in a few years, there's going to be more people doing AI and machine learning. However you call it, right? Maybe you'll still call them software engineers, maybe you'll call them machine learning engineers. But there might be more of these people in a couple of years than there are software engineers today. >> I bring this up as more tongue in cheek as well, because Ankur, infrastructure as code is what made cloud great, right? That's kind of the DevOps movement. But here the shift is so massive, there will be a game-changing philosophy around coding. Machine learning as code, you're starting to see CodeWhisperer, you guys have had coding companions for a while on AWS. So this is a paradigm shift. How is the cloud playing into this for you guys? Because to me, I've been riffing on some interviews where it's like, okay, you got the cloud going next level. This is an example of that, where there is a DevOps-like moment happening with machine learning, whether you call it coding or whatever.
It's writing code on its own. Can you guys comment on what this means on top of the cloud? What comes out of the scale? What comes out of the benefit here? >> Absolutely, so- >> Well first- >> Oh, go ahead. >> Yeah, so I think as far as scale is concerned, I think customers are really relying on cloud to make sure that the applications that they build can scale along with the needs of their business. But there's another aspect to it, which is that until a few years ago, John, what we saw was that machine learning was a data scientist heavy activity. They were data scientists who were taking the data and training models. And then as machine learning found its way more and more into production and actual usage, we saw the MLOps become a thing, and MLOps engineers become more involved into the process. And then we now are seeing, as machine learning is being used to solve more business critical problems, we're seeing even legal and compliance teams get involved. We are seeing business stakeholders more engaged. So, more and more machine learning is becoming an activity that's not just performed by data scientists, but is performed by a team and a group of people with different skills. And for them, we as AWS are focused on providing the best tools and services for these different personas to be able to do their job and really complete that end-to-end machine learning story. So that's where, whether it's tools related to MLOps or even for folks who cannot code or don't know any machine learning. For example, we launched SageMaker Canvas as a tool last year, which is a UI-based tool which data analysts and business analysts can use to build machine learning models. So overall, the spectrum in terms of persona and who can get involved in the machine learning process is expanding, and the cloud is playing a big role in that process. >> Ori, Clem, can you guys weigh in too? 'Cause this is just another abstraction layer of scale. What's it mean for you guys as you look forward to your customers and the use cases that you're enabling? >> Yes, I think what's important is that the AI companies and providers and the cloud kind of work together. That's how you make a seamless experience and you actually reduce the barrier to entry for this technology. So that's what we've been super happy to do with AWS for the past few years. We actually announced not too long ago that we are doubling down on our partnership with AWS. We're excited to have many, many customers on our shared product, the Hugging Face deep learning container on SageMaker. And we are working really closely with the Inferentia team and the Trainium team to release some more exciting stuff in the coming weeks and coming months. So I think when you have an ecosystem and a system where the AWS and the AI providers, AI startups can work hand in hand, it's to the benefit of the customers and the companies, because it makes it orders of magnitude easier for them to adopt this new paradigm to build technology AI. >> Ori, this is a scale on reasoning too. The data's out there and making sense out of it, making it reason, getting comprehension, having it make decisions is next, isn't it? And you need scale for that. >> Yes. Just a comment about the infrastructure side. So I think really the purpose is to streamline and make these technologies much more accessible. And I think we'll see, I predict that we'll see in the next few years more and more tooling that make this technology much more simple to consume. 
And I think it plays a very important role. There are so many aspects, like monitoring the models and the kinds of outputs they produce, and containing and running them in a production environment. There's so much there to build on, and the infrastructure side will play a very significant role. >> All right, that's awesome stuff. I'd love to change gears a little bit and get a little philosophy here around AI and how it's going to transform, if you guys don't mind. There's been a lot of conversations around, on theCUBE here as well as in some industry areas, where it's like, okay, all the heavy lifting is automated away with machine learning and AI, the complexity, there's some efficiencies, it's horizontal and scalable across all industries. Ankur, good point there. Everyone's going to use it for something. And a lot of stuff gets brought to the table with large language models and other things. But the key ingredient will be proprietary data or human input, or some sort of AI whisperer kind of role, or prompt engineering, people are saying. So with that being said, some are saying it's automating intelligence. And that creativity will be unleashed from this. If the heavy lifting goes away and AI can fill the void, that shifts the value to the intellect or the input. And so that means data's got to come together, interact, fuse, and understand each other. This is kind of new. I mean, old school AI was, okay, got a big model, I provisioned it for a long time, very expensive. Now it's all free flowing. Can you guys comment on where you see this going with this freeform, data flowing everywhere, heavy lifting, and then specialization? >> Yeah, I think- >> Go ahead. >> Yeah, I think, so what we are seeing with these large language models or generative models is that they're really good at creating stuff. But I think it's also important to recognize their limitations. They're not as good at reasoning and logic. And I think now we're seeing great enthusiasm, which I think is justified. And the next phase would be how to make these systems more reliable. How to inject more reasoning capabilities into these models, or augment them with other mechanisms that actually perform more reasoning, so we can achieve more reliable results. And we can count on these models to perform for critical tasks, whether it's medical tasks, legal tasks. We really want to kind of offload a lot of the intelligence to these systems. And then we'll have to make sure these are reliable, we'll have to make sure we get some sort of explainability so that we can understand the process behind the generated results that we received. So I think this is kind of the next phase of systems that are based on these generative models. >> Clem, what's your view on this? Obviously you're an open community, open source has been around, it's got a great track record, a proven model. I'm assuming creativity's going to come out of the woodwork, and if we can automate open source contribution, and relationships, and onboarding more developers, there's going to be unleashing of creativity. >> Yes, it's been so exciting on the open source front. We all know BERT, BLOOM, GPT-J, T5, Stable Diffusion, that whole lineup: the previous and the current generation of open source models that are on Hugging Face. It has been accelerating in the past few months. So I'm super excited about ControlNet right now, which is really having a lot of impact; it's kind of like a way to control the generation of images.
Super excited about Flan-UL2, which is a new model that has been recently released and is open source. So yeah, it's really fun to see the ecosystem coming together. Open source has been the basis for traditional software, with open source programming languages, of course, but also all the great open source that we've gotten over the years. So we're happy to see that the same thing is happening for machine learning and AI, and hopefully it can help a lot of companies reduce the barrier to entry a little bit. So yeah, it's going to be exciting to see how it evolves in the next few years in that respect. >> I think the developer productivity angle that's been talked about a lot in the industry will be accelerated significantly. I think security will be enhanced by this. I think in general, applications are going to transform at a radical rate, an accelerated, incredible rate. So I think it's not a big wave, it's the water, right? I mean, (chuckles) it's the new thing. My final question for you guys, if you don't mind, I'd love to get each of you to answer the question I'm going to ask you, which is, a lot of conversations around data. Data infrastructure's obviously involved in this. And the common thread that I'm hearing is that every company that looks at this is asking themselves, if we don't rebuild our company, start thinking about rebuilding our business model around AI, we might be dinosaurs, we might be extinct. And it reminds me of that scene in Moneyball when, at the end, it's like, if we're not building the model around your model, every company will be out of business. What's your advice to companies out there that are having those kind of moments where it's like, okay, this is real, this is next gen, this is happening. I better start thinking and putting into motion plans to refactor my business, 'cause it's happening, business transformation is happening on the cloud. This kind of puts an exclamation point on it, with AI as the next step function. Big increase in value. So it's an opportunity for leaders. Ankur, we'll start with you. What's your advice for folks out there thinking about this? Do they put their toe in the water? Do they jump right into the deep end? What's your advice? >> Yeah, John, so we talk to a lot of customers, and customers are excited about what's happening in the space, but they often ask us like, "Hey, where do we start?" So we always advise our customers to do a lot of proofs of concept, understand where they can drive the biggest ROI. And then also leverage existing tools and services to move fast and scale, and try and not reinvent the wheel where it doesn't need to be. That's basically our advice to customers. >> Get it. Ori, what's your advice to folks who are scratching their head going, "I better jump in here. How do I get started?" What's your advice? >> So I actually think you need to think about it really economically, both on the opportunity side and the challenges. So there's a lot of opportunity for many companies to actually gain revenue upside by building these new generative features and capabilities. On the other hand, of course, incorporating these capabilities will probably affect the COGS. So I think we really need to think carefully about both of these sides, and also understand clearly whether this is a project or an effort toward cost reduction, where the ROI is pretty clear, or a revenue amplifier, where there's, again, a lot of different opportunities.
So I think once you think about this in a structured way and map the different initiatives, that's probably a good way to start, and a good way to start thinking about these endeavors. >> Awesome. Clem, what's your take on this? What's your advice, folks out there? >> Yes, all of this is very good advice already. Something that you said before, John, that I disagree with a little bit: a lot of people are talking about the data moat and proprietary data. Actually, when you look at some of the organizations that have been building the best models, they don't have specialized or unique access to data. So I'm not sure that's so important today. I think what's important for companies, and it's been the same for the previous generation of technology, is their ability to build better technology faster than others. And in this new paradigm, that means being able to build machine learning faster than others, and better. So that's how, in my opinion, you should approach this. And it's kind of like, how can you evolve your company, your teams, your products, so that you are able in the long run to build machine learning better and faster than your competitors. And if you manage to put yourself in that situation, then that's when you'll be able to differentiate yourself and really kind of be impactful and get results. That's really hard to do. It's something really different, because machine learning and AI is a different paradigm than traditional software. So this is going to be challenging, but I think if you manage to nail that, then the future is going to be very interesting for your company. >> That's a great point. Thanks for calling that out. I think this all reminds me of the cloud days early on. If you went to the cloud early, you took advantage of it when the pandemic hit. If you weren't native in the cloud, you got hamstrung by that, you were flatfooted. So just get in there. (laughs) Get in the cloud, get into AI, you're going to be good. Thanks for calling that out. Final parting comments, what's your most exciting thing going on right now for you guys? Ori, Clem, what's the most exciting thing on your plate right now that you'd like to share with folks? >> I mean, for me it's just the diversity of use cases and really creative ways of companies leveraging this technology. Every day I speak with about two, three customers, and I'm continuously being surprised by the creative ideas. And the future of what can be achieved here is really exciting. And also I'm amazed by the pace that things move at in this industry. It's just, there's not a dull moment. So, definitely exciting times. >> Clem, what are you most excited about right now? >> For me, it's all the new open source models that have been released in the past few weeks, and that will keep being released in the next few weeks. I'm also super excited about more and more companies getting into this capability of chaining different models and different APIs. I think that's a very, very interesting development, because it creates new capabilities, new possibilities, new functionalities that weren't possible before. You can plug an API with an open source embedding model, with like a no-geo transcription model. So that's also very exciting. This capability of having more interoperable machine learning will also, I think, open a lot of interesting things in the future. >> Clem, congratulations on your success at Hugging Face. Please pass that on to your team. Ori, congratulations on your success, and continue to, just day one.
I mean, it's just the beginning. It's not even scratching the surface. Ankur, I'll give you the last word. What are you excited for at AWS? More cloud goodness coming here with AI. You get the final word. >> Yeah, so as both Clem and Ori said, I think the research in the space is moving really, really fast, so we are excited about that. But we are also excited to see the speed at which enterprises and other AWS customers are applying machine learning to solve real business problems, and the kind of results they're seeing. So when they come back to us and tell us about the kind of improvement in their business metrics and overall customer experience that they're driving, and they're seeing real business results, that's what keeps us going and inspires us to continue inventing on their behalf. >> Gentlemen, thank you so much for this awesome high impact panel. Ankur, Clem, Ori, congratulations on all your success. We'll see you around. Thanks for coming on. Generative AI, riding the wave, it's a tidal wave, it's the water, it's all happening. All great stuff. This is season three, episode one of the AWS Startup Showcase closing panel. This is the AI ML episode, the top startups building generative AI on AWS. I'm John Furrier, your host. Thanks for watching. (mellow music)

Published Date : Mar 9 2023


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Ankur Mehrotra | PERSON | 0.99+
John | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Clem | PERSON | 0.99+
Ori Goshen | PERSON | 0.99+
John Furrier | PERSON | 0.99+
California | LOCATION | 0.99+
Ori | PERSON | 0.99+
Clem Delangue | PERSON | 0.99+
Hugging Face | ORGANIZATION | 0.99+
Julien | PERSON | 0.99+
Ankur | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Tel Aviv | LOCATION | 0.99+
three | QUANTITY | 0.99+
Ankur | ORGANIZATION | 0.99+
second round | QUANTITY | 0.99+
AI21 Labs | ORGANIZATION | 0.99+
two separate categories | QUANTITY | 0.99+
Amazon.com | ORGANIZATION | 0.99+
last year | DATE | 0.99+
two things | QUANTITY | 0.99+
first | QUANTITY | 0.98+
over 15,000 companies | QUANTITY | 0.98+
Both | QUANTITY | 0.98+
five years | QUANTITY | 0.98+
both | QUANTITY | 0.98+
over three years | QUANTITY | 0.98+
three customers | QUANTITY | 0.98+
each | QUANTITY | 0.98+
Trainium | ORGANIZATION | 0.98+
today | DATE | 0.98+
Alexa | TITLE | 0.98+
Stable Diffusion | ORGANIZATION | 0.97+
Swami | PERSON | 0.97+
Inferentia | ORGANIZATION | 0.96+
GPT-J | ORGANIZATION | 0.96+
SageMaker | TITLE | 0.96+
AI21 Labs | ORGANIZATION | 0.95+
Riding the Wave | TITLE | 0.95+
ControlNet | ORGANIZATION | 0.94+
one way | QUANTITY | 0.94+
a million lines | QUANTITY | 0.93+
Startup Showcase | EVENT | 0.92+
few months ago | DATE | 0.92+
second wave | EVENT | 0.91+
theCUBE | ORGANIZATION | 0.91+
few years ago | DATE | 0.91+
CodeWhisperer | TITLE | 0.9+
AI21 | ORGANIZATION | 0.89+

Adam Wenchel & John Dickerson, Arthur | AWS Startup Showcase S3 E1


 

(upbeat music) >> Welcome everyone to theCUBE's presentation of the AWS Startup Showcase AI Machine Learning Top Startups Building Generative AI on AWS. This is season 3, episode 1 of the ongoing series covering the exciting startup from the AWS ecosystem to talk about AI and machine learning. I'm your host, John Furrier. I'm joined by two great guests here, Adam Wenchel, who's the CEO of Arthur, and Chief Scientist of Arthur, John Dickerson. Talk about how they help people build better LLM AI systems to get them into the market faster. Gentlemen, thank you for coming on. >> Yeah, thanks for having us, John. >> Well, I got to say I got to temper my enthusiasm because the last few months explosion of interest in LLMs with ChatGPT, has opened the eyes to everybody around the reality of that this is going next gen, this is it, this is the moment, this is the the point we're going to look back and say, this is the time where AI really hit the scene for real applications. So, a lot of Large Language Models, also known as LLMs, foundational models, and generative AI is all booming. This is where all the alpha developers are going. This is where everyone's focusing their business model transformations on. This is where developers are seeing action. So it's all happening, the wave is here. So I got to ask you guys, what are you guys seeing right now? You're in the middle of it, it's hitting you guys right on. You're in the front end of this massive wave. >> Yeah, John, I don't think you have to temper your enthusiasm at all. I mean, what we're seeing every single day is, everything from existing enterprise customers coming in with new ways that they're rethinking, like business things that they've been doing for many years that they can now do an entirely different way, as well as all manner of new companies popping up, applying LLMs to everything from generating code and SQL statements to generating health transcripts and just legal briefs. Everything you can imagine. And when you actually sit down and look at these systems and the demos we get of them, the hype is definitely justified. It's pretty amazing what they're going to do. And even just internally, we built, about a month ago in January, we built an Arthur chatbot so customers could ask questions, technical questions from our, rather than read our product documentation, they could just ask this LLM a particular question and get an answer. And at the time it was like state of the art, but then just last week we decided to rebuild it because the tooling has changed so much that we, last week, we've completely rebuilt it. It's now way better, built on an entirely different stack. And the tooling has undergone a full generation worth of change in six weeks, which is crazy. So it just tells you how much energy is going into this and how fast it's evolving right now. >> John, weigh in as a chief scientist. I mean, you must be blown away. Talk about kid in the candy store. I mean, you must be looking like this saying, I mean, she must be super busy to begin with, but the change, the acceleration, can you scope the kind of change you're seeing and be specific around the areas you're seeing movement and highly accelerated change? >> Yeah, definitely. And it is very, very exciting actually, thinking back to when ChatGPT was announced, that was a night our company was throwing an event at NeurIPS, which is maybe the biggest machine learning conference out there. 
And the hype when that happened was palpable, and it was just shocking to see how well that performed. And then obviously over the last few months since then, as LLMs have continued to enter the market, we've seen use cases for them, like Adam mentioned, all over the place. And so, some things I'm excited about in this space are the use of LLMs and, more generally, foundation models to redesign traditional operations research style problems, logistics problems, like auctions, decisioning problems. So moving beyond the already amazing use cases, like creating marketing content, into more core integration with a lot of the bread and butter companies and tasks that drive the American ecosystem. And I think we're just starting to see some of that. And in the next 12 months, I think we're going to see a lot more. If I had to make other predictions, I think we're going to continue seeing a lot of work being done on managing inference time costs via shrinking models or distillation. And I don't know how to make this prediction, but at some point we're going to be seeing lots of these very large scale models operating on the edge as well. So the time scales are extremely compressed; like Adam mentioned, 12 months from now, hard to say. >> We were talking on theCUBE prior to this session here. We had theCUBE conversation here and then the Wall Street Journal just picked up on the same theme, which is that the printing press moment created the Enlightenment stage of history. Here we're in a whole nother phase: automating intellect, efficiency, doing heavy lifting, the creative class coming back, a whole nother level of reality around the corner that's being hyped up. The question is, is this justified? Is there really a breakthrough here or is this just another result of continued progress with AI? Can you guys weigh in, because there's two schools of thought. There's the "Oh my God, we're entering a new enlightenment tech phase, the equivalent of the printing press in all areas." Then there's the "Ah, it's just AI (indistinct) inch by inch." What's your guys' opinion? >> Yeah, I think on the one hand when you're down in the weeds of building AI systems all day, every day, like we are, it's easy to look at this as incremental progress. Like we have customers who've been building on foundation models since we started the company four years ago, particularly in computer vision for classification tasks, starting with pre-trained models, things like that. So that part of it doesn't feel really new, but what does feel new is just when you apply these things to language, with all the breakthroughs in computational efficiency, algorithmic improvements, things like that. When you actually sit down and interact with ChatGPT or one of the other systems that's out there that's building on top of LLMs, it really is breathtaking, like, the level of understanding that they have and how quickly you can accelerate your development efforts and get an actual working system in place that solves a really important real world problem and makes people way faster, way more efficient. So I do think there's definitely something there. It's more than just incremental improvement. This feels like a real trajectory inflection point for the adoption of AI. >> John, what's your take on this? As people come into the field, I'm seeing a lot of people move from, hey, I've been coding in Python, I've been doing some development, I've been a software engineer, I'm a computer science student, I'm coding in C++ old school, OG systems person.
Where do they come in? Where's the focus, where's the action? Where are the breakthroughs? Where are people jumping in and rolling up their sleeves and getting dirty with this stuff? >> Yeah, all over the place. And it's funny you mentioned students in a different life. I wore a university professor hat and so I'm very, very familiar with the teaching aspects of this. And I will say toward Adam's point, this really is a leap forward in that techniques like in a co-pilot for example, everybody's using them right now and they really do accelerate the way that we develop. When I think about the areas where people are really, really focusing right now, tooling is certainly one of them. Like you and I were chatting about LangChain right before this interview started, two or three people can sit down and create an amazing set of pipes that connect different aspects of the LLM ecosystem. Two, I would say is in engineering. So like distributed training might be one, or just understanding better ways to even be able to train large models, understanding better ways to then distill them or run them. So like this heavy interaction now between engineering and what I might call traditional machine learning from 10 years ago where you had to know a lot of math, you had to know calculus very well, things like that. Now you also need to be, again, a very strong engineer, which is exciting. >> I interviewed Swami when he talked about the news. He's ahead of Amazon's machine learning and AI when they announced Hugging Face announcement. And I reminded him how Amazon was easy to get into if you were developing a startup back in 2007,8, and that the language models had that similar problem. It's step up a lot of content and a lot of expense to get provisioned up, now it's easy. So this is the next wave of innovation. So how do you guys see that from where we are right now? Are we at that point where it's that moment where it's that cloud-like experience for LLMs and large language models? >> Yeah, go ahead John. >> I think the answer is yes. We see a number of large companies that are training these and serving these, some of which are being co-interviewed in this episode. I think we're at that. Like, you can hit one of these with a simple, single line of Python, hitting an API, you can boot this up in seconds if you want. It's easy. >> Got it. >> So I (audio cuts out). >> Well let's take a step back and talk about the company. You guys being featured here on the Showcase. Arthur, what drove you to start the company? How'd this all come together? What's the origination story? Obviously you got a big customers, how'd get started? What are you guys doing? How do you make money? Give a quick overview. >> Yeah, I think John and I come at it from slightly different angles, but for myself, I have been a part of a number of technology companies. I joined Capital One, they acquired my last company and shortly after I joined, they asked me to start their AI team. And so even though I've been doing AI for a long time, I started my career back in DARPA. It was the first time I was really working at scale in AI at an organization where there were hundreds of millions of dollars in revenue at stake with the operation of these models and that they were impacting millions of people's financial livelihoods. And so it just got me hyper-focused on these issues around making sure that your AI worked well and it worked well for your company and it worked well for the people who were being affected by it. 
At the time when I was doing this, 2016, 2017, 2018, there just wasn't any tooling out there to support this production management and model monitoring phase of the life cycle. And so we basically left to start the company that I wanted. And John has his own story. I'll let you share that one, John. >> Go ahead John, you're up. >> Yeah, so I'm coming at this from a different world. So I'm on leave now from a tenured role in academia where I was leading a large lab focusing on the intersection of machine learning and economics. And so questions like fairness, or the response to dynamism in the underlying environment, have been around for quite a long time in that space. And so I've been thinking very deeply about some of those more R and D style questions, as well as having deployed some automation code across a couple of different industries, some in online advertising, some in the healthcare space and so on, where concerns of, again, fairness come to bear. And so Adam and I connected to understand the space of what that might look like in the 2018, 2019 realm, from a quantitative and from a human-centered point of view. And so booted things up from there. >> Yeah, bring that applied engineering R and D into the Capital One DNA that he had at scale. I could see that fit. I got to ask you now, next step, as you guys move out and think about LLMs and the recent AI news around the generative models and the foundational models like ChatGPT, how should we be looking at that news? And everyone watching might be thinking the same thing. I know at the board level companies are like, we should refactor our business, this is the future. It's that kind of moment, and the tech team's like, okay, boss, how do we do this again? Or are they prepared? How should we be thinking? How should people watching be thinking about LLMs? >> Yeah, I think they really are transformative. And so, I mean, we're seeing companies all over the place. Everything from large tech companies to a lot of our large enterprise customers are launching significant projects at core parts of their business. And so, yeah, I would say if you're serious about becoming an AI native company, which most leading companies are, then this is a trend that you need to be taking seriously. And we're seeing the adoption rate. It's funny, I would say AI adoption in the broader business world really started, let's call it four or five years ago, and it was a relatively slow adoption rate, but I think all that investment in scaling the maturity curve has paid off, because the rate at which people are adopting and deploying systems based on this is tremendous. I mean, this has all just happened in the last few months and we're already seeing people get systems into production. So, now there's a lot of things you have to guarantee in order to put these in production in a way that basically adds to your business and doesn't cause more headaches than it solves. And so that's where we help customers: how do you put these out there in a way that they're going to represent your company well, they're going to perform well, they're going to do their job and do it properly. >> So in the use case, as a customer, as I think about this, there's workflows. They might have had an ML AI ops team that's around IT. Their inference engines are out there. They probably don't have visibility on, say, how much it costs; they're kicking the tires.
When you look at the deployment, there's a cost piece, there's a workflow piece, there's the fairness you mentioned, John. What should I be thinking about if I'm going to be deploying stuff into production? I got to think about those things. What's your opinion? >> Yeah, I'm happy to dive in on that one. So monitoring in general is extremely important once you have one of these LLMs in production, and there have been some changes versus traditional monitoring, which we can dive deeper into, that LLMs have really accelerated. But a lot of that bread and butter style of things you should be looking out for remains just as important as it is for what you might call traditional machine learning models. So the underlying environment of data streams, the way users interact with these models, these are all changing over time. And so any performance metrics that you care about need to be tracked: traditional ones like accuracy, if you can define that for an LLM, and ones around, for example, fairness or bias, if that is a concern for your particular use case, and so on. Now there are some interesting changes that LLMs are bringing along as well. So most ML models in production that we see are relatively static, in the sense that they're not getting flipped more than maybe once a day or once a week, or they're just set once and then not changed ever again. With LLMs, there's this ongoing value alignment or collection of preferences from users that is often constantly updating the model. And so that opens up all sorts of vectors for, I won't say attack, but for problems to arise in production. Like, users might learn to use your system in a different way and thus change the way those preferences are getting collected, and thus change your system in ways that you never intended. So maybe that went through governance already internally at the company and now it's totally, totally changed, and it's through no fault of your own, but you need to be watching over that for sure. >> Talk about reinforcement learning from human feedback. How's that factoring into the LLMs? Is that part of it? Should people be thinking about that? Is that a component that's important? >> It certainly is, yeah. So this is one of the big tweaks that happened with InstructGPT, which is the basis model behind ChatGPT and has since gone on to be used all over the place. So value alignment through RLHF, like you mentioned, I think is a very interesting space to get into, and it's one that you need to watch over. Like, you're asking humans for feedback on outputs from a model and then you're updating the model with respect to that human feedback. And now you've thrown humans into the loop here in a way that is just going to complicate things. And it certainly helps in many ways. Let's say that you're deploying an internal chatbot at an enterprise; you could ask humans to align the LLM behind the chatbot to, say, company values. And so you're listening to feedback about these company values, and that's going to scoot that chatbot that you're running internally more toward the kind of language that you'd like to use internally, on like a Slack channel or something like that. Watching over that model, I think in that specific case, is a compliance and HR issue as well. So while it is part of the greater LLM stack, you can also view that as an independent bit to watch over. >> Got it, and these are important factors. When people see the Bing news, they freak out about how great it's doing.
Then it goes off the rails, it goes big, fails big. (laughing) So when people see that with these models, is that human interaction or is that feedback, is that the model not accepting it? How do people understand how to take that input in and how to build the right apps around LLMs? This is a tough question. >> Yeah, for sure. So some of the examples that you'll see online where these chatbots go off the rails are obviously humans trying to break the system, but some of them clearly aren't. And that's because these are large statistical models and we don't know what's going to pop out of them all the time. And even if you're doing as much in-house testing as the big companies like the Coheres and the OpenAIs of the world, to try to prevent things like toxicity or racism or other sorts of bad content that might lead to bad PR, you're never going to catch all of these possible holes in the model itself. And so, again, it's very, very important to keep watching over that while it's in production. >> On the business model side, how are you guys doing? What's the approach? How do you guys engage with customers? Take a minute to explain the customer engagement. What do they need? What do you need? How's that work? >> Yeah, I can talk a little bit about that. So it's really easy to get started. It's literally a matter of just handing out an API key and people can get started. And as an alternative, we also offer versions that can be installed on-prem, because we find a lot of our customers have models that deal with very sensitive data. So you can run it in your cloud account or use our cloud version. And so yeah, it's pretty easy to get started with this stuff. We find people start using it a lot of times during the validation phase, 'cause that way they can start baselining performance, they can do champion/challenger, they can really kind of baseline the performance of, say, the different foundation models they're considering. And so it's a really helpful tool for understanding differences in the way these models perform. And then from there they can just flow that into their production inferencing, so that as these systems are out there, you have really kind of real-time monitoring for anomalies and for all sorts of weird behaviors, as well as that continuous feedback loop that helps you make your product better, and observability, and you can run all sorts of aggregated reports to really understand what's going on with these models when they're out there deciding. I should also add that we just today have another way to adopt Arthur, and that is we are in the AWS Marketplace, and so we are available there just to make it that much easier to use your cloud credits, skip the procurement process, and get up and running really quickly. >> And that's great 'cause Amazon's got SageMaker, which handles a lot of privacy stuff, all kinds of cool things, or you can get down and dirty. So I got to ask on the next one, production is a big deal, getting stuff into production. What have you guys learned that you could share with folks watching? Is there a cost issue? I got to monitor, obviously you brought that up, we even talked about the reinforcement issues, all these things are happening. What are the big learnings that you could share for people that are going to put these into production, to watch out for, to plan for, or be prepared for? Hope for the best, plan for the worst. What's your advice? >> I can give a couple opinions there and I'm sure Adam has.
Well, yeah, the big one from my side is, again, I had mentioned this earlier, it's just the input data streams, because humans are also exploring how they can use these systems to begin with. It's really, really hard to predict the type of inputs you're going to be seeing in production. Especially, we always talk about chatbots, but for any generative text task like this, let's say you're taking in news articles and summarizing them or something like that, it's very hard to get a good sampling even of the set of news articles, in such a way that you can really predict what's going to pop out of that model. So to me, it's, adversarial maybe isn't the word that I would use, but it's an unnatural shifting input distribution of, like, prompts that you might see for these models. That's certainly one. And then the second one that I would talk about is, it can be hard to understand the costs, the inference time costs, behind these LLMs. So the pricing on these is always changing as the models change size; it might go up, it might go down based on model size, based on energy cost and so on, but your pricing per token, or per thousand tokens, I think can be difficult for some clients to wrap their head around. Again, you don't know how these systems are going to be used after all, so it can be tough. And so again, that's another metric that really should be tracked. >> Yeah, and there's a lot of trade-off choices in there with, like, how many tokens do you want at each step and in the sequence, and based on, you have (indistinct) and you reject these tokens, and so based on how your system's operating, that can make the cost highly variable. And that's if you're using like an API version where you're paying per token. A lot of people also choose to run these internally and, as John mentioned, the inference time on these is significantly higher than a traditional classifier, even an NLP classification model or tabular data model, like orders of magnitude higher. And so you really need to understand how, as you're constantly iterating on these models and putting out new versions and new features in these models, that's affecting the overall scale of that inference cost, because you can use a lot of computing power very quickly with these products. >> Yeah, scale, performance, price all come together. I got to ask while we're here on the secret sauce of the company, if you had to describe to people out there watching, what's the secret sauce of the company? What's the key to your success? >> Yeah, so John leads our research team and they've done a number of really cool things. I think AI, as much as it's been hyped for a while, commercial AI at least is still really in its infancy. And so the way we're able to pioneer new ways to think about performance for computer vision, NLP, and LLMs is probably the thing that I'm proudest about. John and his team publish papers all the time at NeurIPS and other places. But I think it's really being able to define what performance means for basically any kind of model type, and give people really powerful tools to understand that on an ongoing basis. >> John, secret sauce, how would you describe it? You got all the action happening all around you. >> Yeah, well, I do appreciate Adam talking me up like that. No, I. (all laughing) >> Furrier: Props to you. >> I would also say a couple of other things here. So we have a very strong engineering team, and I think some early hires there really set the standard at a very high bar that we've maintained as we've grown.
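Picking up the per-token pricing and variable inference cost point from earlier in this exchange, here is a back-of-the-envelope sketch. Every number in it (price per thousand tokens, traffic, token counts) is made up for illustration; real rates vary by provider, model size, and whether you host the model yourself.

```python
# Back-of-the-envelope inference cost estimate for an API-priced LLM.
# All numbers below are hypothetical; substitute your provider's actual rates.
PRICE_PER_1K_TOKENS = 0.002      # $ per 1,000 tokens (hypothetical)
REQUESTS_PER_DAY = 50_000
AVG_PROMPT_TOKENS = 400
AVG_COMPLETION_TOKENS = 150

tokens_per_day = REQUESTS_PER_DAY * (AVG_PROMPT_TOKENS + AVG_COMPLETION_TOKENS)
cost_per_day = tokens_per_day / 1_000 * PRICE_PER_1K_TOKENS

print(f"{tokens_per_day:,} tokens/day -> ${cost_per_day:,.2f}/day, ${cost_per_day * 30:,.2f}/month")
```

Tracking a figure like this alongside the performance metrics discussed above is one way to see how each new model version shifts cost as well as accuracy.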
And I think that's really paid dividends as scalability becomes even more of a challenge in these spaces, right? And so that's not just scalability when it comes to LLMs, that's scalability when it comes to millions of inferences per day, that kind of thing, in traditional ML models as well. And compared to potential competitors, I think that's really... Well, it's made us able to just operate more efficiently and pass that along to the client. >> Yeah, and I think the infancy comment is really important because it's the beginning. There really is a long journey ahead. A lot of change coming, like I said, it's a huge wave. So I'm sure you guys got a lot of planning at the foundation even for your own company, so I appreciate the candid response there. Final question for you guys is, what should the top things be for a company in 2023? If I'm going to set the agenda and I'm a customer moving forward, putting the pedal to the metal, so to speak, what are the top things I should be prioritizing, or I need to do, to be successful with AI in 2023? >> Yeah, I think, so number one, as we've been talking about this entire episode, things are changing so quickly, and the opportunities for business transformation and really disrupting different applications, different use cases, I don't think we've even fully comprehended how big that is. And so really digging into your business and understanding where you can apply these new sets of foundation models, that's a top priority. The interesting thing is I think there's another force at play, which is the macroeconomic conditions, and a lot of places are having to work harder to justify budgets. So in the past, a couple of years ago maybe, they had a blank check to spend on AI and AI development at a lot of large enterprises, limited primarily by the amount of talent they could scoop up. Nowadays these expenditures are getting scrutinized more. And so one of the things that we really help our customers with is really calculating the ROI on these things. And so if you have models out there performing and you have a new version that you can put out that lifts the performance by 3%, how many tens of millions of dollars does that mean in business benefit? Or if I want to go get approval from the CFO to spend a few million dollars on this new project, how can I bake in from the beginning the tools to really show the ROI along the way? Because I think in these systems, when done well, for a software project the ROI can be pretty spectacular. Like, we see over a hundred percent ROI in the first year on some of these projects. And so, I think in 2023, you just need to be able to show what you're getting for that spend. >> It's a needle moving moment. You see it all the time with some of these aha moments, or like, whoa, blown away. John, I want to get your thoughts on this, because one of the things that comes up a lot for companies that I talk to, the ones in what I'd call the second wave coming in, maybe not the front wave of adopters, is talent and team building. You mentioned some of the hires you got were game changing for you guys and set the bar high. As you move the needle, new developers are going to need to come in.
What's your advice, given that you've been a professor, you've seen students, and I know a lot of computer science people want to shift, they might not yet be skilled in AI, but they're proficient in programming, and that's going to be another opportunity with open source as things are happening. How do you talk to that next level of talent that wants to come into this market to supplement teams and be on teams, lead teams? Any advice you have for people who want to build their teams and people who are out there and want to be a coder in AI? >> Yeah, I have advice, and this actually works for what it would take to be a successful AI company in 2023 as well, which is, just don't be afraid to iterate really quickly with these tools. The space is still being explored on what they can be used for. A lot of the tasks that they're used for now, right? Like creating marketing content using machine learning is not a new thing to do. It just works really well now. And so I'm excited to see what the next year brings in terms of folks from outside of core computer science, other engineers or physicists or chemists or whatever, who are learning how to use these increasingly easy-to-use tools to leverage LLMs for tasks that I think none of us have really thought about before. So that's really, really exciting. And so toward that I would say iterate quickly. Build things on your own, build demos, show them to friends, host them online, and you'll learn along the way and you'll have something to show for it. And also you'll help us explore that space. >> Guys, congratulations with Arthur. Great company, great picks-and-shovels opportunities out there for everybody. Iterate fast, get in quickly and don't be afraid to iterate. Great advice and thank you for coming on and being part of the AWS showcase, thanks. >> Yeah, thanks for having us on, John. Always a pleasure. >> Yeah, great stuff. Adam Wenchel, John Dickerson with Arthur. Thanks for coming on theCUBE. I'm John Furrier, your host. Generative AI and AWS. Keep it right there for more action with theCUBE. Thanks for watching. (upbeat music)
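For readers who want to make the per-token cost math from this conversation concrete, here is a minimal Python sketch of the kind of tracking Adam and John describe. The prices and token counts are made-up assumptions for illustration only; they are not figures quoted by Arthur or the panel, and real prices vary by provider and model.

PRICE_PER_1K_INPUT_TOKENS = 0.0015   # assumed USD per 1,000 input tokens (illustrative)
PRICE_PER_1K_OUTPUT_TOKENS = 0.0020  # assumed USD per 1,000 output tokens (illustrative)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    # Dollar cost of a single LLM call under the assumed per-token prices.
    return (input_tokens / 1000.0) * PRICE_PER_1K_INPUT_TOKENS + \
           (output_tokens / 1000.0) * PRICE_PER_1K_OUTPUT_TOKENS

# Example: summarizing a 3,000-token news article into a 250-token summary.
per_call = request_cost(3000, 250)
print(f"cost per call: ${per_call:.4f}")
print(f"cost for 1M such calls: ${per_call * 1_000_000:,.0f}")

Logging this number alongside each model version is one simple way to keep inference spend visible as the system iterates, which is the ROI tracking the panel argues for.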

Published Date : Mar 9 2023


Opening Panel | Generative AI: Hype or Reality | AWS Startup Showcase S3 E1


 

(light airy music) >> Hello, everyone, welcome to theCUBE's presentation of the AWS Startup Showcase, AI and machine learning: "Top Startups Building Generative AI on AWS." This is season three, episode one of the ongoing series covering the exciting startups from the AWS ecosystem, talking about AI and machine learning. We have three great guests: Bratin Saha, Vice President of Machine Learning and AI Services at Amazon Web Services; Tom Mason, the CTO of Stability AI; and Aidan Gomez, CEO and co-founder of Cohere. Two practitioners doing startups, and AWS. Gentlemen, thank you for opening up this session, this episode. Thanks for coming on. >> Thank you. >> Thank you. >> Thank you. >> So the topic is hype versus reality. So I think we're all on the reality is great, hype is great, but the reality's here. I want to get into it. Generative AI's got all the momentum, it's going mainstream, it's kind of come out from behind the ropes, it's now mainstream. We saw the success of ChatGPT, it opens up everyone's eyes, but there's so much more going on. Let's jump in and get your early perspectives on what should people be talking about right now? What are you guys working on? We'll start with AWS. What's the big focus right now for you guys as you come into this market that's highly active, highly hyped up, but people see value right out of the gate? >> You know, we have been working on generative AI for some time. In fact, last year we released CodeWhisperer, which is about using generative AI for software development, and a number of customers are using it and getting real value out of it. So generative AI is now something that's mainstream that can be used by enterprise users. And we have also been partnering with a number of other companies. So, you know, stability.ai, we've been partnering with them a lot. We want to be partnering with other companies as well, and seeing how we do three things: you know, first is providing the most efficient infrastructure for generative AI. And that is where, you know, things like Trainium, things like Inferentia, things like SageMaker come in. And then next is the set of models, and then the third is the kind of applications like CodeWhisperer and so on. So, you know, it's early days yet, but clearly there's a lot of amazing capabilities that will come out and something that, you know, our customers are starting to pay a lot of attention to. >> Tom, talk about your company and what your focus is and why the Amazon Web Services relationship's important for you? >> So yeah, we're primarily committed to making incredible open source foundation models, and obviously Stable Diffusion's been our kind of first big model there, which we trained all on AWS. We've been working with them over the last year and a half to develop, obviously, a big cluster, and bring all that compute to training these models at scale, which has been a really successful partnership. And we're excited to take it further this year as we develop the commercial strategy of the business and build out, you know, the ability for enterprise customers to come and get all the value from these models that we think they can get. So we're really excited about the future. We've got a hugely exciting pipeline for this year with new modalities and video models and wonderful things, and trying to solve images once and for all and get the kind of general value and value proposition correct for customers. So it's a really exciting time and very honored to be part of it. 
>> It's great to see some of your customers doing so well out there. Congratulations to your team. Appreciate that. Aidan, let's get into what you guys do. What does Cohere do? What are you excited about right now? >> Yeah, so Cohere builds large language models, which are the backbone of applications like ChatGPT and GPT-3. We're extremely focused on solving the issues with adoption for enterprise. So it's great that you can make a super flashy demo for consumers, but it takes a lot to actually get it into billion user products and large global enterprises. So about six months ago, we released our command models, which are some of the best that exist for large language models. And in December, we released our multilingual text understanding models and that's on over a hundred different languages and it's trained on, you know, authentic data directly from native speakers. And so we're super excited to continue pushing this into enterprise and solving those barriers for adoption, making this transformation a reality. >> Just real quick, while I got you there on the new products coming out. Where are we in the progress? People see some of the new stuff out there right now. There's so much more headroom. Can you just scope out in your mind what that looks like? Like from a headroom standpoint? Okay, we see ChatGPT. "Oh yeah, it writes my papers for me, does some homework for me." I mean okay, yawn, maybe people say that, (Aidan chuckles) people excited or people are blown away. I mean, it's helped theCUBE out, it helps me, you know, feed up a little bit from my write-ups but it's not always perfect. >> Yeah, at the moment it's like a writing assistant, right? And it's still super early in the technologies trajectory. I think it's fascinating and it's interesting but its impact is still really limited. I think in the next year, like within the next eight months, we're going to see some major changes. You've already seen the very first hints of that with stuff like Bing Chat, where you augment these dialogue models with an external knowledge base. So now the models can be kept up to date to the millisecond, right? Because they can search the web and they can see events that happened a millisecond ago. But that's still limited in the sense that when you ask the question, what can these models actually do? Well they can just write text back at you. That's the extent of what they can do. And so the real project, the real effort, that I think we're all working towards is actually taking action. So what happens when you give these models the ability to use tools, to use APIs? What can they do when they can actually affect change out in the real world, beyond just streaming text back at the user? I think that's the really exciting piece. >> Okay, so I wanted to tee that up early in the segment 'cause I want to get into the customer applications. We're seeing early adopters come in, using the technology because they have a lot of data, they have a lot of large language model opportunities and then there's a big fast follower wave coming behind it. I call that the people who are going to jump in the pool early and get into it. They might not be advanced. Can you guys share what customer applications are being used with large language and vision models today and how they're using it to transform on the early adopter side, and how is that a tell sign of what's to come? 
>> You know, one of the things we have been seeing both with the text models that Aidan talked about as well as the vision models that stability.ai does, Tom, is customers are really using it to change the way you interact with information. You know, one example of a customer that we have, is someone who's kind of using that to query customer conversations and ask questions like, you know, "What was the customer issue? How did we solve it?" And trying to get those kinds of insights that was previously much harder to do. And then of course software is a big area. You know, generating software, making that, you know, just deploying it in production. Those have been really big areas that we have seen customers start to do. You know, looking at documentation, like instead of you know, searching for stuff and so on, you know, you just have an interactive way, in which you can just look at the documentation for a product. You know, all of this goes to where we need to take the technology. One of which is, you know, the models have to be there but they have to work reliably in a production setting at scale, with privacy, with security, and you know, making sure all of this is happening, is going to be really key. That is what, you know, we at AWS are looking to do, which is work with partners like stability and others and in the open source and really take all of these and make them available at scale to customers, where they work reliably. >> Tom, Aidan, what's your thoughts on this? Where are customers landing on this first use cases or set of low-hanging fruit use cases or applications? >> Yeah, so I think like the first group of adopters that really found product market fit were the copywriting companies. So one great example of that is HyperWrite. Another one is Jasper. And so for Cohere, that's the tip of the iceberg, like there's a very long tail of usage from a bunch of different applications. HyperWrite is one of our customers, they help beat writer's block by drafting blog posts, emails, and marketing copy. We also have a global audio streaming platform, which is using us the power of search engine that can comb through podcast transcripts, in a bunch of different languages. Then a global apparel brand, which is using us to transform how they interact with their customers through a virtual assistant, two dozen global news outlets who are using us for news summarization. So really like, these large language models, they can be deployed all over the place into every single industry sector, language is everywhere. It's hard to think of any company on Earth that doesn't use language. So it's, very, very- >> We're doing it right now. We got the language coming in. >> Exactly. >> We'll transcribe this puppy. All right. Tom, on your side, what do you see the- >> Yeah, we're seeing some amazing applications of it and you know, I guess that's partly been, because of the growth in the open source community and some of these applications have come from there that are then triggering this secondary wave of innovation, which is coming a lot from, you know, controllability and explainability of the model. But we've got companies like, you know, Jasper, which Aidan mentioned, who are using stable diffusion for image generation in block creation, content creation. We've got Lensa, you know, which exploded, and is built on top of stable diffusion for fine tuning so people can bring themselves and their pets and you know, everything into the models. 
So we've now got fine tuned stable diffusion at scale, which is democratized, you know, that process, which is really fun to see your Lensa, you know, exploded. You know, I think it was the largest growing app in the App Store at one point. And lots of other examples like NightCafe and Lexica and Playground. So seeing lots of cool applications. >> So much applications, we'll probably be a customer for all you guys. We'll definitely talk after. But the challenges are there for people adopting, they want to get into what you guys see as the challenges that turn into opportunities. How do you see the customers adopting generative AI applications? For example, we have massive amounts of transcripts, timed up to all the videos. I don't even know what to do. Do I just, do I code my API there. So, everyone has this problem, every vertical has these use cases. What are the challenges for people getting into this and adopting these applications? Is it figuring out what to do first? Or is it a technical setup? Do they stand up stuff, they just go to Amazon? What do you guys see as the challenges? >> I think, you know, the first thing is coming up with where you think you're going to reimagine your customer experience by using generative AI. You know, we talked about Ada, and Tom talked about a number of these ones and you know, you pick up one or two of these, to get that robust. And then once you have them, you know, we have models and we'll have more models on AWS, these large language models that Aidan was talking about. Then you go in and start using these models and testing them out and seeing whether they fit in use case or not. In many situations, like you said, John, our customers want to say, "You know, I know you've trained these models on a lot of publicly available data, but I want to be able to customize it for my use cases. Because, you know, there's some knowledge that I have created and I want to be able to use that." And then in many cases, and I think Aidan mentioned this. You know, you need these models to be up to date. Like you can't have it staying. And in those cases, you augmented with a knowledge base, you know you have to make sure that these models are not hallucinating. And so you need to be able to do the right kind of responsible AI checks. So, you know, you start with a particular use case, and there are a lot of them. Then, you know, you can come to AWS, and then look at one of the many models we have and you know, we are going to have more models for other modalities as well. And then, you know, play around with the models. We have a playground kind of thing where you can test these models on some data and then you can probably, you will probably want to bring your own data, customize it to your own needs, do some of the testing to make sure that the model is giving the right output and then just deploy it. And you know, we have a lot of tools. >> Yeah. >> To make this easy for our customers. >> How should people think about large language models? Because do they think about it as something that they tap into with their IP or their data? Or is it a large language model that they apply into their system? Is the interface that way? What's the interaction look like? >> In many situations, you can use these models out of the box. But in typical, in most of the other situations, you will want to customize it with your own data or with your own expectations. So the typical use case would be, you know, these are models are exposed through APIs. 
So the typical use case would be, you know, you're using these APIs a little bit for testing and getting familiar, and then there will be an API that will allow you to train this model further on your data. So you use that API, you know, and make sure you've augmented the knowledge base. So then you use those APIs to customize the model and then just deploy it in an application. You know, like Tom was mentioning, there are a number of companies that are using these models. So once you have it, then you know, you again use an endpoint API and use it in an application. >> All right, I love the example. I want to ask Tom and Aidan, because most of my experience with Amazon Web Services in 2007, I would stand up EC2, put my code on there, play around, and if it didn't work out, I'd shut it down. Is that a similar dynamic we're going to see with the machine learning, where developers just kind of log in and stand up infrastructure and play around and then have a cloud-like experience? >> So I can go first. So I mean, we obviously, with AWS, work really closely with the SageMaker team, a fantastic platform there for ML training and inference. And you know, going back to your point earlier, you know, where the data is, is hugely important for companies. Many companies bringing their models to their data in AWS, on-premise for them, is hugely important. Having the models be, you know, open source makes them explainable and transparent to the adopters of those models. So, you know, we are really excited to work with the SageMaker team over the coming year to bring companies to that platform and make the most of our models. >> Aidan, what's your take on developers? Do they just need to have a team in place, if we want to interface with you guys? Let's say, can they start learning? What do they got to do to set up? >> Yeah, so I think for Cohere, our product makes it much, much easier for people to get started and start building, it solves a lot of the productionization problems. But of course with SageMaker, like Tom was saying, I think that lowers the barrier even further because it solves problems like data privacy. So I want to underline what Bratin was saying earlier around when you're fine tuning or when you're using these models, you don't want your data being incorporated into someone else's model. You don't want it being used for training elsewhere. And so the ability to solve for enterprises that data privacy and that security guarantee has been hugely important for Cohere, and that's very easy to do through SageMaker. >> Yeah. >> But the barriers for using this technology are coming down super quickly. And so for developers, it's just becoming completely intuitive. I love this, there's this quote from Andrej Karpathy. He was saying like, "It really wasn't on my 2022 list of things to happen that English would become, you know, the most popular programming language." And so the barrier is coming down- >> Yeah. >> Super quickly and it's exciting to see. >> It's going to be awesome for all the companies here, and then we'll do more, we're probably going to see an explosion of startups, already seeing that, the maps, ecosystem maps, the landscape maps are happening. So this is happening and I'm convinced it's not yesterday's chatbot, it's not yesterday's AIOps. It's a whole other ballgame. So I have to ask you guys, for the final question before we kick off the companies showcasing here: how do you guys gauge success of generative AI applications? 
Is there a lens to look through and say, okay, how do I see success? It could be just getting a win, or is it a bigger picture? Bratin, we'll start with you. How do you gauge success for generative AI? >> You know, ultimately it's about bringing business value to our customers, and making sure that those customers are able to reimagine their experiences by using generative AI. Now the way to get there is, of course, to deploy those models in a safe, effective manner, and ensuring that all of the robustness and the security guarantees and the privacy guarantees are all there. And we want to make sure that this transitions from something that's great demos to actual at-scale products, which means making them work reliably all of the time, not just some of the time. >> Tom, what's your gauge for success? >> Look, I think we're seeing a completely new form of ways to interact with data, to make data intelligent, and directly to bring in new revenue streams into business. So if businesses can use our models to leverage that and generate completely new revenue streams and ultimately bring incredible new value to their customers, then that's fantastic. And we hope we can power that revolution. >> Aidan, what's your take? >> Yeah, reiterating Bratin and Tom's point, I think that value in the enterprise and value in market is, like, huge, you know, it's the goal that we're striving towards. I also think that, you know, the value to consumers and actual users, and the transformation of the surface area of technology to create experiences like ChatGPT that are magical, and it's the first time in human history we've been able to talk to something compelling that's not a human. I think that in itself is just extraordinary and so exciting to see. >> It really brings up a whole other category of markets. B2B, B2C, it's B2D, business to developer. Because I think this is kind of the big trend, the consumers have to win. The developers coding the apps, it's a whole other sea change. Reminds me how everyone used the "Moneyball" movie as an example during the big data wave, then, you know, the value of data. There's a scene in "Moneyball" at the end, where Billy Beane's getting the offer from the Red Sox, and the Red Sox owner says to him, "If every team's not rebuilding their teams based upon your model, they'll be dinosaurs." I think that's the same with AI here. Every company will need to think about their business model and how they operate with AI. So it'll be a great run. >> Completely agree. >> It'll be a great run. >> Yeah. >> Aidan, Tom, thank you so much for sharing about your experiences at your companies, and congratulations on your success, and it's just the beginning. And Bratin, thanks for coming on representing AWS. And thank you, appreciate what you do. Thank you. >> Thank you, John. Thank you, Aidan. >> Thank you John. >> Thanks so much. >> Okay, let's kick off season three, episode one. I'm John Furrier, your host. Thanks for watching. (light airy music)
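As a concrete companion to Bratin's description of testing a model through an API, customizing it, and then calling an endpoint from an application, here is a hedged Python sketch that deploys an open Hugging Face model to a SageMaker real-time endpoint and invokes it. The model ID, task, container versions, and instance type are illustrative assumptions, not choices made by the panelists; check the current SageMaker documentation for supported version combinations.

import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()  # assumes this runs in SageMaker with an execution role attached

# Placeholder model from the Hugging Face Hub for the summarization use case discussed earlier.
hub_env = {
    "HF_MODEL_ID": "sshleifer/distilbart-cnn-12-6",
    "HF_TASK": "summarization",
}

model = HuggingFaceModel(
    env=hub_env,
    role=role,
    transformers_version="4.26",  # assumed container versions
    pytorch_version="1.13",
    py_version="py39",
)

# Stand up a real-time endpoint, then call it the way an application would.
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
print(predictor.predict({"inputs": "The customer reported a billing error on their March invoice. "
                                   "Support confirmed the duplicate charge and issued a refund within two days."}))

predictor.delete_endpoint()  # tear the endpoint down so it does not keep billing

The same endpoint can then be called from application code over the SageMaker runtime API, which is the "use an endpoint API and use it in an application" pattern described above.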

Published Date : Mar 9 2023


Heather Ruden & Jenni Troutman | International Women's Day


 

(upbeat music) >> Hello, everyone. Welcome to theCUBE's special presentation of International Women's Day. I'm John Furrier, host of theCUBE. Jenni Troutman is here, Director of Products and Services, and Training and Certification at AWS, and Heather Ruden, Director of Education Programs, Training and Certification. Thanks for coming on theCUBE and for the International Women's Day special program. >> Thanks so much for having us. >> So, I'll just get it out of the way. I'm a big fan of what you guys do. I've been shouting at the top of my lungs, "It's free. Get cloud training and you'll have a six figure job." Pretty much. I'm over amplifying. But this is really a big opportunity in the industry, education and the skills gap, and the skill velocities that's changing. New roles are coming on around cloud native, cloud native operators, cybersecurity. There's so much excitement going on around the industry, and all these open positions, and they need new talent. So you can't get a degree for some of these things. So, nope, it doesn't matter what school you went to, everyone's kind of level. This is a really big deal. So, Heather, share with us your thoughts as well on this topic. Jenni, you too. Like, where are you guys at? 'Cause this is a big opportunity for women and anyone to level up in the industry. >> Absolutely. So I'll jump in and then I'll hand it over to Jenni. We're your dream team here. We can talk about both sides of this. So I run a set of programs here at AWS that are really intended to help build the next generation of cloud builders. And we do that with a variety of programs, whether it is targeting young learners from kind of 12 and up. We have AWS GetIT that is designed to get women ambassadors or women mentors in front of girls 12 to 14 and get them curious about a career in STEM. We also have a program that is all digital online. It's available in 11 languages. It's got hundreds of courses. That's called AWS Educate that is designed to do exactly what you just talked about, expose the opportunities and start building cloud skills for learners at age 13 and up. They can go online and register with an email and start learning. We want them to understand not only what the opportunity is for them, but the ways that they can help influence and bring more diversity and more inclusion and into the cloud technology space, and just keep building all those amazing builders that we need here for our customers and partners. And those are the programs that I manage, but Jenni also has an amazing program, a set of programs. And so I'll hand it over to her as you get into the professional side of this things. >> So Jenni, you're on the product side. You've got the keys to the kingdom on all the materials and shaping it. What's your view on this? 'Cause this is a huge opportunity and it's always changing. What's the latest and greatest? >> It is a massive opportunity and to give you a sense, there was a study in '21 where IT executives said that talent availability is the biggest challenge to emerging tech adoption. 64% of IT executives said that up from only 4% the year before. So the challenge is growing really fast, which for everyone that's ready to go out there and learn and try something new is a massive opportunity. And that's really why I'm here. We provide all kinds of learning experiences for people across different cloud technologies to be able to not only gain the knowledge around cloud, but also the confidence to be able to build in the cloud. 
And so we look across different learner levels, different roles, different opportunities, and we provide those experiences where people can actually get hands-on in a totally risk-free environment and practice building in the cloud so they can go and be ready to get their certifications, their AWS certifications, give them the credentials to be able to show an employer they can do it, and then go out and get these jobs. It's really exciting. And we go kind of end to end from the very beginning. What is cloud? I want to know what it is all the way through to I can prove that I can build in the cloud and I'm ready for a job. >> So Jenni, you nailed that confidence word. I think I want to double click on that. And Heather, you talked about you're the dream team. You guys, you're the go to market, you bring this to the marketplace. Jenni, you get the products. This is the key, but to me the the international women days angle is, is that what I hear over and over again is that, "It's too technical. I'm not qualified." It can be scary. We had a guest on who has two double E degrees in robotics and aerospace and she's hard charging. She almost lost her confidence twice she said in her career. But she was hard charging. It can get scary, but also the ability to level up fast is just as good. So if you can break through that confidence and keep the curiosity and be a builder, talk about that dynamic 'cause you guys are in the middle of it, you're in the industry, how do you handle that? 'Cause I think that's a big thing that comes up over and over again. And confidence is not just women, it's men too. But women can always, that comes up as a theme. >> It is. It is a big challenge. I mean, I've struggled with it personally and I mentor a lot of women and that is the number one challenge that is holding women back from really being able to advance is the confidence to step out there and show what they can do. And what I love about some of the products we've put out recently is we have AWS Skill Builder. You can go online, you can get all kinds of free core training and if you want to go deeper, you can go deeper. And there's a lot of different options on there. But what it does is not only gives you that based knowledge, but you can actually go in. We have something called AWS Labs. You can go in and you can actually practice on the AWS console with the services that people are using in their jobs every day without any risk of doing something that is going to blow up in your face. You're not going to suddenly get this big AWS bill. You're not going to break something that's out there running. You just go in. It's your own little environment that gets wiped when you're done and you can practice. And there's lots of different ways to learn as well. So if you go in there and you're watching a video and to your point you're like, "Oh my gosh, this is too technical. I can't understand it. I don't know what I'm going to go do." You can go another route. There's something called AWS Cloud Quest. It's a game. You go in and it's like you're gaming and it walks you through. You're actually in a virtual world. You're walking through and it's telling you, "Hey, go build this and if you need help, here's hints and here's tips." And it continues to build on itself. So you're learning and you're applying practical skills and it's at your own pace. You don't have to watch somebody else talking that is going at a pace that maybe accelerates beyond what you're ready. 
You can do it at your own pace, you can redo it, you can try it again until you feel confident that you know it and you're really ready to move on to the next thing. Personally, I find that hugely valuable. I go in and do these myself and I sit there and I have a lot of engineers on my team, very smart people. And I have my own imposter syndrome. I get nervous to go talk to them. Like, are they going to think I'm totally lost? And so I go in and I learn some of this myself by experiment. And then I feel like, okay, now I can go ask them some intelligent questions and they're not going to be like, "Oh gosh, my leader is totally unaware of what we're doing." And so I think that we all struggle with confidence. I think everybody does, but I see it especially in women as I mentor them. And that's what I encourage them to do is go and on your own time, practice a bit, get a little bit of experience and once you feel like you can throw a couple words out there that you know what they mean and suddenly other people look at you like, "Oh, she knows what she's talking about." And you can kind of get past that feeling. >> Well Jenni, you nailed it. Heather, she just mentioned she's in the job and she's going and she's still leveling up. That's the end when you're in, but it's also the barriers to entry are lowering. You guys are doing a good job of getting people in, but also growing fast too. So there's two dynamics at play here. How do people do this? What's the playbook? Because I think that's really key, easy to get in. And then once you're in, you can level up fast at your own pace to ride the wave. And then there's new stuff coming. I mean, every re:Invent there's 5,000 announcements. So it's like zillion new things and AI taught now. >> re:Invent is a perfect example of that ongoing imposter syndrome or confidence check for all of us. I think something that that Jenni said too is we really try and meet learners where they are and make sure that we have the support, whether it's accessibility requirements or we have the content that is built for the age that we're talking to, or we have a workforce development program called re/Start that is for people that have very little tech experience and really want to talk about a career in cloud, but they need a little bit more handholding. They need a combination of instructor-led and digital. But then we have AWS educators, I mentioned. If you want to be more self-directed, all of these tools are intended to work well together and to be complimentary and to take you on a journey as a learner. And the more skills you have, the more you increase your knowledge, the more you can take on more. But meeting folks where they are with a variety of programs, tools, languages, and accessibility really helps ensure that we can do that for learners throughout the world. >> That's awesome. Let's get into it. Let's get into the roadmaps of people and their personas. And you guys can share the programs that you have and where people could fit in. 'Cause this comes up a lot when I talk to folks. There's the young person who's I'm a gamer or whatever, I want to get a job. I'm in high school or an elementary or I want to tinker around or I'm in college or I'm learning, I'm an entry level kind of entry. Then you have the re-skilling. I'm going to change my careers, I'm kind of bored, I want to do something compelling. How do I get into the cloud game? And then the advanced re-skill is I want to get into cyber and AI and then there's other. Could you break down? 
Did I get that right or did I miss anything? And then what's available for those kind of lanes? So those persona lanes? >> Well, let's see, I could start with maybe the high schooler stuff and then we can bring Jenni in as well. I would say a great place to start for anyone is aws.amazon.com/training. That's going to give them the full suite of options that they could take on. If you're in high school, you can go onto AWS Educate. All you need is an email. And if you're 13 years and older, you can start exploring the types of jobs that are available in the cloud and you could start taking some introductory classes. You can do some of those labs in a safe environment that Jenni mentioned. That's a great place to start. If you are in an environment where you have an educator that is willing to go on this with you, this journey with you, we have this AWS GetIT program that is, again, educator-led. So it's an afterschool or it's an a program where we match mentors and students up with cloud professionals and they do some real-time experimentation. They build an app, they work on things together, and do a presentation at the end. The other thing I would say too is that if you are in a university, I would double check and see if the AWS Academy curriculum is already in your university. And if so, explore some of those classes there. We have instructor-led, educator-ready. course curriculum that we've designed that help people get to those certifications and get closer to those jobs and as well as hopefully then lead people right into skill builder and all the things that Jenni talked about to help them as they start out in a professional environment. >> So is the GetIT, is that an instructor-led that the person has to find someone for? Or is this available for them? >> It is through teachers. It's through educators. We are in, we've reached over 19,000 students. We're available in eight countries. There are ways for educators to lead this, but we want to make sure that we are helping the kids be successful and giving them an educator environment to do that. If they want to do it on their own, then they can absolutely go through AWS Educate or even and to explore where they want to get started. >> So what about someone who's educated in their middle of their career, might want to switch from being a biologist to a cloud cybersecurity guru or a cloud native operator? >> Yeah, so in that case, AWS re/Start is one of the great program for them to explore. We run that program with collaborating organizations in 160 cities in 80 countries throughout the world. That is a multi-week cohort-based program where we do take folks through a very clear path towards certification and job skilling that will help them get into those opportunities. Over 98% of the cohorts, the graduates of those cohorts get an interview and are hopefully on their path to getting a job. So that really has global reach. The partnership with collaborating organizations helps us ensure that we find communities that are often unreached by cloud skills training and we really work to keep a diverse focus on those cohorts and bring those folks into the cloud. >> Okay. Jenni, you've got the Skill Builder action here. What's going on on your side? Because you must have to manage all the change. I mean, AI is hot right now. I'm sure you're cranking away on curriculum and content for SageMaker, large language models, computer vision, cybersecurity. >> We do. There are a lot of options. >> How is your world? 
Tell us about what people can take out of way from your side. >> Yeah. So a great way to think about it is if they're already out in the workforce or they're entering the workforce, but they are technical, have technical skills is what are the roles that are interesting in the technologies that are interesting. Because the way we put out our training and our certifications is aligned to paths. So if you're look interested in a specific role. If you're interested in architecting a cloud environment or in security as you mentioned, and you want to go deep in security, there are AWS certifications that give you that. If you achieve them, they're very difficult. But if you work to them and achieve them, they give you the credential that you can take to an employer and say, "Look, I can do this job." And they are in very high demand. In fact that's where if you look at some of the publications that have come out, they talk about, what are people making if they have different certifications? What are the most in-demand certifications that are out there? And those are what help people get jobs. And so you identify what is that role or that technology area I want to learn. And then you have multiple options for how you build those skills depending on how you want to learn. And again, that's really our focus, is on providing experiences based on how people learn and making it accessible to them. 'Cause not everybody wants to learn in the same way. And so there is AWS Skill Builder where people can go learn on their own that is really great particularly for people who maybe are already working and have to learn in the evenings, on the weekends. People who like to learn at their own pace, who just want to be hands-on, but are self-starters. And they can get those whole learning plans through there all the way aligned to the certification and then they can go get their certification. There's also classroom training. So a lot of people maybe want to do continuous learning through an online, but want to go really deep with an expert in the room and maybe have a more focused period of time if they can go for a couple days. And so they can do classroom training. We provide a lot of classroom training. We have partners all over the globe who provide classroom training. And so there's that and what we find to be the most powerful is when you couple the two. If you can really get deep, you have an expert, you can ask questions, but first before you go do that, you get some of that foundational that you've kind of learned on your own. And then after you go back and reinforce, you go back online, you try out things that maybe you learned in the classroom, but you didn't quite, you hadn't used it enough yet to quite know how to do it. Now you can go back and actually use it, experiment and play around. And so we really encourage that kind of, figure out what are some areas you're interested in, go learn it and then go get a job and continue to learn because then once you learn that first area, you start to build confidence in it. Suddenly other areas become interesting. 'Cause as you said, cloud is changing fast. And once you learn a space, first of all you have to keep going back to stay up on it as it changes. But you quickly find that there are other areas that are really interesting too. >> I've observed that the training side, it's just like cloud itself, it's very agile. You can get hands-on quickly, you don't need to take a class, and then get in weeks later. You're in it like it's real time. 
So you're immersed in gamification and all kinds of ways to funnel into the either advanced tracks and certification. So you guys do a great job and I want to give you props for that and a shout out. The question I have for you guys is can you scope the opportunity for these certifications and opportunities for women in particular? What are some of the top jobs pulling down? Scope out the opportunity because I think when people hear that they really fall out of their chair, they go, "Wow, I didn't know I could make $200,000 doing cybersecurity." Well, yeah or maybe more. I just made the number, I don't actually know, but like I know people do make that much in cyber, but there are huge financial opportunities with certifications and education. Can you scope that order of magnitude? Can you share any data? >> Yeah, so in the US they certainly are. Certifications on average aligned to six digit type jobs. And if you go out and do a search, there are research studies out there that are refreshed every year that say what are the top IT industry certifications and how much money do they make? And the reason I don't put a number out there is because it's constantly changing and in fact it keeps going up, >> It's going up, not going down. >> But I would encourage people to do that quick search. What are the top IT industry certifications. Again, based on the country you're in, it makes a difference. But if you're US, there's a lot of data out there for the US and then there is some for other countries as well around how much on average people make. >> Do you list like the higher level certifications, stack rank them in terms of order? Like say, I'm a type A personnel, I want to climb Mount Everest, I want to get the highest level certification. How do I know that? Is it like laddered up or is like how do you guys present that? >> Yeah, so we have different types of certifications. There is a foundational, which we call the cloud practitioner. That one is more about just showing that you know something about cloud. It's not aligned to a specific job role. But then we have what we call associate level certifications, which are aligned to roles. So there's the solutions architect, cloud developer, so developer operations. And so you can tell by the role and associate is kind of that next level. And then the roles often have a professional level, which is even more advanced. And basically that's saying you're kind of an Uber expert at that point. And then there are technology specialties, which are less about a specific role, although some would argue a security technology specialty might align very well to a security role, but they're more about showing the technology. And so typically, it goes foundational, advanced, professional, and then the specialties are more on the side. They're not aligned, but they're deep. They're deep within that area. >> So you can go dig and pick your deep dive and jump into where you're comfortable. Heather, talk about the commitment in terms of dollars. I know Amazon's flaunted some numbers like 30 million or something, people they want to have trained, hundreds of millions of dollars in investment. This is key, obviously, more people trained on cloud, more operators, more cloud usage, obviously. I see the business connection. What's the women relationship to the numbers? Or what the experience is? How do you guys see that? Obviously International Women's Day, get the confidence, got the curiosity. You're a builder, you're in. It's that easy. 
>> It doesn't always feel that way, I'm sure to everybody, but we'd like to think that it is. Amazon and AWS do invest hundreds of millions of dollars in free training every year that is accessible to everyone out there. I think that sometimes the hardest obstacles to get overcome are getting started and we try and make it as easy as possible to get started with the tools that we've talked about already today. We run into plenty of cohorts of women as part of our re/Start program that are really grateful for the opportunity to see something, see a new way of thinking, see a new opportunity for them. We don't necessarily break out our funding by women versus men. We want to make sure that we are open and diverse for everybody to come in and get the training that they need to. But we definitely want to make sure that we are accessible and available to women and all genders outside of the US and inside the US. >> Well, I know the number's a lot lower than they should be and that's obviously why we're promoting this heavily. There's a lot more interest I see in tech. So digital transformation is gender neutral. I mean, it's like the world eats software and uses software, uses the cloud. So it has to get 50/50 in my opinion. So you guys do a great job. Now that we're done kind of promoting Amazon, which I wanted to do 'cause I think it's super important. Let's talk about you guys. What got you guys involved in tech? What was the inspiration and share some stories about your experiences and advice for folks watching? >> So I've always been in traditionally male dominated roles. I actually started in aviation and then moved to tech. And what I found was I got a mentor early on, a woman who was senior to me and who was kind of who I saw as the smartest person out there. She was incredibly smart, she was incredibly kind, and she was always lifting women up. And I kind of latched onto her and followed her around and she was such an amazing mentor. She brought me from throughout tech, from company to company, job to job, was always positioning me in front of other people as the go-to person. And I realized, "Wow, I want to be like her." And so that's been my focus as well in tech is you can be deeply technical in tech or you can be not deeply technical and be in tech and you can be successful both ways, but the way you're going to be most successful is if you find other people, build them up and help put them out in front. And so I personally love to mentor women and to put them in places where they can feel comfortable being out in front of people. And that's really been my career. I have tried to model her approach as much as I can. >> That's a really interesting observation. It's the pattern we've been seeing in all these interviews for the past two years of doing the International Women's Day is that networking, mentoring and sponsorship are one thing. So it's all one thing. It's not just mentoring. It's like people think, "Oh, just mentoring. What does that mean? Advice?" No, it's sponsorship, it's lifting people up, creating a keiretsu, creating networks. Really important. Heather, what's your experience? >> Yeah, I'm sort of the example of somebody who never thought they'd be in tech, but I happened to graduate from college in the Silicon Valley in the early nineties and next thing you know, it's more than a couple years later and I'm deeply in tech and I think it when we were having the conversation about confidence and willingness to learn and try that really spoke to me as well. 
I think I had to get out of my own way sometimes and just be willing to not be the smartest person in the room and just be willing to ask a lot of questions. And with every opportunity to ask questions, I think somebody, I ended up with good mentors, male and female, that saw the willingness to ask questions and the willingness to be humble in my approach to learning. And that really helped. I'm also very aware that nobody's journey is the same and I need to create an environment on my team and I need to be a role model within AWS and Amazon for allowing people to show up in the way that they're going to be most successful. And sometimes that will mean giving them learning opportunities. Sometimes that will be hooking them up with a mentor. Sometimes that will be giving them the freedom to do what they need for their family or their personal life. And modeling that behavior regardless of gender has always been how I choose to show up and what I ask my leaders to do. And the more we can do that, I've seen the team been able to grow and flourish in that way and support our entire team. >> I love that story. You also have a great leader, Maureen Lonergan, who I've met many conversations with, but also it starts at the top. Andy Jassy who can come across, he's kind of technical, he's dirty, he's a builder mentality. He has first principles and you're bringing up this first principles concept and whether that's passing it forward, what you've learned, having first principles helps in an organization. Can you guys talk about what that's like at your company? 'Cause everyone's different. And sometimes whether, and I sometimes I worry about what I say, but I also have my first principles. So talk about how principles matter in how you guys interface with others and letting people be their authentic self. >> Yeah, I'll jump in Jenni and then you can. The Amazon leadership principles are super important to how we interact with each other and it really does provide a set of guidelines for how we work with each other and how we work for our customers and with our partners. But most of all it gives us a common language and a common set of expectations. And I will be honest, they're not always easy. When you come from an environment that tends to be less open to feedback and less open to direct conversations than you find at Amazon, it could take a while to get used to that, but for me at least, it was extremely empowering to have those tools and those principles as guidance for how to operate and to gain the confidence in using them. I've also been able to participate in hundreds and hundreds of interviews in the time that I've been here as part of an interview team of bar raisers. I think that really helps us understand whether or not folks are going to be successful at AWS and at Amazon and helps them understand if they're going to be able to be successful. >> Bar raising is an Amazon term and it's gender neutral, right Jenni? >> It is gender neutral. >> Bar is a bar, it raises. >> That's right. And it's funny, we say that our culture here is peculiar. And when I started, I had been in consulting for several years, so I worked with a lot of different companies in tech and so I thought I'd seen everything and I came here and I went, "Hmm." I see what they mean by peculiar. It is very different environment. >> In the fullness of time, it'll all work out. >> That's right, that's right. 
Well and it's funny because when you first started, it's a lot to figure out to how to operate in an environment where people do use a 16 leadership principles. I've worked at a lot of companies with three or four core values and nobody can state those. We could state all 16 leadership principles and we use them in our regular everyday dialogue. That is an awkward thing when you first come to have people saying, "Oh, I'm going to use bias for action in this situation and I'm going to go move fast. And they're actually used in everyday conversations. But after a couple years suddenly you realize, "Oh, I'm doing that." And maybe even sometimes at the dinner table I'm doing that, which can get to be a bit much. But it creates an environment where we can all be different. We can all think differently. We can all have different ways of doing things, but we have a common overall approach to what we're trying to achieve. And that's really, it gives us a good framework for that. >> Jenni, it's great insight. Heather, thank you so much for sharing your stories. We're going to do this not once a year. We're going to continue this Women in Tech program every quarter. We'll check in with you guys and find out what's new. And thank you for what you do. We appreciate that getting the word out and really is an opportunity for everyone with education and cloud and it's only going to get more opportunities at the edge in AI and so much more tech. Thank you for coming on the program. >> Thank you for having us. >> Thanks, John. >> Thank you. That's the International Women's Day segment here with leaders from AWS. I'm John Furrier. Thanks for watching. (upbeat musiC)

Published Date : Mar 3 2023

Adam Wenchel, Arthur.ai | CUBE Conversation


 

(bright upbeat music) >> Hello and welcome to this Cube Conversation. I'm John Furrier, host of theCUBE. We've got a great conversation featuring Arthur AI. I'm your host. I'm excited to have Adam Wenchel who's the Co-Founder and CEO. Thanks for joining us today, appreciate it. >> Yeah, thanks for having me on, John, looking forward to the conversation. >> I got to say, it's been an exciting world in AI or artificial intelligence. Just an explosion of interest kind of in the mainstream with the language models, which people don't really get, but they're seeing the benefits of some of the hype around OpenAI. Which kind of wakes everyone up to, "Oh, I get it now." And then of course the pessimism comes in, all the skeptics are out there. But this breakthrough in generative AI field is just awesome, it's really a shift, it's a wave. We've been calling it probably the biggest inflection point, then the others combined of what this can do from a surge standpoint, applications. I mean, all aspects of what we used to know is the computing industry, software industry, hardware, is completely going to get turbo. So we're totally obviously bullish on this thing. So, this is really interesting. So my first question is, I got to ask you, what's you guys taking? 'Cause you've been doing this, you're in it, and now all of a sudden you're at the beach where the big waves are. What's the explosion of interest is there? What are you seeing right now? >> Yeah, I mean, it's amazing, so for starters, I've been in AI for over 20 years and just seeing this amount of excitement and the growth, and like you said, the inflection point we've hit in the last six months has just been amazing. And, you know, what we're seeing is like people are getting applications into production using LLMs. I mean, really all this excitement just started a few months ago, with ChatGPT and other breakthroughs and the amount of activity and the amount of new systems that we're seeing hitting production already so soon after that is just unlike anything we've ever seen. So it's pretty awesome. And, you know, these language models are just, they could be applied in so many different business contexts and that it's just the amount of value that's being created is again, like unprecedented compared to anything. >> Adam, you know, you've been in this for a while, so it's an interesting point you're bringing up, and this is a good point. I was talking with my friend John Markoff, former New York Times journalist and he was talking about, there's been a lot of work been done on ethics. So there's been, it's not like it's new. It's like been, there's a lot of stuff that's been baking over many, many years and, you know, decades. So now everyone wakes up in the season, so I think that is a key point I want to get into some of your observations. But before we get into it, I want you to explain for the folks watching, just so we can kind of get a definition on the record. What's an LLM, what's a foundational model and what's generative ai? Can you just quickly explain the three things there? >> Yeah, absolutely. So an LLM or a large language model, it's just a large, they would imply a large language model that's been trained on a huge amount of data typically pulled from the internet. And it's a general purpose language model that can be built on top for all sorts of different things, that includes traditional NLP tasks like document classification and sentiment understanding. 
But the thing that's gotten people really excited is it's used for generative tasks. So, you know, asking it to summarize documents or asking it to answer questions. And these aren't new techniques, they've been around for a while, but what's changed is just this new class of models that's based on new architectures. They're just so much more capable that they've gone from sort of science projects to something that's actually incredibly useful in the real world. And there's a number of companies that are making them accessible to everyone so that you can build on top of them. So that's the other big thing is, this kind of access to these models that can power generative tasks has been democratized in the last few months and it's just opening up all these new possibilities. And then the third one you mentioned foundation models is sort of a broader term for the category that includes LLMs, but it's not just language models that are included. So we've actually seen this for a while in the computer vision world. So people have been building on top of computer vision models, pre-trained computer vision models for a while for image classification, object detection, that's something we've had customers doing for three or four years already. And so, you know, like you said, there are antecedents to like, everything that's happened, it's not entirely new, but it does feel like a step change. >> Yeah, I did ask ChatGPT to give me a riveting introduction to you and it gave me an interesting read. If we have time, I'll read it. It's kind of, it's fun, you get a kick out of it. "Ladies and gentlemen, today we're a privileged "to have Adam Wenchel, Founder of Arthur who's going to talk "about the exciting world of artificial intelligence." And then it goes on with some really riveting sentences. So if we have time, I'll share that, it's kind of funny. It was good. >> Okay. >> So anyway, this is what people see and this is why I think it's exciting 'cause I think people are going to start refactoring what they do. And I've been saying this on theCUBE now for about a couple months is that, you know, there's a scene in "Moneyball" where Billy Beane sits down with the Red Sox owner and the Red Sox owner says, "If people aren't rebuilding their teams on your model, "they're going to be dinosaurs." And it reminds me of what's happening right now. And I think everyone that I talk to in the business sphere is looking at this and they're connecting the dots and just saying, if we don't rebuild our business with this new wave, they're going to be out of business because there's so much efficiency, there's so much automation, not like DevOps automation, but like the generative tasks that will free up the intellect of people. Like just the simple things like do an intro or do this for me, write some code, write a countermeasure to a hack. I mean, this is kind of what people are doing. And you mentioned computer vision, again, another huge field where 5G things are coming on, it's going to accelerate. What do you say to people when they kind of are leaning towards that, I need to rethink my business? >> Yeah, it's 100% accurate and what's been amazing to watch the last few months is the speed at which, and the urgency that companies like Microsoft and Google or others are actually racing to, to do that rethinking of their business. And you know, those teams, those companies which are large and haven't always been the fastest moving companies are working around the clock. 
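To make the "build on top of a pre-trained model" idea concrete, the sketch below shows the kind of generative task Wenchel describes, document summarization, using the open-source Hugging Face transformers library. The checkpoint name and length settings are illustrative assumptions, not anything endorsed in the conversation.

```python
# Minimal sketch: a generative task (summarization) on top of a pre-trained
# foundation model, rather than training anything from scratch.
# Assumes `pip install transformers torch`; the checkpoint is an example.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

document = (
    "Large language models are trained on huge amounts of text pulled from the "
    "internet and can then be built on for many downstream tasks, including "
    "document classification, question answering, and summarization."
)

# Length limits are illustrative knobs, not recommended values.
result = summarizer(document, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```

Swapping the task string and checkpoint gives the classification, question answering, and computer vision variants mentioned in the answer.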
And the pace at which they're rolling out LLMs across their suite of products is just phenomenal to watch. And it's not just the big, the large tech companies as well, I mean, we're seeing the number of startups, like we get, every week a couple of new startups get in touch with us for help with their LLMs and you know, there's just a huge amount of venture capital flowing into it right now because everyone realizes the opportunities for transforming like legal and healthcare and content creation in all these different areas is just wide open. And so there's a massive gold rush going on right now, which is amazing. >> And the cloud scale, obviously horizontal scalability of the cloud brings us to another level. We've been seeing data infrastructure since the Hadoop days where big data was coined. Now you're seeing this kind of take fruit, now you have vertical specialization where data shines, large language models all of a set up perfectly for kind of this piece. And you know, as you mentioned, you've been doing it for a long time. Let's take a step back and I want to get into how you started the company, what drove you to start it? Because you know, as an entrepreneur you're probably saw this opportunity before other people like, "Hey, this is finally it, it's here." Can you share the origination story of what you guys came up with, how you started it, what was the motivation and take us through that origination story. >> Yeah, absolutely. So as I mentioned, I've been doing AI for many years. I started my career at DARPA, but it wasn't really until 2015, 2016, my previous company was acquired by Capital One. Then I started working there and shortly after I joined, I was asked to start their AI team and scale it up. And for the first time I was actually doing it, had production models that we were working with, that was at scale, right? And so there was hundreds of millions of dollars of business revenue and certainly a big group of customers who were impacted by the way these models acted. And so it got me hyper-aware of these issues of when you get models into production, it, you know. So I think people who are earlier in the AI maturity look at that as a finish line, but it's really just the beginning and there's this constant drive to make them better, make sure they're not degrading, make sure you can explain what they're doing, if they're impacting people, making sure they're not biased. And so at that time, there really weren't any tools to exist to do this, there wasn't open source, there wasn't anything. And so after a few years there, I really started talking to other people in the industry and there was a really clear theme that this needed to be addressed. And so, I joined with my Co-Founder John Dickerson, who was on the faculty in University of Maryland and he'd been doing a lot of research in these areas. And so we ended up joining up together and starting Arthur. >> Awesome. Well, let's get into what you guys do. Can you explain the value proposition? What are people using you for now? Where's the action? What's the customers look like? What do prospects look like? Obviously you mentioned production, this has been the theme. It's not like people woke up one day and said, "Hey, I'm going to put stuff into production." This has kind of been happening. There's been companies that have been doing this at scale and then yet there's a whole follower model coming on mainstream enterprise and businesses. So there's kind of the early adopters are there now in production. 
What do you guys do? I mean, 'cause I think about just driving the car off the lot is not, you got to manage operations. I mean, that's a big thing. So what do you guys do? Talk about the value proposition and how you guys make money? >> Yeah, so what we do is, listen, when you go to validate ahead of deploying these models in production, starts at that point, right? So you want to make sure that if you're going to be upgrading a model, if you're going to replacing one that's currently in production, that you've proven that it's going to perform well, that it's going to be perform ethically and that you can explain what it's doing. And then when you launch it into production, traditionally data scientists would spend 25, 30% of their time just manually checking in on their model day-to-day babysitting as we call it, just to make sure that the data hasn't drifted, the model performance hasn't degraded, that a programmer did make a change in an upstream data system. You know, there's all sorts of reasons why the world changes and that can have a real adverse effect on these models. And so what we do is bring the same kind of automation that you have for other kinds of, let's say infrastructure monitoring, application monitoring, we bring that to your AI systems. And that way if there ever is an issue, it's not like weeks or months till you find it and you find it before it has an effect on your P&L and your balance sheet, which is too often before they had tools like Arthur, that was the way they were detected. >> You know, I was talking to Swami at Amazon who I've known for a long time for 13 years and been on theCUBE multiple times and you know, I watched Amazon try to pick up that sting with stage maker about six years ago and so much has happened since then. And he and I were talking about this wave, and I kind of brought up this analogy to how when cloud started, it was, Hey, I don't need a data center. 'Cause when I did my startup that time when Amazon, one of my startups at that time, my choice was put a box in the colo, get all the configuration before I could write over the line of code. So the cloud became the benefit for that and you can stand up stuff quickly and then it grew from there. Here it's kind of the same dynamic, you don't want to have to provision a large language model or do all this heavy lifting. So that seeing companies coming out there saying, you can get started faster, there's like a new way to get it going. So it's kind of like the same vibe of limiting that heavy lifting. >> Absolutely. >> How do you look at that because this seems to be a wave that's going to be coming in and how do you guys help companies who are going to move quickly and start developing? >> Yeah, so I think in the race to this kind of gold rush mentality, race to get these models into production, there's starting to see more sort of examples and evidence that there are a lot of risks that go along with it. Either your model says things, your system says things that are just wrong, you know, whether it's hallucination or just making things up, there's lots of examples. If you go on Twitter and the news, you can read about those, as well as sort of times when there could be toxic content coming out of things like that. And so there's a lot of risks there that you need to think about and be thoughtful about when you're deploying these systems. But you know, you need to balance that with the business imperative of getting these things into production and really transforming your business. 
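As a generic illustration of the "babysitting" checks being automated here, confirming that input data has not drifted and that an upstream change has not silently shifted a feature, the sketch below runs a two-sample Kolmogorov-Smirnov test per feature against the training baseline and raises an alert. This is a common monitoring pattern, not Arthur's implementation; the threshold and the alert action are assumptions.

```python
# Generic sketch of automated input-drift monitoring (not Arthur's product code).
# Compares live feature distributions against the training-time baseline and
# raises an alert when any feature has drifted, instead of relying on a data
# scientist to babysit the model day to day.
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01  # assumed alerting threshold

def drifted_features(baseline: dict, live: dict) -> list:
    """Return names of features whose live distribution differs significantly
    from the training baseline (two-sample Kolmogorov-Smirnov test)."""
    flagged = []
    for name, base_values in baseline.items():
        _, p_value = ks_2samp(base_values, live[name])
        if p_value < DRIFT_P_VALUE:
            flagged.append(name)
    return flagged

# Toy data: a baseline captured at training time and a live window where an
# upstream change has shifted the feature.
rng = np.random.default_rng(0)
baseline = {"credit_utilization": rng.normal(0.30, 0.10, 10_000)}
live = {"credit_utilization": rng.normal(0.45, 0.10, 5_000)}

for feature in drifted_features(baseline, live):
    # In practice this would page on-call or open a case rather than print.
    print(f"ALERT: input drift detected on '{feature}'")
```

The same pattern generalizes to output and performance monitoring; the point is that the check runs continuously and surfaces an alert within hours, not weeks.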
And so that's where we help people, we say go ahead, put them in production, but just make sure you have the right guardrails in place so that you can do it in a smart way that's going to reflect well on you and your company. >> Let's frame the challenge for the companies now that you have, obviously there's the people who doing large scale production and then you have companies maybe like as small as us who have large linguistic databases or transcripts for example, right? So what are customers doing and why are they deploying AI right now? And is it a speed game, is it a cost game? Why have some companies been able to deploy AI at such faster rates than others? And what's a best practice to onboard new customers? >> Yeah, absolutely. So I mean, we're seeing across a bunch of different verticals, there are leaders who have really kind of started to solve this puzzle about getting AI models into production quickly and being able to iterate on them quickly. And I think those are the ones that realize that imperative that you mentioned earlier about how transformational this technology is. And you know, a lot of times, even like the CEOs or the boards are very personally kind of driving this sense of urgency around it. And so, you know, that creates a lot of movement, right? And so those companies have put in place really smart infrastructure and rails so that people can, data scientists aren't encumbered by having to like hunt down data, get access to it. They're not encumbered by having to stand up new platforms every time they want to deploy an AI system, but that stuff is already in place. There's a really nice ecosystem of products out there, including Arthur, that you can tap into. Compared to five or six years ago when I was building at a top 10 US bank, at that point you really had to build almost everything yourself and that's not the case now. And so it's really nice to have things like, you know, you mentioned AWS SageMaker and a whole host of other tools that can really accelerate things. >> What's your profile customer? Is it someone who already has a team or can people who are learning just dial into the service? What's the persona? What's the pitch, if you will, how do you align with that customer value proposition? Do people have to be built out with a team and in play or is it pre-production or can you start with people who are just getting going? >> Yeah, people do start using it pre-production for validation, but I think a lot of our customers do have a team going and they're starting to put, either close to putting something into production or about to, it's everything from large enterprises that have really sort of complicated, they have dozens of models running all over doing all sorts of use cases to tech startups that are very focused on a single problem, but that's like the lifeblood of the company and so they need to guarantee that it works well. And you know, we make it really easy to get started, especially if you're using one of the common model development platforms, you can just kind of turn key, get going and make sure that you have a nice feedback loop. So then when your models are out there, it's pointing out, areas where it's performing well, areas where it's performing less well, giving you that feedback so that you can make improvements, whether it's in training data or futurization work or algorithm selection. 
There's a number of, you know, depending on the symptoms, there's a number of things you can do to increase performance over time and we help guide people on that journey. >> So Adam, I have to ask, since you have such a great customer base and they're smart and they got teams and you're on the front end, I mean, early adopters is kind of an overused word, but they're killing it. They're putting stuff in the production's, not like it's a test, it's not like it's early. So as the next wave comes of fast followers, how do you see that coming online? What's your vision for that? How do you see companies that are like just waking up out of the frozen, you know, freeze of like old IT to like, okay, they got cloud, but they're not yet there. What do you see in the market? I see you're in the front end now with the top people really nailing AI and working hard. What's the- >> Yeah, I think a lot of these tools are becoming, or every year they get easier, more accessible, easier to use. And so, you know, even for that kind of like, as the market broadens, it takes less and less of a lift to put these systems in place. And the thing is, every business is unique, they have their own kind of data and so you can use these foundation models which have just been trained on generic data. They're a great starting point, a great accelerant, but then, in most cases you're either going to want to create a model or fine tune a model using data that's really kind of comes from your particular customers, the people you serve and so that it really reflects that and takes that into account. And so I do think that these, like the size of that market is expanding and its broadening as these tools just become easier to use and also the knowledge about how to build these systems becomes more widespread. >> Talk about your customer base you have now, what's the makeup, what size are they? Give a taste a little bit of a customer base you got there, what's they look like? I'll say Capital One, we know very well while you were at there, they were large scale, lot of data from fraud detection to all kinds of cool stuff. What do your customers now look like? >> Yeah, so we have a variety, but I would say one area we're really strong, we have several of the top 10 US banks, that's not surprising, that's a strength for us, but we also have Fortune 100 customers in healthcare, in manufacturing, in retail, in semiconductor and electronics. So what we find is like in any sort of these major verticals, there's typically, you know, one, two, three kind of companies that are really leading the charge and are the ones that, you know, in our opinion, those are the ones that for the next multiple decades are going to be the leaders, the ones that really kind of lead the charge on this AI transformation. And so we're very fortunate to be working with some of those. And then we have a number of startups as well who we love working with just because they're really pushing the boundaries technologically and so they provide great feedback and make sure that we're continuing to innovate and staying abreast of everything that's going on. >> You know, these early markups, even when the hyperscalers were coming online, they had to build everything themselves. That's the new, they're like the alphas out there building it. This is going to be a big wave again as that fast follower comes in. 
And so when you look at the scale, what advice would you give folks out there right now who want to tee it up and what's your secret sauce that will help them get there? >> Yeah, I think that the secret to teeing it up is just dive in and start like the, I think these are, there's not really a secret. I think it's amazing how accessible these are. I mean, there's all sorts of ways to access LLMs either via either API access or downloadable in some cases. And so, you know, go ahead and get started. And then our secret sauce really is the way that we provide that performance analysis of what's going on, right? So we can tell you in a very actionable way, like, hey, here's where your model is doing good things, here's where it's doing bad things. Here's something you want to take a look at, here's some potential remedies for it. We can help guide you through that. And that way when you're putting it out there, A, you're avoiding a lot of the common pitfalls that people see and B, you're able to really kind of make it better in a much faster way with that tight feedback loop. >> It's interesting, we've been kind of riffing on this supercloud idea because it was just different name than multicloud and you see apps like Snowflake built on top of AWS without even spending any CapEx, you just ride that cloud wave. This next AI, super AI wave is coming. I don't want to call AIOps because I think there's a different distinction. If you, MLOps and AIOps seem a little bit old, almost a few years back, how do you view that because everyone's is like, "Is this AIOps?" And like, "No, not kind of, but not really." How would you, you know, when someone says, just shoots off the hip, "Hey Adam, aren't you doing AIOps?" Do you say, yes we are, do you say, yes, but we do differently because it's doesn't seem like it's the same old AIOps. What's your- >> Yeah, it's a good question. AIOps has been a term that was co-opted for other things and MLOps also has people have used it for different meanings. So I like the term just AI infrastructure, I think it kind of like describes it really well and succinctly. >> But you guys are doing the ops. I mean that's the kind of ironic thing, it's like the next level, it's like NextGen ops, but it's not, you don't want to be put in that bucket. >> Yeah, no, it's very operationally focused platform that we have, I mean, it fires alerts, people can action off them. If you're familiar with like the way people run security operations centers or network operations centers, we do that for data science, right? So think of it as a DSOC, a Data Science Operations Center where all your models, you might have hundreds of models running across your organization, you may have five, but as problems are detected, alerts can be fired and you can actually work the case, make sure they're resolved, escalate them as necessary. And so there is a very strong operational aspect to it, you're right. >> You know, one of the things I think is interesting is, is that, if you don't mind commenting on it, is that the aspect of scale is huge and it feels like that was made up and now you have scale and production. What's your reaction to that when people say, how does scale impact this? >> Yeah, scale is huge for some of, you know, I think, I think look, the highest leverage business areas to apply these to, are generally going to be the ones at the biggest scale, right? And I think that's one of the advantages we have. 
Several of us come from enterprise backgrounds and we're used to doing things enterprise grade at scale and so, you know, we're seeing more and more companies, I think they started out deploying AI and sort of, you know, important but not necessarily like the crown jewel area of their business, but now they're deploying AI right in the heart of things and yeah, the scale that some of our companies are operating at is pretty impressive. >> John: Well, super exciting, great to have you on and congratulations. I got a final question for you, just random. What are you most excited about right now? Because I mean, you got to be pretty pumped right now with the way the world is going and again, I think this is just the beginning. What's your personal view? How do you feel right now? >> Yeah, the thing I'm really excited about for the next couple years now, you touched on it a little bit earlier, but is a sort of convergence of AI and AI systems with sort of turning into AI native businesses. And so, as you sort of do more, get good further along this transformation curve with AI, it turns out that like the better the performance of your AI systems, the better the performance of your business. Because these models are really starting to underpin all these key areas that cumulatively drive your P&L. And so one of the things that we work a lot with our customers is to do is just understand, you know, take these really esoteric data science notions and performance and tie them to all their business KPIs so that way you really are, it's kind of like the operating system for running your AI native business. And we're starting to see more and more companies get farther along that maturity curve and starting to think that way, which is really exciting. >> I love the AI native. I haven't heard any startup yet say AI first, although we kind of use the term, but I guarantee that's going to come in all the pitch decks, we're an AI first company, it's going to be great run. Adam, congratulations on your success to you and the team. Hey, if we do a few more interviews, we'll get the linguistics down. We can have bots just interact with you directly and ask you, have an interview directly. >> That sounds good, I'm going to go hang out on the beach, right? So, sounds good. >> Thanks for coming on, really appreciate the conversation. Super exciting, really important area and you guys doing great work. Thanks for coming on. >> Adam: Yeah, thanks John. >> Again, this is Cube Conversation. I'm John Furrier here in Palo Alto, AI going next gen. This is legit, this is going to a whole nother level that's going to open up huge opportunities for startups, that's going to use opportunities for investors and the value to the users and the experience will come in, in ways I think no one will ever see. So keep an eye out for more coverage on siliconangle.com and theCUBE.net, thanks for watching. (bright upbeat music)

Published Date : Mar 3 2023

SiliconANGLE News | AWS Responds to OpenAI with Hugging Face Expanded Partnership


 

(upbeat music) >> Hello everyone. Welcome to Silicon Angle news breaking story here. Amazon Web Services, expanding their relationship with Hugging Face, breaking news here on Silicon Angle. I'm John Furrier, Silicon Angle reporter, founder and also co-host of theCUBE. And I have with me Swami from Amazon Web Services, vice president of database analytics machine learning with AWS. Swami, great to have you on for this breaking news segment on AWS's big news. Thanks for coming on, taking the time. >> Hey John, pleasure to be here. >> We've had many conversations on theCUBE over the years. We've watched Amazon really move fast into the large data modeling. You SageMaker became a very smashing success. Obviously you've been on this for a while, now with Chat GPT, open AI, a lot of buzz going mainstream, takes it from behind the curtain, inside the ropes, if you will, in the industry to a mainstream. And so this is a big moment I think in the industry. I want to get your perspective because your news with Hugging Face, I think is a is another tell sign that we're about to tip over into a new accelerated growth around making AI now application aware application centric, more programmable, more API access. What's the big news about with AWS Hugging Face, you know, what's going on with this announcement? >> Yeah, first of all, they're very excited to announce our expanded collaboration with Hugging Face because with this partnership, our goal, as you all know, I mean Hugging Face I consider them like the GitHub for machine learning. And with this partnership, Hugging Face and AWS will be able to democratize AI for a broad range of developers, not just specific deep AI startups. And now with this we can accelerate the training, fine tuning, and deployment of these large language models and vision models from Hugging Face in the cloud. So, and the broader context, when you step back and see what customer problem we are trying to solve with this announcement, essentially if you see these foundational models are used to now create like a huge number of applications, suggest like tech summarization, question answering, or search image generation, creative, other things. And these are all stuff we are seeing in the likes of these Chat GPT style applications. But there is a broad range of enterprise use cases that we don't even talk about. And it's because these kind of transformative generative AI capabilities and models are not available to, I mean, millions of developers. And because either training these elements from scratch can be very expensive or time consuming and need deep expertise, or more importantly, they don't need these generic models. They need them to be fine tuned for the specific use cases. And one of the biggest complaints we hear is that these models, when they try to use it for real production use cases, they are incredibly expensive to train and incredibly expensive to run inference on, to use it at a production scale, so And unlike search, web search style applications where the margins can be really huge, here in production use cases and enterprises, you want efficiency at scale. That's where a Hugging Face and AWS share our mission. And by integrating with Trainium and Inferentia, we're able to handle the cost efficient training and inference at scale. I'll deep dive on it and by training teaming up on the SageMaker front now the time it takes to build these models and fine tune them as also coming down. So that's what makes this partnership very unique as well. 
So I'm very excited. >> I want to get into the time savings and the cost savings on the training and inference. It's a huge issue. But before we get into that, just how long have you guys been working with Hugging Face? I know there's a previous relationship. This is an expansion of that relationship. Can you comment on what's different about what's happened before and now? >> Yeah, so Hugging Face, we have had a great relationship in the past few years as well, where they have actually made their models available to run on AWS; in fact, their Bloom project was something many of our customers even used. Bloom Project, for context, is their open source project, which builds a GPT-3 style model. And now with this expanded collaboration, now Hugging Face selected AWS for that next generation of this generative AI model, building on their highly successful Bloom project as well. And the nice thing is now by direct integration with Trainium and Inferentia, where you get cost savings in a really significant way. Now for instance, Trn1 can provide up to 50% cost to train savings, and Inferentia can deliver up to 60% better costs and 4x higher throughput. Now these models, especially as they train that next generation generative AI model, it is going to be not only more accessible to all the developers who use it in the open, it'll be a lot cheaper as well. And that's what makes this moment really exciting because, yeah, we can't democratize AI unless we make it broadly accessible and cost efficient, and easy to program and use as well. >> Okay, thanks Swami, we really appreciate it. Swami's a Cube alumni, and also vice president of database, analytics, and machine learning at Amazon Web Services, breaking down the Hugging Face announcement. Obviously the relationship is key; he called Hugging Face the GitHub of machine learning. This is the beginning of what we will see, a continuing competitive battle with Microsoft, which is launching with OpenAI. Amazon's been doing it for years. They got Alexa, they know what they're doing. It's going to be very interesting to see how this all plays out. You're watching Silicon Angle News, breaking here. I'm John Furrier, host of the Cube. Thanks for watching. (ethereal music)
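For a sense of what the "GitHub for machine learning" framing means in practice, the sketch below pulls a small open BLOOM-family checkpoint, from the Bloom project mentioned above, and generates text locally with the Hugging Face transformers library. The checkpoint and decoding settings are illustrative; production-scale training and inference on Trainium or Inferentia involve additional tooling that is not shown here.

```python
# Illustrative sketch: running a small open BLOOM-family checkpoint locally
# with the Hugging Face transformers library. Checkpoint and settings are
# examples only; large-scale training/inference on AWS silicon is not shown.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigscience/bloom-560m"  # small open checkpoint, for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Generative AI will change enterprise applications because"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding keeps the example deterministic and short.
output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```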

Published Date : Feb 23 2023

SiliconANGLE News | Swami Sivasubramanian Extended Version


 

(bright upbeat music) >> Hello, everyone. Welcome to SiliconANGLE News breaking story here. Amazon Web Services expanding their relationship with Hugging Face, breaking news here on SiliconANGLE. I'm John Furrier, SiliconANGLE reporter, founder, and also co-host of theCUBE. And I have with me, Swami, from Amazon Web Services, vice president of database, analytics, machine learning with AWS. Swami, great to have you on for this breaking news segment on AWS's big news. Thanks for coming on and taking the time. >> Hey, John, pleasure to be here. >> You know- >> Looking forward to it. >> We've had many conversations on theCUBE over the years, we've watched Amazon really move fast into the large data modeling, SageMaker became a very smashing success, obviously you've been on this for a while. Now with ChatGPT OpenAI, a lot of buzz going mainstream, takes it from behind the curtain inside the ropes, if you will, in the industry to a mainstream. And so this is a big moment, I think, in the industry, I want to get your perspective, because your news with Hugging Face, I think is another tell sign that we're about to tip over into a new accelerated growth around making AI now application aware, application centric, more programmable, more API access. What's the big news about, with AWS Hugging Face, you know, what's going on with this announcement? >> Yeah. First of all, they're very excited to announce our expanded collaboration with Hugging Face, because with this partnership, our goal, as you all know, I mean, Hugging Face, I consider them like the GitHub for machine learning. And with this partnership, Hugging Face and AWS, we'll be able to democratize AI for a broad range of developers, not just specific deep AI startups. And now with this, we can accelerate the training, fine tuning and deployment of these large language models, and vision models from Hugging Face in the cloud. And the broader context, when you step back and see what customer problem we are trying to solve with this announcement, essentially if you see these foundational models, are used to now create like a huge number of applications, suggest like tech summarization, question answering, or search image generation, creative, other things. And these are all stuff we are seeing in the likes of these ChatGPT style applications. But there is a broad range of enterprise use cases that we don't even talk about. And it's because these kind of transformative, generative AI capabilities and models are not available to, I mean, millions of developers. And because either training these elements from scratch can be very expensive or time consuming and need deep expertise, or more importantly, they don't need these generic models, they need them to be fine tuned for the specific use cases. And one of the biggest complaints we hear is that these models, when they try to use it for real production use cases, they are incredibly expensive to train and incredibly expensive to run inference on, to use it at a production scale. So, and unlike web search style applications, where the margins can be really huge, here in production use cases and enterprises, you want efficiency at scale. That's where Hugging Face and AWS share our mission. And by integrating with Trainium and Inferentia, we're able to handle the cost efficient training and inference at scale, I'll deep dive on it. And by teaming up on the SageMaker front, now the time it takes to build these models and fine tune them is also coming down. 
So that's what makes this partnership very unique as well. So I'm very excited. >> I want to get into the time savings and the cost savings as well on the training and inference, it's a huge issue, but before we get into that, just how long have you guys been working with Hugging Face? I know there's a previous relationship, this is an expansion of that relationship, can you comment on what's different about what's happened before and then now? >> Yeah. So, Hugging Face, we have had a great relationship in the past few years as well, where they have actually made their models available to run on AWS, you know, fashion. Even in fact, their Bloom Project was something many of our customers even used. Bloom Project, for context, is their open source project which builds a GPT-3 style model. And now with this expanded collaboration, now Hugging Face selected AWS for that next generation office generative AI model, building on their highly successful Bloom Project as well. And the nice thing is, now, by direct integration with Trainium and Inferentia, where you get cost savings in a really significant way, now, for instance, Trn1 can provide up to 50% cost to train savings, and Inferentia can deliver up to 60% better costs, and four x more higher throughput than (indistinct). Now, these models, especially as they train that next generation generative AI models, it is going to be, not only more accessible to all the developers, who use it in open, so it'll be a lot cheaper as well. And that's what makes this moment really exciting, because we can't democratize AI unless we make it broadly accessible and cost efficient and easy to program and use as well. >> Yeah. >> So very exciting. >> I'll get into the SageMaker and CodeWhisperer angle in a second, but you hit on some good points there. One, accessibility, which is, I call the democratization, which is getting this in the hands of developers, and/or AI to develop, we'll get into that in a second. So, access to coding and Git reasoning is a whole nother wave. But the three things I know you've been working on, I want to put in the buckets here and comment, one, I know you've, over the years, been working on saving time to train, that's a big point, you mentioned some of those stats, also cost, 'cause now cost is an equation on, you know, bundling whether you're uncoupling with hardware and software, that's a big issue. Where do I find the GPUs? Where's the horsepower cost? And then also sustainability. You've mentioned that in the past, is there a sustainability angle here? Can you talk about those three things, time, cost, and sustainability? >> Certainly. So if you look at it from the AWS perspective, we have been supporting customers doing machine learning for the past years. Just for broader context, Amazon has been doing ML the past two decades right from the early days of ML powered recommendation to actually also supporting all kinds of generative AI applications. If you look at even generative AI application within Amazon, Amazon search, when you go search for a product and so forth, we have a team called MFi within Amazon search that helps bring these large language models into creating highly accurate search results. And these are created with models, really large models with tens of billions of parameters, scales to thousands of training jobs every month and trained on large model of hardware. 
And this is an example of a really good large language foundation model application running at production scale, and also, of course, Alexa, which uses a large generator model as well. And they actually even had a research paper that showed that they are more, and do better in accuracy than other systems like GPT-3 and whatnot. So, and we also touched on things like CodeWhisperer, which uses generative AI to improve developer productivity, but in a responsible manner, because 40% of some of the studies show 40% of this generated code had serious security flaws in it. This is where we didn't just do generative AI, we combined with automated reasoning capabilities, which is a very, very useful technique to identify these issues and couple them so that it produces highly secure code as well. Now, all these learnings taught us few things, and which is what you put in these three buckets. And yeah, like more than 100,000 customers using ML and AI services, including leading startups in the generative AI space, like stability AI, AI21 Labs, or Hugging Face, or even Alexa, for that matter. They care about, I put them in three dimension, one is around cost, which we touched on with Trainium and Inferentia, where we actually, the Trainium, you provide to 50% better cost savings, but the other aspect is, Trainium is a lot more power efficient as well compared to traditional one. And Inferentia is also better in terms of throughput, when it comes to what it is capable of. Like it is able to deliver up to three x higher compute performance and four x higher throughput, compared to it's previous generation, and it is extremely cost efficient and power efficient as well. >> Well. >> Now, the second element that really is important is in a day, developers deeply value the time it takes to build these models, and they don't want to build models from scratch. And this is where SageMaker, which is, even going to Kaggle uses, this is what it is, number one, enterprise ML platform. What it did to traditional machine learning, where tens of thousands of customers use StageMaker today, including the ones I mentioned, is that what used to take like months to build these models have dropped down to now a matter of days, if not less. Now, a generative AI, the cost of building these models, if you look at the landscape, the model parameter size had jumped by more than thousand X in the past three years, thousand x. And that means the training is like a really big distributed systems problem. How do you actually scale these model training? How do you actually ensure that you utilize these efficiently? Because these machines are very expensive, let alone they consume a lot of power. So, this is where SageMaker capability to build, automatically train, tune, and deploy models really concern this, especially with this distributor training infrastructure, and those are some of the reasons why some of the leading generative AI startups are actually leveraging it, because they do not want a giant infrastructure team, which is constantly tuning and fine tuning, and keeping these clusters alive. >> It sounds like a lot like what startups are doing with the cloud early days, no data center, you move to the cloud. So, this is the trend we're seeing, right? You guys are making it easier for developers with Hugging Face, I get that. 
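For readers who want to see what "bringing a model into SageMaker and deploying it" looks like in code, here is a hedged sketch using the SageMaker Python SDK's Hugging Face integration. The model ID, instance type, and framework version strings are assumptions and have to match what the SDK supports in your account and region; this is an illustrative flow, not AWS reference code.

```python
# Hedged sketch: deploying a Hugging Face Hub model to a real-time SageMaker
# endpoint with the SageMaker Python SDK. Model ID, framework versions, and
# instance type are illustrative and must match what your SDK/region supports.
import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()  # assumes you're running inside SageMaker;
# otherwise pass an IAM role ARN here

hub_config = {
    "HF_MODEL_ID": "distilbert-base-uncased-finetuned-sst-2-english",  # example model
    "HF_TASK": "text-classification",
}

model = HuggingFaceModel(
    env=hub_config,
    role=role,
    transformers_version="4.26",  # assumed versions; check the supported container matrix
    pytorch_version="1.13",
    py_version="py39",
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",  # a CPU instance, for illustration only
)

print(predictor.predict({"inputs": "This partnership makes generative AI far more accessible."}))

predictor.delete_endpoint()  # tear down to avoid paying for an idle endpoint
```

Inference on Inferentia-based instance types is the cost lever discussed above, but it typically involves compiling the model with the Neuron tooling first, which is outside this sketch.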
I love that GitHub for machine learning, large language models are complex and expensive to build, but not anymore, you got Trainium and Inferentia, developers can get faster time to value, but then you got the transformers data sets, token libraries, all that optimized for generator. This is a perfect storm for startups. Jon Turow, a former AWS person, who used to work, I think for you, is now a VC at Madrona Venture, he and I were talking about the generator AI landscape, it's exploding with startups. Every alpha entrepreneur out there is seeing this as the next frontier, that's the 20 mile stairs, next 10 years is going to be huge. What is the big thing that's happened? 'Cause some people were saying, the founder of Yquem said, "Oh, the start ups won't be real, because they don't all have AI experience." John Markoff, former New York Times writer told me that, AI, there's so much work done, this is going to explode, accelerate really fast, because it's almost like it's been waiting for this moment. What's your reaction? >> I actually think there is going to be an explosion of startups, not because they need to be AI startups, but now finally AI is really accessible or going to be accessible, so that they can create remarkable applications, either for enterprises or for disrupting actually how customer service is being done or how creative tools are being built. And I mean, this is going to change in many ways. When we think about generative AI, we always like to think of how it generates like school homework or arts or music or whatnot, but when you look at it on the practical side, generative AI is being actually used across various industries. I'll give an example of like Autodesk. Autodesk is a customer who runs an AWS and SageMaker. They already have an offering that enables generated design, where designers can generate many structural designs for products, whereby you give a specific set of constraints and they actually can generate a structure accordingly. And we see similar kind of trend across various industries, where it can be around creative media editing or various others. I have the strong sense that literally, in the next few years, just like now, conventional machine learning is embedded in every application, every mobile app that we see, it is pervasive, and we don't even think twice about it, same way, like almost all apps are built on cloud. Generative AI is going to be part of every startup, and they are going to create remarkable experiences without needing actually, these deep generative AI scientists. But you won't get that until you actually make these models accessible. And I also don't think one model is going to rule the world, then you want these developers to have access to broad range of models. Just like, go back to the early days of deep learning. Everybody thought it is going to be one framework that will rule the world, and it has been changing, from Caffe to TensorFlow to PyTorch to various other things. And I have a suspicion, we had to enable developers where they are, so. >> You know, Dave Vellante and I have been riffing on this concept called super cloud, and a lot of people have co-opted to be multicloud, but we really were getting at this whole next layer on top of say, AWS. You guys are the most comprehensive cloud, you guys are a super cloud, and even Adam and I are talking about ISVs evolving to ecosystem partners. I mean, your top customers have ecosystems building on top of it. This feels like a whole nother AWS. 
How are you guys leveraging the history of AWS, which by the way, had the same trajectory, startups came in, they didn't want to provision a data center, the heavy lifting, all the things that have made Amazon successful culturally. And day one thinking is, provide the heavy lifting, undifferentiated heavy lifting, and make it faster for developers to program code. AI's got the same thing. How are you guys taking this to the next level, because now, this is an opportunity for the competition to change the game and take it over? This is, I'm sure, a conversation, you guys have a lot of things going on in AWS that makes you unique. What's the internal and external positioning around how you take it to the next level? >> I mean, so I agree with you that generative AI has a very, very strong potential in terms of what it can enable in terms of next generation application. But this is where Amazon's experience and expertise in putting these foundation models to work internally really has helped us quite a bit. If you look at it, like amazon.com search is like a very, very important application in terms of what is the customer impact on number of customers who use that application openly, and the amount of dollar impact it does for an organization. And we have been doing it silently for a while now. And the same thing is true for like Alexa too, which actually not only uses it for natural language understanding other city, even national leverages is set for creating stories and various other examples. And now, our approach to it from AWS is we actually look at it as in terms of the same three tiers like we did in machine learning, because when you look at generative AI, we genuinely see three sets of customers. One is, like really deep technical expert practitioner startups. These are the startups that are creating the next generation models like the likes of stability AIs or Hugging Face with Bloom or AI21. And they generally want to build their own models, and they want the best price performance of their infrastructure for training and inference. That's where our investments in silicon and hardware and networking innovations, where Trainium and Inferentia really plays a big role. And we can nearly do that, and that is one. The second middle tier is where I do think developers don't want to spend time building their own models, let alone, they actually want the model to be useful to that data. They don't need their models to create like high school homeworks or various other things. What they generally want is, hey, I had this data from my enterprises that I want to fine tune and make it really work only for this, and make it work remarkable, can be for tech summarization, to generate a report, or it can be for better Q&A, and so forth. This is where we are. Our investments in the middle tier with SageMaker, and our partnership with Hugging Face and AI21 and co here are all going to very meaningful. And you'll see us investing, I mean, you already talked about CodeWhisperer, which is an open preview, but we are also partnering with a whole lot of top ISVs, and you'll see more on this front to enable the next wave of generated AI apps too, because this is an area where we do think lot of innovation is yet to be done. It's like day one for us in this space, and we want to enable that huge ecosystem to flourish. 
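As a rough sketch of the "middle tier" described above, fine-tuning an existing model on your own labeled data rather than training from scratch, the example below uses the Hugging Face Trainer API on a small public dataset standing in for enterprise data. Model, dataset, and hyperparameters are placeholders; on AWS this script would typically run inside a SageMaker training job, which is not shown.

```python
# Hedged sketch: fine-tuning a pre-trained checkpoint on labeled text that
# stands in for enterprise data. Model, dataset, and hyperparameters are
# placeholders; adjust for real workloads.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_id = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

# Public dataset used purely as a stand-in for your own labeled corpus.
train_ds = load_dataset("imdb", split="train[:2000]")
eval_ds = load_dataset("imdb", split="test[:500]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

train_ds = train_ds.map(tokenize, batched=True)
eval_ds = eval_ds.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="finetuned-classifier",
    per_device_train_batch_size=16,
    num_train_epochs=1,  # illustrative; tune on real data
)

trainer = Trainer(model=model, args=args,
                  train_dataset=train_ds, eval_dataset=eval_ds)
trainer.train()
print(trainer.evaluate())  # held-out metrics on the stand-in test split
```

On real enterprise data the interesting work is in the labeling and the evaluation, not the boilerplate above.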
>> You know, one of the things Dave Vellante and I were talking about in our first podcast, which we just did on Friday and are going to do weekly, is that we highlighted the ChatGPT example as a horizontal use case, because everyone loves it, people are using it across all their different verticals, and horizontally scalable cloud plays perfectly into that. So I have to ask you, as you look at what AWS is going to bring to the table, a lot's changed over the past 13 years with AWS and a lot more services are available, how should someone rebuild, re-platform, or refactor their application or business with AI on AWS? What are some of the tools that you see and recommend? Is it serverless, is it SageMaker, CodeWhisperer? What do you think is going to shine brightly within the AWS stack, or service list, that's going to be part of this? You mentioned CodeWhisperer and SageMaker; what else should people be looking at as they start tinkering, getting all these benefits, and scaling up their apps? >> You know, if I were a startup, first I would really work backwards from the customer problem I'm trying to solve, and pick and choose so that I don't need to deal with the undifferentiated heavy lifting. And that's where the answer is going to change; it's not going to be one size fits all. Granted, on the compute front, if you can go completely serverless, I will always recommend serverless for running your apps, because it takes care of all the undifferentiated heavy lifting. But on the data side, we provide a whole variety of databases, from relational to non-relational like DynamoDB, and so forth. And of course, we also have a deep analytical stack, where data flows directly from our relational databases into data lakes and data warehouses, and you can get value from that along with partnerships with various analytics providers. The area where I think things are fundamentally changing in what people can do is, with CodeWhisperer, I was literally trying to write code to send a message through Twilio, and I was about to pull up the documentation, and in my IDE I just wrote, let's try sending a message through Twilio, or let's update a Route 53 record. All I had to do was type in a comment, and it started generating the subroutine. That is going to be a huge time saver if I were a developer. And the goal for us is not to do it just for AWS developers, and not to just generate the code, but to make sure the code is highly secure and follows best practices. So it's not always about machine learning; it's augmenting with automated reasoning as well. And generative AI is going to change not just how people write code, but also how it actually gets built and used as well. You'll see a lot more stuff coming on this front.
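For readers who want to picture the Twilio and Route 53 moments described above, here is roughly the kind of completion a comment-driven coding companion produces. This is an illustrative sketch, not CodeWhisperer's literal output; the credentials, phone numbers, and hosted zone ID are placeholders read from environment variables.

```python
# Illustrative sketch of comment-prompted code, not actual CodeWhisperer output.
# Credentials, phone numbers, and the hosted zone ID are placeholders.
import os
import boto3
from twilio.rest import Client


# Prompt: "send a text message through Twilio"
def send_sms(body: str, to: str) -> str:
    client = Client(os.environ["TWILIO_ACCOUNT_SID"],
                    os.environ["TWILIO_AUTH_TOKEN"])
    message = client.messages.create(
        body=body,
        from_=os.environ["TWILIO_FROM_NUMBER"],
        to=to,
    )
    return message.sid  # Twilio's message identifier


# Prompt: "update a Route 53 record"
def upsert_route53_record(zone_id: str, name: str, ip: str) -> None:
    route53 = boto3.client("route53")
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": name,
                    "Type": "A",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": ip}],
                },
            }]
        },
    )
```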
>> Swami, thank you for your time. I know you're super busy. Thank you for sharing the news and giving commentary. Again, I think this is an AWS moment and an industry moment: heavy lifting, accelerated value, agility. AIOps is probably going to be redefined here. Thanks for sharing your commentary, and we'll see you next time; I'm looking forward to doing more follow-up on this. It's going to be a big wave. Thanks. >> Okay. Thanks again, John, always a pleasure. >> Okay. This is SiliconANGLE's breaking news commentary. I'm John Furrier with SiliconANGLE News, as well as host of theCUBE. Swami, who's a leader at AWS, has been on theCUBE multiple times. We've been tracking how Amazon's journey has just been exploding over the past five years, and in particular the past three. You heard the numbers: great performance, great reviews. This is a watershed moment, I think, for the industry, and it's going to be a lot of fun for the next 10 years. Thanks for watching. (bright music)

Published Date : Feb 22 2023


Supercloud Applications & Developer Impact | Supercloud2


 

(gentle music) >> Okay, welcome back to Supercloud 2, live here in Palo Alto, California for our live stage performance. Supercloud 2 is our second Supercloud event. We're going to get these out as fast as we can every couple months. It's our second one, you'll see two and three this year. I'm John Furrier, my co-host, Dave Vellante. A panel here to break down the Supercloud momentum, the wave, and the developer impact that we bringing back Vittorio Viarengo, who's a VP for Cross-Cloud Services at VMware. Sarbjeet Johal, industry influencer and Analyst at StackPayne, his company, Cube alumni and Influencer. Sarbjeet, great to see you. Vittorio, thanks for coming back. >> Nice to be here. >> My pleasure. >> Vittorio, you just gave a keynote where we unpacked the cross-cloud services, what VMware is doing, how you guys see it, not just from VMware's perspective, but VMware looking out broadly at the industry and developers came up and you were like, "Developers, developer, developers", kind of a goof on the Steve Ballmer famous meme that everyone's seen. This is a huge star, sorry, I mean a big piece of it. The developers are the canary in the coal mines. They're the ones who are being asked to code the digital transformation, which is fully business transformation and with the market the way it is right now in terms of the accelerated technology, every enterprise grade business model's changing. The technology is evolving, the builders are kind of, they want go faster. I'm saying they're stuck in a way, but that's my opinion, but there's a lot of growth. >> Yeah. >> The impact, they got to get released up and let it go. Those developers need to accelerate faster. It's been a big part of productivity, and the conversations we've had. So developer impact is huge in Supercloud. What's your, what do you guys think about this? We'll start with you, Sarbjeet. >> Yeah, actually, developers are the masons of the digital empires I call 'em, right? They lay every brick and build all these big empires. On the left side of the SDLC, or the, you know, when you look at the system operations, developer is number one cost from economic side of things, and from technology side of things, they are tech hungry people. They are developers for that reason because developer nights are long, hours are long, they forget about when to eat, you know, like, I've been a developer, I still code. So you want to keep them happy, you want to hug your developers. We always say that, right? Vittorio said that right earlier. The key is to, in this context, in the Supercloud context, is that developers don't mind mucking around with platforms or APIs or new languages, but they hate the infrastructure part. That's a fact. They don't want to muck around with servers. It's friction for them, it is like they don't want to muck around even with the VMs. So they want the programmability to the nth degree. They want to automate everything, so that's how they think and cloud is the programmable infrastructure, industrialization of infrastructure in many ways. So they are happy with where we are going, and we need more abstraction layers for some developers. By the way, I have this sort of thinking frame for last year or so, not all developers are same, right? So if you are a developer at an ISV, you behave differently. 
If you are a developer at a typical enterprise, you behave differently, or you are forced to behave differently, because you're not writing software as the product. >> Well, developers have changed. I mean, Vittorio, you and I were talking earlier in the keynote, and this is kind of the key point: what is a developer these days? If everything is software enabled, I mean, even in the hardware interviews we do with Nvidia, and Amazon, and other people building silicon, they all say the same thing: "It's software on a chip." So you're seeing the role of software up and down the stack, and the role of the stack is changing. The old days of the full stack developer, what does that even mean? The cloud is half the stack right there. So, you know, developers are certainly more agile, but cloud native, I mean, VMware is the epitome of operations, IT operations, and with the Tanzu initiative you guys started, you went after the developers, looked at them, and asked them questions: "What do you need?", "How do you transform the Ops from virtualization?" Again, back to your point, this hardware abstraction, what is software, what is cloud native? It's kind of a messy equation these days. How do you guys grapple with that? >> I would argue that developers don't want the Supercloud. I'll just put that out there, so, >> Dave: Why not? >> Because developers, once they get comfortable in AWS or Google, because they're doing some AI stuff, which is, you know, very trendy right now, or they are in IBM, any of the hyperscalers, professional developers, system developers, they love that stuff, right? Yeah, the infrastructure gets in the way, but the problem is, and I think the Supercloud should be driven by the operators, because as we discussed, the operators have been left behind. They're busy with day-to-day jobs, and in most cases IT is centralized and developers are in the business units. >> John: Yeah. >> Right? So they get the mandate from the top, say at a bank: look at who we're competing against, they gave teenagers and young people the ability to do all these new things online, Venmo and all this integration, where are we? "Oh yeah, we can do it," and then build it, and then deploy it, "Okay, we caught up." But now the operators are back in the private cloud trying to keep the backend systems running, and so I think the Supercloud is needed primarily, initially, for the operators to get in front of the developers, fit into the workflow, but lay the foundation so it is secure. >> So, I love this thinking, because the riff points to what the target audience for the value proposition is, and if you're a developer, Supercloud enables you, so you shouldn't have to deal with Supercloud. >> Exactly. >> What you're saying is, get the operating environment or operating system done properly, whether it's the architecture or building the platform; this comes back to the architecture and platform conversations. What is the future platform? Is it a vendor-supplied or a customer-created platform? >> Dave: So developers want best of breed, is what you just said. >> Vittorio: Yeah. >> Right, and operators, 'cause developers don't want to deal with governance, they don't want to deal with security, >> No. >> They don't want to deal with spinning up infrastructure.
That's the role of the operator, but that's where Supercloud enables, to John's point, the developer, so to your question, is it a platform where the platform vendor is responsible for the architecture, or there is it an architectural standard that spans multiple clouds that has to emerge? Based on what you just presented earlier, Vittorio, you are the determinant of the architecture. It's got to be open, but you guys determine that, whereas the nirvana is, "Oh no, it's all open, and it just kind of works." >> Yeah, so first of all, let's all level set on one thing. You cannot tell developers what to do. >> Dave: Right, great >> At least great developers, right? Cannot tell them what to do. >> Dave: So that's what, that's the way I want to sort of, >> You can tell 'em what's possible. >> There's a bottle on that >> If you tell 'em what's possible, they'll test it, they'll look at it, but if you try to jam it down their throat, >> Yeah. >> Dave: You can't tell 'em how to do it, just like your point >> Let me answer your answer the question. >> Yeah, yeah. >> So I think we need to build an architect, help them build an architecture, but it cannot be proprietary, has to be built on what works in the cloud and so what works in the cloud today is Kubernetes, is you know, number of different open source project that you need to enable and then provide, use this, but when I first got exposed to Kubernetes, I said, "Hallelujah!" We had a runtime that works the same everywhere only to realize there are 12 different distributions. So that's where we come in, right? And other vendors come in to say, "Hey, no, we can make them all look the same. So you still use Kubernetes, but we give you a place to build, to set those operation policy once so that you don't create friction for the developers because that's the last thing you want to do." >> Yeah, actually, coming back to the same point, not all developers are same, right? So if you're ISV developer, you want to go to the lowest sort of level of the infrastructure and you want to shave off the milliseconds from to get that performance, right? If you're working at AWS, you are doing that. If you're working at scale at Facebook, you're doing that. At Twitter, you're doing that, but when you go to DMV and Kansas City, you're not doing that, right? So your developers are different in nature. They are given certain parameters to work with, certain sort of constraints on the budget side. They are educated at a different level as well. Like they don't go to that end of the degree of sort of automation, if you will. So you cannot have the broad stroking of developers. We are talking about a citizen developer these days. That's a extreme low, >> You mean Low-Code. >> Yeah, Low-Code, No-code, yeah, on the extreme side. On one side, that's citizen developers. On the left side is the professional developers, when you say developers, your mind goes to the professional developers, like the hardcore developers, they love the flexibility, you know, >> John: Well app, developers too, I mean. >> App developers, yeah. >> You're right a lot of, >> Sarbjeet: Infrastructure platform developers, app developers, yes. >> But there are a lot of customers, its a spectrum, you're saying. >> Yes, it's a spectrum >> There's a lot of customers don't want deal with that muck. >> Yeah. >> You know, like you said, AWS, Twitter, the sophisticated developers do, but there's a whole suite of developers out there >> Yeah >> That just want tools that are abstracted. 
>> Within a company, within a company. Like, how I see the Supercloud is there shouldn't be anything which blocks the developers, their view of the world, of the future. If you're blocked as a developer, if something gets in front of you, you are not a developer anymore, believe me, (John laughing) so you'll go somewhere else >> John: First of all, I'm, >> You'll leave the company, by the way. >> Dave: Yeah, you got to quit >> Yeah, you will quit, you will go where the action is, where there's no sort of blockage. So if you put a huge amount of distraction in front of them, they don't like it, so they don't, >> Well, the idea of a developer, >> Coming back to that >> Let's get into it, 'cause you mentioned platform. You hear the term platform engineering now. >> Yeah. >> Platform developer. You know, I remember back, and I think the term is still used today, but when I graduated with my computer science degree, we were called "software engineers," right? Do people use that term "software engineering", or is it "software development", are they the same, are they different? >> Well, >> I think there's a, >> So, who's engineering what? Are they engineering or are they developing? Or both? Well, I think you made a great point. There is a factor of, I was blessed to work with Adam Bosworth, the guy that created some of the abstraction layers like Visual Basic and Microsoft Access, and he made his whole career thinking about this layer, and he always talked about the professional developers, the developers that, you know, you give them a user manual, maybe just the APIs, and they'll build anything, right, from the system engine on down, and then through abstraction, you get the more procedural-logic type of engineers, the people that used to be able to write procedural logic in Visual Basic and so on and so forth. I think those developers right now are a little cut out of the picture. There are some No-code, Low-code environments that are maybe gaining some traction. I caught up with Adam Bosworth two weeks ago in New York and I asked him, "What's happening to these higher-level developers?" and you know what he told me, and he is always a little bit out there, so I'm going to use his thought process here. He says, "ChatGPT", I mean, they will get to a point where this high-level procedural logic will be written by, >> John: Computers. >> Computers, and so we may not need as many at the high level, but we still need the engineers down there. The point is the operations side needs to get in front of them >> But wait, wait, you've seen the ChatGPT meme, I dunno if it's a Dilbert thing, where it's like, "Time to write the code" >> Yeah, yeah, yeah, I did that >> "Time to write the code: five minutes. Time to debug the code: like five hours." So, you know, the whole equation >> Well, ChatGPT is a hot wave, everyone's been talking about it, because I think it illustrates something that's NextGen, feels NextGen, and it's just getting started, so it's going to get better. I mean, people are throwing stones at it, but I think it's amazing. It's the equivalent of me seeing the browser for the first time, you know, like, "Wow, this is really compelling." This is game-changing, it's not just keyword chatbots.
This is real, this is next level, and I think the Supercloud wave that people are getting behind points to that, and I think the question of Ops and Dev comes up because, if you limit the infrastructure opportunity for a developer, I think they're going to be handicapped. I mean, that's my general opinion; the thesis is, you give more aperture to developers, more choice, more capabilities, and more good things can happen, with policy, and that's why you're seeing the convergence of networking people, virtualization talent, and operational talent getting into the conversation, because I think it's an infrastructure engineering opportunity. I think this is a seminal moment in a new stack that's emerging from an infrastructure, software virtualization, low-code, no-code layer that will be completely programmable by things like the next ChatGPT or something different, but the mechanics and the plumbing will still need engineering. >> Sarbjeet: Oh yeah. >> So there's still going to be more stuff coming on. >> Yeah, with the cloud, we have made the infrastructure programmable, and when you give programmability to the programmer, they will be very creative with it, and so we are being very creative with our infrastructure now, and on top of that, we are being very creative with the silicon now, right? So we talk about that. That's part of it, by the way. So you write the code to the particular silicon now, and on the flip side, the silicon is built for certain use cases, for AI inference and all that. >> You saw this at CES? >> Yeah, I saw it at CES, and the scenario is this: Bosch. I spoke to Bosch, I spoke to John Deere, I spoke to the AWS guys, >> Yeah. >> They were showcasing their technology there, and I spoke to the Azure guys as well. So Bosch is a good example. They are building, and they are right now using, AWS. I have that interview on camera, and I will put it online sometime later. So they're using AWS on the back end now, but Bosch is the number one, or number two depending on what day of the year it is, supplier of componentry to the auto industry, and they are creating a platform for the auto industry, and so is Qualcomm actually, by the way, with Snapdragon. They told me that their customers, BMW, Audi, all the manufacturers, demand diversity on the backend. They don't all want to go to AWS. They want choice on the backend. So whatever they cook up in the middle has to work, and they have to sprinkle the data around for data sovereignty reasons, because they have Chinese carmakers as well, and for other reasons, competitive reasons and such. >> People don't go to, aw, people don't go to AWS either for political reasons, or competitive reasons, or specific use cases, but for the most part, generally, I haven't met anyone who hasn't gone first choice with either, but that's me personally. >> No, but they're building. >> The point is the developer wants choice at the back end, is what I'm hearing, but then finish that thought. >> Their developers want the choice on the back end, number one, because in this case the customers are asking for it, right? The customers' requirements, and their economics, drive that decision-making. So in the middle they're forced to cook up some solution which is vendor neutral on the backend, or multicloud in nature, so.
>> Yeah. >> I mean, I think that's nirvana. I personally don't see that happening right now. I mean, I don't see the parity across clouds. So I think that's a challenge. I mean, >> Yeah, true. >> I mean, the fact of the matter is, if the development teams get fragmented, and we had this chat with Kit Colbert last time, I think he's going to come on and talk about his keynote in an hour or so, development teams face this: the cloud is heterogeneous, which is great. It's complex, which is challenging. You need skilled engineering to manage these clouds. So if you're a CIO and you go all in on AWS, it's hard to then go out and say, "I want to be completely multi-vendor neutral." That's a tall order on many levels, and this is the multicloud challenge, right? So the question is, what's the strategy for me, the CIO or CISO, what do I do? To me, I would go all in on one, start getting hedges, start playing, and then look at some >> Crystal clear. Crystal clear to me. >> Go ahead. >> If you're a CIO today, you have to build a platform engineering team, no question. 'Cause if we agree that we cannot tell the great developers what to do, we have to create a platform engineering team that can build using pieces of the Supercloud, and let's make this very pragmatic and give examples. First, you need to be able to lay down the runtime, okay? So you need a way to deploy multiple different Kubernetes environments depending on the cloud. Okay, now we've got that. The second part >> That's like table stakes. >> That is table stakes, right? But the advantage of having a Supercloud service to do that is that now you can put a policy in one place and it gets distributed everywhere consistently. So for example, you want to say, "If anybody in this organization, across all these different buildings, all these developers I don't even know, builds a PCI-compliant microservice, it can only talk to PCI-compliant microservices." Now I sleep tight. The developers still do their thing. Of course they're going to get their hands slapped if they don't encrypt some messages: "Oh, that should have been encrypted." So that's number one. The second thing, I want to be able to say, "This service that this developer built over there better satisfy this SLA." So if the SLA is not satisfied, boom, I automatically spin up multiple instances to satisfy the SLA. Developers are unencumbered, they don't even know. So this, for me, is: CIO, build a platform engineering team using one of the many Supercloud services that allow you to do that, and lay down that foundation. >> And part of that is that the vendor behavior is such, 'cause the incentives are such, that they don't necessarily always work together. (John chuckling) I'll give you an example, we're going to hear today from Western Union. They're an AWS shop, but they want to go to Google, they want to use some of Google's AI tools 'cause they're good, and maybe even arguably better, but they're also a Snowflake customer, and what you'll hear from them is Amazon and Snowflake are working together so that SageMaker can be integrated with Snowflake, but Google said, "No, if you want to use our AI tools, you've got to use BigQuery." >> Yeah. >> Okay. So they say, "Ah, forget it." So if you have a platform engineering team, you can maybe solve some of that vendor friction and get competitive advantage.
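Vittorio's "write the policy once, distribute it everywhere" point can be sketched in code. The following is one hedged illustration: a Kubernetes NetworkPolicy that only lets PCI-labeled services talk to each other, pushed to several clusters with the Kubernetes Python client. The labels, namespace, and kubeconfig context names are assumptions made for the example; a real platform engineering team would more likely drive the same object through GitOps or a Supercloud-style management plane.

```python
# A sketch of "policy in one place, distributed everywhere": a NetworkPolicy
# restricting ingress to pods labeled compliance=pci, applied to every cluster
# the platform team manages. Labels, namespace, and contexts are assumptions.
from kubernetes import client, config


def pci_only_policy() -> client.V1NetworkPolicy:
    pci = client.V1LabelSelector(match_labels={"compliance": "pci"})
    return client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="pci-only-ingress"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=pci,                     # applies to PCI-labeled pods
            policy_types=["Ingress"],
            ingress=[client.V1NetworkPolicyIngressRule(
                _from=[client.V1NetworkPolicyPeer(pod_selector=pci)]
            )],
        ),
    )


# Push the same policy to each cluster, regardless of which cloud it runs in.
for context in ["aws-prod", "gcp-prod", "azure-prod"]:  # assumed kubeconfig contexts
    config.load_kube_config(context=context)
    client.NetworkingV1Api().create_namespaced_network_policy(
        namespace="payments", body=pci_only_policy()
    )
```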
>> I think the future proximity concept that I talk about is this: when you're doing one thing, you want to do another thing. Where do you go to get that thing, right? So that is very important. And your point, John, is that AWS is ahead of the pack, which is true, right? They have the >> breadth of >> Infrastructure by a lot >> infrastructure services, right? The breadth of services, right? So when do you bring in other cloud providers? I believe that you should standardize on one cloud provider as your primary, and bring the others in on an as-needed basis, in a subsection or sub-portfolio of your applications or your platforms, wherever you can. >> So yeah, the Google AI example >> Yeah, I mean, >> Or the Microsoft collaboration software example. I mean, there's always that, or the M&A case. >> Yeah, but- >> You're going to get to run Windows; you can run Windows on Amazon, so. >> By the way, Supercloud doesn't mean that you cannot do that. The perfect example is, say you're using Azure because you have a SQL Server-intensive workload, >> Yep >> and you're using Google for ML. Great. If you are using some differentiated feature of a cloud, you'll have to go there and configure that widget, but what you can abstract with the Supercloud is the lifecycle management of the service that runs on top, right? How does the service get deployed? How do you monitor performance? How do you lifecycle it? How do you secure it? That you can abstract, and that's the value, and eventually value will win. So customers will find where the value is: abstracting and making it uniform, or going deeper. >> How about identity? Take identity, for instance; that's an opportunity to abstract. Whether I use Microsoft identity or Okta, I can abstract that. >> Yeah, and then we have APIs and standards that we can use, so eventually I think where there is enough pain, the right open source will emerge to solve that problem. >> Dave: Yeah, and I can abstract things like object store, right? That's pretty simple. >> But back to the engineering question: developers, developers, developers. One thing about developer psychology is, if something's not right, they say, "Go get it fixed. I'm not touching it until you fix it." They're very sticky; if something's not working, they're not going to try it again, right? So you've got to get it right for developers. They'll maybe tolerate something new, but is the "juice worth the squeeze," as they say? So you can't go to them and say, "Hey, it's a work in progress. We're going to get our infrastructure together and the world's going to be great for you, but just hang tight." They're going to be like, "Get your shit together, then talk to me." So I think that, to me, is the question. It's an Ops question, but where's the value for the developer in Supercloud? The capabilities are there, there's less friction, it's simpler, it solves the complexity problem. I don't need highly skilled labor to manage Amazon; I've got services exposed.
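Dave's object-store remark is a good example of an abstraction that is cheap to build. Below is a minimal sketch of the kind of thin interface a platform engineering team might expose so application code never touches a cloud-specific SDK directly; the bucket names and payload are placeholders, and error handling is left out for brevity.

```python
# A minimal sketch of abstracting "put this object somewhere durable" across
# clouds. Bucket names and the payload are placeholders.
from typing import Protocol

import boto3
from google.cloud import storage as gcs


class ObjectStore(Protocol):
    def put(self, key: str, data: bytes) -> None: ...


class S3Store:
    def __init__(self, bucket: str):
        self._bucket = bucket
        self._s3 = boto3.client("s3")

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)


class GCSStore:
    def __init__(self, bucket: str):
        self._bucket = gcs.Client().bucket(bucket)

    def put(self, key: str, data: bytes) -> None:
        self._bucket.blob(key).upload_from_string(data)


def save_report(store: ObjectStore) -> None:
    # Application code is written once against the abstraction, not the SDKs.
    store.put("reports/latest.json", b'{"status": "ok"}')


save_report(S3Store("my-team-bucket"))   # or save_report(GCSStore("my-team-bucket"))
```

Identity, secrets, and lifecycle management follow the same pattern, just with more moving parts.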
>> Yeah, totally. >> Right, so they want to control, developers are like these little bratty kids, right? And they want Legos, like they want toys, right? Some of them want toys by way. They want Legos, they want to build there and they want make a mess out of it. So you got to make sure. My number one advice in this context is that do it up your application portfolio and, or your platform portfolio if you are an ISV, right? So if you are ISV you most probably, you're building a platform these days, do it up in a way that you can say this portion of our applications and our platform will adhere to what you are saying, standardization, you know, like Kubernetes, like slam dunk, you know, it works across clouds and in your data center hybrid, you know, whole nine yards, but there is some subset on the next door systems of innovation. Everybody has, it doesn't matter if you're DMV of Kansas or you are, you know, metaverse, right? Or Meta company, right, which is Facebook, they have it, they are building something new. For that, give them some freedom to choose different things like play with non-standard things. So that is the mantra for moving forward, for any enterprise. >> Do you think developers are happy with the infrastructure now or are they wanting people to get their act together? I mean, what's your reaction, or you think. >> Developers are happy as long as they can do their stuff, which is running code. They want to write code and innovate. So to me, when Ballmer said, "Developer, develop, Developer, what he meant was, all you other people get your act together so these developers can do their thing, and to me the Supercloud is the way for IT to get there and let developer be creative and go fast. Why not, without getting in trouble. >> Okay, let's wrap up this segment with a super clip. Okay, we're going to do a sound bite that we're going to make into a short video for each of you >> All right >> On you guys summarizing why Supercloud's important, why this next wave is relevant for the practitioners, for the industry and we'll turn this into an Instagram reel, YouTube short. So we'll call it a "Super clip. >> Alright, >> Sarbjeet, you want, you want some time to think about it? You want to go first? Vittorio, you want. >> I just didn't mind. (all laughing) >> No, okay, okay. >> I'll do it again. >> Go back. No, we got a fresh one. We'll going to already got that one in the can. >> I'll go. >> Sarbjeet, you go first. >> I'll go >> What's your super clip? >> In software systems, abstraction is your friend. I always say that. Abstraction is your friend, even if you're super professional developer, abstraction is your friend. We saw from the MFC library from C++ days till today. Abstract, use abstraction. Do not try to reinvent what's already being invented. Leverage cloud, leverage the platform side of the cloud. Not just infrastructure service, but platform as a service side of the cloud as well, and Supercloud is a meta platform built on top of these infrastructure services from three or four or five cloud providers. So use that and embrace the programmability, embrace the abstraction layer. That's the key actually, and developers who are true developers or professional developers as you said, they know that. >> Awesome. Great super clip. Vittorio, another shot at the plate here for super clip. Go. >> Multicloud is awesome. There's a reason why multicloud happened, is because gave our developers the ability to innovate fast and ever before. 
So if you are embarking on a digital transformation journey, which I call a survival journey, if you're not innovating and transforming, you're not going to be around in business three, five years from now. You have to adopt the Supercloud so the developer can be developer and keep building great, innovating digital experiences for your customers and IT can get in front of it and not get in trouble together. >> Building those super apps with Supercloud. That was a great super clip. Vittorio, thank you for sharing. >> Thanks guys. >> Sarbjeet, thanks for coming on talking about the developer impact Supercloud 2. On our next segment, coming up right now, we're going to hear from Walmart enterprise architect, how they are building and they are continuing to innovate, to build their own Supercloud. Really informative, instructive from a practitioner doing it in real time. Be right back with Walmart here in Palo Alto. Thanks for watching. (gentle music)

Published Date : Feb 17 2023


AWS Startup Showcase S3E1


 

(upbeat electronic music) >> Hello everyone, welcome to this CUBE conversation here from the studios in the CUBE in Palo Alto, California. I'm John Furrier, your host. We're featuring a startup, Astronomer. Astronomer.io is the URL, check it out. And we're going to have a great conversation around one of the most important topics hitting the industry, and that is the future of machine learning and AI, and the data that powers it underneath it. There's a lot of things that need to get done, and we're excited to have some of the co-founders of Astronomer here. Viraj Parekh, who is co-founder of Astronomer, and Paola Peraza Calderon, another co-founder, both with Astronomer. Thanks for coming on. First of all, how many co-founders do you guys have? >> You know, I think the answer's around six or seven. I forget the exact, but there's really been a lot of people around the table who've worked very hard to get this company to the point that it's at. We have long ways to go, right? But there's been a lot of people involved that have been absolutely necessary for the path we've been on so far. >> Thanks for that, Viraj, appreciate that. The first question I want to get out on the table, and then we'll get into some of the details, is take a minute to explain what you guys are doing. How did you guys get here? Obviously, multiple co-founders, sounds like a great project. The timing couldn't have been better. ChatGPT has essentially done so much public relations for the AI industry to kind of highlight this shift that's happening. It's real, we've been chronicalizing, take a minute to explain what you guys do. >> Yeah, sure, we can get started. So, yeah, when Viraj and I joined Astronomer in 2017, we really wanted to build a business around data, and we were using an open source project called Apache Airflow that we were just using sort of as customers ourselves. And over time, we realized that there was actually a market for companies who use Apache Airflow, which is a data pipeline management tool, which we'll get into, and that running Airflow is actually quite challenging, and that there's a big opportunity for us to create a set of commercial products and an opportunity to grow that open source community and actually build a company around that. So the crux of what we do is help companies run data pipelines with Apache Airflow. And certainly we've grown in our ambitions beyond that, but that's sort of the crux of what we do for folks. >> You know, data orchestration, data management has always been a big item in the old classic data infrastructure. But with AI, you're seeing a lot more emphasis on scale, tuning, training. Data orchestration is the center of the value proposition, when you're looking at coordinating resources, it's one of the most important things. Can you guys explain what data orchestration entails? What does it mean? Take us through the definition of what data orchestration entails. >> Yeah, for sure. I can take this one, and Viraj, feel free to jump in. So if you google data orchestration, here's what you're going to get. You're going to get something that says, "Data orchestration is the automated process" "for organizing silo data from numerous" "data storage points, standardizing it," "and making it accessible and prepared for data analysis." And you say, "Okay, but what does that actually mean," right, and so let's give sort of an an example. So let's say you're a business and you have sort of the following basic asks of your data team, right? 
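Since the rest of the conversation keeps coming back to data pipelines, here is a minimal sketch of the kind of pipeline Astronomer helps companies run with Apache Airflow: ingest, transform, and publish on a schedule. It uses Airflow's TaskFlow API; the task bodies are stand-ins for calls to ingestion, warehouse, and reverse-ETL tooling rather than real integrations, and the DAG name and tables are illustrative.

```python
# A minimal, hedged sketch of an hourly Airflow data pipeline. The task bodies
# are placeholders for real ingestion/transformation/publish steps.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule_interval="@hourly", start_date=datetime(2023, 1, 1), catchup=False)
def customer_metrics():

    @task
    def ingest_app_events() -> str:
        # e.g. trigger an ingestion sync from the app database into the warehouse
        return "raw.app_events"

    @task
    def build_active_customers(source_table: str) -> str:
        # e.g. run a warehouse transformation that standardizes the raw data
        return "analytics.active_customers"

    @task
    def publish_to_crm(model_table: str) -> None:
        # e.g. reverse-ETL the modeled customer list out to a CRM such as HubSpot
        print(f"publishing {model_table}")

    publish_to_crm(build_active_customers(ingest_app_events()))


customer_metrics()
```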
Okay, give me a dashboard in Sigma, for example, for the number of customers or monthly active users, and then make sure that that gets updated on an hourly basis. And then number two, a consistent list of active customers that I have in HubSpot so that I can send them a monthly product newsletter, right? Two very basic asks for all sorts of companies and organizations. And when that data team, which has data engineers, data scientists, ML engineers, data analysts get that request, they're looking at an ecosystem of data sources that can help them get there, right? And that includes application databases, for example, that actually have in product user behavior and third party APIs from tools that the company uses that also has different attributes and qualities of those customers or users. And that data team needs to use tools like Fivetran to ingest data, a data warehouse, like Snowflake or Databricks to actually store that data and do analysis on top of it, a tool like DBT to do transformations and make sure that data is standardized in the way that it needs to be, a tool like Hightouch for reverse ETL. I mean, we could go on and on. There's so many partners of ours in this industry that are doing really, really exciting and critical things for those data movements. And the whole point here is that data teams have this plethora of tooling that they use to both ingest the right data and come up with the right interfaces to transform and interact with that data. And data orchestration, in our view, is really the heartbeat of all of those processes, right? And tangibly the unit of data orchestration is a data pipeline, a set of tasks or jobs that each do something with data over time and eventually run that on a schedule to make sure that those things are happening continuously as time moves on and the company advances. And so, for us, we're building a business around Apache Airflow, which is a workflow management tool that allows you to author, run, and monitor data pipelines. And so when we talk about data orchestration, we talk about sort of two things. One is that crux of data pipelines that, like I said, connect that large ecosystem of data tooling in your company. But number two, it's not just that data pipeline that needs to run every day, right? And Viraj will probably touch on this as we talk more about Astronomer and our value prop on top of Airflow. But then it's all the things that you need to actually run data and production and make sure that it's trustworthy, right? So it's actually not just that you're running things on a schedule, but it's also things like CICD tooling, secure secrets management, user permissions, monitoring, data lineage, documentation, things that enable other personas in your data team to actually use those tools. So long-winded way of saying that it's the heartbeat, we think, of of the data ecosystem, and certainly goes beyond scheduling, but again, data pipelines are really at the center of it. >> One of the things that jumped out, Viraj, if you can get into this, I'd like to hear more about how you guys look at all those little tools that are out. You mentioned a variety of things. You look at the data infrastructure, it's not just one stack. You've got an analytic stack, you've got a realtime stack, you've got a data lake stack, you got an AI stack potentially. I mean you have these stacks now emerging in the data world that are fundamental, that were once served by either a full package, old school software, and then a bunch of point solution. 
You mentioned Fivetran there, I would say in the analytics stack. Then you got S3, they're on the data lake stack. So all these things are kind of munged together. >> Yeah. >> How do you guys fit into that world? You make it easier, or like, what's the deal? >> Great question, right? And you know, I think that one of the biggest things we've found in working with customers over the last however many years is that if a data team is using a bunch of tools to get what they need done, and the number of tools they're using is growing exponentially and they're kind of roping things together here and there, that's actually a sign of a productive team, not a bad thing, right? It's because that team is moving fast. They have needs that are very specific to them, and they're trying to make something that's exactly tailored to their business. So a lot of times what we find is that customers have some sort of base layer, right? That's kind of like, it might be they're running most of the things in AWS, right? And then on top of that, they'll be using some of the things AWS offers, things like SageMaker, Redshift, whatever, but they also might need things that their cloud can't provide. Something like Fivetran, or Hightouch, those are other tools. And where data orchestration really shines, and something that we've had the pleasure of helping our customers build, is how do you take all those requirements, all those different tools and whip them together into something that fulfills a business need? So that somebody can read a dashboard and trust the number that it says, or somebody can make sure that the right emails go out to their customers. And Airflow serves as this amazing kind of glue between that data stack, right? It's to make it so that for any use case, be it ELT pipelines, or machine learning, or whatever, you need different things to do them, and Airflow helps tie them together in a way that's really specific for a individual business' needs. >> Take a step back and share the journey of what you guys went through as a company startup. So you mentioned Apache, open source. I was just having an interview with a VC, we were talking about foundational models. You got a lot of proprietary and open source development going on. It's almost the iPhone/Android moment in this whole generative space and foundational side. This is kind of important, the open source piece of it. Can you share how you guys started? And I can imagine your customers probably have their hair on fire and are probably building stuff on their own. Are you guys helping them? Take us through, 'cause you guys are on the front end of a big, big wave, and that is to make sense of the chaos, rain it in. Take us through your journey and why this is important. >> Yeah, Paola, I can take a crack at this, then I'll kind of hand it over to you to fill in whatever I miss in details. But you know, like Paola is saying, the heart of our company is open source, because we started using Airflow as an end user and started to say like, "Hey wait a second," "more and more people need this." Airflow, for background, started at Airbnb, and they were actually using that as a foundation for their whole data stack. Kind of how they made it so that they could give you recommendations, and predictions, and all of the processes that needed orchestrated. Airbnb created Airflow, gave it away to the public, and then fast forward a couple years and we're building a company around it, and we're really excited about that. >> That's a beautiful thing. 
That's exactly why open source is so great. >> Yeah, yeah. And for us, it's really been about watching the community and our customers take these problems, find a solution to those problems, standardize those solutions, and then building on top of that, right? So we're reaching to a point where a lot of our earlier customers who started to just using Airflow to get the base of their BI stack down and their reporting in their ELP infrastructure, they've solved that problem and now they're moving on to things like doing machine learning with their data, because now that they've built that foundation, all the connective tissue for their data arriving on time and being orchestrated correctly is happening, they can build a layer on top of that. And it's just been really, really exciting kind of watching what customers do once they're empowered to pick all the tools that they need, tie them together in the way they need to, and really deliver real value to their business. >> Can you share some of the use cases of these customers? Because I think that's where you're starting to see the innovation. What are some of the companies that you're working with, what are they doing? >> Viraj, I'll let you take that one too. (group laughs) >> So you know, a lot of it is... It goes across the gamut, right? Because it doesn't matter what you are, what you're doing with data, it needs to be orchestrated. So there's a lot of customers using us for their ETL and ELT reporting, right? Just getting data from other disparate sources into one place and then building on top of that. Be it building dashboards, answering questions for the business, building other data products and so on and so forth. From there, these use cases evolve a lot. You do see folks doing things like fraud detection, because Airflow's orchestrating how transactions go, transactions get analyzed. They do things like analyzing marketing spend to see where your highest ROI is. And then you kind of can't not talk about all of the machine learning that goes on, right? Where customers are taking data about their own customers, kind of analyze and aggregating that at scale, and trying to automate decision making processes. So it goes from your most basic, what we call data plumbing, right? Just to make sure data's moving as needed, all the ways to your more exciting expansive use cases around automated decision making and machine learning. >> And I'd say, I mean, I'd say that's one of the things that I think gets me most excited about our future, is how critical Airflow is to all of those processes, and I think when you know a tool is valuable is when something goes wrong and one of those critical processes doesn't work. And we know that our system is so mission critical to answering basic questions about your business and the growth of your company for so many organizations that we work with. So it's, I think, one of the things that gets Viraj and I and the rest of our company up every single morning is knowing how important the work that we do for all of those use cases across industries, across company sizes, and it's really quite energizing. >> It was such a big focus this year at AWS re:Invent, the role of data. And I think one of the things that's exciting about the open AI and all the movement towards large language models is that you can integrate data into these models from outside. So you're starting to see the integration easier to deal with. Still a lot of plumbing issues. So a lot of things happening. 
So I have to ask you guys, what is the state of the data orchestration area? Is it ready for disruption? Has it already been disrupted? Would you categorize it as a new first inning kind of opportunity, or what's the state of the data orchestration area right now? Both technically and from a business model standpoint. How would you guys describe that state of the market? >> Yeah, I mean, I think in a lot of ways, in some ways I think we're category creating. Schedulers have been around for a long time. I released a data presentation sort of on the evolution of going from something like Kron, which I think was built in like the 1970s out of Carnegie Mellon. And that's a long time ago, that's 50 years ago. So sort of like the basic need to schedule and do something with your data on a schedule is not a new concept. But to our point earlier, I think everything that you need around your ecosystem, first of all, the number of data tools and developer tooling that has come out industry has 5X'd over the last 10 years. And so obviously as that ecosystem grows, and grows, and grows, and grows, the need for orchestration only increases. And I think, as Astronomer, I think we... And we work with so many different types of companies, companies that have been around for 50 years, and companies that got started not even 12 months ago. And so I think for us it's trying to, in a ways, category create and adjust sort of what we sell and the value that we can provide for companies all across that journey. There are folks who are just getting started with orchestration, and then there's folks who have such advanced use case, 'cause they're hitting sort of a ceiling and only want to go up from there. And so I think we, as a company, care about both ends of that spectrum, and certainly want to build and continue building products for companies of all sorts, regardless of where they are on the maturity curve of data orchestration. >> That's a really good point, Paola. And I think the other thing to really take into account is it's the companies themselves, but also individuals who have to do their jobs. If you rewind the clock like 5 or 10 years ago, data engineers would be the ones responsible for orchestrating data through their org. But when we look at our customers today, it's not just data engineers anymore. There's data analysts who sit a lot closer to the business, and the data scientists who want to automate things around their models. So this idea that orchestration is this new category is right on the money. And what we're finding is the need for it is spreading to all parts of the data team, naturally where Airflow's emerged as an open source standard and we're hoping to take things to the next level. >> That's awesome. We've been up saying that the data market's kind of like the SRE with servers, right? You're going to need one person to deal with a lot of data, and that's data engineering, and then you're got to have the practitioners, the democratization. Clearly that's coming in what you're seeing. So I have to ask, how do you guys fit in from a value proposition standpoint? What's the pitch that you have to customers, or is it more inbound coming into you guys? Are you guys doing a lot of outreach, customer engagements? I'm sure they're getting a lot of great requirements from customers. What's the current value proposition? How do you guys engage? >> Yeah, I mean, there's so many... Sorry, Viraj, you can jump in. So there's so many companies using Airflow, right? 
So the baseline is that the open source project that is Airflow that came out of Airbnb, over five years ago at this point, has grown exponentially in users and continues to grow. And so the folks that we sell to primarily are folks who are already committed to using Apache Airflow, need data orchestration in their organization, and just want to do it better, want to do it more efficiently, want to do it without managing that infrastructure. And so our baseline proposition is for those organizations. Now to Viraj's point, obviously I think our ambitions go beyond that, both in terms of the personas that we addressed and going beyond that data engineer, but really it's to start at the baseline, as we continue to grow our our company, it's really making sure that we're adding value to folks using Airflow and help them do so in a better way, in a larger way, in a more efficient way, and that's really the crux of who we sell to. And so to answer your question on, we get a lot of inbound because they're... >> You have a built in audience. (laughs) >> The world that use it. Those are the folks who we talk to and come to our website and chat with us and get value from our content. I mean, the power of the opensource community is really just so, so big, and I think that's also one of the things that makes this job fun. >> And you guys are in a great position. Viraj, you can comment a little, get your reaction. There's been a big successful business model to starting a company around these big projects for a lot of reasons. One is open source is continuing to be great, but there's also supply chain challenges in there. There's also we want to continue more innovation and more code and keeping it free and and flowing. And then there's the commercialization of productizing it, operationalizing it. This is a huge new dynamic, I mean, in the past 5 or so years, 10 years, it's been happening all on CNCF from other areas like Apache, Linux Foundation, they're all implementing this. This is a huge opportunity for entrepreneurs to do this. >> Yeah, yeah. Open source is always going to be core to what we do, because we wouldn't exist without the open source community around us. They are huge in numbers. Oftentimes they're nameless people who are working on making something better in a way that everybody benefits from it. But open source is really hard, especially if you're a company whose core competency is running a business, right? Maybe you're running an e-commerce business, or maybe you're running, I don't know, some sort of like, any sort of business, especially if you're a company running a business, you don't really want to spend your time figuring out how to run open source software. You just want to use it, you want to use the best of it, you want to use the community around it, you want to be able to google something and get answers for it, you want the benefits of open source. You don't have the time or the resources to invest in becoming an expert in open source, right? And I think that dynamic is really what's given companies like us an ability to kind of form businesses around that in the sense that we'll make it so people get the best of both worlds. You'll get this vast open ecosystem that you can build on top of, that you can benefit from, that you can learn from. But you won't have to spend your time doing undifferentiated heavy lifting. You can do things that are just specific to your business. >> It's always been great to see that business model evolve. 
We used to debate 10 years ago, can there be another Red Hat? And we said, not really the same, but there'll be a lot of little ones that'll grow up to be big soon. Great stuff. Final question, can you guys share the history of the company? The milestones of Astronomer's journey in data orchestration? >> Yeah, we could. So yeah, I mean, I think, so Viraj and I have obviously been at Astronomer along with our other founding team and leadership folks for over five years now. And it's been such an incredible journey of learning, of hiring really amazing people, solving, again, mission critical problems for so many types of organizations. We've had some funding that has allowed us to invest in the team that we have and in the software that we have, and that's been really phenomenal. And so that investment, I think, keeps us confident, even despite these sort of macroeconomic conditions that we're finding ourselves in. And so honestly, the milestones for us are focusing on our product, focusing on our customers over the next year, focusing on that market for us that we know can get value out of what we do, and making developers' lives better, and growing the open source community and making sure that everything that we're doing makes it easier for folks to get started, to contribute to the project and to feel a part of the community that we're cultivating here. >> You guys raised a little bit of money. How much have you guys raised? >> I don't know what the total is, but it's in the ballpark of over $200 million. It feels good to... >> A little bit of capital. Got a little bit of capital to work with there. Great success. I know the Series C financing is done, so you're up and running. What's next? What are you guys looking to do? What's the big horizon look like for you from a vision standpoint, more hiring, more product, what are some of the key things you're looking at doing? >> Yeah, it's really a little of all of the above, right? Kind of one of the best and worst things about working at earlier stage startups is there's always so much to do and you often have to just kind of figure out a way to get everything done. But really, it's investing in our product over the course of our company's lifetime. And there's a lot of ways we want to make it more accessible to users, easier to get started with, easier to use, kind of on all areas there. And really, we really want to do more for the community, right, like I was saying, we wouldn't be anything without the large open source community around us. And we want to figure out ways to give back more in more creative ways, in more code driven ways, in more kinds of events, and everything else we can do to keep those folks galvanized and just keep them happy using Airflow. >> Paola, any final words as we close out? >> No, I mean, I'm super excited. I think we'll keep growing the team this year. We've got a couple of offices in the US, which we're excited about, and a fully global team that will only continue to grow. So Viraj and I are both here in New York, and we're excited to be engaging with our coworkers in person finally, after years of not doing so. We've got a bustling office in San Francisco as well. So growing those teams and continuing to hire all over the world, and really focusing on our product and the open source community is where our heads are at this year. So, excited. >> Congratulations. $200 million in funding, plus. Good runway, put that money in the bank, squirrel it away.
It's a good time to kind of get some good interest on it, but still grow. Congratulations on all the work you guys do. We appreciate you and the open source community does, and good luck with the venture, continue to be successful, and we'll see you at the Startup Showcase. >> Thank you. >> Yeah, thanks so much, John. Appreciate it. >> Okay, that's the CUBE Conversation featuring astronomer.io, that's the website. Astronomer is doing well. Multiple rounds of funding, over 200 million in funding. Open source continues to lead the way in innovation. Great business model, good solution for the next gen cloud scale data operations, data stacks that are emerging. I'm John Furrier, your host, thanks for watching. (soft upbeat music)
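The data pipelines discussed in this conversation are Apache Airflow DAGs, which are defined in Python. As a rough illustration only, here is a minimal sketch in the Airflow 2.x TaskFlow style; the pipeline name, schedule, and task bodies are hypothetical placeholders rather than anything specific to Astronomer's product.

# Minimal, hypothetical Airflow 2.x DAG: extract, transform, and load on an hourly schedule.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@hourly", start_date=datetime(2023, 1, 1), catchup=False)
def customer_metrics():
    @task
    def extract() -> list[dict]:
        # Placeholder: pull rows from an application database or a third-party API.
        return [{"customer_id": 1, "active": True}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Placeholder: standardize the records before loading.
        return [r for r in rows if r["active"]]

    @task
    def load(rows: list[dict]) -> None:
        # Placeholder: write the results to a warehouse table that feeds a dashboard.
        print(f"loaded {len(rows)} rows")

    load(transform(extract()))


customer_metrics()

In practice, each placeholder task would wrap one of the ecosystem tools named in the interview (ingestion, warehouse, transformation), which is exactly the glue role the founders describe for orchestration.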

Published Date : Feb 14 2023




Why Should Customers Care About SuperCloud


 

Hello and welcome back to Supercloud 2, where we examine the intersection of cloud and data in the 2020s. My name is Dave Vellante. Our Supercloud panel, our power panel is back. Maribel Lopez is the founder and principal analyst at Lopez Research. Sanjeev Mohan is a former Gartner analyst and principal at SanjMo. And Keith Townsend is The CTO Advisor. Folks, welcome back and thanks for your participation today. Good to see you. >> Okay, great. >> Great to see you. >> Thanks. Let me start, Maribel, with you. Bob Muglia, we had a conversation as part of Supercloud the other day. And he said, "Dave, I like the work, you got to simplify this a little bit." So he said, quote, "A Supercloud is a platform." He said, "Think of it as a platform that provides programmatically consistent services hosted on heterogeneous cloud providers." And then Nelu Mihai said, "Well, wait a minute. This is just going to create more stovepipes. We need more standards in an architecture," which is kind of what the Berkeley Sky Computing initiative is all about. So there's a sort of a debate going on. Is supercloud an architecture, a platform? Or maybe it's just another buzzword. Maribel, do you have a thought on this? >> Well, the easy answer would be to say it's just a buzzword. And then we could just kill the conversation and be done with it. But I think the term, it's more than that, right? The term actually isn't new. You can go back to at least 2016 and find references to supercloud from Cornell University or in other documents. So, having said this, I think we've been talking about Supercloud for a while, so I assume it's more than just a fancy buzzword. But I think it really speaks to that undeniable trend of moving towards an abstraction layer to deal with the chaos of what we consider managing multiple public and private clouds today, right? So one definition of a technology platform speaks to a set of services that allows companies to build and run that technology smoothly without worrying about the underlying infrastructure, which really gets back to something that Bob said. And some of the question is where that lives. And you could call that an abstraction layer. You could call it cross-cloud services, hybrid cloud management. So I see momentum there, like legitimate momentum with enterprise IT buyers that are trying to deal with the fact that they have multiple clouds now. So where I think we're moving is trying to define what are the specific attributes and frameworks of that that would make it so that it could be consistent across clouds. What is that layer? And maybe that's what the supercloud is. But one of the things I struggle with with supercloud is: what are we really trying to do here? Are we trying to create differentiated services in the supercloud layer? Is a supercloud just another variant of what AWS, GCP, or others do? You've spoken to Walmart about its cloud native platform, and that's an example of somebody deciding to do it themselves because they need to deal with this today and not wait for some big standards thing to happen. So whatever it is, I do think it's something. I think maybe we're trying to create an architecture out of it, that would be a better way of saying it, so that it does get to that set of principles, but it also needs to be edge aware. I think whenever we talk about supercloud, we're always talking about like the big centralized cloud. And I think we need to think about all the distributed clouds that we're looking at in edge as well.
So that might be one of the ways that supercloud evolves. >> So thank you, Maribel. Keith, Brian Gracely, Gracely's law, things kind of repeat themselves. We've seen it all before. And so what Muglia brought to the forefront is this idea of a platform where the platform provider is really responsible for the architecture. Of course, the drawback is then you get a bunch of stovepipe architectures. But practically speaking, that's kind of the way the industry has always evolved, right? >> So if we look at this from the practitioner's perspective and we talk about platforms, traditionally vendors have provided the platforms for us, whether it's a distribution of Linux managed by or provided by Red Hat, Windows Server, .NET, databases, Oracle. We think of those as platforms, things that are fundamental that we can build on top of. Supercloud isn't that today. It is a framework or idea, kind of a visionary goal to get to a point that we can have a platform or a framework. But what we're seeing repeated throughout the industry in customers, whether it's the Walmarts that have kind of supersized the idea of supercloud, or regular end user organizations that are coming out with platform groups, groups who normalize cloud native infrastructure, AWS multi-cloud, VMware resources to look like one thing internally to their developers. We're seeing this trend that there's a desire for a platform that provides the capabilities of a supercloud. >> Thank you for that. Sanjeev, we often use Snowflake as a supercloud example, and that would presumably be a platform with an architecture that's determined by the vendor. Maybe Databricks is pushing for a more open architecture, maybe more of that nirvana that we were talking about before to solve for supercloud. But regardless, the practitioner discussions show that, at least currently, there's not a lot of cross-cloud data sharing. I think it could be a killer use case, but egress charges are a barrier. But how do you see it? Will that change? Will we hide that underlying complexity and start sharing data across clouds? Is that something that you think Snowflake or others will be able to achieve? >> So I think we are already starting to see some of that happen. Snowflake is definitely one example that gets cited a lot. But even MongoDB, which we don't talk about in this light, you could have a MongoDB cluster, for instance, with nodes sitting in different cloud providers. So there are companies that are starting to do it. The advantage that these companies have, let's take Snowflake as an example, is that it's a centralized proprietary platform. And they are building the capabilities that are needed for supercloud. So they're building things like you can push down your data transformations. They have the entire security and privacy suite. Data ops, they're adding those capabilities. And if I'm not mistaken, it'll be very soon that we will see them offer data observability. So it all works great as long as you are in one platform. And if you want resilience, then Snowflake, Supercloud, great example. But if your primary goal is to choose the most cost-effective service irrespective of which cloud it sits in, then things start falling sideways. For example, I may be a very big Snowflake user. And I like Snowflake's resilience. I can move from one cloud to another cloud. Snowflake does it for me. But what if I want to train a very large model? Maybe Databricks is a better platform for that. So how do I move my workload from one platform to another platform?
That tooling does not exist. So we need some sort of hybrid, cross-cloud data ops platform. Walmart has done a great job, but they built it by themselves. Not every company is Walmart. Like Maribel and Keith said, we need standards, we need reference architectures, we need some sort of a cost control. I was just reading recently, Accenture has been public about their AWS bill. Every time they get the bill, it is tens of millions of lines, tens of millions, 'cause there are over a thousand teams using AWS. If we have not been able to corral the usage of a single cloud, now we're talking about supercloud, we've got multiple clouds, and hybrid, on-prem, and edge. So till we've got some cross-platform tooling in place, I think this will still take quite some time for it to take shape. >> It's interesting. Maribel, Walmart would tell you that their on-prem infrastructure is cheaper to run than the stuff in the cloud, but at the same time, they want the flexibility and the resiliency of their three-legged stool model. So there's the point Sanjeev was making about hybrid. It's an interesting balance, isn't it, between getting your lowest cost and at the same time having best of breed and scale? >> It's basically what you're trying to optimize for, as you said, right? And by the way, to the earlier point, not everybody is at Walmart's scale, so not everybody has the purchasing power that makes it cheaper to have it on-prem than in the cloud. But I think what you see almost every company, large or small, moving towards is this concept of like, where do I find the agility? And is the agility in building the infrastructure for me? And typically, the thing that gives you outsized advantage as an organization is not how you constructed your cloud computing infrastructure. It might be how you structured your data analytics, as an example, which the cloud is related to. But how do you marry those two things? And getting back to sort of Sanjeev's point. We're in a real struggle now where on one hand we want to have best of breed services and on the other hand we want it to be really easy to manage, secure, do data governance. And those two things are really at odds with each other right now. So if you want all the knobs and switches of a service like geospatial analytics in BigQuery, you're going to have to use Google tools, right? Whereas if you want visibility across all the clouds for your application estate and understand the security and governance of that, you're kind of looking for something that's more cross-cloud tooling at that point. But whenever you talk to somebody about cross-cloud tooling, they look at you like that's not really possible. So it's a very interesting time in the market. Now, we're kind of layering this concept of supercloud on it. And some people think supercloud's about basically multi-cloud tooling, and some people think it's about a whole new architectural stack. So we're just not there yet. But it's not all about cost. I mean, cloud has not been about cost for a very, very long time. Cloud has been about how do you really make the most of your data. And this gets back to cross-cloud services like Snowflake. Why did they even exist? They existed because we had data everywhere, but we needed to treat data as a unified object so that we can analyze it and get insight from it. And so that's where some of the benefit of these cross-cloud services is moving today. Still a long way to go, though, Dave.
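To make the gap Sanjeev describes concrete, the cross-platform tooling he says does not exist yet would have to sit above the individual engines and pick a target per workload. The sketch below is purely hypothetical: every class, platform name, and cost figure is invented for illustration, and it glosses over the hard parts named here, such as cost visibility, security, and data movement.

# Purely hypothetical sketch of a cross-cloud workload router; no real vendor APIs are used.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Workload:
    name: str
    kind: str                          # e.g. "sql_analytics" or "model_training"
    estimated_cost: Dict[str, float]   # hypothetical per-platform cost estimates


class CrossCloudRouter:
    """Chooses a target platform per workload, the piece the panel says is still missing."""

    def __init__(self) -> None:
        self._runners: Dict[str, Callable[[Workload], None]] = {}

    def register(self, platform: str, runner: Callable[[Workload], None]) -> None:
        self._runners[platform] = runner

    def submit(self, workload: Workload) -> str:
        # Naive policy: pick the registered platform with the lowest estimated cost.
        candidates = {p: c for p, c in workload.estimated_cost.items() if p in self._runners}
        target = min(candidates, key=candidates.get)
        self._runners[target](workload)
        return target


# Usage sketch: real runner bodies would wrap each vendor's own client libraries.
router = CrossCloudRouter()
router.register("warehouse_a", lambda w: print(f"running {w.name} on warehouse A"))
router.register("lakehouse_b", lambda w: print(f"running {w.name} on lakehouse B"))

chosen = router.submit(Workload(
    name="train_large_model",
    kind="model_training",
    estimated_cost={"warehouse_a": 120.0, "lakehouse_b": 45.0},
))
print(f"routed to {chosen}")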
>> Keith, I reached out to my friends at ETR given the macro headwinds. And you're right, Maribel, cloud hasn't really been just about cost savings. But I reached out to the ETR guys: what does your data show in terms of how customers are dealing with the economic headwinds? And they said, by far, their number one strategy to cut cost is consolidating redundant vendors. And a distant second, but still notable, was optimizing cloud costs, maybe using reserved instances, or using more volume buying. Repatriation was nowhere in there. And I asked them, "Could you go look and see if you can find it? Do we see repatriation?" And you hear this a lot. You hear people whispering as analysts, "You better look into that repatriation trend." It's pretty big. You can't find it. But some of the Walmarts in the world, maybe they're not even repatriating, but they may have a better cost structure on-prem. Keith, what are you seeing from the practitioners that you talk to in terms of how they're dealing with these headwinds? >> Yeah, I just got into a conversation about this just this morning with (indistinct) who is an analyst over at GigaOm. He's reading the same headlines. Repatriation is happening at large scale. I think this is kind of, we have these quiet terms now. We have quiet quitting, we have quiet hiring. I think we have quiet repatriation. Most people haven't done away with their data centers. They're still there. Whether they're completely on-premises data centers and they own assets, or they're partnerships with QTS, Equinix, et cetera, they have these private cloud resources. What I'm seeing practically is a rebalancing of workloads. Do I really need to pay AWS for this instance of SAP that's on 24 hours a day versus just having it on-prem, moving it back to my data center? I've talked to quite a few customers who were early on in moving their static SAP workloads onto the public cloud, and they simply moved them back. Surprisingly, I was at VMware Explore. And we can talk about this a little bit later on. But our customers, net new, not a lot that were born in the cloud. And they get to this point where their workloads are static. And they look at something like Kubernetes, or OpenShift, or VMware Tanzu. And they ask the question, "Do I need the scalability of cloud?" I might consider being a net new VMware customer to deliver this base capability. So are we seeing repatriation as the number one reason? No, I think internal IT operations have just naturally come to this realization. Hey, I have these resources on premises. The private cloud technologies have moved far enough along that I can just simply move this workload back. I'm not calling it repatriation, I'm calling it rightsizing for the operating model that I have. >> Makes sense. Yeah. >> Go ahead. >> If I may add something, Dave, while we are on this topic of repatriation. I'm actually surprised that we are talking about repatriation as a very big thing. I think repatriation is happening, no doubt, but it's such a small percentage of cloud migration that to me it's a rounding error in my opinion. I think there's a bigger problem. The problem is that people don't know where the cost is. If they knew where the cost was being wasted in the cloud, they could do something about it. But if you don't know, then the easy answer is that cloud costs a lot, so move it back to on-premises. I mean, take Capital One as an example. They got rid of all their data centers. Where are they going to repatriate to? They're all in the cloud at this point.
So I think my point is that the reason data observability is one of the places that has seen a lot of traction is cost. Data observability, when it first came into existence, was all about data quality. Then it was all about data pipeline reliability. And now, the number one killer use case is FinOps. >> Maribel, you had a comment? >> Yeah, I'm kind of in violent agreement with both Sanjeev and Keith. So what are we seeing here? So the first thing that we see is that many people wildly overspent in the big public cloud. They had stranded cloud credits, so to speak. The second thing is, some of them still had infrastructure that was useful. So why not use it if you find the right workloads, to what Keith was talking about, if they were more static workloads, if it was already there? So there is a balancing that's going on. And then I think fundamentally, from a trend standpoint, these things aren't binary. For a while, everything was going to go to the public cloud, and then people are like, "Oh, it's kind of expensive." Then they're like, "Oh no, they're going to bring it all on-prem 'cause it's really expensive." And it's like, "Well, that doesn't necessarily get me some of the new features and functionalities I might want for some of my new workloads." So I'm going to put the workloads that have a certain set of characteristics that require cloud in the cloud. And if I have enough capability on-prem and enough IT resources to manage certain things on site, then I'm going to do that there 'cause that's a more cost-effective thing for me to do. It's not binary. That's why we went to hybrid. And then we went to multi just to describe the fact that people added multiple public clouds. And now we're talking about super, right? So I don't look at it as a one-size-fits-all for any of this. >> A number of practitioners leading up to Supercloud2 have told us that they're solving their cloud complexity by going monocloud. So they're putting on the blinders. Even though across the organization, there's other groups using other clouds. You're like, "In my group, we use AWS, or my group, we use Azure. And those guys over there, they use Google. We just kind of keep it separate." Are you guys hearing this? In your view, is that risky? Are they missing out on some potential to tap best of breed? What do you guys think about that? >> Everybody thinks they're monocloud. Is anybody really monocloud? It's like a group is monocloud, right? >> Right. >> This genie is out of the bottle. We're not putting the genie back in the bottle. You might think you're monocloud and you go like three doors down and figure out the guy or gal is on a fundamentally different cloud, running some analytics workload that you didn't know about. So, to Sanjeev's earlier point, they don't even know where their cloud spend is. So I think the concept of monocloud, how that's actually really realized by practitioners, is primary and then secondary clouds. So they have a primary cloud that they run most of their stuff on, and that they try to optimize. And we still have forked workloads. Somebody decides, "Okay, this SAP runs really well on this, or these analytics workloads run really well on that cloud." And maybe that's how they parse it. But if you really looked at it, there's very few companies, if you really peeked under the hood and did an analysis, where you could find an actual monocloud structure. They just want to pull it back in and make it more manageable. And I respect that.
You want to do what you can to try to streamline the complexity of that. >> Yeah, we're- >> Sorry, go ahead, Keith. >> Yeah, we're doing this thing where we review an AWS service every day. Just in your inbox, learn about a new AWS service, at a cursory level. There's 238 AWS products just on the AWS cloud itself. Some of them are redundant, but you get the idea. So the concept of monocloud, I'm in violent agreement with Maribel on this: yes, a group might say I want a primary cloud. And that primary cloud may be AWS. But have you tried to license Oracle Database on AWS? It is really tempting to license Oracle on Oracle Cloud, Microsoft on Microsoft. And I can't get RDS anywhere but Amazon. So while I'm driven to desire the simplicity, the reality is, whether it be M&A, licensing, or data sovereignty, I am forced into a multi-cloud management style. But I do agree most people kind of do this one, this primary cloud, secondary cloud. And I guarantee you're going to have a third cloud or a fourth cloud whether you want to or not, via shadow IT, latency, technical reasons, et cetera. >> Thank you. Sanjeev, you had a comment? >> Yeah, so I just wanted to mention, as an organization, I'm in complete agreement, no organization is monocloud, at least if it's a large organization. Large organizations use all kinds of combinations of cloud providers. But when you talk about a single workload, that's where the problem arises. As Keith said, there are 238 services in AWS. How in the world am I going to be an expert in AWS, but then say let me bring GCP or Azure into a single workload? And that's where I think we probably will still see monocloud as being predominant, because the team has developed its expertise on a particular cloud provider, and they just don't have the time of day to go learn yet another stack. However, there are some interesting things that are happening. For example, if you look at a multi-cloud example where Oracle and Microsoft Azure have that interconnect, that's a beautiful thing that they've done, because now, in the newest iteration, it's literally a few clicks. And then behind the scenes, your .NET application and your Oracle database in OCI will be configured, the identities in Active Directory are federated. And you can just start using a database in one cloud, which is OCI, and an application, your .NET, in Azure. So till we see this kind of a solution coming out of the providers, I think it is unrealistic to expect the end users to be able to figure out multiple clouds. >> Well, I have to share with you. I can't remember if he said this on camera or if it was off camera, so I'll hold off. I won't tell you who it is, but this individual was sort of complaining a little bit, saying, "With AWS, I can take their best AI tools like SageMaker and I can run them on my Snowflake." He said, "I can't do that in Google. Google forces me to go to BigQuery if I want their excellent AI tools." So he was sort of pushing, kind of tweaking a little bit, some of the vendor talk that, "Oh yeah, we're so customer-focused." Not to pick on Google, but I mean everybody will say that. And then you say, "If you're so customer-focused, why wouldn't you do ABC?" So it's going to be interesting to see who leads that integration and how broadly it's applied. But I digress. Keith, at our first supercloud event, that was on August 9th. And it was only a few months after Broadcom announced the VMware acquisition. A lot of people, myself included, said, "All right, cuts are coming."
Generally, Tanzu is probably going to be under the radar, but at Supercloud 22 and presumably VMware Explore, the company really... well, certainly in the US, touted its Tanzu capabilities. I wasn't at VMware Explore Europe, but I bet you heard similar things. Hock Tan has been blogging and very vocal about cross-cloud services and multi-cloud, which doesn't happen without Tanzu. So what did you hear, Keith, in Europe? What's your latest thinking on VMware's prospects in cross-cloud services/supercloud? >> So I think our friend and longtime CUBE co-host will be even more offended at this statement than he was when I said it on theCUBE. This was maybe five years ago. There's no company better suited to help industries or companies cross the cloud chasm than VMware. That's not a compliment. That's a reality of the industry. This is a very difficult, almost intractable problem. What I heard at VMware Explore Europe was customers serious about this problem, even more so than in the US. Data sovereignty is a real problem in the EU. Try being a company in Switzerland and having the Swiss data sovereignty issues. And there's no local cloud presence there large enough to accommodate your data needs. They had very serious questions about this. I talked to open source project leaders. Open source project leaders were asking me, why should I use the public cloud to host Kubernetes-based workloads, my projects that are building around Kubernetes, and the CNCF infrastructure? Why should I use AWS, Google, or even Azure to host these projects when that's undifferentiated? I know how to run Kubernetes, so why not run it on-premises? I don't want to deal with the hardware problems. So again, really great questions. And then there was always the specter of the problem, I think, we all had with the acquisition of VMware by Broadcom potentially. 4.5 billion in increased profitability in three years is an unbelievable amount of money when you look at the size of the problem. So a lot of the conversation in Europe was about industry at large. How do we do what regulators are asking us to do in a practical way from a true technology sense? Is VMware cross-cloud great? >> Yeah. So, VMware, obviously, to your point. OpenStack is another example of it. Actually, OpenStack uptake is still alive and well, especially in those regions where there may not be a public cloud, or there's public policy dictating that. Walmart's using OpenStack. As you know in IT, some things never die. Question for Sanjeev. And it relates to this new breed of data apps. And Bob Muglia and Tristan Handy from DBT Labs, who are participating in this program, really got us thinking about this. You got data that resides in different clouds, maybe even on-prem. And the machine pulls data from different systems. No humans involved, e-commerce, ERP, et cetera. It creates a plan, outcomes. No human involvement. Today, you're on a CRM system, you're inputting, you're doing forms, you're automating processes. We're talking about a new breed of apps. What are your thoughts on this? Is it real? Is it just way off in the distance? How does machine intelligence fit in? And how does supercloud fit? >> So great point. In fact, the data apps that you're talking about, I call them data products. Data products first came into the limelight in the last couple of years when Zhamak Dehghani started talking about data mesh. I am taking data products out of the data mesh concept because whether data mesh happens or not is orthogonal to data products.
Data products, basically, are taking a product management view of bringing data from different sources based on what the consumer needs. We were talking earlier today about maybe it's my vacation rentals, or it may be a retail data product, it may be an investment data product. So it's a pre-packaged extraction of data from different sources. But now I have a product that has a whole lifecycle. I can version it. I have new features that get added. And it's very much centered on the business data consumer. It uses machine learning. For instance, I may be able to tell whether this data product has stale data. Who is using that data? Based on the usage of the data, I may have new data products that get allocated. I may even have the ability to take existing data products, mash them up into something that I need. So if I'm going to have that kind of power to create a data product, then having a common substrate underneath can be very useful. And that could be supercloud, where I am making API calls. I don't care where the ERP, the CRM, the survey data, the pricing engine sit. For me, there's a logical abstraction. And then I'm building my data product on top of that. So I see a new breed of data products coming out. To answer your question, how early are we, or is this even possible? My prediction is that in 2023, we will start seeing more data products. And then it'll take maybe two to three years for data products to become mainstream. But it's starting this year. >> Subprime mortgages were a data product, and there were definitely humans involved. All right, let's talk about some of the supercloud, multi-cloud players and what their future looks like. You can kind of pick your favorites. VMware, Snowflake, Databricks, Red Hat, Cisco, Dell, HP, Hashi, IBM, CloudFlare. There's many others: Cohesity, Rubrik. Keith, I wanted to start with CloudFlare because they actually use the term supercloud. And just simplifying what they said, they look at it as taking serverless to the max. You write your code and then you can deploy it in seconds worldwide, of course, across the CloudFlare infrastructure. You don't have to spin up containers, you don't have to provision instances. CloudFlare worries about all that infrastructure. What are your thoughts on CloudFlare, this approach, and their chances to disrupt the current cloud landscape? >> As Larry Ellison said famously once before, the network is the computer, right? I thought that was Scott McNealy. >> It wasn't Scott McNealy. I knew it was an Oracle line. >> Oracle owns that now, owns that line. >> By purpose or acquisition. >> They should have just called it cloud. >> Yeah, they should have just called it cloud. >> Easier. >> Get ahead. >> But if you think about the CloudFlare capability, CloudFlare in its own right is becoming a decent sized cloud provider. If you have compute out at the edge, when we talk about edge in the sense of CloudFlare and points of presence, literally across the globe, you have all of this excess compute, what do you do with it? First offering, let's disrupt data in the cloud. We can start the conversation talking about data. When they say we're going to give you object-oriented or object storage in the cloud without egress charges, that's disruptive. Then we can start to think about the supercloud capability of having EC2 compute run in AWS, pushing and pulling data from CloudFlare. And now, I've disrupted this roach motel data structure, and I'm freely giving away bandwidth, basically.
Well, the next layer is not that much more difficult. And I think part of CloudFlare's serverless approach, or supercloud approach, is so that they don't have to commit to a certain type of compute. It is advantageous. It is a feature for me to be able to go to EC2 and pick a memory heavy model, or a compute heavy model, or a network heavy model. CloudFlare has taken away those knobs, and I'm just giving it code and allowing that to run. CloudFlare has a massive network. If I can put the code, using CloudFlare Workers, closest to where the data is residing, that's a super compelling observation. The question is, does it scale? I don't get the 238 services. While serverless is great, I have to know what I'm going to build. I don't have a Cognito, or RDS, or all these other services that make AWS, GCP, and Azure appealing from a builder's perspective. So it is a very interesting nascent start. It's great because now they can hide compute. If they don't have the capacity, they can outsource that maybe at a cost to one of the other cloud providers, but kind of hiding the compute behind the serverless architecture is a really unique approach. >> Yeah. And they're dipping their toe in the water. And they've announced an object store and a database platform and more to come. We got to wrap. So I wonder, Sanjeev and Maribel, if you could maybe pick some of your favorites from a competitive standpoint. Sanjeev, I felt like just watching Snowflake, I said, okay, in my opinion, they had the right strategy, which was to run on all the clouds, and then try to create that abstraction layer and data sharing across clouds. Even though, let's face it, most of it might be happening across regions if it's happening, but certainly outside of an individual account. But I felt like just observing them that anybody who's a traditional on-prem player moving into the clouds or anybody who's a cloud native, it just makes total sense to write to the various clouds. And to the extent that you can simplify that for users, it seems to be a logical strategy. Maybe as I said before, what multi-cloud should have been. But are there companies that you're watching that you think are ahead in the game, or ones that you think are a good model for the future? >> Yes, Snowflake, definitely. In fact, one of the things we have not touched upon very much, and Keith mentioned a little bit, was data sovereignty. Data residency rules can require that certain data should be written into a certain region of a certain cloud. And if my cloud provider can abstract that, or my database provider, then that's perfect for me. So right now, I see Snowflake is way ahead of the pack. I would not put MongoDB too far behind. They don't really talk about this thing. They are in a different space, but now they have a lakehouse, and they've got all of these other SQL access and new capabilities that they're announcing. So I think they would be quite good with that. Oracle is always a dark horse. Oracle seems to have revived its cloud mojo to some extent. And it's doing some interesting stuff. Databricks is the other one. I have not seen Databricks there yet. They've been very focused on lakehouse, Unity Catalog, and some of those pieces. But they would be the obvious challenger. And if they come into this space of supercloud, then they may bring some open source technologies that others can rely on, like Delta Lake as a table format. >> Yeah. One of these infrastructure players, Dell, HPE, Cisco, even IBM.
I mean, I would be making my infrastructure as programmable and cloud friendly as possible. That seems like table stakes. But Maribel, any companies that stand out to you that we should be paying attention to? >> Well, we already mentioned a bunch of them, so maybe I'll go a slightly different route. I'm watching two companies pretty closely to see what kind of traction they get as established companies. One we already talked about, which is VMware. And the thing that's interesting about VMware is they're everywhere. And they also have the benefit of having a foot in both camps. If you want to do it the old way, the way you've always done it with VMware, they got all that going on. If you want to try to do a more cross-cloud, multi-cloud native style thing, they're really trying to build tools for that. So I think they have really good access to buyers. And that's one of the reasons why I'm interested in them to see how they progress. The other one, I think, that could be a dark horse, oddly enough, is Google Cloud. They've spent a lot of work and time on Anthos. They really need to create a certain set of differentiators. Well, it's not necessarily in their best interest to be the best multi-cloud player. If they decide that they want to differentiate on a different layer of the stack, let's say they want to be like the player that is really transformative, they talk about transformation cloud with analytics workloads, then maybe they do spend a good deal of time trying to help people abstract all of the other underlying infrastructure and make sure that they get the sexiest, most meaningful workloads into their cloud. So those are two that you might not have expected me to go with, but I think it's interesting to see not just the companies that might be considered either startups or more established independent companies, but how some of the traditional providers are trying to reinvent themselves as well. >> I'm glad you brought that up because if you think about what Google's done with Kubernetes. I mean, would Google even be relevant in the cloud without Kubernetes? I could argue both sides of that. But it was quite a gift to the industry. And there's a motivation there to do something unique and different from maybe the other cloud providers. And I'd throw in Red Hat as well. They're obviously a key player in Kubernetes. And HashiCorp seems to be becoming the standard for application deployment with Terraform across clouds, and there are many, many others. I know we're leaving lots out, but we're out of time. Folks, I got to thank you so much for your insights and your participation in Supercloud2. Really appreciate it. >> Thank you. >> Thank you. >> Thank you. >> This is Dave Vellante for John Furrier and the entire CUBE community. Keep it right there for more content from Supercloud2.
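To make the data product idea Sanjeev describes above a little more concrete, here is a minimal sketch of a versioned, cross-source data product abstraction. Everything in it is hypothetical and for illustration only: the DataProduct class, the fetch helpers standing in for API calls into a CRM and an ERP, and the staleness check are assumptions, not anything the panelists built or shipped.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Callable, Dict, List, Optional

# Hypothetical fetchers -- in practice these would be API calls into wherever
# the CRM, ERP, survey data, or pricing engine happen to live.
def fetch_crm_accounts() -> List[Dict]:
    return [{"account_id": "A1", "segment": "retail"}]

def fetch_erp_orders() -> List[Dict]:
    return [{"account_id": "A1", "order_total": 1250.0}]

@dataclass
class DataProduct:
    """A versioned, consumer-facing package of data pulled from several sources."""
    name: str
    version: str
    sources: Dict[str, Callable[[], List[Dict]]]
    refreshed_at: Optional[datetime] = None
    _data: Dict[str, List[Dict]] = field(default_factory=dict)

    def refresh(self) -> None:
        # Pull from every registered source behind one logical abstraction.
        self._data = {name: fetch() for name, fetch in self.sources.items()}
        self.refreshed_at = datetime.utcnow()

    def records(self, source: str) -> List[Dict]:
        return self._data.get(source, [])

    def is_stale(self, max_age_seconds: int = 3600) -> bool:
        # A crude staleness signal of the kind Sanjeev alludes to.
        if self.refreshed_at is None:
            return True
        return (datetime.utcnow() - self.refreshed_at).total_seconds() > max_age_seconds

product = DataProduct(
    name="customer_360",
    version="1.2.0",
    sources={"crm": fetch_crm_accounts, "erp": fetch_erp_orders},
)
product.refresh()
print(product.is_stale(), product.records("crm"))
```

The design point is simply that the consumer works against one logical interface and never needs to know which cloud each source sits in.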

Published Date : Jan 10 2023

Breaking Analysis: CIOs in a holding pattern but ready to strike at monetization


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is "Breaking Analysis" with Dave Vellante. >> Recent conversations with IT decision makers show a stark contrast between exiting 2023 versus the mindset when we were leaving 2022. CIOs are generally funding new initiatives by pushing off or cutting lower priority items, while security efforts are still being funded. Those that enable business initiatives that generate revenue or taking priority over cleaning up legacy technical debt. The bottom line is, for the moment, at least, the mindset is not cut everything, rather, it's put a pause on cleaning up legacy hairballs and fund monetization. Hello, and welcome to this week's Wikibon Cube Insights powered by ETR. In this breaking analysis, we tap recent discussions from two primary sources, year-end ETR roundtables with IT decision makers, and CUBE conversations with data, cloud, and IT architecture practitioners. The sources of data for this breaking analysis come from the following areas. Eric Bradley's recent ETR year end panel featured a financial services DevOps and SRE manager, a CSO in a large hospitality firm, a director of IT for a big tech company, the head of IT infrastructure for a financial firm, and a CTO for global travel enterprise, and for our upcoming Supercloud2 conference on January 17th, which you can register free by the way, at supercloud.world, we've had CUBE conversations with data and cloud practitioners, specifically, heads of data in retail and financial services, a cloud architect and a biotech firm, the director of cloud and data at a large media firm, and the director of engineering at a financial services company. Now we've curated commentary from these sources and now we share them with you today as anecdotal evidence supporting what we've been reporting on in the marketplace for these last couple of quarters. On this program, we've likened the economy to the slingshot effect when you're driving, when you're cruising along at full speed on the highway, and suddenly you see red brake lights up ahead, so, you tap your own brakes and then you speed up again, and traffic is moving along at full speed, so, you think nothing of it, and then, all of a sudden, the same thing happens. You slow down to a crawl and you start wondering, "What the heck is happening?" And you become a lot more cautious about the rate of acceleration when you start moving again. Well, that's the trend in IT spend right now. Back in June, we reported that despite the macro headwinds, CIOs were still expecting 6% to 7% spending growth for 2022. Now that was down from 8%, which we reported at the beginning of 2022. That was before Ukraine, and Fed tightening, but given those two factors, you know that that seemed pretty robust, but throughout the fall, we began reporting consistently declining expectations where CIOs are now saying Q4 will come in at around 3% growth relative to last year, and they're expecting, or should we say hoping that it pops back up in 2023 to 4% to 5%. The recent ETR panelists, when they heard this, are saying based on their businesses and discussions with their peers, they could see low single digit growth for 2023, so, 1%, 2%, 3%, so, this sort of slingshotting, or sometimes we call it a seesaw economy, has caught everyone off guard. Amazon is a good example of this, and there are others, but Amazon entered the pandemic with around 800,000 employees. It doubled that workforce during the pandemic. 
Now, right before Thanksgiving in 2022, Amazon announced that it was laying off 10,000 employees, and, Jassy, the CEO of Amazon, just last week announced that number is now going to grow to 18,000. Now look, this is a rounding error at Amazon from a headcount standpoint and their headcount remains far above 2019 levels. Its stock price, however, does not and it's back down to 2019 levels. The point is that visibility is very poor right now and it's reflected in that uncertainty. We've seen a lot of layoffs, obviously, the stock market's choppy, et cetera. Now importantly, not everything is on hold, and this downturn is different from previous tech pullbacks in that the speed at which new initiatives can be rolled out is much greater thanks to the cloud, and if you can show a fast return, you're going to get funding. Organizations are pausing on the cleanup of technical debt, unless it's driving fast business value. They're holding off on modernization projects. Those business enablement initiatives are still getting funded. CIOs are finding the money by consolidating redundant vendors, and they're stealing from other pockets of budget, so, it's not surprising that cybersecurity remains the number one technology priority in 2023. We've been reporting that for quite some time now. It's specifically cloud, cloud native security container and API security. That's where all the action is, because there's still holes to plug from that forced march to digital that occurred during COVID. Cloud migration, kind of showing here on number two on this chart, still a high priority, while optimizing cloud spend is definitely a strategy that organizations are taking to cut costs. It's behind consolidating redundant vendors by a long shot. There's very little evidence that cloud repatriation, i.e., moving workloads back on prem is a major cost cutting trend. The data just doesn't show it. What is a trend is getting more real time with analytics, so, companies can do faster and more accurate customer targeting, and they're really prioritizing that, obviously, in this down economy. Real time, we sometimes lose it, what's real time? Real time, we sometimes define as before you lose the customer. Now in the hiring front, customers tell us they're still having a hard time finding qualified site reliability engineers, SREs, Kubernetes expertise, and deep analytics pros. These job markets remain very tight. Let's stay with security for just a moment. We said many times that, prior to COVID, zero trust was this undefined buzzword, and the joke, of course, is, if you ask three people, "What is zero trust?" You're going to get three different answers, but the truth is that virtually every security company that was resisting taking a position on zero trust in an attempt to avoid... They didn't want to get caught up in the buzzword vortex, but they're now really being forced to go there by CISOs, so, there are some good quotes here on cyber that we want to share that came out of the recent conversations that we cited up front. The first one, "Zero trust is the highest ROI, because it enables business transformation." In other words, if I can have good security, I can move fast, it's not a blocker anymore. Second quote here, "ZTA," zero trust architecture, "Is more than securing the perimeter. It encompasses strong authentication and multiple identity layers. It requires taking a software approach to security instead of a hardware focus." 
The next one, "I'd love to have a security data lake that I could apply to asset management, vulnerability management, incident management, incident response, and all aspects for my security team. I see huge promise in that space," and the last one, I see NLP, natural language processing, as the foundation for email security, so, instead of searching for IP addresses, you can now read emails at light speed and identify phishing threats, so, look, this is a small snapshot of the mindset around security, but I'll add, when you talk to the likes of CrowdStrike, and Zscaler, and Okta, and Palo Alto Networks, and many other security firms, they're listening to these narratives around zero trust. I'm confident they're working hard on skating to this puck, if you will. A good example is this idea of a security data lake and using analytics to improve security. We're hearing a lot about that. We're hearing architectures, there's acquisitions in that regard, and so, that's becoming real, and there are many other examples, because data is at the heart of digital business. This is the next area that we want to talk about. It's obvious that data, as a topic, gets a lot of mind share amongst practitioners, but getting data right is still really hard. It's a challenge for most organizations to get ROI and expected return out of data. Most companies still put data at the periphery of their businesses. It's not at the core. Data lives within silos or different business units, different clouds, it's on-prem, and increasingly it's at the edge, and it seems like the problem is getting worse before it gets better, so, here are some instructive comments from our recent conversations. The first one, "We're publishing events onto Kafka, having those events be processed by Dataproc." Dataproc is a Google managed service to run Hadoop, and Spark, and Flink, and Presto, and a bunch of other open source tools. We're putting them into the appropriate storage models within Google, and then normalizing the data into BigQuery, and only then can you take advantage of tools like ThoughtSpot, so, here's a company like ThoughtSpot, and they're all about simplifying data, democratizing data, but to get there, you have to go through some pretty complex processes, so, this is a good example. All right, another comment. "In order to use Google's AI tools, we have to put the data into BigQuery. They haven't integrated in the way AWS and Snowflake have with SageMaker. Moving the data is too expensive, time consuming, and risky," so, I'll just say this, sharing data is a killer supercloud use case, and firms like Snowflake are on top of it, but it's still not pretty across clouds, and Google's posture seems to be, "We're going to let our database product competitiveness drive the strategy first, and the ecosystem is going to take a backseat." Now, in a way, I get it, owning the database is critical, and Google doesn't want to capitulate on that front. Look, BigQuery is really good and competitive, but you can't help but roll your eyes when a CEO stands up, and look, I'm not calling out Thomas Kurian, every CEO does this, and talks about how important their customers are, and they'll do whatever is right by the customer, so, look, I'm telling you, I'm rolling my eyes on that. Now let me also comment, AWS has figured this out. They're killing it in database.
If you take Redshift for example, it's still growing, as is Aurora, really fast growing services and other data stores, but AWS realizes it can make more money in the long-term partnering with the Snowflakes and Databricks of the world, and other ecosystem vendors versus sub optimizing their relationships with partners and customers in order to sell more of their own homegrown tools. I get it. It's hard not to feature your own product. IBM chose OS/2 over Windows, and tried for years to popularize it. It failed. Lotus, go back way back to Lotus 1, 2, and 3, they refused to run on Windows when it first came out. They were running on DEC VAX. Many of you young people in the United States have never even heard of DEC VAX. IBM wanted to run every everything only in its cloud, the same with Oracle, originally. VMware, as you might recall, tried to build its own cloud, but, eventually, when the market speaks and reveals what seems to be obvious to analysts, years before, the vendors come around, they face reality, and they stop wasting money, fighting a losing battle. "The trend is your friend," as the saying goes. All right, last pull quote on data, "The hardest part is transformations, moving traditional Informatica, Teradata, or Oracle infrastructure to something more modern and real time, and that's why people still run apps in COBOL. In IT, we rarely get rid of stuff, rather we add on another coat of paint until the wood rots out or the roof is going to cave in. All right, the last key finding we want to highlight is going to bring us back to the cloud repatriation myth. Followers of this program know it's a real sore spot with us. We've heard the stories about repatriation, we've read the thoughtful articles from VCs on the subject, we've been whispered to by vendors that you should investigate this trend. It's really happening, but the data simply doesn't support it. Here's the question that was posed to these practitioners. If you had unlimited budget and the economy miraculously flipped, what initiatives would you tackle first? Where would you really lean into? The first answer, "I'd rip out legacy on-prem infrastructure and move to the cloud even faster," so, the thing here is, look, maybe renting infrastructure is more expensive than owning, maybe, but if I can optimize my rental with better utilization, turn off compute, use things like serverless, get on a steeper and higher performance over time, and lower cost Silicon curve with things like Graviton, tap best of breed tools in AI, and other areas that make my business more competitive. Move faster, fail faster, experiment more quickly, and cheaply, what's that worth? Even the most hard-o CFOs understand the business benefits far outweigh the possible added cost per gigabyte, and, again, I stress "possible." Okay, other interesting comments from practitioners. "I'd hire 50 more data engineers and accelerate our real-time data capabilities to better target customers." Real-time is becoming a thing. AI is being injected into data and apps to make faster decisions, perhaps, with less or even no human involvement. That's on the rise. Next quote, "I'd like to focus on resolving the concerns around cloud data compliance," so, again, despite the risks of data being spread out in different clouds, organizations realize cloud is a given, and they want to find ways to make it work better, not move away from it. 
The same thing in the next one, "I would automate the data analytics pipeline and focus on a safer way to share data across the states without moving it," and, finally, "The way I'm addressing complexity is to standardize on a single cloud." MonoCloud is actually a thing. We're hearing this more and more. Yes, my company has multiple clouds, but in my group, we've standardized on a single cloud to simplify things, and this is a somewhat dangerous trend, because it's creating even more silos and it's an opportunity that needs to be addressed, and that's why we've been talking so much about supercloud is a cross-cloud, unifying, architectural framework, or, perhaps, it's a platform. In fact, that's a question that we will be exploring later this month at Supercloud2 live from our Palo Alto Studios. Is supercloud an architecture or is it a platform? And in this program, we're featuring technologists, analysts, practitioners to explore the intersection between data and cloud and the future of cloud computing, so, you don't want to miss this opportunity. Go to supercloud.world. You can register for free and participate in the event directly. All right, thanks for listening. That's a wrap. I'd like to thank Alex Myerson, who's on production and manages our podcast, Ken Schiffman as well, Kristen Martin and Cheryl Knight, they helped get the word out on social media, and in our newsletters, and Rob Hof is our editor-in-chief over at siliconangle.com. He does some great editing. Thank you, all. Remember, all these episodes are available as podcasts wherever you listen. All you've got to do is search "breaking analysis podcasts." I publish each week on wikibon.com and siliconangle.com where you can email me directly at david.vellante@siliconangle.com or DM me, @Dante, or comment on our LinkedIn posts. By all means, check out etr.ai. They get the best survey data in the enterprise tech business. We'll be doing our annual predictions post in a few weeks, once the data comes out from the January survey. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, everybody, and we'll see you next time on "Breaking Analysis." (upbeat music)
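The practitioner quote earlier in this segment, about publishing events onto Kafka, processing them with Dataproc, and normalizing into BigQuery before a tool like ThoughtSpot can touch the data, maps to a fairly standard Spark Structured Streaming job. The sketch below is illustrative only: the topic, schema, table, and bucket names are made up, and it assumes the job runs on Dataproc with the spark-bigquery connector on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("events-to-bigquery").getOrCreate()

# Hypothetical event schema for the JSON payloads on the topic.
schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read the raw event stream from Kafka.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker-1:9092")
       .option("subscribe", "orders")
       .load())

# Kafka values arrive as bytes; parse the JSON payload into typed columns.
events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), schema).alias("e"))
          .select("e.*"))

# Write the normalized records into BigQuery via the spark-bigquery connector
# (assumes a staging GCS bucket and checkpoint location are available).
query = (events.writeStream
         .format("bigquery")
         .option("table", "analytics.orders_normalized")
         .option("temporaryGcsBucket", "my-staging-bucket")
         .option("checkpointLocation", "gs://my-staging-bucket/checkpoints/orders")
         .outputMode("append")
         .start())

query.awaitTermination()
```

The point of the example is how much plumbing sits between "we have events" and "an analyst can query them," which is exactly the complexity that quote is describing.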

Published Date : Jan 7 2023

Breaking Analysis: AI Goes Mainstream But ROI Remains Elusive


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is "Breaking Analysis" with Dave Vellante. >> A decade of big data investments, combined with cloud scale, the rise of much more cost effective processing power, and the introduction of advanced tooling, has catapulted machine intelligence to the forefront of technology investments. No matter what job you have, your operation will be AI powered within five years and machines may actually even be doing your job. Artificial intelligence is being infused into applications, infrastructure, equipment, and virtually every aspect of our lives. AI is proving to be extremely helpful at things like controlling vehicles, speeding up medical diagnoses, processing language, advancing science, and generally raising the stakes on what it means to apply technology for business advantage. But business value realization has been a challenge for most organizations due to lack of skills, complexity of programming models, immature technology integration, sizable upfront investments, ethical concerns, and lack of business alignment. Mastering AI technology will not be a requirement for success, in our view. However, figuring out how and where to apply AI to your business will be crucial. That means understanding the business case, picking the right technology partner, experimenting in bite-sized chunks, and quickly identifying winners to double down on from an investment standpoint. Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this breaking analysis, we update you on the state of AI and what it means for the competition. And to do so, we invite into our studios Andy Thurai of Constellation Research. Andy covers AI deeply. He knows the players, he knows the pitfalls of AI investment, and he's a collaborator. Andy, great to have you on the program. Thanks for coming into our CUBE studios. >> Thanks for having me on. >> You're very welcome. Okay, let's set the table with a premise and a series of assertions we want to test with Andy. I'm going to lay 'em out. And then Andy, I'd love for you to comment. So, first of all, according to McKinsey, AI adoption has more than doubled since 2017, but only 10% of organizations report seeing significant ROI. That's a BCG and MIT study. And part of that challenge of AI is it requires data, it requires good data and data proficiency, which is not trivial, as you know. Firms that can master both data and AI, we believe, are going to have a competitive advantage this decade. Hyperscalers, as we'll show you, dominate AI and ML. We'll show you some data on that. And having said that, there's plenty of room for specialists. They need to partner with the cloud vendors for go to market productivity. And finally, organizations increasingly have to put data and AI at the center of their enterprises. And to do that, most are going to rely on vendor R&D to leverage AI and ML. In other words, Andy, they're going to buy it and apply it as opposed to build it. What are your thoughts on that setup and that premise? >> Yeah, I see that a lot happening in the field, right? So first of all, the point that only 10% are realizing a return on investment, that's so true because, as we talked about earlier, most companies are still in the innovation cycle. So they're trying to innovate and see what they can do to apply.
A lot of these times when you look at the solutions, what they come up with or the models they create, the experimentation they do, most times they don't even have a good business case to solve, right? So they just experiment and then they figure it out, "Oh my God, this model is working. Can we do something to solve it?" So it's like you found a hammer and then you're trying to find the nail kind of thing, right? That never works. >> 'Cause it's cool or whatever it is. >> It is, right? So that's why I always advise, when they come to me and ask me things like, "Hey, what's the right way to do it? What is the secret sauce?" And, we talked about this. The first thing I tell them is, "Find out what is the business case that's having the most amount of problems, that can be solved using some of the AI use cases," right? Not all of them can be solved. Even after you experiment, do the whole nine yards, spend millions of dollars on that, right? And later on you make it efficient only by saving maybe $50,000 for the company or a $100,000 for the company, is it really even worth the experiment, right? So you got to start by asking, you know, where's the base for this happening? Where's the need? What's the business use case? It doesn't have to be about cost efficiency and saving money in the existing processes. It could be a new thing. You want to bring in a new revenue stream, but figure out what is the business use case, how much money potentially I can make off of that. The same way that start-ups go after it. Right? >> Yeah. Pretty straightforward. All right, let's take a look at where ML and AI fit relative to the other hot sectors of the ETR dataset. This XY graph shows net score, or spending velocity, on the vertical axis and presence in the survey, which they call sector pervasion, for the October survey; the January survey's in the field. Then that squiggly line on ML/AI represents the progression. Since the January '21 survey, you can see the downward trajectory. And we position ML/AI relative to the other big four hot sectors, or big three plus ML/AI as the fourth: containers, cloud, and RPA. These have consistently performed above that magic 40% red dotted line for most of the past two years. Anything above 40%, we think, is highly elevated. And we've just included analytics and big data for context and relevant adjacency, if you will. Now note that green arrow moving toward, you know, the 40% mark on ML/AI. I got a glimpse of the January survey, which is in the field. It's got more than a thousand responses already, and it's trending up for the current survey. So Andy, what do you make of this downward trajectory over the past seven quarters and the presumed uptick in the coming months? >> So one of the things you have to keep in mind is when the pandemic happened, it's about survival mode, right? So when somebody's in a survival mode, what happens, the luxury and the innovations get cut. That's what happens. And this is exactly what happened in this situation. So as you can see in the last seven quarters, which is almost dating back close to the pandemic, everybody was trying to keep their operations alive, especially digital operations. How do I keep the lights on? That's the most important thing for them. So while the numbers spent on AI/ML are less overall, I still think the AI/ML spend on things like employee experience or IT ops, AIOps, MLOps, as we talked about, some of those areas actually went up.
There are companies, we talked about it; Atlassian had a lot of platform issues, still the amount of money people are spending on that is exorbitant, simply because they are offering a solution that was not available any other way. So there are companies out there, you can take AIOps or incident management for that matter, right? A lot of companies have digital incidents they don't know how to properly manage. How do you find an incident and solve it immediately? That's all using AI/ML, and some of those areas are actually growing unbelievably, the companies in that area. >> So this is a really good point. If you can bring up that chart again, what Andy's saying is a lot of the companies in the ETR taxonomy that are doing things with AI might not necessarily show up in a granular fashion. And I think the other point I would make is, these are still highly elevated numbers. If you put on there, like, storage and servers, they would read way, way down the list. And, look, in the pandemic, we had to deal with work from home, we had to re-architect the network, we had to worry about security. So those are really good points that you made there. Let's unpack this a little bit and look at the ML/AI sector and the ETR data and specifically at the players, and get Andy to comment on this. This chart here shows the same XY dimensions, and it just notes some of the players that specifically have services and products that people spend money on, that CIOs and IT buyers can comment on. So the table insert shows how the companies are plotted, it's net score, and then the Ns in the survey. And Andy, the hyperscalers are dominant, as you can see. You see Databricks there showing strong as a specialist, and then you've got a pack of six or seven in there. And then Oracle and IBM, kind of the big whales of yesteryear, are in the mix. And to your point, companies like Salesforce that you mentioned to me offline aren't in that mix, but they do a lot in AI. But what are your takeaways from that data? >> If you could put the slide back on please. I want to make quick comments on a couple of those. So the first one is, it's no surprise the hyperscalers are up there, right? As you and I talked about earlier, AWS is more about Lego blocks. We discussed that, right? >> Like what? Like SageMaker as an example. >> We'll give you all the components you need. Whether it's an MLOps component, or whether it's CodeWhisperer that we talked about, or an ML platform, or data, whatever you want. They'll give you the blocks and then you'll build things on top of it, right? But Google went a different way. Matter of fact, if we did those numbers a few years ago, Google would've been number one because they did a lot of work with their acquisition of DeepMind and other things. They were way ahead of the pack when it comes to AI for the longest time. Now, I think Microsoft's move of partnering with OpenAI and taking out a huge competitor is unbelievable. You saw that everybody is talking about ChatGPT, right? The OpenAI tool, ChatGPT rather. Remember, as Warren Buffett says, when my laundry lady comes and talks to me about the stock market, it's heated up. So that's how it's heated up. Everybody's using ChatGPT. What that means at the end of the day is they're creating, it's still in beta, keep in mind. It's not fully... >> Can you play with it a little bit? >> I have a little bit. >> I have, but it's good and it's not good. You know what I mean?
>> Look, so at the end of the day, you take the mass of all the available text in the world today, mash it all together. And then you ask a question, it's going to basically search through that and figure it out and answer that back. Yes, it's good. But again, as we discussed, if there's no business use case for what problem you're going to solve, this is just building hype. But then eventually they'll figure out, for example, all your chats, online chats, could be aided by your AI chatbots, which are already there, but not at that level. This could help build that, right? Or the other thing we talked about, one of the areas I'm more concerned about, is that it is able to produce good enough original text at the level that humans can produce. For example, ChatGPT, or a good enough large language transformer, can help you write stories as if Shakespeare wrote them. Pretty close to it. It'll learn from that. So when it comes down to it, talk about creating messages, articles, blogs, especially during political seasons, not necessarily just in the US, but anywhere for that matter. If people are able to produce at machine speed and throw it at consumers and confuse them, elections can be won, governments can be toppled. >> Because to your point about chatbots, chatbots have obviously reduced the number of bodies that you need to support chat. But they haven't solved the problem of serving consumers. Most of the chatbots are conditioned responses: which of the following best describes your problem? >> The current chatbot. >> Yeah. Hey, did we solve your problem? No is the answer. So that has some real potential. But if you could bring up that slide again, Ken, I mean you've got the hyperscalers that are dominant. You talked about Google, and Microsoft is ubiquitous, they seem to be dominant in every ETR category. But then you have these other specialists. How do those guys compete? And maybe you could even cite some of the guys that, you know, how do they compete with the hyperscalers? What's the key there for, like, a C3.ai or some of the others that are on there? >> So I've spoken with at least two of the CEOs of the smaller companies that you have on the list. One of the things they're worried about is that if they continue to operate independently without being part of a hyperscaler, either the hyperscalers will develop something to compete against them full scale, or they'll become irrelevant. Because at the end of the day, look, cloud is dominant. Not many companies are going to do AI modeling and training and deployment, the whole nine yards, independently by themselves. They're going to depend on one of the clouds, right? So if the customers are already going to be in the cloud, taking them out to come to you is going to be an extremely difficult issue to solve. So all these companies are going and saying, "You know what? We need to be in the hyperscalers." For example, you could have looked at DataRobot recently; they made announcements with Google and AWS, and they are all over the place. So you need to go where the customers are. Right? >> All right, before we go on, I want to share some other data from ETR on why people adopt AI and get your feedback. So the data historically shows that feature breadth and technical capabilities were the main decision points for AI adoption, historically. That says to me that there's too much focus on technology. In your view, is that changing? Does it have to change? Will it change? >> Yes. Simple answer is yes.
So here's the thing. The data you're speaking from is from previous years. >> Yes >> I can guarantee you, if you look at the latest data that's coming in now, those two will be a secondary and tertiary points. The number one would be about ROI. And how do I achieve? I've spent ton of money on all of my experiments. This is the same thing theme I'm seeing across when talking to everybody who's spending money on AI. I've spent so much money on it. When can I get it live in production? How much, how can I quickly get it? Because you know, the board is breathing down their neck. You already spend this much money. Show me something that's valuable. So the ROI is going to become, take it from me, I'm predicting this for 2023, that's going to become number one. >> Yeah, and if people focus on it, they'll figure it out. Okay. Let's take a look at some of the top players that won, some of the names we just looked at and double click on that and break down their spending profile. So the chart here shows the net score, how net score is calculated. So pay attention to the second set of bars that Databricks, who was pretty prominent on the previous chart. And we've annotated the colors. The lime green is, we're bringing the platform in new. The forest green is, we're going to spend 6% or more relative to last year. And the gray is flat spending. The pinkish is our spending's going to be down on AI and ML, 6% or worse. And the red is churn. So you don't want big red. You subtract the reds from the greens and you get net score, which is shown by those blue dots that you see there. So AWS has the highest net score and very little churn. I mean, single low single digit churn. But notably, you see Databricks and DataRobot are next in line within Microsoft and Google also, they've got very low churn. Andy, what are your thoughts on this data? >> So a couple of things that stands out to me. Most of them are in line with my conversation with customers. Couple of them stood out to me on how bad IBM Watson is doing. >> Yeah, bring that back up if you would. Let's take a look at that. IBM Watson is the far right and the red, that bright red is churning and again, you want low red here. Why do you think that is? >> Well, so look, IBM has been in the forefront of innovating things for many, many years now, right? And over the course of years we talked about this, they moved from a product innovation centric company into more of a services company. And over the years they were making, as at one point, you know that they were making about majority of that money from services. Now things have changed Arvind has taken over, he came from research. So he's doing a great job of trying to reinvent themselves as a company. But it's going to have a long way to catch up. IBM Watson, if you think about it, that played what, jeopardy and chess years ago, like 15 years ago? >> It was jaw dropping when you first saw it. And then they weren't able to commercialize that. >> Yeah. >> And you're making a good point. When Gerstner took over IBM at the time, John Akers wanted to split the company up. He wanted to have a database company, he wanted to have a storage company. Because that's where the industry trend was, Gerstner said no, he came from AMEX, right? He came from American Express. He said, "No, we're going to have a single throat to choke for the customer." They bought PWC for relatively short money. I think it was $15 billion, completely transformed and I would argue saved IBM. 
But the trade off was, it sort of took them out of product leadership. And so from Gerstner to Palmisano to Rometty, it was really a services led company. And I think Arvind is really bringing it back to a product company with strong consulting. I mean, that's one of the pillars. And so I think they've got a strong story in data and AI. They just got to sort of bring it together better. Bring that chart up one more time. I want to... the other point is Oracle. Oracle sort of has the dominant lock-in for mission critical database and they're sort of applying AI there. But to your point, they're really not an AI company in the sense that they're taking unstructured data and doing sort of new things. It's really about how to make Oracle better, right? >> Well, you got to remember, Oracle is about database for structured data. So in yesterday's world, they were the dominant database. But you know, if you start storing videos and text and audio and other things, and then start doing vector search and all that, Oracle is not necessarily the database company of choice. And their strongest thing being apps and building AI into the apps? They are kind of surviving in that area. But again, I wouldn't name them as an AI company, right? But the other thing that surprised me in that list, what you showed me is yes, AWS is number one. >> Bring that back up if you would, Ken. >> AWS is number one, as it should be. But what actually caught me by surprise is how DataRobot is holding, you know? I mean, look at that. In either net new additions or expansion, DataRobot seems to be doing equally well, even better than Microsoft and Google. That surprises me. >> DataRobot's, and again, this is a function of spending momentum. So remember from the previous chart that Microsoft and Google are much, much larger than DataRobot. DataRobot is more niche. But with spending velocity, it has always had strong spending velocity, despite some of the recent challenges, organizational challenges. And then you see these other specialists, H2O.ai, Anaconda, Dataiku, a little bit of red showing there for C3.ai. But these, again, to stress, are the sort of specialists other than obviously the hyperscalers. These are the specialists in AI. All right, so we hit the bigger names in the sector. Now let's take a look at the emerging technology companies. And one of the gems of the ETR dataset is the emerging technology survey. It's called ETS. They used to just do it like twice a year. It's now run four times a year. I just discovered it kind of mid-2022. And it's exclusively focused on private companies that are potential disruptors, they might be M&A candidates, and if they've raised enough money, they could be acquirers of companies as well. So Databricks would be an example. They've made a number of investments in companies. Snyk would be another good example. Companies that are private, but they're buyers, they hope to go IPO at some point in time. So this chart here shows the emerging companies in the ML/AI sector of the ETR dataset. So the dimensions of this are similar, they're net sentiment on the Y axis and mind share on the X axis. Basically, the ETS study measures awareness on the X axis and intent to do something with, evaluate or implement or not, on that vertical axis. So it's like net score on the vertical where negatives are subtracted from the positives. And again, mind share is vendor awareness. That's the horizontal axis.
Now that inserted table shows net sentiment and the Ns in the survey, which informs the position of the dots. And you'll notice we're plotting TensorFlow as well. We know that's not a company, but it's there for reference, as open source tooling is an option for customers. And ETR sometimes likes to show that as a reference point. Now we've also drawn a line for Databricks to show how relatively dominant they've become in the past 10 ETS surveys and sort of mind share going back to late 2018. And you can see a dozen or so other emerging tech vendors. So Andy, I want you to share your thoughts on these players, who are the ones to watch, name some names. We'll bring that data back up as you comment. >> So Databricks, as you said, remember we talked about how Oracle is not necessarily the database of choice, you know? So Databricks is kind of trying to solve some of the issues for AI/ML workloads, right? And the problem is also there is no one company that could solve all of the problems. For example, if you look at the names in here, some of them are database names, some of them are platform names, some of them are MLOps companies like DataRobot (indistinct) and others. And some of them are feature store companies like, you know, Tecton and stuff. >> So it's a mix of those sub-sectors? >> It's a mix of those companies. >> We'll talk to ETR about that. They'd be interested in your input on how to make this more granular and these sub-sectors. You got Hugging Face in here, >> Which is NLP, yeah. >> Okay. So your take, are these companies going to get acquired? Are they going to go IPO? Are they going to merge? >> Well, most of them are going to get acquired. My prediction would be most of them will get acquired because, look, at the end of the day, hyperscalers need these capabilities, right? So they're going to either create their own, AWS is very good at doing that. They have done a lot of those things. But the other ones, particularly Azure, they're going to look at it and say, "You know what, it's going to take time for me to build this. Why don't I just go and buy you?" Right? Or even the smaller players like Oracle or IBM Cloud that will still exist. They might even take a look at them, right? So at the end of the day, a lot of these companies are going to get acquired or merged with others. >> Yeah. All right, let's wrap with some final thoughts. I'm going to make some comments, Andy, and then ask you to dig in here. Look, despite the challenge of leveraging AI, you know, Ken, if you could bring up the next chart. We're not repeating, we're not predicting the AI winter of the 1990s. Machine intelligence is a superpower that's going to permeate every aspect of the technology industry. AI and data strategies have to be connected. Leveraging first party data is going to increase AI competitiveness and shorten time to value. Andy, I'd love your thoughts on that. I know you've got some thoughts on governance and AI ethics. You know, we talked about ChatGPT, deepfakes, help us unpack all these trends. >> So there's so much information packed up there, right? The AI and data strategy, that's very, very, very important. If you don't have proper data... people don't realize that your AI is the model that you build on, and it's predominantly based on the data that you have. AI cannot predict something that's going to happen without knowing what it is. It needs to be trained, it needs to understand what it is you're talking about.
So 99% of the time you got to have good data for you to train. So this is where, as I mentioned to you, the problem is a lot of these companies can't afford to collect the real world data because it takes too long, it's too expensive. So a lot of these companies are trying to do it the synthetic data way. It has its own set of issues because you can't use all... >> What's that synthetic data? Explain that. >> Synthetic data is basically not real world data, but created or simulated data, equivalent to and based on real data. It looks, feels, smells, tastes like real data, but it's not exactly real data, right? This is particularly useful in the financial and healthcare industries. Because at the end of the day, if you have real data about your and my medical history, even if you redact it, you can still reverse it. It's fairly easy, right? >> Yeah, yeah. >> So by creating synthetic data, there is no correlation between the real data and the synthetic data. >> So that's part of AI ethics and privacy and, okay. >> So the synthetic data, the issue with that is when you're trying to commingle it with real data. You can't create models based just on synthetic data because synthetic data, as I said, is artificial data. So basically you're creating artificial models, so you got to blend it in properly, and that blend is the problem. And you've got to know how much real data, how much synthetic data you can use. You got to use judgment between efficiency, cost and the time duration stuff. So that's one-- >> And risk. >> And the risk involved with that. And the secondary issue, which we talked about, is that when you're creating, okay, you take a business use case, okay, you think about investing things, you build the whole thing out and you're trying to put it out into the market. Most companies that I talk to don't have proper governance in place. They don't have ethics standards in place. They don't worry about the biases in data, they just go on trying to solve a business case. >> It's wild west. >> 'Cause that's where they start. It's a wild west! And then at the end of the day, when they are close to some legal litigation action, or something else happens, that's when the "oh shit" moment happens, right? And then they come in and say, "You know what, how do I fix this?" The governance, security and all of those things, ethics, bias, data bias, de-biasing, none of them can be an afterthought. It's got to start from the get-go. So you got to start at the beginning saying that, "You know what, I'm going to do all of those AI programs, but before we get into this, we got to set some framework for doing all these things properly." Right? And then the-- >> Yeah. So let's go back to the key points. I want to bring up the cloud again. Because you got to get cloud right. Getting that right matters in AI, to the points that you were making earlier. You can't just be out on an island, and hyperscalers, they're going to obviously continue to do well. More and more data's going into the cloud and they have the native tools. To your point, in the case of AWS, Microsoft's obviously ubiquitous. Google's got great capabilities here. They've got integrated ecosystem partners that are going to continue to strengthen through the decade. What are your thoughts here? >> So a couple of things. One is the last mile ML or last mile AI that nobody's talking about. So that needs to be attended to. 
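Andy's synthetic data point above can be made concrete with a small sketch. This is an illustrative approach only, fitting simple per-column distributions and resampling; it is not the method of any particular vendor, and real synthetic data tools do considerably more to preserve joint structure while breaking any linkage back to real individuals.

```python
# Minimal sketch of generating synthetic records by sampling per-column
# distributions from a real dataset. Illustrative only: production tools
# typically model joint and temporal structure and add privacy guarantees.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Stand-in for a "real" dataset; all columns and values here are made up.
real = pd.DataFrame({
    "age": rng.integers(20, 80, size=1000),
    "annual_claims": rng.gamma(2.0, 1500.0, size=1000),
    "plan": rng.choice(["basic", "plus", "premium"], size=1000, p=[0.5, 0.3, 0.2]),
})

def synthesize(df: pd.DataFrame, n: int) -> pd.DataFrame:
    """Sample each column independently, so no synthetic row maps back to a real person."""
    out = {}
    for col in df.columns:
        if pd.api.types.is_numeric_dtype(df[col]):
            out[col] = rng.normal(df[col].mean(), df[col].std(), size=n)
        else:
            freqs = df[col].value_counts(normalize=True)
            out[col] = rng.choice(freqs.index, size=n, p=freqs.values)
    return pd.DataFrame(out)

synthetic = synthesize(real, n=5000)
print(synthetic.describe(include="all"))
```

The blending question Andy raises, how much real versus synthetic data to train on, then becomes an explicit, testable parameter rather than an afterthought.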
There are a lot of players in the market coming up. When I talk about last mile, I'm talking about after you're done with the experimentation of the model, how fast and quickly and efficiently can you get it to production? So that's production being-- >> Compressing that time is going to put dollars in your pocket. >> Exactly. Right. >> So once, >> If you got it right. >> If you get it right, of course. So there are a couple of issues with that. Once you figure out that model is working, that's perfect. People don't realize, the moment the decision is made, it's like a new car. After you purchase it, the value decreases on a minute basis. Same thing with the models. Once the model is created, you need to be in production right away because it starts losing its value on a seconds and minutes basis. So issue number one, how fast can I get it over there? So your deployment, your inferencing efficiently at the edge locations, your optimization, your security, all of this is at issue. But you know what is more important than that in the last mile? You keep the model up, you continue to work on it. Again, going back to the car analogy, at one point you got to figure out your car is costing more to operate than it's worth. So you got to get a new car, right? And that's the same thing with the models as well. If your model has reached that stage, it is actually a potential risk for your operation. To give you an idea, if Uber has a model, the first time when you get a car going from point A to B it costs you $60. If the model decayed, the next time it might give you a $40 rate. I would take it, definitely, but it's a loss for the company. The business risk associated with operating on a bad model, you should realize it immediately, pull the model out, retrain it, redeploy it. That is key. >> And that's got to be huge in security. Model recency, and security to the extent that you can get real time, is big. I mean, you see Palo Alto, CrowdStrike, a lot of other security companies are injecting AI. Again, they won't show up in the ETR ML/AI taxonomy per se as a pure play. But ServiceNow is another company that you have mentioned to me offline. AI is just getting embedded everywhere. >> Yep. >> And then I'm glad you brought up, kind of, real-time inferencing, 'cause a lot of the modeling, if we can go back to the last point that we're going to make, a lot of the AI today is modeling done in the cloud. The last point we wanted to make here, I'd love to get your thoughts on this, is real-time AI inferencing, for instance at the edge, is going to become increasingly important for us. It's going to usher in new economics, new types of silicon, particularly Arm-based. We've covered that a lot on "Breaking Analysis", new tooling, new companies, and that could disrupt the sort of cloud model if new economics emerge. 'Cause cloud is obviously very centralized, and they're trying to decentralize it. But over the course of this decade we could see some real disruption there. Andy, give us your final thoughts on that. >> Yes and no. I mean at the end of the day, cloud is kind of centralized now, but a lot of these companies, including AWS, are kind of trying to decentralize that by putting their own sub-centers and edge locations. >> Local Zones, Outposts. >> Yeah, exactly. Particularly the Outposts concept. And even if it can become like a micro center and stuff, it won't go down to the localized level of a single IoT device. But again, the cloud extends itself to that level. 
So if there is an opportunity or need for it, the hyperscalers will figure out a way to fit that model. So I wouldn't worry too much about that, about deployment and where to have it and what to do with that. But you know, figure out the right business use case, get the right data, get the ethics and governance in place, and make sure you get it to production, and make sure you pull the model out when it's not operating well. >> Excellent advice. Andy, I got to thank you for coming into the studio today, helping us with this "Breaking Analysis" segment. Outstanding collaboration and insights and input in today's episode. Hope we can do more. >> Thank you. Thanks for having me. I appreciate it. >> You're very welcome. All right. I want to thank Alex Myerson, who's on production and manages the podcast. Ken Schiffman as well. Kristen Martin and Cheryl Knight helped get the word out on social media and our newsletters. And Rob Hof is our editor-in-chief over at SiliconANGLE. He does some great editing for us. Thank you all. Remember, all these episodes are available as podcasts. Wherever you listen, all you got to do is search "Breaking Analysis" podcast. I publish each week on wikibon.com and siliconangle.com, or you can email me at david.vellante@siliconangle.com to get in touch, or DM me at dvellante or comment on our LinkedIn posts. Please check out ETR.AI for the best survey data in the enterprise tech business, and Constellation Research, where Andy publishes some awesome information on AI and data. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching everybody and we'll see you next time on "Breaking Analysis". (gentle closing tune plays)
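One way to picture the model decay point Andy makes above, realize it immediately, pull the model out, retrain it, redeploy it, is as a simple monitoring loop. The sketch below is illustrative only; the error metric and degradation threshold are assumptions, not anyone's production setup.

```python
# Minimal sketch of a model-decay check: compare live prediction error against
# the error measured at deployment time and flag the model for retraining when
# it degrades past a threshold. Metric and threshold are illustrative.
from dataclasses import dataclass

@dataclass
class ModelHealth:
    baseline_mae: float              # error measured when the model was deployed
    degradation_limit: float = 0.25  # allow 25% degradation before acting

    def needs_retraining(self, live_mae: float) -> bool:
        return live_mae > self.baseline_mae * (1 + self.degradation_limit)

def mean_absolute_error(predicted, actual) -> float:
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# Hypothetical example: fares the model quoted versus what the trips actually cost.
health = ModelHealth(baseline_mae=4.0)
live_mae = mean_absolute_error(predicted=[40, 52, 38], actual=[60, 58, 45])
if health.needs_retraining(live_mae):
    print(f"live MAE {live_mae:.1f} exceeds the limit: pull the model, retrain, redeploy")
```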

Published Date : Dec 29 2022

Gunnar Hellekson, Red Hat & Adnan Ijaz, AWS | AWS re:Invent 2022


 

(bright music) >> Hello everyone. Welcome to theCUBE's coverage of AWS re:Invent 22. I'm John Furrier, host of theCUBE. Got some great coverage here talking about software supply chain and sustainability in the cloud. We've got a great conversation. Gunnar Hellekson, vice president and general manager of the Red Hat Enterprise Linux business unit at Red Hat. Thanks for coming on. And Adnan Ijaz, director of product management of commercial software services, AWS. Gentlemen, thanks for joining me today. >> It's a pleasure. (Adnan speaks indistinctly) >> You know, the hottest topic coming out of Cloud Native developer communities is supply chain software sustainability. This is a huge issue. As open source continues to power away and fund and grow this next generation modern development environment, you know, supply chain, you know, sustainability is a huge discussion because you got to check things out, what's in the code. Okay, open source is great, but now we got to commercialize it. This is the topic, Gunnar, let's get in with you. What are you seeing here and what's some of the things that you're seeing around the sustainability piece of it? Because, you know, containers, Kubernetes, we're seeing that runtime really dominate this new abstraction layer, cloud scale. What's your thoughts? >> Yeah, so I, it's interesting that the, you know, so Red Hat's been doing this for 20 years, right? Making open source safe to consume in the enterprise. And there was a time when in order to do that you needed to have a long term life cycle and you needed to be very good at remediating security vulnerabilities. And that was kind of, that was the bar that you had to climb over. Nowadays with the number of vulnerabilities coming through, what people are most worried about is, kind of, the provenance of the software and making sure that it has been vetted and it's been safe, and that things that you get from your vendor should be more secure than things that you've just downloaded off of GitHub, for example. Right? And that's a place where Red Hat's very comfortable living, right? Because we've been doing it for 20 years. I think there's another aspect to this supply chain question as well, especially with the pandemic. You know, we've got these supply chains that have been jammed up. The actual physical supply chains have been jammed up. And the two of these issues actually come together, right? Because as we go through the pandemic, we've got these digital transformation efforts, which are in large part, people creating software in order to better manage their physical supply chain problems. And so as part of that digital transformation, you have another supply chain problem, which is the software supply chain problem, right? And so these two things kind of merge as people are trying to improve the performance of transportation systems, logistics, et cetera. Ultimately, both supply chain problems actually boil down to a software problem. It's very interesting. >> Well, that is interesting. I want to just follow up on that real quick if you don't mind. Because if you think about the convergence of the software and physical world, you know, that's, you know, IOT and also hybridcloud kind of plays into that at scale, this opens up more surface area for attacks, especially when you're under a lot of pressure. This is where, you know, you have a service area on the physical side and you have constraints there. And obviously the pandemic causes problems. 
But now you've got the software side. How are you guys handling that? Can you just share a little bit more of how you guys are looking at that with Red Hat? What's the customer challenge? Obviously, you know, the skills gap is one, but, like, that's a convergence, and at the same time more security problems. >> Yeah, yeah, that's right. And certainly the volume of, if we just look at security vulnerabilities themselves, just the volume of security vulnerabilities has gone up considerably as more people begin using the software. And as the software becomes more important to, kind of, critical infrastructure, there are more eyeballs around it, and so we're uncovering more problems, which is kind of, that's okay, that's how the world works. And so certainly the number of remediations required every year has gone up. But also the customer expectations, as I mentioned before, the customer expectations have changed, right? People want to be able to show to their auditors and to their regulators that no, in fact, I can show the provenance of the software that I'm using. I didn't just download something random off the internet. I actually have like, you know, adults paying attention to how the software gets put together. And it's still, honestly, it's still very early days. I think as an industry, I think we're very good at managing, identifying and remediating vulnerabilities in the aggregate. We're pretty good at that. I think things are less clear when we talk about, kind of, the management of that supply chain, proving the provenance, and creating a resilient supply chain for software. We have lots of tools, but we don't really have lots of shared expectations. And so it's going to be interesting over the next few years, I think we're going to have more rules coming out. I see NIST has already published some of them. And as these new rules come out, the whole industry is going to have to kind of pull together and really rally around some of this shared understanding so we can all have shared expectations and we can all speak the same language when we're talking about this problem. >> That's awesome. Adnan, Amazon Web Services is obviously the largest cloud platform out there. You know, the pandemic, even post pandemic, some of these supply chain issues, whether it's physical or software, you're also an outlet for that. So if someone can't buy hardware or something physical, they can always get to the cloud. You guys have great network, compute and whatnot and you got thousands of ISVs across the globe. How are you helping customers with this supply chain problem? Because whether it's, you know, I need to get my networking gear and there's a delay, I'm going to go to the cloud and get help there. Or whether it's knowing the workloads and what's going on inside them with respect to open source. 'Cause you've got open source, which is kind of an external forcing function. You've got AWS and you got, you know, physical compute, storage, networking, et cetera. How are you guys helping customers with the supply chain challenge, which could be an opportunity? >> Yeah, thanks John. I think there are multiple layers to that. At the most basic level, we are helping customers by abstracting away all these data center constructs that they would have to worry about if they were running their own data centers. They would have to figure out the networking gear, you talk about, you know, having the right compute, the right physical hardware. 
So by moving to the cloud, at least they're delegating that problem to AWS and letting us manage and making sure that we have an instance available for them whenever they want it. And if they want to scale it, the capacity is there for them to use. Now then, so we kind of give them space to work on the second part of the problem, which is building their own supply chain solutions. And we work with all kinds of customers here at AWS from all different industry segments, automotive, retail, manufacturing. And you know, you see the complexity of the supply chain with all those moving pieces, like hundreds and thousands of moving pieces, it's very daunting. And then on the other hand, customers need more better services. So you need to move fast. So you need to build your agility in the supply chain itself. And that is where, you know, Red Hat and AWS come together. Where we can enable customers to build their supply chain solutions on platforms like Red Hat Enterprise Linux RHEL or Red Hat OpenShift on AWS, we call it ROSA. And the benefit there is that you can actually use the services that are relevant for the supply chain solutions like Amazon managed blockchain, you know, SageMaker. So you can actually build predictive analytics, you can improve forecasting, you can make sure that you have solutions that help you identify where you can cut costs. And so those are some of the ways we're helping customers, you know, figure out how they actually want to deal with the supply chain challenges that we're running into in today's world. >> Yeah, and you know, you mentioned sustainability outside of software sustainability, you know, as people move to the cloud, we've reported on SiliconANGLE here in theCUBE, that it's better to have the sustainability with the cloud because then the data centers aren't using all that energy too. So there's also all kinds of sustainability advantages. Gunnar, because this is kind of how your relationship with Amazon's expanded. You mentioned ROSA, which is Red Hat, you know, on OpenShift, on AWS. This is interesting because one of the biggest discussions is skills gap, but we were also talking about the fact that the humans are a huge part of the talent value. In other words, the humans still need to be involved. And having that relationship with managed services and Red Hat, this piece becomes one of those things that's not talked about much, which is the talent is increasing in value, the humans, and now you got managed services on the cloud. So we'll look at scale and human interaction. Can you share, you know, how you guys are working together on this piece? 'Cause this is interesting, 'cause this kind of brings up the relationship of that operator or developer. >> Yeah, yeah. So I think there's, so I think about this in a few dimensions. First is that it's difficult to find a customer who is not talking about automation at some level right now. And obviously you can automate the processes and the physical infrastructure that you already have, that's using tools like Ansible, right? But I think that combining it with the elasticity of a solution like AWS, so you combine the automation with kind of elastic and converting a lot of the capital expenses into operating expenses, that's a great way actually to save labor, right? So instead of like racking hard drives, you can have somebody do something a little more like, you know, more valuable work, right? And so, okay, but that gives you a platform. And then what do you do with that platform? 
You know, if you've got your systems automated and you've got this kind of elastic infrastructure underneath you, what you do on top of it is really interesting. So a great example of this is the collaboration that we had with running the RHEL workstation on AWS. So you might think, like, well why would anybody want to run a workstation on a cloud? That doesn't make a whole lot of sense. Unless you consider how complex it is to set up, if you have, the use case here is like industrial workstations, right? So it's animators, people doing computational fluid dynamics, things like this. So these are industries that are extremely data heavy. Workstations have very large hardware requirements, often with accelerated GPUs and things like this. That is an extremely expensive thing to install on-premise anywhere. And if the pandemic taught us anything, it's if you have a bunch of very expensive talent and they all have to work from home, it is very difficult to go provide them with, you know, several tens of thousands of dollars worth of workstation equipment. And so combine the RHEL workstation with the AWS infrastructure and now all that workstation computational infrastructure is available on demand and available right next to the considerable amount of data that they're analyzing or animating or working on. So it's a really interesting, it was actually, this is an idea that was actually born with the pandemic. >> Yeah. >> And it's kind of a combination of everything that we're talking about, right? It's the supply chain challenges of the customer, it's the lack of talent, making sure that people are being put to their best and highest use. And it's also having this kind of elastic, I think, OpEx heavy infrastructure as opposed to a CapEx heavy infrastructure. >> That's a great example. I think that illustrates to me what I love about cloud right now is that you can put stuff in the cloud and then flex what you need, when you need it, in the cloud rather than either ingress or egress of data. You just get more versatility around the workload needs, whether it's more compute or more storage or other high level services. This is kind of where this next gen cloud is going. This is where customers want to go once their workloads are up and running. How do you simplify all this and how do you guys look at this from a joint customer perspective? Because that example I think will be something that all companies will be working on, which is put it in the cloud and flex to whatever the workload needs and put it closer to the compute. I want to put it there. If I want to leverage more storage and networking, well, I'll do that too. It's not one thing, it's got to flex around. How are you guys simplifying this? >> Yeah, I think, so, I'll give my point of view and then I'm very curious to hear what Adnan has to say about it. But I think about it in a few dimensions, right? So there is a technically, like, any solution that Adnan's team and my team want to put together needs to be kind of technically coherent, right? Things need to work well together. But that's not even most of the job. Most of the job is actually ensuring an operational consistency and operational simplicity, so that everything is, the day-to-day operations of these things kind of work well together. And then also, all the way to things like support and even acquisition, right? Making sure that all the contracts work together, right? It's a really... 
So when Adnan and I think about places of working together, it's very rare that we're just looking at a technical collaboration. It's actually a holistic collaboration across support, acquisition, as well as all the engineering that we have to do. >> Adnan, your view on how you're simplifying it with Red Hat for your joint customers making collaborations? >> Yeah, Gunnar covered it well. I think the benefit here is that Red Hat has been the leading Linux distribution provider. So they have a lot of experience. AWS has been the leading cloud provider. So we have both our own points of view, our own learning from our respective set of customers. So the way we try to simplify and bring these things together is working closely. In fact, I sometimes joke internally that if you see Gunnar and my team talking to each other on a call, you cannot really tell who belongs to which team. Because we're always figuring out, okay, how do we simplify discount experience? How do we simplify programs? How do we simplify go to market? How do we simplify the product pieces? So it's really bringing our learning and share our perspective to the table and then really figure out how do we actually help customers make progress. ROSA that we talked about is a great example of that, you know, together we figured out, hey, there is a need for customers to have this capability in AWS and we went out and built it. So those are just some of the examples in how both teams are working together to simplify the experience, make it complete, make it more coherent. >> Great, that's awesome. Next question is really around how you help organizations with the sustainability piece, how to support them simplifying it. But first, before we get into that, what is the core problem around this sustainability discussion we're talking about here, supply chain sustainability, what is the core challenge? Can you both share your thoughts on what that problem is and what the solution looks like and then we can get into advice? >> Yeah. Well from my point of view, it's, I think, you know, one of the lessons of the last three years is every organization is kind of taking a careful look at how resilient it is, or I should say, every organization learned exactly how resilient it was, right? And that comes from both the physical challenges and the logistics challenges that everyone had, the talent challenges you mentioned earlier. And of course the software challenges, you know, as everyone kind of embarks on this digital transformation journey that we've all been talking about. And I think, so I really frame it as resilience, right? And resilience at bottom is really about ensuring that you have options and that you have choices. The more choices you have, the more options you have, the more resilient you and your organization is going to be. And so I know that's how I approach the market. I'm pretty sure that's how Adnan is approaching the market, is ensuring that we are providing as many options as possible to customers so that they can assemble the right pieces to create a solution that works for their particular set of challenges or their unique set of challenges and unique context. Adnan, does that sound about right to you? >> Yeah, I think you covered it well. I can speak to another aspect of sustainability, which is becoming increasingly top of mind for our customers. 
Like, how do they build products and services and solutions and whether it's supply chain or anything else which is sustainable, which is for the long term good of the planet. And I think that is where we have also been very intentional and focused in how we design our data center, how we actually build our cooling system so that those are energy efficient. You know, we are on track to power all our operations with renewable energy by 2025, which is five years ahead of our initial commitment. And perhaps the most obvious example of all of this is our work with ARM processors, Graviton3, where, you know, we are building our own chip to make sure that we are designing energy efficiency into the process. And you know, the ARM Graviton3 processor chips, they are about 60% more energy efficient compared to some of the CD6 comparable. So all those things that also we are working on in making sure that whatever our customers build on our platform is long term sustainable. So that's another dimension of how we are working that into our platform. >> That's awesome. This is a great conversation. You know, the supply chain is on both sides, physical and software. You're starting to see them come together in great conversations. And certainly moving workloads to the cloud and running them more efficiently will help on the sustainability side, in my opinion. Of course, you guys talked about that and we've covered it. But now you start getting into how to refactor, and this is a big conversation we've been having lately is as you not just lift and shift, but replatform it and refactor, customers are seeing great advantages on this. So I have to ask you guys, how are you helping customers and organizations support sustainability and simplify the complex environment that has a lot of potential integrations? Obviously API's help of course, but that's the kind of baseline. What's the advice that you give customers? 'Cause you know, it can look complex and it becomes complex, but there's an answer here. What's your thoughts? >> Yeah, I think, so whenever I get questions like this from customers, the first thing I guide them to is, we talked earlier about this notion of consistency and how important that is. One way to solve the problem is to create an entirely new operational model, an entirely new acquisition model, and an entirely new stack of technologies in order to be more sustainable. That is probably not in the cards for most folks. What they want to do is have their existing estate and they're trying to introduce sustainability into the work that they are already doing. They don't need to build another silo in order to create sustainability, right? And so there has to be some common threads, there has to be some common platforms across the existing estate and your more sustainable estate, right? And so things like Red Hat Enterprise Linux, which can provide this kind of common, not just a technical substrate, but a common operational substrate on which you can build these solutions. If you have a common platform on which you are building solutions, whether it's RHEL or whether it's OpenShift or any of our other platforms, that creates options for you underneath. So that in some cases maybe you need to run things on-premises, some things you need to run in the cloud, but you don't have to profoundly change how you work when you're moving from one place to another. >> Adnan, what's your thoughts on the simplification? 
>> Yeah, I mean, when you talk about replatforming and refactoring, it is a daunting undertaking, you know, especially in today's fast paced world. But the good news is you don't have to do it by yourself. Customers don't have to do it on their own. You know, together AWS and Red Hat, we have our rich partner ecosystem, you know, AWS has over 100,000 partners that can help you take that journey, the transformation journey. And within AWS and working with our partners like Red Hat, we make sure that we have- In my mind, there are really three big pillars that you have to have to make sure that customers can successfully re-platform, refactor their applications to the modern cloud architecture. You need to have the rich set of services and tools that meet their different scenarios, different use cases. Because no one size fits all. You have to have the right programs because sometimes customers need those incentives, they need those, you know, that help in the first step. And last but not least, they need training. So all of that, we try to cover that as we work with our customers, work with our partners. And that is where, you know, together we try to help customers take that step, which is a challenging step to take. >> Yeah, you know, it's great to talk to you guys, both leaders in your field. Obviously Red Hat, I remember the days back when I was provisioning and loading OSs on hardware with CDs, if you remember those days, Gunnar. But now with the high level services, if you look at this year's re:Invent, and this is kind of my final question for the segment, that we'll get your reaction to: last year we talked about higher level services. I sat down with Adam Selipsky, we talked about that. If you look at what's happened this year, you're starting to see people talk about their environment as their cloud. So Amazon has the gift of the CapEx, all that investment and people can operate on top of it. They're calling that environment their cloud. Okay? For the first time we're seeing this new dynamic where it's like they have a cloud, but Amazon's the CapEx, they're operating. So, you're starting to see the operational visibility, Gunnar, around how to operate this environment. And it's not hybrid, this, that, it's just, it's cloud. This is kind of an inflection point. Do you guys agree with that or have a reaction to that statement? Because I think this is, kind of, the next gen supercloud-like capability. We're going, we're building the cloud. It's now an environment. It's not talking about private cloud, this cloud, it's all cloud. What's your reaction? >> Yeah, I think, well, I think it's very natural. I mean, we use words like hybridcloud, multicloud, I guess supercloud is what the kids are saying now, right? It's all describing the same phenomenon, right? Which is being able to take advantage of lots of different infrastructure options, but still having something that creates some commonality among them so that you can manage them effectively, right? So that you can have, kind of, uniform compliance across your estate. So that you can have, kind of, you can make the best use of your talent across the estate. I mean this is, it's a very natural thing. >> John: They're calling it cloud, the estate is the cloud. >> Yeah. So yeah, so fine, if it means that we no longer have to argue about what's multicloud and what's hybridcloud, I think that's great. Let's just call it cloud. 
>> Adnan, what's your reaction, 'cause this is kind of the next gen benefits of higher level services combined with amazing, you know, compute and resource at the infrastructure level. What's your view on that? >> Yeah, I think the construct of a unified environment makes sense for customers who have all these use cases which require, like for instance, if you are doing some edge computing and you're running AWS Outposts or, you know, Wavelength and these things. So, and it is fair for customers to think that, hey, this is one environment, the same set of tooling that they want to build that works across all their different environments. That is why we work with partners like Red Hat so that customers who are running Red Hat Enterprise Linux on-premises and who are running in AWS get the same level of support, get the same level of security features, all of that. So from that sense, it actually makes sense for us to build these capabilities in a way that customers don't have to worry about, okay, now I'm actually in the AWS data center versus I'm running Outposts on-premises. It is all one. They just use the same set of CLI, command line APIs and all of that. So in that sense it actually helps customers have that unification, so that consistency of experience helps their workforce be more productive, versus figuring out, okay, what do I do, which tool do I use where? >> Adnan, you just nailed it. This is about supply chain sustainability, moving the workloads into a cloud environment. You mentioned Wavelength, this conversation's going to continue. We haven't even talked about the edge yet. This is something that's going to be all about operating these workloads at scale and all with the cloud services. So thanks for sharing that and we'll pick up that edge piece later. But for re:Invent right now, this is really the key conversation. How to make the sustained supply chain work in a complex environment, making it simpler. And so thank you for sharing your insights here on theCUBE. >> Thanks, thanks for having us. >> Okay, this is theCUBE's coverage of AWS re:Invent 22. I'm John Furrier, your host. Thanks for watching. (bright music)

Published Date : Dec 7 2022

Faye Ellis & Mattias Andersson, Pluralsight | AWS re:Invent 2022


 

(digital music) >> Welcome back to "theCUBE's" live coverage of AWS re:Invent 2022. Lisa Martin here in Las Vegas with Dave Vellante. Dave, we've been here.. This is our third day, we started Monday night. We've done well over 70 interviews so far. I've lost count. >> Yeah, I don't count anymore. (Lisa laughing) >> Just go with the flow. >> We've been talking all things Cloud with AWS, its ecosystem of partners and customers. We're excited to welcome a couple of folks from Pluralsight to the program. Talking about the state of Cloud. Faye Ellis joins us, Principal Training Architect at A Cloud Guru, Pluralsight. Mattias Andersson is also here, Principal Developer Advocate at Pluralsight. Guys, welcome to theCUBE. >> Thank you. >> Thank you so much for having us. >> Great to have you. >> Mattias: Glad to be here. >> Just in case our audience isn't familiar with A Cloud Guru and Pluralsight, why don't you give us just that high level elevator pitch? >> Yeah, well we basically help organizations transform their people so that they can deliver Cloud transformations within their own organizations. So it's all about upskilling and getting people Cloud fluent and ready to rock Cloud in their own organizations. >> Love that, Cloud fluent. But what are you hearing from the developer community? You're a developer advocate. We've seen so much pivot towards the developers really influencing business decisions, business direction. What's the voice of the developer like these days? >> Well, I think that a lot of developers are recognizing that the Cloud does offer a lot of value for the things that they're wanting to get done. Developers generally want to do things, they want to build things, they want stuff that they can look at and say, "Hey I made that and it's really good and it solves problems." And so I'm hearing a lot of people talking about how they value things like serverless, to be able to build those sorts of systems without a whole lot of other people necessarily needing to support them. They can get so much built on their own even. And then as teams, they can accomplish a lot of, again, the same sorts of projects. They can build those forward much more efficiently as a smaller team than they could have in the past without that technology. So I'm hearing a lot about that. Especially because I'm working with Cloud so much, is what I mean, right? >> So it's kind of putting the power back into their hands as developers. Instead of having to wait for the infrastructure people or the support people to create a server so that they can deploy applications, there are a lot more tools to allow them to actually do that for themselves, isn't there? >> Absolutely, absolutely. It opens up so many doors. >> So pre-Ukraine, we were writing about the skills shortage. I call it the slingshot economy. All right. Oh wow it's like this talent war. And then all of a sudden, Twitter layoffs and there's this talent on the street. Now it might not be a perfect match, but what are you seeing in terms of new talent coming on that you can train and coach. How are you seeing the match and the alignment with what the demand for talent? Now I know your philosophy is you should be producers of talent, not consumers of talent. I get that. >> Faye: Yeah. >> But to produce talent you've got to coach, train, assist people. So what are you seeing today? What's the state of that sort of market? >> That's a really good question. 
I mean our State of Cloud report, it says that 75% of tech leaders are building all their new products and features in the Cloud. But what was the other stat, Mattias? >> Only 8% of the actual individuals that are working with the technology say that they have extensive skills with the Cloud. So that's a huge gap between the people who are wanting to build that forward as the leadership of the organization and the people that they have available, whether it's internal to their organization or external. So they do have a lot of people who are working in technology already in their organizations in general. But they do need to invest in that. Those technologists are learning things all the time. But are they maybe not learning the right things? Are they not learning them effectively? Are they not moving the organization forward? >> Dave: So go ahead, please. >> Yeah, so we think it's all about like nurturing the talent that you have already in your own organization. And those are the people who really know your business. And you know, it takes time to kind of upskill and really, really develop those Cloud skills and develop that experience. But it's not always the right thing to take on new teams. Like bring in new people and then you've got to get them up to speed with your own business. And actually isn't it much more wonderful to be able to nurture the talent within your own organization and and create that long-term relationship with your own employees. >> So where do you start? Like to get to work for Amazon you got to prove that you're reasonably professional. I mean everybody, the whole company has to like spin up an EC2 instance and do something with it. Is that where you start? Is it sort of education and what's available? What's the Cloud? Or is it more advanced than that? You're looking for maybe people with a technical mind that you're.. or do you have.. obviously have different levels, but take us through sort of the anatomy of experience. >> When you say, "Where do you start?" Who are you meaning? Are you meaning an organization, an individual, a team? >> You guys, when you bring on.. begin to expose an individual to the Cloud, >> Mattias: Right. >> Their objective is to become proficient at something. >> Right. >> Right. And so is it something that you have 100, 101, 201, basically? >> Well, you know what, if you want to learn how to swim you got to jump in the water. That's what I always think. And we focus on practical skills, the ability to do something, to get something done. Get something configured within the Cloud. A lot of the time our customers are asking us for skills that kind of go beyond certification. And for a really long time we were.. A Cloud Guru has been famous for getting engineers certified. But that's just one piece of the puzzle, isn't it? Certification is wonderful, but it's that chicken and egg scenario that I think that you were alluding to which is that you need experience to get the experience. So how are you going to get that experience? And we've got loads of different ideas to help people to actually do that. On our platform we've got lots of practical exercises that you can do. Building out serverless websites, configuring a web application firewall, building a VPC. We've got troubleshooting labs, we've got challenge labs, that kind of thing. And we've also got some free resources, haven't we as well, Mattias. >> Yes. >> Things like our Cloud Portfolio Challenges, which are like little projects that you can complete all by yourself. 
Creating serverless websites, playing around with SageMaker. You get some requirements and you have to design and actually build that. But it's all about getting that hands-on practice and that's kind of what we focus on. And we start off with easy things, and then we kind of layer it up and layer it up. And we kind of build on the easy foundations until, before you know it, you're Cloud fluent. >> Yeah, I think that there is a lot of value.. You were mentioning to, just to circle back on certifications, that is a really valuable way for a lot of people to start to take a look at the certifications that AWS offers, for example, and say, "How can I use those to guide my learning?" Because I know that sometimes people look at certifications as like a replacement for some sort of an assessment or whatever. And it's not really that most of the time. Most of the time the key value is that it guides people to learn a scope of material that is really valuable to them. And in particular it uncovers blind spots for them. So to answer your question of "Where do you start as an individual?".. People often ask me, "Okay, so I know all these things, which certifications should I get?" And I say, the Cloud Practitioner is the place to start. And they're like, "Oh, but maybe that's too easy." And I say, maybe it is, but then it's going to be really quick for you. If it's not really quick for you, then it was really valuable. You learned those key things. And if it was really quick but you didn't spend a lot of time on it and now you're just that much further along on the next certification that sort of guides you to the next larger scope. So it's a really valuable system that I often guide people to. To say that you can jump into that, anyone actually can jump into the Cloud Practitioner and learn that. And we often recommend that across an entire organization, you could potentially have everyone that gets that Cloud Practitioner. Whether you're finance or sales or leadership executive, the individual teams in technology departments of course. But everyone can get that Cloud fluency and then they can communicate far more effectively with each other. So it's not just the technologists that are needing to do that. >> Absolutely. And I think also it's about leading by example. If you're in leadership and you are asking your engineers to upskill themselves so that you can deliver your transformation goals, well actually, it's leadership responsibility to lead by example as well. And I heard a wonderful story from a customer. Just yesterday, a female CFO in her seventies just got her Cloud Practitioner certification. >> Lisa: Right on. >> I mean, that's wonderful. As I said before, a career in Cloud is a commitment to learning. It's lifelong learning. So yeah, that's wonderful. And long may it continue. I'd love to be in my seventies still learning new things and still rocking it. Maybe not the CFO, maybe something different. But yeah, that would be wonderful. >> How do you define Cloud fluency? There's so many opportunities that you both talked about and you walked through really kind of the step-by-step process. But how would someone define themselves as Cloud fluent? And how.. it's almost like what you were talking about, Mattias, is sort of the democratization of Cloud fluency across an organization, but what does it actually look like? >> Wow, good question. For me, I think it means everybody speaking the same language and having a common understanding. 
And I think that does kind of hark back to what you were saying before, Mattias, about the foundational certifications. The Cloud Practitioner type certification. What do you think? >> Yeah, I think a part of it is a mindset shift that people need to understand a different way of thinking about technology. That Cloud isn't just another tool just like all the others. It's a different way, a higher level of abstraction in technology that makes us more effective and efficient because of that. But because of that, also, we need to think about it in not the same way as we were before. So if you take it to the language analogy, instead of memorizing a few phrases like "Where is the bathroom?" or "How much does that cost?" or whatever, you have an understanding of the flow of the language. You understand that okay, there are verbs and nouns and I can put them together in this way. Oh, adjectives, those are kind of interesting. I can add those to things. And you have this model, mental model for how you can interact with the technology just like you would interact with the language or whatever other things. So the mental model actually, I think, is really the key thing that I keep coming back to a lot when people are learning that the mental model that you have for something is really what.. this sort of helps you understand the mastery of that. It's whether your mental model is mature and it's not changing a lot as you're learning new information, that's a really valuable milestone for someone to get to. Because as you're learning new things.. otherwise you would make assumptions, and then you learn new things that challenge those assumptions and you have to change the mental model to move forward. So the fluency is when that mental model, you have the understanding and you can then communicate. >> Yep. Love that. Last question for you guys is, we have about a minute left. If you had a billboard that you could put anywhere about A Cloud Guru at Pluralsight and what you're enabling with respect to Cloud fluency. I want you to each kind of take about 30 seconds to.. from your perspective, what would it say? >> Oh my goodness. I think it would say something like, Cloud is for everybody. It's no longer this elitist, difficult to understand, abstract thing. And I think it's something that is inclusive to everybody and that we should all be embracing it. And if you don't do it, you are going to be left behind because your competitors are going to be getting the advantages from Cloud. You're going to miss that competitive advantage and you're going to lose out. So yeah, that's probably quite a lot to put on a billboard. >> I love it. And Mattias, what would your billboard say? >> Ah, let me think. Okay. I might say something like, "The future of technology is accessible and important if you're in a technology career." I don't know, now it's getting more wordy. That's not quite right. But the point is that the Cloud really is the future of technology. It's not just some other little tool that's a fad or whatever. It's a different way of approaching technology. I'm realizing you're asking about the billboard as a short thing. The Cloud is the future. You can do it. You should do it. (everyone laughing) >> Drop the mic. Nailed it! Faye, Mattias, thank you so much joining us.. >> Thank you so much, we really appreciate it. >> Lisa: This was a great session. >> Thank you. >> Lisa: Great to have A Cloud Guru by Pluralsight on the program. We appreciate you stopping by. >> Oh, thank you so much. 
>> Thank you both so very much. >> We appreciate it. >> Lisa: Our pleasure. >> Thank you. For our guests and for Dave Vellante, I'm Lisa Martin. You're watching "theCUBE", the leader in live enterprise and emerging tech coverage. (digital music)

Published Date : Dec 1 2022

Itamar Ankorion, Qlik & Peter MacDonald, Snowflake | AWS re:Invent 2022


 

(upbeat music) >> Hello, welcome back to theCUBE's AWS re:Invent 2022 coverage. I'm John Furrier, host of theCUBE. Got a great lineup here, Itamar Ankorion, SVP Technology Alliances at Qlik, and Peter MacDonald, vice president, cloud partnerships and business development at Snowflake. We're going to talk about bringing SAP data to life with a joint Snowflake, Qlik and AWS solution. Gentlemen, thanks for coming on theCUBE. Really appreciate it. >> Thank you. >> Thank you, great meeting you John. >> Just to get started, introduce yourselves to the audience, then we're going to jump into what you guys are doing together, unique relationship here, really compelling solution in cloud. Big story about applications and scale this year. Let's introduce yourselves. Peter, we'll start with you. >> Great. I'm Peter MacDonald. I am vice president of Cloud Partners and business development here at Snowflake. On the Cloud Partner side, that means I manage the AWS relationship along with Microsoft and Google Cloud. What we do together in terms of complementary products, GTM, co-selling, things like that. Importantly, working with other third parties like Qlik for joint solutions. On business development, it's negotiating custom commercial partnerships, large companies like Salesforce and Dell, smaller companies at most for our venture portfolio. >> Thanks Peter and hi John. It's great to be back here. So I'm Itamar Ankorion and I'm the senior vice president responsible for technology alliances here at Qlik. With that, I own strategic alliances, including our key partners in the cloud, including Snowflake and AWS. I've been in the data and analytics enterprise software market for 20 plus years, and my main focus is product management, marketing, alliances, and business development. I joined Qlik about three and a half years ago through the acquisition of Attunity, which is now the foundation for Qlik Data Integration. So again, we focus in my team on creating joint solution alignment with our key partners to provide more value to our customers. >> Great to have both you guys, senior executives in the industry on theCUBE here, talking about data, obviously bringing SAP data to life is the theme of this segment, but this re:Invent, it's all about the data, big data end-to-end story, a lot about data being intrinsic, as the CEO says on stage, in organizations in all aspects. Take a minute to explain what you guys are doing from a company standpoint. Snowflake and Qlik and the solutions, why here at AWS? Peter, we'll start with you at Snowflake, what you guys do as a company, your mission, your focus. >> That was great, John. Yeah, so here at Snowflake, we focus on the data platform and until recently, data platforms required expensive on-prem hardware appliances. And despite all that expense, customers had capacity constraints, expensive maintenance, and limited functionality that all impeded these organizations from reaching their goals. Snowflake is a cloud native SaaS platform, and we've become so successful because we've addressed these pain points and have other new special features. For example, securely sharing data across both the organization and the value chain without copying the data, support for new data types such as JSON and semi-structured data, and also advanced in-database data governance. Snowflake integrates with complementary AWS services and other partner products. 
So we can enable holistic solutions that include, for example, here, both Qlik and AWS SageMaker and Comprehend, and bring those to joint customers. Our customers want to convert data into insights along with advanced analytics platforms and AI. That is how they make holistic data-driven solutions that will give them competitive advantage. With Snowflake, our approach is to focus on customer solutions that leverage data from existing systems such as SAP, wherever they are in the cloud or on-premise. And to do this, we leverage partners like Qlik and AWS to help customers transform their businesses. We provide customers with a premier data analytics platform as a result. Itamar, why don't you talk about Qlik a little bit and then we can dive into the specific SAP solution here and some trends. >> Sounds great, Peter. So Qlik provides modern data integration and analytics software used by over 38,000 customers worldwide. Our focus is to help our customers turn data into value and help them close the gap between data all the way through insight and action. We offer Qlik Data Integration and Qlik Data Analytics. Qlik Data Integration helps to automate the data pipelines to deliver data to where they want to use it in real-time and make the data ready for analytics, and then Qlik Data Analytics is a robust platform for analytics and business intelligence that has been a leader in the Gartner Magic Quadrant for over 11 years now in the market. And both of these come together into what we call Qlik Cloud, which is our SaaS based platform. So providing a more seamless way to consume all these services and accelerate time to value with customer solutions. In terms of partnerships, both Snowflake and AWS are very strategic to us here at Qlik, so we have very comprehensive investment to ensure a strong joint value proposition that we can bring to our mutual customers, everything from aligning our roadmaps through optimizing and validating integrations, collaborating on best practices, packaging joint solutions like the one we'll talk about today. And with that investment, we are an elite level, top level partner with Snowflake. We verify that our technology is Snowflake-ready across the entire product set and we have hundreds of joint customers together, and with AWS we've also partnered for a long time. We're here at re:Invent. We've been here since the inaugural one, so it kind of gives you an idea for how long we've been working with AWS. We provide very comprehensive integration with AWS data analytics services, and we have several competencies ranging from data analytics to migration and modernization. So that's our focus and again, we're excited about working with Snowflake and AWS to bring solutions together to market.
What are you hearing from your customers? Why do they care? Why are they going down this road? Peter, we'll start with you. >> Yeah, I'll go ahead and start. Thanks. Yeah, I'd say we continue to see customers being, being very eager to transform their businesses and they know they need to leverage technology and data to do so. They're also increasingly depending upon the cloud to bring that agility, that elasticity, new functionality necessary to react in real-time to every evolving customer needs. You look at what's happened over the last three years, and boy, the macro environment customers, it's all changing so fast. With our partnerships with AWS and Qlik, we've been able to bring to market innovative solutions like the one we're announcing today that spans all three companies. It provides a holistic solution and an integrated solution for our customer. >> Itamar let's get into it, you've been with theCUBE, you've seen the journey, you have your own journey, many, many years, you've seen the waves. What's going on now? I mean, what's the big wave? What's the dynamic powering this trend? >> Yeah, in a nutshell I'll call it, it's all about time. You know, it's time to value and it's about real-time data. I'll kind of talk about that a bit. So, I mean, you hear a lot about the data being the new oil, but it's definitely, we see more and more customers seeing data as their critical enabler for innovation and digital transformation. They look for ways to monetize data. They look as the data as the way in which they can innovate and bring different value to the customers. So we see customers want to use more data so to get more value from data. We definitely see them wanting to do it faster, right, than before. And we definitely see them looking for agility and automation as ways to accelerate time to value, and also reduce overall costs. I did mention real-time data, so we definitely see more and more customers, they want to be able to act and make decisions based on fresh data. So yesterday's data is just not good enough. >> John: Yeah. >> It's got to be down to the hour, down to the minutes and sometimes even lower than that. And then I think we're also seeing customers look to their core business systems where they have a lot of value, like the SAP, like mainframe and thinking, okay, our core data is there, how can we get more value from this data? So that's key things we see all the time with customers. >> Yeah, we did a big editorial segment this year on, we called data as code. Data as code is kind of a riff on infrastructure as code and you start to see data becoming proliferating into all aspects, fresh data. It's not just where you store it, it's how you share it, it's how you turn it into an application intrinsically involved in all aspects. This is the big theme this year and that's driving all the conversations here at RE:Invent. And I'm guaranteeing you, it's going to happen for another five and 10 years. It's not stopping. So I got to get into the solution, you guys mentioned SAP and you've announced the solution by Qlik, Snowflake and AWS for your customers using SAP. Can you share more about this solution? What's unique about it? Why is it important and why now? Peter, Itamar, we'll start with you first. >> Let me jump in, this is really, I'll jump because I'm excited. We're very excited about this solution and it's also a solution by the way and again, we've seen proven customer success with it. 
So to your point, it's ready to scale, it's starting, I think we're going to see a lot of companies doing this over the next few years. But before we jump to the solution, let me maybe take a few minutes just to clarify the need, why we're seeing, why we're seeing customers jump to do this. So customers that use SAP, they use it to manage the core of their business. So think order processing, management, finance, inventory, supply chain, and so much more. So if you're running SAP in your company, that data creates a great opportunity for you to drive innovation and modernization. So what we see customers want to do, they want to do more with their data, and more means they want to take SAP with non-SAP data and use it together to drive new insights. They want to use real-time data to drive real-time analytics, which they couldn't do to date. They want to bring together descriptive with predictive analytics, so adding machine learning and AI to drive more value from the data. And naturally they want to do it faster. So find ways to iterate faster on their solutions, have freedom with the data and agility. And I think this is really where cloud data platforms like Snowflake and AWS, you know, bring that value to be able to drive that. Now to do that you need to unlock the SAP data, which is a lot of also where Qlik comes in, because the typical challenge these customers run into is the complexity inherent in SAP data. Tens of thousands of tables, proprietary formats, complex data models, licensing restrictions, and more. Then you have the performance issues they usually run into: how do we handle the throughput, the volumes, while maintaining lower latency and impact? Where do we find knowledge to really understand how to get all this done? So these are the things we've looked at when we came together to create a solution and make it unique. So when you think about its uniqueness, because we put together a lot, I'll go through three, four key things that come together to make this unique. First is about data delivery. How do you have the SAP data delivered? So how do you get it from ECC, from HANA, from S/4HANA, how do you deliver the data and the metadata and have that integrate well into Snowflake. And what we've done is we've focused a lot on optimizing that process and the continuous ingestion, so the real-time ingestion of the data in a way that works really well with the Snowflake data cloud. Second thing is we looked at SAP data transformation, so once the data arrives at Snowflake, how do we turn it into being analytics ready? So that's where data transformation and data warehouse automation come in. And these are all elements of this solution. So creating derivative datasets, creating data marts, and all of that is done by, again, creating an optimized integration that pushes down SQL based transformations, so they can be processed inside Snowflake, leveraging its powerful engine. And then the third element is bringing together data visualization and analytics that can take all the data that's now organized inside Snowflake, bring other data in, bring machine learning from SageMaker, and then create a seamless integration to bring analytic applications to life. So these are all things we put together in the solution. And maybe the last point is we actually took the next step with this and we created something we refer to as solution accelerators, which we're really, really keen about.
Think about this as prepackaged templates for common business analytic needs like order to cash, finance, inventory. And we can either dig into that a little more later, but this gets the next level of value to the customers, all built into this joint solution. >> Yeah, I want to get to the accelerators, but real quick, Peter, your reaction to the solution, what's unique about it? And obviously Snowflake, we've been seeing the progression of data applications, more developers developing on top of Snowflake, data as code kind of implies developer ecosystem. This is kind of interesting. I mean, you got partnering with Qlik and AWS, it's kind of developer-like thinking, a real solution. What's unique about this SAP solution that's, that's different than what customers can get anywhere else or not? >> Yeah, well listen, I think first of all, you have to start with the idea of the solution. This is three companies coming together to build a holistic solution that is all about, you know, creating a great opportunity to turn SAP data into value, as Itamar was talking about. That's really what we're talking about here and there's a lot of technology underneath it. I'll talk more about the Snowflake technology, what's involved here, and then cover some of the AWS pieces as well. But you know, we're focusing on getting that value out and accelerating time to value for our joint customers. As Itamar was saying, you know, there's a lot of complexity with the SAP data and a lot of value there. How can we manage that in a prepackaged way, bringing together best of breed solutions with proven capabilities and bringing this to market quickly for our joint customers. You know, Snowflake and AWS have been strong partners for a number of years now, and that's not only on how Snowflake runs on top of AWS, but also how we integrate with their complementary analytics and other products. And so, you know, we want to be able to leverage those in addition to what Qlik is bringing in terms of the data transformations, bringing data out of SAP, and the visualization as well. All very critical. And then we want to bring in the predictive analytics AWS brings and what SageMaker brings. We'll talk about that a little bit later on. Some of the technologies that we're leveraging are some of our latest cutting edge technologies that really make things easier for both our partners and our customers. For example, Qlik leverages Snowflake's recently released Snowpark for Python functionality to push down those data transformations from Qlik into Snowflake that Itamar's mentioning. And we also leverage Snowpark for integrations with Amazon SageMaker, so there's a lot of great new technology that just makes this easy and compelling for customers. >> I think that's the big word, easy button here for what may look like a complex kind of integration, kind of turnkey, really, really compelling example of the modern era we're living in, as we always say in theCUBE. You mentioned accelerators, SAP accelerators. Can you give an example of how that works with the technology from the third party providers to deliver this business value, Itamar, 'cause that was an interesting comment. What's the example? Give an example of this acceleration. >> Yes, certainly. I think this is something that really makes this truly, truly unique in the industry and again, a great opportunity for customers. So we kind of talked earlier about there's a lot of things that need to be done with SAP data to turn it to value.
And these accelerators, as the name suggests, are designed to do just that, to kind of jumpstart the process and reduce the time and the risk involved in such a project. So again, these are pre-packaged templates. We basically took a lot of knowledge, and a lot of configurations, best practices about how to get things done, and we put 'em together. So think about all the steps, it includes things like data extraction, so already knowing which tables, all the relevant tables that you need to get data from in the context of the solution you're looking for, say like order to cash, we'll get back to that one. How do you continuously deliver that data into Snowflake in an efficient manner, handling things like data type mappings, metadata naming conventions and transformations. The data models you build, all the way to data mart definitions and all the transformations that the data needs to go through, moving through steps until it's fully analytics ready. And then on top of that, even adding a library of comprehensive analytic dashboards and integrations through machine learning and AI, and put all of that in a way that's pre-integrated and tested to work with Snowflake and AWS. So this is where again, you get this entire recipe that's ready. So take for example, I think I mentioned order to cash. So again, all these things I just talked about, I mean, for those who are not familiar, I mean order to cash is a critical business process for every organization. So especially if you're in retail, manufacturing, enterprise, it's a big... This is where, you know, starting with booking a sales order, followed by fulfilling the order, billing the customer, then managing the accounts receivable when the customer actually pays, right? So this whole process, you got sales order fulfillment and the billing impacts customer satisfaction, you got receivable payments that impact working capital, cash liquidity. So again, as a result this order to cash process is the lifeblood for many businesses and it's critical to optimize and understand. So the solution accelerator we created specifically for order to cash takes care of understanding all these aspects and the data that needs to come with it. So everything we outlined before to make the data available in Snowflake in a way that's really useful for downstream analytics, along with dashboards that are already common for that, for that use case. So again, this enables customers to gain real-time visibility into their sales orders, fulfillment, accounts receivable performance. That's what the accelerators are all about. And very similarly, we have another one for example, for finance analytics, right? So this will optimize financial data reporting, helps customers get insights into P&L, financial risk and stability, or inventory analytics that helps with, you know, improved planning and inventory management, utilization, increased efficiencies, you know, so in supply chain. So again, these accelerators really help customers get a jumpstart and move faster with their solutions. >> Peter, this is the easy button we just talked about, getting things going, you know, get the ball rolling, get some acceleration. Big part of this are the three companies coming together doing this. >> Yeah, and to build on what Itamar just said, the SAP data obviously has tremendous value.
Those sales orders, distribution data, financial data, bringing that into Snowflake makes it easily accessible, but also it enables it to be combined with other data too, which is one of the things that Snowflake does so well. So you can get a full view of the end-to-end process and the business overall. You know, for example, I'll just take one, you know, one example that, that may not come to mind right away, but you know, looking at the impact of weather conditions on supply chain logistics is relevant and material and of interest to our customers. How do you bring those different data sets together in an easy way, bringing the data out of SAP, bringing maybe other data out of other systems through Qlik or through Snowflake, directly bringing data in from our data marketplace, and bring that all together to make it work. You know, fundamentally, the organizational silos and the data fragmentation that exist otherwise make it really difficult to drive modern analytics projects. And that in turn limits the value that our customers are getting from SAP data and these other data sets. We want to enable that and unleash it. >> Yeah, time for value. This is great stuff. Itamar, final question, you know, what customers are using this? What do you have? I'm sure you have customer examples already using the solution. Can you share kind of what these examples look like in the use cases and the value? >> Oh yeah, absolutely. Thank you. Happy to. We have customers across different, different sectors. You see manufacturing, retail, energy, oil and gas, CPG. So again, customers in those segments, those sectors typically have SAP. So we have customers in all of them. A great example is like Siemens Energy. Siemens Energy is a global provider of gas and power services. You know, over what, 28 billion, 30 billion in revenue. 90,000 employees. They operate globally in over 90 countries. So they've used SAP HANA as a core system, so it's running on premises, multiple locations around the world. And what they were looking for is a way to bring all this data together so they can innovate with it. And the thing is, as Peter mentioned earlier, not just the SAP data, but also bring other data from other systems to bring it together for more value. That includes finance data, logistics data, customer CRM data. So they bring data from over 20 different SAP systems, okay, with Qlik data integration, feeding that into Snowflake in under 20 minutes, 24/7, 365, you know, days a year. Okay, they get data from over 20,000 tables, you know, hundreds of millions of records daily going in. So it is a great example of the type of scale, scalability, agility and speed that they can get to drive this kind of innovation. So that's a great example with Siemens. You know, another one comes to mind is a global manufacturer. Very similar scenario, but you know, they're using it for real-time executive reporting. So it's more like visibility into the production data as well as for financial analytics. So think, think, think about everything from audit to taxes to financial intelligence, because all the data's coming from SAP. >> It's a great time to be in the data business again. It keeps getting better and better. There's more data coming. It's not stopping, you know, it's growing so fast, it keeps coming. Every year, it's the same story, Peter. It's like, doesn't stop coming. As we wrap up here, let's just get customers some information on how to get started.
I mean, obviously you're starting to see the accelerators, it's a great program there. What a great partnership between the two companies and AWS. How can customers get started to learn about the solution and take advantage of it, getting more out of their SAP data, Peter? >> Yeah, I think the first place to go to is talk to Snowflake, talk to AWS, talk to our account executives that are assigned to your account. Reach out to them and they will be able to educate you on the solution. We have it packaged up very nicely and it can be deployed very, very quickly. >> Well gentlemen, thank you so much for coming on. Appreciate the conversation. Great overview of the partnership between, you know, Snowflake and Qlik and AWS on a joint solution. You know, getting more out of the SAP data. It's really kind of a key, key solution, bringing SAP data to life. Thanks for coming on theCUBE. Appreciate it. >> Thank you. >> Thank you John. >> Okay, this is theCUBE coverage here at RE:Invent 2022. I'm John Furrier, your host of theCUBE. Thanks for watching. (upbeat music)
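The pushdown pattern Peter and Itamar describe, landing raw SAP tables in Snowflake and letting the transformations execute inside Snowflake's engine through Snowpark for Python, might look roughly like the sketch below. The table names, join key and connection settings are illustrative assumptions for this article, not the actual Qlik accelerator code.

# A rough, hypothetical sketch of the pushdown pattern discussed above, using Snowpark for Python.
# Table names (SAP_VBAK, SAP_VBRK), join keys, and connection settings are assumptions.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import datediff

# Placeholder connection parameters for a Snowflake account.
session = Session.builder.configs({
    "account": "<account>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "ANALYTICS_WH",
    "database": "SAP_LANDING",
    "schema": "RAW",
}).create()

orders = session.table("SAP_VBAK")    # sales order headers landed by the ingestion layer (assumed name)
billing = session.table("SAP_VBRK")   # billing document headers (assumed name)

# Join orders to billing and compute days from order creation to invoice.
order_to_cash = (
    orders.join(billing, orders["VBELN"] == billing["AUBEL"])  # assumed join key
          .select(
              orders["VBELN"].alias("SALES_ORDER"),
              orders["ERDAT"].alias("ORDER_DATE"),
              billing["FKDAT"].alias("BILLING_DATE"),
              datediff("day", orders["ERDAT"], billing["FKDAT"]).alias("DAYS_TO_BILL"),
          )
)

# Materialize an analytics-ready mart that dashboards or SageMaker jobs can read directly.
order_to_cash.write.mode("overwrite").save_as_table("MARTS.ORDER_TO_CASH")

Because Snowpark builds a SQL plan that runs in the warehouse, the client only orchestrates; that is the design choice that keeps large SAP volumes from leaving Snowflake during transformation.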

Published Date : Dec 1 2022

SUMMARY :

Itamar Ankorion of Qlik and Peter MacDonald of Snowflake join John Furrier to discuss a joint Qlik, Snowflake and AWS solution for bringing SAP data to life. The solution continuously delivers SAP data into Snowflake, pushes transformations down into Snowflake's engine, and connects analytics and machine learning services such as Amazon SageMaker, with prepackaged solution accelerators for processes like order to cash, finance and inventory. Customers such as Siemens Energy feed data from over 20 SAP systems into Snowflake in near real time to drive real-time analytics and reporting.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
John | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Peter | PERSON | 0.99+
Dell | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
Siemens | ORGANIZATION | 0.99+
Peter MacDonald | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Peter McDonald | PERSON | 0.99+
Qlik | ORGANIZATION | 0.99+
28 billion | QUANTITY | 0.99+
two companies | QUANTITY | 0.99+
Tens | QUANTITY | 0.99+
three companies | QUANTITY | 0.99+
Siemens Energy | ORGANIZATION | 0.99+
20 plus years | QUANTITY | 0.99+
yesterday | DATE | 0.99+
Snowflake | ORGANIZATION | 0.99+
Itamar Ankorion | PERSON | 0.99+
third element | QUANTITY | 0.99+
First | QUANTITY | 0.99+
three | QUANTITY | 0.99+
Itamar | PERSON | 0.99+
over 20,000 tables | QUANTITY | 0.99+
both | QUANTITY | 0.99+
90,000 employees | QUANTITY | 0.99+
first | QUANTITY | 0.99+
Salesforce | ORGANIZATION | 0.99+
Cloud Partners | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
over 38,000 customers | QUANTITY | 0.99+
under 20 minutes | QUANTITY | 0.99+
10 years | QUANTITY | 0.99+
five | QUANTITY | 0.99+
Excel | TITLE | 0.99+
one | QUANTITY | 0.99+
over 11 years | QUANTITY | 0.98+
Snowpark | TITLE | 0.98+
Second thing | QUANTITY | 0.98+

John Purcell, DoiT International & Danislav Penev, INFINOX Global | AWS re:Invent 2022


 

>>Hello friends and welcome back to Fabulous Las Vegas, Nevada, where we are live from the show floor at AWS re:Invent. My name is Savannah Peterson, joined by my fabulous co-host John Furrier. John, how was your lunch? >>My lunch was great. Wasn't very complex like it is today, so it was very easy, >>Appropriate for the conversation we're about >>To have. Great, great guests coming up, Cube alumni, and great question around complexity and how is wellbeing teams be good? >>Yes. And, and and on that note, let's welcome John from DoiT as well as Danny from INFINOX. I swear I'll be able to say that right by the end of this. Thank you guys so much for being here. How's the show going for you? >>Excellent so far. It's been a great, a great event. You know, back, back to pre-Covid days, >>You're still smiling day three. That's an awesome sign. John, what about you? >>Fantastic. It's, it's been busier than ever >>That that's exciting. I, I think we certainly feel that way here on the cube. We're doing dozens of videos, it's absolutely awesome. Just in case. So we can dig in a little deeper throughout the rest of the segment just in case the audience isn't familiar, let's get them acquainted with your companies. Let's start with DoiT, John. >>Yeah, thanks Savannah. So DoiT is a global technology company and we're partnering with the leading cloud providers around the world and digital native companies to provide value and solve complexity. John, to your, to your introductory point, with all of the complexities associated with operating in the cloud, scaling a business in the cloud, a lot of companies are just looking to sort of have somebody else take care of that problem for them or have somebody they can call when they run into, you know, into problems scaling. And so with a combination of tech, advanced technology, some of the best cloud experts in the world and unlimited tech support, we're offloading a lot of those problems for our customers and we're doing that on a global basis. So it's, it's an exciting time. >>I can imagine pretty much everyone here on the show floor is dealing with that challenge of complexity. So a couple customers for you in the house. What about you Danny? >>I, I come from a company which operates in a financial industry market. So we're essentially a global broker, financial trading broker. What this means, for those people who don't really understand, essentially we allow clients to be able to trade digitally and speculate with different pricing, pricing tools online. We offer different products for different types of clients. We have institutional clients, we've got our affiliates, partners programs and we've got retail clients, and this is where AWS and DoiT come in handy, allowing us to offer our products digitally across the globe. And one of the key values for us here is that we can actually offer a product in regions where other people don't. So for example, we don't compete in North America, we don't compete in EMEA, in Europe, but we just use DoiT and AWS to solve our complex challenges in regions that naturally, depending on where they're based, have issues, and that's how we deliver our product. >>And which regions, Latin >>America, Latin America, the entire African subcontinent, Middle East, Southeast Asia. The culture is different, the demographic is different. And what you used to have here is not exactly what you have over there. And obviously that brings a lot of challenges with onboarding clients, deposits, trading activities, CDN latency, all of >>That stuff.
It's interesting how each region's different in their, their posture with the cloud. Some roll their own, some go out of the box. So again, this brings up this theme this year guys, which is about end to end, seeing purpose-built, specialty solutions. A lot of solutions going end to end with data kind of makes it more complicated. So again, we got more complexity coming, but the greatness of the cloud is, you can abstract that away. So we are seeing this is a big opportunity for partners to innovate. You're seeing a lot of joint engineering, a lot more complexities coming still, but still end to end is the end game so to speak. >> Absolutely John, I mean one, one of the sort of ways we describe what we try to do for our customers like INFINOX is to be your co-pilot in the cloud, which essentially means, you know, >> What an apt analogy. >> I think so, yeah, >> Well, well >> Done there. I think it works, Savannah. Yeah, so, so as I mentioned, these are, the majority or almost all of our customers are pretty sophisticated, tech-savvy companies. So they don't, you know, they know for most, for the most part what they're trying to achieve. They're approaching scale, they're at scale or they're, or they're through that scale point and they, they just wanna have somebody they can call, right? They need technology to help abstract away the complex problem. So they're not doing so much manual cloud operational work, or sometimes they just need help picking the next tech right to solve the end to end use case that that they're, that they're dealing with >> In business. And Danny, you're rolling out solutions so you're on, you're on the front lines, you gotta make it easier. You don't want to get in the weeds on something that should be taken care of. >> Correct. I mean one of the reasons we went with DoiT is you need to, in order to involve DoiT, you need to know your problems, understand your challenges, almost like a self-review. And you have to be, one way or another, halfway through the cloud journey. You need to know your problems, what you want to achieve, where you want to end up, a roadmap for the next five years, what you want to achieve. Are we fixing or developing and building? And then involve those guys to come and help you, because they cannot just come with a magic wand and fix all your problems. You need to do that yourself. It's not like starting the journey by yourself. >> Yeah. One thing that's not played up in this event, I will say they may, I don't, they missed, maybe Werner will hit it tomorrow, but I think they kind of missed it a little bit. But the developer productivity's been a big issue. We've seen that this year. One of the big themes on the cube is developer productivity, more velocity on the development side to keep pace with what's on, what solutions are rolling out to the customers. And the other one is skills gap. So, and people like, and people have old skills, like we see VMware being bought by Broadcom for instance, got a lot of IT operators at VMware, they gotta go cloud somewhere. So you got new talent, existing talent, skill gaps, people are comfortable, yet the new stuff's there, developers gotta be more productive. How do you guys see that? Cuz that's gonna be how that plays now, it's gonna impact the channel, the partnership relationship, your ability to deliver. >> What's your reaction to that first? Well I think we obviously have a tech savvy team. We've got developers, we've got dev, we've got infrastructure guys, but we only got so much resource that we can afford.
And essentially by evolving due it, I've doubled our staff. So we got a tech savvy senior solution architects which comes to do the sexy stuff, actually develop and design a new better offering, better product that makes us competitive. And this is where we involved, essentially we use the due IT staff as an staff employees that our demand is richly army of qualified people. We can actually cherry pick who we want for the call to do X, Y, and Z. And they're there to, to support you. We just have to ask for help. And this is how we fill our gap from technical skills or budget constrained within, you know, within recruitment. >>And I think, I think what, what Danny is touching on, John, what you mentioned is, is really the, the sort of the core family principle of the company, right? It's hard enough for companies like Equinox to hire staff that can help them build their business and deliver the value proposition that they're, that they see, right? And so our reason for existence is to sort of take care of the rest, right? We can help, you know, operate your cloud, show you the most effective way to do that. Whether they're finops problems, whether they're DevOps problems, whether dev SEC ops problems, all of these sort of classic operational problems that get 'em the way of the core business mission. You're not in the business of running the cloud, you're in the business of delivering customer value. We can help you, you know, manage your cloud >>And it's your job to do it. >>It is to do it >>Can, couldn't raise this upon there. How long have y'all been working together? >>I would say 15 months. We took, we took a bit of a conservative approach. We hope for the baseball, prepare for the worst. So I didn't trust do it. I give them one account, start with DEF U A C because you cannot, you just have to learn the journey yourself. So I think I would, my advice for clients is give it the six months. Once you establish a relationship, build a relationship, give them one by one start slowly. You actually understand by yourself the skills, the capacity that they have. And also the, for me consultants is really important And after that just opens up and we are now involving them. We've got new project, we've got problem statement. The first thing we do, we don't Google it, we just say do it. Log a ticket, we got the team. You're >>A verb. >>Yeah. So >>In this case we have >>The puns are on list here on the Cuban general. But with something like that, it's great. >>I gotta ask you a question cuz this is interesting John. You know, we talked last year on the cube and, and again this is an example of how innovations playing out. If you look at the announcements, Adam Celski did and then sw, he had 13 or so announcements. I won't say it's getting boring, but when you hear boring, boring is good. When you start getting into these, these gaps in the platforms as it grows. I won't say they was boring cause that really wasn't boring. I like the data >>Itself. It's all fascinating, John, >>But it, but it's a lot of gap filling, you know, 50 connectors you got, you know, yeah. All glue layers being built in AI's critical. The match cloud is there. What's the innovation? You got a lot of gaps being filled, boring is good. Like Kubernetes, we say there boring means, it's being invisible. That means it's going away. What's the exciting things from your perspective in cloud here? >>Well, I think, I mean, boring is an interesting word to use cuz a company with the heritage of AWS is constantly evolving. 
I mean, at the core of that company's culture is innovation, technology, development and innovation. And they're building for builders as, as you know, just as well as I do. Yeah. And so, but what we find across our customer base is that companies that are scaling or at scale are using maybe a smaller set of those services, but they're really leveraging them in interesting ways. And there is a very long tail of deeper, more sophisticated fit for purpose, more specific services. And Adam announced, you know, who knows him another 20 or 30 services and it's happening year after year after year. And I think one of the things that, that Danny might attest to is, I, I spoke about the reason we exist and the reason we form the company is we hold it very, a very critical part of our mission is to stay abreast of all of those developments as they emerge so that Danny and and his crew don't have to, right? And so when they have a, a, a question about SageMaker or they have a question about sort of the new big data service that Adam has announced, we take it very seriously. Our job is to be able to answer that question quickly and >>Accurately. And I notice your shirt, if you could just give a little shirt there, ops, cloud ops, DevOps do it. The intersection of the finance, the tuning is now we're hearing a lot of price performance, cost recovery, not cost recovery, but cost management. Yeah. Optimizing. So we're seeing building scale, but now, now tuning almost a craft, the craft of the cloud is here. What's your reaction to that? It, >>It absolutely is. And this is a story as old as the cloud, honestly. And companies, you know, they'll, they'll, companies tend to follow the same sort of maturity journey when they first start, whether they're migrating to the cloud or they were born in the cloud as most of our customers are. There's a, there's a, there's an, there's an access to visibility and understanding and optimization to tuning a craft to use your term. And, and cost management truly is a 10 year old problem that is as prevalent and relevant today as it was, you know, 10 years ago. And there's a lot of talk about the economics associated with the cloud and it's not, certainly not always cheaper to run. In fact, it rarely is cheaper to run your business from any of the public cloud providers. The key is to do it and right size it and make sure it's operating in accordance and alignment with your business, right? It's okay for cloud process to go up so long as your top line is also >>Selling your proportion. You spend more cloud to save cloud. That's it's >>Penny wise, pound full. It's always a little bit, always a little bit of a, of a >>Dilemma on, on the cost saving. We didn't want to just save money. If you want to save money, just shut down your services, right? So it's about making money. So this is where do it comes, like we actually start making, okay, we spend a bit more now, but in about six months time I will be making more money. And we've just did that. We roll out the new application for all the new product offering host to AWS fully with the guys support, a lot of long, boring, boring, boring calls, but they're productive because we actually now have a better product, competitive, it's tailored for our clients, it's cost effective. And we are actually making money >>When something's invisible. It's working, you know, talking about it means it's, it's, it's operational. 
>>It's exactly, it's, >>Well to that point, John, one of the things we're most proud of in, you know, know this year was, was the launch of our product we called Flexsave, which essentially does exactly what you've described. It's, it's looking for automation and, and, and, and automatic ways of, yes, saving money, but offering the opportunities to, to to improve the economics associated with your cloud infrastructure. >> Yeah. And improving the efficiency across the board. A hundred percent. It, it's, oh, it's awesome. Let's, and, and it's, it's my understanding there's some reporting and insights that you're able to then translate through from DoiT to your CTO and across the company. Danny, what's that like? What do you get to see working >> With them? Well, the problem is, like the CTO asked me to do all of that. It is funny, he thinks that he's doing it, but essentially they have an excellent portal that basically looks up all of our instances in the one place. You got like good analytics on your cost, cost anomalies, budget, cost allocation. But I didn't want to do that either. So what I have done is taken the next step. I actually sold this to the, to my company completely. So my finance team goes there, they do it themselves, they log in, check, check, all the billing, the cost allocation. I actually have zero interaction with them if I don't hear anything from them, which is one of the benefits. But also there is a lot of other products, like Flexsave, it's virtually like you just click a finger and you start saving money just like that. Easy >> Is that easy button we've been talking about on >> The show? Yeah, exactly, exactly how it is. But there is obviously, outside of the cost management, you actually can look at what resources you're using, do you actually need them, how often you use them, think about the long term goal, what you're trying to achieve, and use the analytics to, and actually I have to say the analytics are much better than AWS in, in, in, in CMP. It's, it's just more user friendly, more interactive as opposed to, you know, building the one in AWS. >> It's good business model. Make things easy for your customers. Easy, simple >> To use. >> It's gotta be nice to hear John. >> Well, so first of all, thank you Danny. >> We, we work, but in all seriousness, you know, we, we work, Danny mentioned the trust word earlier. This is at the core of, if we don't, if we're not able to build trust with our clients, our business is dead. It, it just doesn't exist. It can't scale. In fact, it'll go the opposite direction. And so we're, we work very, very hard to earn that trust and we're willing to start small, to Danny's example, start small and grow. And that's why we're very, one of the things we're most proud of is, is how few customers tend to leave us year over year. We have customers that have been with us for 10 years. >> You know, Andy Jassy always has, I just saw an interview, he was on the New York Times event in New York today as the CEO of Amazon. But he's always said in these build out phases, you gotta work backwards from the customer and innovate on behalf of the customer. Cause that's the answer that will always be a good answer for the outcome versus optimizing for just profit, you know what I'm saying? Or other things. So we're still in build out mode, >> You know, as a, as a, as a core fundamental sort of product concept. If you're not solving important problems for our customer, what are you, why, why are you investing? It just >> Doesn't make it. This is the beauty we do it.
We actually, they wait for you to come to do the next step. They don't sell me anything. They don't bug me with emails. They're ready. When you're ready to make that journey, you just log a ticket and they come and help you. And this is the beauty. You just, it's just not your, your journey. >> I love it. That's a, that's a beautiful note to lead us to our new tradition on the cube. We have a little bit of a challenge for the both of you. We're looking for your 30-second Instagram reel thought leadership sizzle anecdote. Either one of you wanna go first. John looks a little nauseous. Danny, you wanna give it a go? >> Well, we've got a few expressions, but we don't Google it. We just do it. And the key take, that's what we do now, and also what we do is actually using their staff as, in essence, our employees, really. Like that's what we do. >> Well done, well done. Didn't even need the 30 seconds. Fantastic work, Danny. I love that. All right, John, now you do have to go. Okay, >> I'll goodness. You know, I'll, I'll, I'll, I'll I'll go back to what I mentioned earlier, if that's okay. I think we, you know, we exist as a company to sort of help our customers get back to focusing on why they started the business in the first place, which is innovating and delivering value to customers. And we'll help you take care of the rest. It's as simple as that. Awesome. >> Well done. You absolutely nailed it. I wanna just acknowledge your fan club over there watching. Hello everyone from the DoiT team. Good job team. I love, it's very cute when guests show up with an entourage to the cube. We like to see it. You obviously deserve the entourage. You're, you're both wonderful. Thanks again for being here on the show with Oh yeah, go ahead >> John. Well, I would just like to thank Danny for, for agreeing to >> Discern, thankfully >> Great to spend time with you. Absolutely. Let's do it. >> Thank you. Yeah, >> Yeah. Fantastic gentlemen. Well thank you all for tuning into this wonderful start to the afternoon here from AWS re:Invent. We are in Las Vegas, Nevada with John Furrier. My name's Savannah Peterson, you're watching The Cube, the leader in high tech coverage.
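The cost visibility Danislav describes his finance team getting from the DoiT portal, billing, anomalies and cost allocation broken down by service, can be approximated directly against the AWS Cost Explorer API. The sketch below is illustrative only, with placeholder dates and grouping; it is not DoiT's implementation.

# Illustrative only: a minimal month-over-month, per-service cost breakdown pulled from
# the AWS Cost Explorer API with boto3. Dates, region and grouping are placeholders.
import boto3

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer is account-wide; region is nominal

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2022-10-01", "End": "2022-12-01"},   # placeholder billing window
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],         # break spend down by AWS service
)

# Print a simple service-level cost report a finance team could review.
for period in response["ResultsByTime"]:
    print(period["TimePeriod"]["Start"])
    for group in period["Groups"]:
        service = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"  {service}: ${amount:,.2f}")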

Published Date : Nov 30 2022

SUMMARY :

John Purcell of DoiT International and Danislav Penev of INFINOX Global join theCUBE to talk about managing cloud complexity on AWS. DoiT pairs tooling, unlimited support and cloud experts to act as a co-pilot for digital-native companies, while INFINOX, a global financial trading broker, uses DoiT to extend its engineering team, control costs with products like Flexsave, and launch offerings in regions including Africa, the Middle East, Latin America and Southeast Asia.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
John | PERSON | 0.99+
Adam Celski | PERSON | 0.99+
Danny | PERSON | 0.99+
Savannah | PERSON | 0.99+
John Furier | PERSON | 0.99+
Savannah Peterson | PERSON | 0.99+
13 | QUANTITY | 0.99+
Andy | PERSON | 0.99+
John Furrier | PERSON | 0.99+
Equinox | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
New York | LOCATION | 0.99+
Danislav Penev | PERSON | 0.99+
Jesse | PERSON | 0.99+
Adam | PERSON | 0.99+
50 connectors | QUANTITY | 0.99+
Europe | LOCATION | 0.99+
Yvanna | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Broadcom | ORGANIZATION | 0.99+
10 years | QUANTITY | 0.99+
America | LOCATION | 0.99+
15 months | QUANTITY | 0.99+
North America | LOCATION | 0.99+
first | QUANTITY | 0.99+
last year | DATE | 0.99+
30 seconds | QUANTITY | 0.99+
Denny | PERSON | 0.99+
Africa | LOCATION | 0.99+
32nd | QUANTITY | 0.99+
The Cube | TITLE | 0.99+
30 services | QUANTITY | 0.99+
both | QUANTITY | 0.99+
one | QUANTITY | 0.98+
today | DATE | 0.98+
20 | QUANTITY | 0.98+
Latin | LOCATION | 0.98+
tomorrow | DATE | 0.98+
one account | QUANTITY | 0.98+
VMware | ORGANIZATION | 0.98+
this year | DATE | 0.98+
John Purcell | PERSON | 0.97+
Google | ORGANIZATION | 0.97+
southeast Asia | LOCATION | 0.97+
Las Vegas, Nevada | LOCATION | 0.96+
about six months | QUANTITY | 0.96+
zero | QUANTITY | 0.96+
dozens of videos | QUANTITY | 0.96+
DoiT International | ORGANIZATION | 0.96+
each region | QUANTITY | 0.96+
10 years ago | DATE | 0.95+
INFINOX Global | ORGANIZATION | 0.95+
AWS Reinvent | ORGANIZATION | 0.95+
Cube | ORGANIZATION | 0.94+
this year | DATE | 0.93+
DeWit | ORGANIZATION | 0.93+

ML & AI Keynote Analysis | AWS re:Invent 2022


 

>>Hey, welcome back everyone. Day three of eight of us Reinvent 2022. I'm John Farmer with Dave Volante, co-host the q Dave. 10 years for us, the leader in high tech coverage is our slogan. Now 10 years of reinvent day. We've been to every single one except with the original, which we would've come to if Amazon actually marketed the event, but they didn't. It's more of a customer event. This is day three. Is the machine learning ai keynote sws up there. A lot of announcements. We're gonna break this down. We got, we got Andy Thra here, vice President, prince Constellation Research. Andy, great to see you've been on the cube before one of our analysts bringing the, bringing the, the analysis, commentary to the keynote. This is your wheelhouse. Ai. What do you think about Swami up there? I mean, he's awesome. We love him. Big fan Oh yeah. Of of the Cuban we're fans of him, but he got 13 announcements. >>A lot. A lot, >>A lot. >>So, well some of them are, first of all, thanks for having me here and I'm glad to have both of you on the same show attacking me. I'm just kidding. But some of the announcement really sort of like a game changer announcements and some of them are like, meh, you know, just to plug in the holes what they have and a lot of golf claps. Yeah. Meeting today. And you could have also noticed that by, when he was making the announcements, you know, the, the, the clapping volume difference, you could say, which is better, right? But some of the announcements are, are really, really good. You know, particularly we talked about, one of that was Microsoft took that out of, you know, having the open AI in there, doing the large language models. And then they were going after that, you know, having the transformer available to them. And Amazon was a little bit weak in the area, so they couldn't, they don't have a large language model. So, you know, they, they are taking a different route saying that, you know what, I'll help you train the large language model by yourself, customized models. So I can provide the necessary instance. I can provide the instant volume, memory, the whole thing. Yeah. So you can train the model by yourself without depending on them kind >>Of thing. So Dave and Andy, I wanna get your thoughts cuz first of all, we've been following Amazon's deep bench on the, on the infrastructure pass. They've been doing a lot of machine learning and ai, a lot of data. It just seems that the sentiment is that there's other competitors doing a good job too. Like Google, Dave. And I've heard folks in the hallway, even here, ex Amazonians saying, Hey, they're train their models on Google than they bring up the SageMaker cuz it's better interface. So you got, Google's making a play for being that data cloud. Microsoft's obviously putting in a, a great kind of package to kind of make it turnkey. How do they really stand versus the competition guys? >>Good question. So they, you know, each have their own uniqueness and the we variation that take it to the field, right? So for example, if you were to look at it, Microsoft is known for as industry or later things that they are been going after, you know, industry verticals and whatnot. So that's one of the things I looked here, you know, they, they had this omic announcement, particularly towards that healthcare genomics space. That's a huge space for hpz related AIML applications. 
And they have put a lot of things in together in here in the SageMaker and in the, in their models saying that, you know, how do you, how do you use this transmit to do things like that? Like for example, drug discovery, for genomics analysis, for cancer treatment, the whole, right? That's a few volumes of data do. So they're going in that healthcare area. Google has taken a different route. I mean they want to make everything simple. All I have to do is I gotta call an api, give what I need and then get it done. But Amazon wants to go at a much deeper level saying that, you know what? I wanna provide everything you need. You can customize the whole thing for what you need. >>So to me, the big picture here is, and and Swami references, Hey, we are a data company. We started, he talked about books and how that informed them as to, you know, what books to place front and center. Here's the, here's the big picture. In my view, companies need to put data at the core of their business and they haven't, they've generally put humans at the core of their business and data. And now machine learning are at the, at the outside and the periphery. Amazon, Google, Microsoft, Facebook have put data at their core. So the question is how do incumbent companies, and you mentioned some Toyota Capital One, Bristol Myers Squibb, I don't know, are those data companies, you know, we'll see, but the challenge is most companies don't have the resources as you well know, Andy, to actually implement what Google and Facebook and others have. >>So how are they gonna do that? Well, they're gonna buy it, right? So are they gonna build it with tools that's kind of like you said the Amazon approach or are they gonna buy it from Microsoft and Google, I pulled some ETR data to say, okay, who are the top companies that are showing up in terms of spending? Who's spending with whom? AWS number one, Microsoft number two, Google number three, data bricks. Number four, just in terms of, you know, presence. And then it falls down DataRobot, Anaconda data icu, Oracle popped up actually cuz they're embedding a lot of AI into their products and, and of course IBM and then a lot of smaller companies. But do companies generally customers have the resources to do what it takes to implement AI into applications and into workflows? >>So a couple of things on that. One is when it comes to, I mean it's, it's no surprise that the, the top three or the hyperscalers, because they all want to bring their business to them to run the specific workloads on the next biggest workload. As you was saying, his keynote are two things. One is the A AIML workloads and the other one is the, the heavy unstructured workloads that he was talking about. 80%, 90% of the data that's coming off is unstructured. So how do you analyze that? Such as the geospatial data. He was talking about the volumes of data you need to analyze the, the neural deep neural net drug you ought to use, only hyperscale can do it, right? So that's no wonder all of them on top for the data, one of the things they announced, which not many people paid attention, there was a zero eight L that that they talked about. >>What that does is a little bit of a game changing moment in a sense that you don't have to, for example, if you were to train the data, data, if the data is distributed everywhere, if you have to bring them all together to integrate it, to do that, it's a lot of work to doing the dl. 
So by taking Amazon, Aurora, and then Rich combine them as zero or no ETL and then have Apaches Apaches Spark applications run on top of analytical applications, ML workloads. That's huge. So you don't have to move around the data, use the data where it is, >>I, I think you said it, they're basically filling holes, right? Yeah. They created this, you know, suite of tools, let's call it. You might say it's a mess. It's not a mess because it's, they're really powerful but they're not well integrated and now they're starting to take the seams as I say. >>Well yeah, it's a great point. And I would double down and say, look it, I think that boring is good. You know, we had that phase in Kubernetes hype cycle where it got boring and that was kind of like, boring is good. Boring means we're getting better, we're invisible. That's infrastructure that's in the weeds, that's in between the toes details. It's the stuff that, you know, people we have to get done. So, you know, you look at their 40 new data sources with data Wrangler 50, new app flow connectors, Redshift Auto Cog, this is boring. Good important shit Dave. The governance, you gotta get it and the governance is gonna be key. So, so to me, this may not jump off the page. Adam's keynote also felt a little bit of, we gotta get these gaps done in a good way. So I think that's a very positive sign. >>Now going back to the bigger picture, I think the real question is can there be another independent cloud data cloud? And that's the, to me, what I try to get at my story and you're breaking analysis kind of hit a home run on this, is there's interesting opportunity for an independent data cloud. Meaning something that isn't aws, that isn't, Google isn't one of the big three that could sit in. And so let me give you an example. I had a conversation last night with a bunch of ex Amazonian engineering teams that left the conversation was interesting, Dave. They were like talking, well data bricks and Snowflake are basically batch, okay, not transactional. And you look at Aerospike, I can see their booth here. Transactional data bases are hot right now. Streaming data is different. Confluence different than data bricks. Is data bricks good at hosting? >>No, Amazon's better. So you start to see these kinds of questions come up where, you know, data bricks is great, but maybe not good for this, that and the other thing. So you start to see the formation of swim lanes or visibility into where people might sit in the ecosystem, but what came out was transactional. Yep. And batch the relationship there and streaming real time and versus you know, the transactional data. So you're starting to see these new things emerge. Andy, what do you, what's your take on this? You're following this closely. This seems to be the alpha nerd conversation and it all points to who's gonna have the best data cloud, say data, super clouds, I call it. What's your take? >>Yes, data cloud is important as well. But also the computational that goes on top of it too, right? Because when, when the data is like unstructured data, it's that much of a huge data, it's going to be hard to do that with a low model, you know, compute power. But going back to your data point, the training of the AIML models required the batch data, right? That's when you need all the, the historical data to train your models. And then after that, when you do inference of it, that's where you need the streaming real time data that's available to you too. You can make an inference. 
One of the things they also announced, which is somewhat interesting, is that they have something like 700 different instances geared towards every single workload. And some of them run very specifically on Amazon's new chips, the Inferentia Inf2 and Trainium Trn1 chips, so you not only have specific instances, they also run on high-powered silicon. And then if you have the data to support that, both for the training as well as for the inference, the efficiency, again, those numbers have to be proven. They claim it could be anywhere between 40 to 60% faster. >>Well, so a couple things. You're definitely right. I mean, Snowflake started out as a data warehouse that was simpler, and it's not architected in its first wave to do real-time inference, which it's not doing now; how could they? The second point is Snowflake's two or three years ahead when it comes to governance and data sharing. I mean, Amazon's doing what it always does: it's copying, you know, it's customer driven. 'Cause they probably walk into an account and they say, hey look, what's Snowflake doing for us? This stuff's kicking ass. And they go, oh, that's a good idea, let's do that too. You saw that with separating compute from storage, which is their tiering. You saw it today with extending data sharing, Redshift data sharing. So how do Snowflake and Databricks approach this? They deal with ecosystem. They bring in ecosystem partners, they bring in open source tooling, and that's how they compete. I think there's unquestionably an opportunity for a data cloud. >>Yeah, I think the supercloud conversation, and then, you know, Sky Cloud with the Berkeley paper and other folks talking about this, this is kind of the pre-multi-cloud era. I mean, that's what I would call us right now. We're kind of in the pre era of multi-cloud, which by the way is not even yet defined. I think people use that term, Dave, to say, you know, some sort of magical thing that's happening. Yeah, people have multiple clouds. They end up there by default, not by design, as Dell likes to say, right? And they gotta deal with it. So it's more that they're inheriting multiple cloud environments; it's not necessarily the situation they want. So to me that is a big, big issue. >>Yeah, I mean, again, going back to your Snowflake and Databricks announcements, they're data companies. That's how they made their mark in the market, saying, you know, I do all those things, therefore you have to have your data here, because it's seamless data. And Amazon is catching up with that with a lot of the announcements they made; how far it's gonna get traction, you know, remains to be seen. >>Yeah, I mean to me there's no doubt about it, Dave. I think what Swami is doing, if Amazon can corner the market on out-of-the-box ML and AI capabilities so that people can do this more easily, that's gonna be, at the end of the day, the telltale sign: can they fill in the gaps? Again, boring is good. On competition, I don't know, I mean I'm not following the competition closely. Andy, this is a real question mark for me. I don't know where they stand. Are they more comprehensive? Do they have deeper services? I mean, obviously the show points to all the different capabilities. Where does Amazon stand? What's the process? >>So, particularly when it comes to the models.
So they're going at a different angle: you know, I will help you create the models. We talked about the zero ETL and the whole data story. We'll get the data sources in, we'll create the model, we'll move the whole model along. We're talking about the MLOps teams here, right? And they have the whole functionality that they've built in over the years. So essentially they want to become the platform where, when you come in, I'm the only platform you would use, from model training to deployment to inference, to model versioning to management, the whole MLOps lifecycle, and that's the angle they're trying to take. So it's a one-source platform. >>What about this idea of technical debt? Adrian Cockcroft was on yesterday. John, I know you talked to him as well. He said, look, Amazon's Legos. You wanna buy a toy for Christmas, you can go out and buy a toy, or do you wanna build one? If you buy a toy, in a couple years it could break, and what are you gonna do? You're gonna throw it out. But if part of your Lego build needs to be extended, you extend it. So, you know, George Gilbert was saying, well, there's a lot of technical debt. Adrian was countering that. Does Amazon have technical debt, or is that Lego blocks analogy the right one? >>Well, I talked to him about the debt, and one of the things we talked about was, what do you optimize for, EC2 APIs or Kubernetes APIs? It depends on what team you're on. If you're on the runtime team, you're gonna optimize for Kubernetes, but EC2 is the resources you want to use. So I think the idea of the 15 years of technical debt, I don't believe that. I think the APIs are still hardened. The issue that he brings up that I think is relevant is it's an 'and' situation, not an 'or.' You can have the bag of Legos, which is the primitives, and build a durable application platform, monitor it, customize it, work with it, build it. It's harder, but the outcome is durability and sustainability. Buying a toy, having a toy with those Legos glued together for you, you can play with it, but it'll break over time, and then you gotta replace it. So there's gonna be a toy business and there's gonna be a Legos business: make your own. >>So who are the toys in AI? >>Well, out of >>The box, and who's out of Legos? >>So you're asking about what toys Amazon is building? >>Or, yeah, I mean Amazon clearly is Lego blocks. >>If people are gonna have out of the box, >>What about Google? What about Microsoft? Are they basically more building toys, more solutions? >>So Google is more of a building-solutions angle, like, you know, I give you an API kind of thing. But if it comes to vertical industry solutions, Microsoft is ahead, right? Because they have had years of industry experience. I mean, there are other smaller clouds trying to do that too, IBM being an example. But now they are starting to go after the specific industry use cases. They think that through, for example, you know, the medical one we talked about, right? So they want to build the health lake and the security lake that they're trying to build, which will be HIPAA compliant and will cover the European regulations, the whole nine yards, and it'll help you, you know, personalize things as you need as well. For example, you know, if you go for a certain treatment, it could analyze you based on your genome profile, saying that the treatment for this particular person has to be individualized this way. But doing that requires enormous compute power, right?
So if you do applications like that, you could bring in a lot of it, whether healthcare, finance or what have you, and make it easy for them to use. >>What's the biggest mistake customers make when it comes to machine intelligence, AI, machine learning? >>So many things, right? I could start out with even the model. Basically, when you build a model, you should be able to figure out how long that model stays effective. Because as good as creating a model and going to the business and doing things the right way is, there are people who leave the model in place much longer than it's useful, and it's hurting your business more than it's helping. It could be things like that. Or you are not building it responsibly, or you have bias in your model; there are so many issues. I don't know if I can pinpoint one, but there are many, many issues. Responsible AI, ethical AI. >>All right, well, we'll leave it there. You're watching theCUBE, the leader in high tech coverage, here on day three at re:Invent. I'm Jeff, with Dave Vellante, and Andy joining us here for the critical analysis and breaking down the commentary. We'll be right back with more coverage after this short break.
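The closing point, that teams often leave a model in production long after it has stopped being effective, can be made concrete with a simple drift check. The sketch below is generic Python with no AWS dependency; the equal-width binning, the 0.25 threshold, and the synthetic data are all illustrative choices, not a prescribed method.

```python
# Minimal drift check: compare the distribution a model was trained on against
# recent production data using a population stability index (PSI). Everything
# here (data, bins, threshold) is illustrative.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Larger PSI means the live data looks less like the training data."""
    lo = min(expected.min(), actual.min())
    hi = max(expected.max(), actual.max())
    edges = np.linspace(lo, hi, bins + 1)          # equal-width bins for simplicity
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)           # avoid log(0)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(7)
training_sample = rng.normal(100, 15, 10_000)      # stand-in for a training-time feature
live_sample = rng.normal(112, 18, 2_000)           # stand-in for this week's production data

psi = population_stability_index(training_sample, live_sample)
print(f"PSI = {psi:.3f}")
if psi > 0.25:   # a commonly cited rule of thumb, not a hard rule
    print("Significant drift detected: review, retrain, or retire the model.")
```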

Published Date : Nov 30 2022


Chris Casey, AWS | AWS re:Invent 2022


 

>> Hello, wonderful humans, and welcome back to theCUBE. We are live from Las Vegas, Nevada, this week at AWS re:Invent. I am joined by analyst and 10-year re:Invent veteran John Furrier. John, pleasure to join you today. >> Great to see you, great event. This is 10 years. We've got great guests coming on theCUBE, three days of wall-to-wall coverage after this; we lose our voice every year by Thursday. >> Host: I can feel the energy. Can you feel the volume already? >> Yes. Everyone's getting bigger, stronger. In the marketplace we're seeing a lot more activity, new players coming into the cloud, ones that have been around for 10 years are growing up and turning into platforms, and just the growth of software in the industry is phenomenal. Our next guest is going to be great to chat with about that. >> I know, it's funny you mentioned marketplace. We're going to be talking marketplace in our next segment. We're bringing back a Cube alumni, Chris Casey. Welcome back to the show. How are you feeling today? >> Thank you for having me. Yeah, I mean this week is the most exciting week of the year for us at AWS and, you know, it's just a fantastic energy. You mentioned it before, to be here in Las Vegas at re:Invent, and thank you very much for having me back. It was great to talk to John last year, and lovely to meet you and talk to you this year. >> It is our pleasure. It is definitely the biggest event of the year. It's wild that Amazon would do this on the biggest online shopping day of the year as well. It goes to show the boldness and the bravery of the team, which is very impressive. So you cover a few different things at AWS, and across industries as well. Can you talk to me a little bit about why the software alliances and the data exchange are so important to the partner organization at AWS? >> Yeah, it really comes back to the importance to the AWS customer. As we've been working with customers over the past few years especially, and they've been embarking on their enterprise transformation and their digital transformation, moving workloads to the cloud, they've really been asking us for more and more support from the AWS ecosystem, and that includes native AWS services as well as partners, to really help them start to solve some of the industry-specific use cases and challenges that they're facing, and really incorporate those as part of the enterprise transformation journey that they're embarking on with AWS. How that translates back to the AWS Marketplace and the partner organization is, customers have told us they're really looking for us to have the breadth and depth of the ecosystem of partners available to them that have the intellectual property that solves very niche use cases and workloads that they're looking to migrate to the cloud. A lot of the time that furnishes itself as an independent software vendor, and they have software that the customer is trying to use to solve, you know, an insurance workflow or an analytics workflow for your utility company, as well as third-party data that they need to feed into that software. And so my team's responsibility is helping work backwards from the customer need there and making sure that we have the partners available to them, ideally in the AWS Marketplace, so they can go and procure those products and make them part of solutions that they're trying to build or migrate to AWS.
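Chris mentions third-party data that customers need to feed into partner software. As a small, hedged illustration of that idea, the sketch below lists datasets an account is already entitled to through AWS Data Exchange; it assumes the caller has active Data Exchange subscriptions and the usual IAM permissions.

```python
# Hedged sketch: enumerate third-party datasets this AWS account is entitled to
# through AWS Data Exchange, the kind of licensed data a customer might feed into
# an ISV application procured from the Marketplace. Assumes existing subscriptions.
import boto3

dx = boto3.client("dataexchange", region_name="us-east-1")

paginator = dx.get_paginator("list_data_sets")
for page in paginator.paginate(Origin="ENTITLED"):   # only datasets licensed from providers
    for data_set in page["DataSets"]:
        print(f'{data_set["Name"]}  (id={data_set["Id"]}, type={data_set["AssetType"]})')
```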
>> A lot of success in marketplace over the past couple years especially during the pandemic people were buying and procuring through the marketplace. You guys have changed some of the operational things, data exchange enterprise sellers or your sales reps can sell in there. The partners have been glowingly saying great things about how it's just raining money for them if they do it right. And some are like, well, I don't get the marketplace. So there's a, there's kind of a new game in town and the marketplace with some of the successes. What, what is this new momentum that's happening? Is it just people are getting more comfortable they're doing it right? How does the marketplace work effectively? >> Yeah, I mean, marketplace has been around for for 10 years as well as the AWS partner organization. >> Host: It's like our coverage. >> Yes, just like. >> Host: What a nice coincidence. Decades all around happy anniversary everyone. >> Yeah, everyone's selling, celebrating the 10 year birthday, but I think to your point, John, you know, we we've continued iterate on features and functionality that have made the partner experience a much more welcoming digital experience for them to go to market with AWS. So that certainly helped and we've seen more and more customers start to adopt marketplace especially for, for some of their larger applications that they're trying to transform on the cloud. And that extends into industry verticals as well as horizontal sort of business applications whether they be ERP systems like Infor the customers are trying to procure through the marketplace. And I think even for our partners, it's customer driven. You know, we, we've, we've heard from our customers that the, the streamlining the payments and procurement process is a really key benefit for them procuring by the marketplace and also the extra governance and control and visibility they get on their third party licensing contracts is a really material benefit for them which is helping our partners lean in to marketplace as a as a digital channel for them to go to market with us. >> And also you guys have this program it's what's it called enterprise buying or something where clients can just take their spend and move it over into other products like MongoDB more Mongo gimme some more Splunk, gimme some more influence. I mean all these things are possible now, right. For some of the partners. Isn't that, that's like that's like found money for the, for the partners. >> Yeah, going back to what I said before about the AWS ecosystem, we're really looking to help customers holistically with regard to that, and certainly when customers are looking to make commitments to AWS and and move a a large swath of workloads to AWS we want to make sure they can benefit from that commitment not only from native AWS services but also third party data and software applications that they might be procuring through the marketplace. So certainly for the procurement teams not only is there technical benefits for them on the marketplace and you know foresters total economic impact study really helped quantify that for us more recently. You know, 66% of time saving for procurement professionals. >> Host: Wow. >> Which is when you calculate that in hours in person weeks or a year, that's a lot of time on undifferentiated heavy lifting that they can now be doing on value added activities. >> Host: That's a massive shift for >> Yeah, massive shift. 
So that, in addition to some of the more contractual and commercial benefits, is really helping customers look holistically at how AWS is helping them transform with third-party applications and data. >> I want to stick on customers for a second, 'cause in my show notes are some pretty well known customers, and you mentioned Infor a moment ago. Can you tell us a little bit about what's going on with Ferrari? >> Chris: Sure. So Infor is one of our horizontal business application partners and sellers in the AWS Marketplace, and they sell ERP systems, so helping enterprises with resource planning. And Ferrari is obviously a very well known brand and, you know, the oldest and most successful >> May have heard of them. >> Chris: Yes. Right. The most successful Formula One racing team. And Ferrari is, you know, a really meaningful customer for AWS from multiple angles, whether they're using AWS to enhance their car design, as well as their fan engagement, as well as their actual end car consumer experience. But as it specifically relates to marketplace, as part of Ferrari's technical transformation they were looking to upgrade their ERP system. And so they went through a whole swath of vendors that they wanted to assess, and they actually chose Infor as their ERP system. And one of the reasons was >> Nice. >> Chris: because Infor actually has an automotive-specific instance of their SaaS application. So when we're talking about really solving for some of those niche challenges for customers who operate in an industry, that was one of the key benefits. And then as an added bonus for Ferrari, being able to procure that software through the AWS Marketplace gave them all the procurement benefits that we just talked about. So it's super exciting that we're able to play a part in accelerating that digital transformation with Ferrari, and also help Infor in terms of getting a really meaningful customer using their software services on AWS. >> Yeah. Putting a new meaning to turnkey: you push start. (laughing) >> You mentioned horizontal services earlier. What is it all about there? What's new there? We're hearing, I'm expecting to see that in the keynote tomorrow, horizontal and vertical solutions, and let's get the CEOs. What's the focus there? What's this horizontal focus for you? >> Yeah, I think the big thing is really helping line-of-business users, so people in operations or marketing functions at our customers, see the partners and the solutions that they use on a daily basis today and how they can actually help accelerate their overall enterprise transformation, with those partners now on AWS. Historically, you know, those line-of-business users might not have cared where an application ran, whether it was on-prem or on AWS, but now, given the depth of those transformation journeys their enterprises are on, that's really the next frontier of applications and use cases that many of our customers are saying they want to move to AWS. >> John: And what are some of those horizontal examples that you see emerging? >> So Salesforce is probably one of the best ones to call out there. And really the two meaningful things Salesforce have done there: one is a deep integration with our ML and AI services like SageMaker, so people can actually perform some of those activities without leaving the Salesforce application.
And then AWS and Salesforce have worked on a unified developer experience, which really helps remove friction in terms of data flows for anyone that's trying to build on both of those services. So the partnership with horizontal business applications like Salesforce is much deeper than just to go to market. It's also on the build side to help make it much more seamless for customers as they're trying to migrate to Salesforce on AWS as an example there. >> It's like having too many tabs open at once, everybody wants it all in one place all at one time. >> Chris: Yeah. >> And it makes sense that you're doing so much in, in the partner marketplace. Let's talk a little bit more about the data exchange. How, how is this intertwined with your vertical and horizontal efforts that the team's striving as well as with another big name example that folks know probably only because of the last few, few years, excuse me, with Moderna? Can you tell us a little more about that? >> Sure. I think when we're, when we're talking to customers about their needs when they're operating in a specific industry, but it probably goes for all customers and enterprise customers especially when they're thinking about software. Almost always that software also needs data to actually be analyzed or processed through it for really the end business outcome to be achieved. And so we're really making a conscious effort to really help our partners integrate with solutions that the AWS field teams and business development teams are talking to customers about and help tie those solutions to customer use cases, rather than it being an engagement with a specific customer on a product by product basis. And certainly software and and data going together is a really nice combination that many customers are looking for us to solve for and for looking for us to create pairings based on other customer needs or use cases that we've historically solved for in the past. >> I mean, with over a million customers, it's hard to imagine anyone could have more use cases to pull from when we're talking about these different instances >> Right. The challenge actually is identifying which are the key ones for each of the industries and which are the ones that are going to help move the needle the most for customers in there, it's, it's not an absence of selection in that case. >> Host: Right. (laughter) I can imagine. I can imagine that's actually the challenge. >> Chris: Yeah. >> Yeah. >> But it's really important. And then more specifically on the data exchange, you know I think it goes back to one of the leadership principles that we launched last year. The two new leadership principles, success and scale bring broad responsibility. You know, we take that very seriously at AWS and we think about that in our actions with our native services, but also in terms of, you know, the availability of partner solutions and then ultimately the end customer outcomes that we can help achieve. And I think Moderna's a great example of that. Moderna have been using the mRNA technology and they're using it to develop a a new vaccine for the RSV virus. And they're actually using the data exchange to procure and then analyze real world evidence data. And what that, what that helps them do is identify and and analyze in almost real time using data on Redshift who are the best vaccine candidates for the trials based on geography and demographics. 
So it's really helping them save costs, but not only cost really help optimize and be much more efficient in terms of how they're going about their trials from time to market.. >> Host: Time to market. >> vaccine perspective. Yeah. And more importantly, getting the analysis and the results back from those trials as fast as they possibly can. >> Yeah. >> And data exchange, great with the trend that we're going to hear and the keynote tomorrow. More data exchanging more data being more fluid addressable shows those advantages. That's a great example. Great call out there. Chris, I got to get your thoughts on the ecosystem. You know, Ruba Borno is the new head of partners, APN, Amazon Partner Network and marketplace comes together. How you guys serve your partners is also growing and evolving. What's the biggest thing going on in the ecosystem that you see from your perspective? You can put your Amazon hat on or take your your Amazon hat off a personal hat on what's going on. There's a real growth, I mean seeing people getting bigger and stronger as partners. There's more learning, there's more platforms developing. It's, it's kind of the next gen wave coming. What's going on there? What's the, what's the keynote going to be like, what's the what's this reinvent going to be for partners? Give us a share your, share your thoughts. >> Yeah, certainly. I, I think, you know, we are really trying to make sure that we're simplifying the partner experience as much as we possibly can to really help our partners become you know, more profitable or the most profitable they can be with AWS. And so, you know, certainly in Ruba's keynote on Wednesday you're going to hear a little bit about what we've done there from a programs perspective, what we're doing there from feature and capability perspectives to help, you know really push the digital custom, the digital partner experience, sorry, I should say as much as possible. And really looking holistically at that partner experience and listening to our partners as much as we possibly can to adapt partner pathways to ultimately simplify how they're going to market with AWS. Not only on the co-sell side of things and how we interact with our field teams and actually interact with the end customer, but also on how we, we build and help coil with them on AWS to make their solutions whether that be software, whether that be machine learning models, whether that be data sets most optimized to operate in the AWS ecosystem. So you're going to hear a lot of that in Ruba's keynote on Wednesday. There's certainly some really fantastic partner stories and partner launches that'll be featured. Also some customer outcomes that have been realized as a result of partners. So make sure you don't miss it >> John: More action than ever before, right now. >> It's jam-packed, certainly and throughout the week you're going to see multiple launches and releases related to what we're doing with partners on marketplace, but also more generally to help achieve those customer outcomes. >> Well said Brian. So your heart take, what is the future of partnerships the future of the cloud, if you want throw it in, what what are you going to be saying to us? Hopefully the next time you get to sit down with John and I here on theCUBE at reinvent next year. >> Chris: Yeah, I think Adam, Adam was quoted today, as you know, saying that the, the partner ecosystem is going to be around and a foundation for decades. 
I think is a hundred percent right for me in terms of the industry verticals, the partner ecosystem we have and the availability of these niche solutions that really are solving very specific but mission critical use cases for our customers in each of the industries is super important and it's going to be a a foundation for AWS's growth strategy across all the industry segments for many years to come. So we're super excited about the opportunity ahead of us and we're ready to get after it. >> John: If you, if you could do an Instagram reel right now, what would you say is the most important >> The Insta challenge by go >> The Insta challenge, real >> Host: Chris's Insta challenge >> Insta challenge here, what would be the the real you'd say to the audience about why this year's reinvent is so important? >> I think this year's reinvent is going to give you a clear sense of the breadth and depth of partners that are available to you across the AWS ecosystem. And there's really no industry or use case that we can't solve with partners that we have available within the partner organization. >> Anything is possible. What a note to close on. Chris Casey, thank you so much for joining us for the second time here on theCUBE. John >> He nailed Instagram challenge. >> Yeah, he did. Did he pass the John test? >> I'd say, I'd say so. >> I'd say so. And and and he certainly teased us all with the content to come this week. I want to see all the keynotes here about some of those partners. You tease them in the gaming space with us earlier. It's going to be a very exciting week. Thank you John, for your commentary. Thank you Chris, one more time. >> Thanks for having me. >> And thank you all for tuning in here at theCUBE where we are the leader in high tech coverage. My name is Savannah Peterson, joined by John Furrier with Cube Team live from Las Vegas, Nevada. AWS Reinvent will be here all week and we hope you stay tuned.
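One pattern described in this segment, licensing real-world evidence data through AWS Data Exchange and analyzing it on Redshift, looks roughly like the hedged sketch below once a subscribed datashare is available as a database. The cluster, database, schema, and table names are hypothetical, and this is not Moderna's actual pipeline.

```python
# Hedged sketch of the Data Exchange + Redshift pattern: query licensed third-party
# data in place with the Redshift Data API. All identifiers below are hypothetical.
import time
import boto3

rsd = boto3.client("redshift-data")

resp = rsd.execute_statement(
    ClusterIdentifier="analytics-cluster",        # hypothetical cluster
    Database="rwe_datashare_db",                  # database created from the datashare
    DbUser="analyst",
    Sql="""
        SELECT region, age_band, COUNT(*) AS candidates
        FROM clinical.real_world_evidence          -- licensed third-party table (assumed name)
        WHERE eligible = true
        GROUP BY region, age_band
        ORDER BY candidates DESC;
    """,
)

# Poll until the statement finishes, then fetch rows.
while True:
    status = rsd.describe_statement(Id=resp["Id"])["Status"]
    if status in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(1)

if status == "FINISHED":
    for record in rsd.get_statement_result(Id=resp["Id"])["Records"]:
        print(record)
```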

Published Date : Nov 29 2022


Peter MacDonald & Itamar Ankorion | AWS re:Invent 2022


 

(upbeat music) >> Hello, welcome back to theCUBE's AWS RE:Invent 2022 Coverage. I'm John Furrier, host of theCUBE. Got a great lineup here, Itamar Ankorion SVP Technology Alliance at Qlik and Peter McDonald, vice President, cloud partnerships and business development Snowflake. We're going to talk about bringing SAP data to life, for joint Snowflake, Qlik and AWS Solution. Gentlemen, thanks for coming on theCUBE Really appreciate it. >> Thank you. >> Thank you, great meeting you John. >> Just to get started, introduce yourselves to the audience, then going to jump into what you guys are doing together, unique relationship here, really compelling solution in cloud. Big story about applications and scale this year. Let's introduce yourselves. Peter, we'll start with you. >> Great. I'm Peter MacDonald. I am vice president of Cloud Partners and business development here at Snowflake. On the Cloud Partner side, that means I manage AWS relationship along with Microsoft and Google Cloud. What we do together in terms of complimentary products, GTM, co-selling, things like that. Importantly, working with other third parties like Qlik for joint solutions. On business development, it's negotiating custom commercial partnerships, large companies like Salesforce and Dell, smaller companies at most for our venture portfolio. >> Thanks Peter and hi John. It's great to be back here. So I'm Itamar Ankorion and I'm the senior vice president responsible for technology alliances here at Qlik. With that, own strategic alliances, including our key partners in the cloud, including Snowflake and AWS. I've been in the data and analytics enterprise software market for 20 plus years, and my main focus is product management, marketing, alliances, and business development. I joined Qlik about three and a half years ago through the acquisition of Attunity, which is now the foundation for Qlik data integration. So again, we focus in my team on creating joint solution alignment with our key partners to provide more value to our customers. >> Great to have both you guys, senior executives in the industry on theCUBE here, talking about data, obviously bringing SAP data to life is the theme of this segment, but this reinvent, it's all about the data, big data end-to-end story, a lot about data being intrinsic as the CEO says on stage around in the organizations in all aspects. Take a minute to explain what you guys are doing as from a company standpoint. Snowflake and Qlik and the solutions, why here at AWS? Peter, we'll start with you at Snowflake, what you guys do as a company, your mission, your focus. >> That was great, John. Yeah, so here at Snowflake, we focus on the data platform and until recently, data platforms required expensive on-prem hardware appliances. And despite all that expense, customers had capacity constraints, inexpensive maintenance, and had limited functionality that all impeded these organizations from reaching their goals. Snowflake is a cloud native SaaS platform, and we've become so successful because we've addressed these pain points and have other new special features. For example, securely sharing data across both the organization and the value chain without copying the data, support for new data types such as JSON and structured data, and also advance in database data governance. Snowflake integrates with complimentary AWS services and other partner products. 
So we can enable holistic solutions that include, for example here, both Qlik and AWS SageMaker and Comprehend, and bring those to joint customers. Our customers want to convert data into insights, along with advanced analytics platforms and AI. That is how they make holistic, data-driven solutions that will give them competitive advantage. With Snowflake, our approach is to focus on customer solutions that leverage data from existing systems such as SAP, wherever they are, in the cloud or on-premise. And to do this, we leverage partners like Qlik, natively on AWS, to help customers transform their businesses. We provide customers with a premier data analytics platform as a result. Itamar, why don't you talk about Qlik a little bit, and then we can dive into the specific SAP solution here and some trends. >> Sounds great, Peter. So Qlik provides modern data integration and analytics software used by over 38,000 customers worldwide. Our focus is to help our customers turn data into value and help them close the gap from data all the way through to insight and action. We offer Qlik Data Integration and Qlik Data Analytics. Qlik Data Integration helps to automate the data pipelines to deliver data to where they want to use it, in real time, and make the data ready for analytics. And then Qlik Data Analytics is a robust platform for analytics and business intelligence that has been a leader in the Gartner Magic Quadrant for over 11 years now in the market. And both of these come together into what we call Qlik Cloud, which is our SaaS-based platform, providing a more seamless way to consume all these services and accelerate time to value with customer solutions. In terms of partnerships, both Snowflake and AWS are very strategic to us here at Qlik, so we have a very comprehensive investment to ensure a strong joint value proposition that we can bring to our mutual customers, everything from aligning our roadmaps through optimizing and validating integrations, collaborating on best practices, and packaging joint solutions like the one we'll talk about today. And with that investment, we are an elite-level, top-level partner with Snowflake. Our technology is validated as Snowflake Ready across the entire product set, and we have hundreds of joint customers together. And with AWS we've also partnered for a long time. We're here at re:Invent; we've been here since the inaugural one, so it kind of gives you an idea of how long we've been working with AWS. We provide very comprehensive integration with AWS data analytics services, and we have several competencies, ranging from data analytics to migration and modernization. So that's our focus, and again, we're excited about working with Snowflake and AWS to bring solutions together to market. >> Well, I'm looking forward to unpacking the solutions specifically, and congratulations on the continued success of both your companies. We've been following them obviously for a very long time and seeing the platform evolve beyond just SaaS, and a lot more going on in cloud these days, kind of next generation emerging. You know, we're seeing a lot of macro trends that are going to be powering some of the things we're going to get into real quickly. But before we get into the solution, what are some of those power dynamics in the industry that you're seeing, and trends specifically that are impacting your customers, that are taking us down this road of getting more out of the data, and specifically the SAP data, but in general trends and dynamics.
What are you hearing from your customers? Why do they care? Why are they going down this road? Peter, we'll start with you. >> Yeah, I'll go ahead and start. Thanks. Yeah, I'd say we continue to see customers being, being very eager to transform their businesses and they know they need to leverage technology and data to do so. They're also increasingly depending upon the cloud to bring that agility, that elasticity, new functionality necessary to react in real-time to every evolving customer needs. You look at what's happened over the last three years, and boy, the macro environment customers, it's all changing so fast. With our partnerships with AWS and Qlik, we've been able to bring to market innovative solutions like the one we're announcing today that spans all three companies. It provides a holistic solution and an integrated solution for our customer. >> Itamar let's get into it, you've been with theCUBE, you've seen the journey, you have your own journey, many, many years, you've seen the waves. What's going on now? I mean, what's the big wave? What's the dynamic powering this trend? >> Yeah, in a nutshell I'll call it, it's all about time. You know, it's time to value and it's about real-time data. I'll kind of talk about that a bit. So, I mean, you hear a lot about the data being the new oil, but it's definitely, we see more and more customers seeing data as their critical enabler for innovation and digital transformation. They look for ways to monetize data. They look as the data as the way in which they can innovate and bring different value to the customers. So we see customers want to use more data so to get more value from data. We definitely see them wanting to do it faster, right, than before. And we definitely see them looking for agility and automation as ways to accelerate time to value, and also reduce overall costs. I did mention real-time data, so we definitely see more and more customers, they want to be able to act and make decisions based on fresh data. So yesterday's data is just not good enough. >> John: Yeah. >> It's got to be down to the hour, down to the minutes and sometimes even lower than that. And then I think we're also seeing customers look to their core business systems where they have a lot of value, like the SAP, like mainframe and thinking, okay, our core data is there, how can we get more value from this data? So that's key things we see all the time with customers. >> Yeah, we did a big editorial segment this year on, we called data as code. Data as code is kind of a riff on infrastructure as code and you start to see data becoming proliferating into all aspects, fresh data. It's not just where you store it, it's how you share it, it's how you turn it into an application intrinsically involved in all aspects. This is the big theme this year and that's driving all the conversations here at RE:Invent. And I'm guaranteeing you, it's going to happen for another five and 10 years. It's not stopping. So I got to get into the solution, you guys mentioned SAP and you've announced the solution by Qlik, Snowflake and AWS for your customers using SAP. Can you share more about this solution? What's unique about it? Why is it important and why now? Peter, Itamar, we'll start with you first. >> Let me jump in, this is really, I'll jump because I'm excited. We're very excited about this solution and it's also a solution by the way and again, we've seen proven customer success with it. 
So to your point, it's ready to scale, it's starting, I think we're going to see a lot of companies doing this over the next few years. But before we jump to the solution, let me maybe take a few minutes just to clarify the need, why we're seeing, why we're seeing customers jump to do this. So customers that use SAP, they use it to manage the core of their business. So think order processing, management, finance, inventory, supply chain, and so much more. So if you're running SAP in your company, that data creates a great opportunity for you to drive innovation and modernization. So what we see customers want to do, they want to do more with their data and more means they want to take SAP with non-SAP data and use it together to drive new insights. They want to use real-time data to drive real-time analytics, which they couldn't do to date. They want to bring together descriptive with predictive analytics. So adding machine learning in AI to drive more value from the data. And naturally they want to do it faster. So find ways to iterate faster on their solutions, have freedom with the data and agility. And I think this is really where cloud data platforms like Snowflake and AWS, you know, bring that value to be able to drive that. Now to do that you need to unlock the SAP data, which is a lot of also where Qlik comes in because typical challenges these customers run into is the complexity, inherent in SAP data. Tens of thousands of tables, proprietary formats, complex data models, licensing restrictions, and more than, you have performance issues, they usually run into how do we handle the throughput, the volumes while maintaining lower latency and impact. Where do we find knowledge to really understand how to get all this done? So these are the things we've looked at when we came together to create a solution and make it unique. So when you think about its uniqueness, because we put together a lot, and I'll go through three, four key things that come together to make this unique. First is about data delivery. How do you have the SAP data delivery? So how do you get it from ECC, from HANA from S/4HANA, how do you deliver the data and the metadata and how that integration well into Snowflake. And what we've done is we've focused a lot on optimizing that process and the continuous ingestion, so the real-time ingestion of the data in a way that works really well with the Snowflake system, data cloud. Second thing is we looked at SAP data transformation, so once the data arrives at Snowflake, how do we turn it into being analytics ready? So that's where data transformation and data worth automation come in. And these are all elements of this solution. So creating derivative datasets, creating data marts, and all of that is done by again, creating an optimized integration that pushes down SQL based transformations, so they can be processed inside Snowflake, leveraging its powerful engine. And then the third element is bringing together data visualization analytics that can also take all the data now that in organizing inside Snowflake, bring other data in, bring machine learning from SageMaker, and then you go to create a seamless integration to bring analytic applications to life. So these are all things we put together in the solution. And maybe the last point is we actually took the next step with this and we created something we refer to as solution accelerators, which we're really, really keen about. 
Think about this as prepackaged templates for common business analytic needs like order to cash, finance, inventory. And we can either dig into that a little more later, but this gets the next level of value to the customers all built into this joint solution. >> Yeah, I want to get to the accelerators, but real quick, Peter, your reaction to the solution, what's unique about it? And obviously Snowflake, we've been seeing the progression data applications, more developers developing on top of Snowflake, data as code kind of implies developer ecosystem. This is kind of interesting. I mean, you got partnering with Qlik and AWS, it's kind of a developer-like thinking real solution. What's unique about this SAP solution that's, that's different than what customers can get anywhere else or not? >> Yeah, well listen, I think first of all, you have to start with the idea of the solution. This are three companies coming together to build a holistic solution that is all about, you know, creating a great opportunity to turn SAP data into value this is Itamar was talking about, that's really what we're talking about here and there's a lot of technology underneath it. I'll talk more about the Snowflake technology, what's involved here, and then cover some of the AWS pieces as well. But you know, we're focusing on getting that value out and accelerating time to value for our joint customers. As Itamar was saying, you know, there's a lot of complexity with the SAP data and a lot of value there. How can we manage that in a prepackaged way, bringing together best of breed solutions with proven capabilities and bringing this to market quickly for our joint customers. You know, Snowflake and AWS have been strong partners for a number of years now, and that's not only on how Snowflake runs on top of AWS, but also how we integrate with their complementary analytics and then all products. And so, you know, we want to be able to leverage those in addition to what Qlik is bringing in terms of the data transformations, bringing data out of SAP in the visualization as well. All very critical. And then we want to bring in the predictive analytics, AWS brings and what Sage brings. We'll talk about that a little bit later on. Some of the technologies that we're leveraging are some of our latest cutting edge technologies that really make things easier for both our partners and our customers. For example, Qlik leverages Snowflakes recently released Snowpark for Python functionality to push down those data transformations from clicking the Snowflake that Itamar's mentioning. And while we also leverage Snowpark for integrations with Amazon SageMaker, but there's a lot of great new technology that just makes this easy and compelling for customers. >> I think that's the big word, easy button here for what may look like a complex kind of integration, kind of turnkey, really, really compelling example of the modern era we're living in, as we always say in theCUBE. You mentioned accelerators, SAP accelerators. Can you give an example of how that works with the technology from the third party providers to deliver this business value Itamar, 'cause that was an interesting comment. What's the example? Give an example of this acceleration. >> Yes, certainly. I think this is something that really makes this truly, truly unique in the industry and again, a great opportunity for customers. So we kind talked earlier about there's a lot of things that need to be done with SP data to turn it to value. 
And these accelerator, as the name suggests, are designed to do just that, to kind of jumpstart the process and reduce the time and the risk involved in such project. So again, these are pre-packaged templates. We basically took a lot of knowledge, and a lot of configurations, best practices about to get things done and we put 'em together. So think about all the steps, it includes things like data extraction, so already knowing which tables, all the relevant tables that you need to get data from in the contexts of the solution you're looking for, say like order to cash, we'll get back to that one. How do you continuously deliver that data into Snowflake in an in efficient manner, handling things like data type mappings, metadata naming conventions and transformations. The data models you build all the way to data mart definitions and all the transformations that the data needs to go through moving through steps until it's fully analytics ready. And then on top of that, even adding a library of comprehensive analytic dashboards and integrations through machine learning and AI and put all of that in a way that's in pre-integrated and tested to work with Snowflake and AWS. So this is where again, you get this entire recipe that's ready. So take for example, I think I mentioned order to cash. So again, all these things I just talked about, I mean, for those who are not familiar, I mean order to cash is a critical business process for every organization. So especially if you're in retail, manufacturing, enterprise, it's a big... This is where, you know, starting with booking a sales order, following by fulfilling the order, billing the customer, then managing the accounts receivable when the customer actually pays, right? So this all process, you got sales order fulfillment and the billing impacts customer satisfaction, you got receivable payments, you know, the impact's working capital, cash liquidity. So again, as a result this order to cash process is a lifeblood for many businesses and it's critical to optimize and understand. So the solution accelerator we created specifically for order to cash takes care of understanding all these aspects and the data that needs to come with it. So everything we outline before to make the data available in Snowflake in a way that's really useful for downstream analytics, along with dashboards that are already common for that, for that use case. So again, this enables customers to gain real-time visibility into their sales orders, fulfillment, accounts receivable performance. That's what the Excel's are all about. And very similarly, we have another one for example, for finance analytics, right? So this will optimize financial data reporting, helps customers get insights into P&L, financial risk of stability or inventory analytics that helps with, you know, improve planning and inventory management, utilization, increased efficiencies, you know, so in supply chain. So again, these accelerators really help customers get a jumpstart and move faster with their solutions. >> Peter, this is the easy button we just talked about, getting things going, you know, get the ball rolling, get some acceleration. Big part of this are the three companies coming together doing this. >> Yeah, and to build on what Itamar just said that the SAP data obviously has tremendous value. 
Those sales orders, distribution data, financial data, bringing that into Snowflake makes it easily accessible, but also it enables it to be combined with other data too, is one of the things that Snowflake does so well. So you can get a full view of the end-to-end process and the business overall. You know, for example, I'll just take one, you know, one example that, that may not come to mind right away, but you know, looking at the impact of weather conditions on supply chain logistics is relevant and material and have interest to our customers. How do you bring those different data sets together in an easy way, bringing the data out of SAP, bringing maybe other data out of other systems through Qlik or through Snowflake, directly bringing data in from our data marketplace and bring that all together to make it work. You know, fundamentally organizational silos and the data fragmentation exist otherwise make it really difficult to drive modern analytics projects. And that in turn limits the value that our customers are getting from SAP data and these other data sets. We want to enable that and unleash. >> Yeah, time for value. This is great stuff. Itamar final question, you know, what are customers using this? What do you have? I'm sure you have customers examples already using the solution. Can you share kind of what these examples look like in the use cases and the value? >> Oh yeah, absolutely. Thank you. Happy to. We have customers across different, different sectors. You see manufacturing, retail, energy, oil and gas, CPG. So again, customers in those segments, typically sectors typically have SAP. So we have customers in all of them. A great example is like Siemens Energy. Siemens Energy is a global provider of gas par services. You know, over what, 28 billion, 30 billion in revenue. 90,000 employees. They operate globally in over 90 countries. So they've used SAP HANA as a core system, so it's running on premises, multiple locations around the world. And what they were looking for is a way to bring all these data together so they can innovate with it. And the thing is, Peter mentioned earlier, not just the SAP data, but also bring other data from other systems to bring it together for more value. That includes finance data, these logistics data, these customer CRM data. So they bring data from over 20 different SAP systems. Okay, with Qlik data integration, feeding that into Snowflake in under 20 minutes, 24/7, 365, you know, days a year. Okay, they get data from over 20,000 tables, you know, over million, hundreds of millions of records daily going in. So it is a great example of the type of scale, scalability, agility and speed that they can get to drive these kind of innovation. So that's a great example with Siemens. You know, another one comes to mind is a global manufacturer. Very similar scenario, but you know, they're using it for real-time executive reporting. So it's more like feasibility to the production data as well as for financial analytics. So think, think, think about everything from audit to texts to innovate financial intelligence because all the data's coming from SAP. >> It's a great time to be in the data business again. It keeps getting better and better. There's more data coming. It's not stopping, you know, it's growing so fast, it keeps coming. Every year, it's the same story, Peter. It's like, doesn't stop coming. As we wrap up here, let's just get customers some information on how to get started. 
I mean, obviously you're starting to see the accelerators, it's a great program there. What a great partnership between the two companies and AWS. How can customers get started to learn about the solution and take advantage of it, getting more out of their SAP data, Peter? >> Yeah, I think the first place to go to is talk to Snowflake, talk to AWS, talk to our account executives that are assigned to your account. Reach out to them and they will be able to educate you on the solution. We have packages up very nicely and can be deployed very, very quickly. >> Well gentlemen, thank you so much for coming on. Appreciate the conversation. Great overview of the partnership between, you know, Snowflake and Qlik and AWS on a joint solution. You know, getting more out of the SAP data. It's really kind of a key, key solution, bringing SAP data to life. Thanks for coming on theCUBE. Appreciate it. >> Thank you. >> Thank you John. >> Okay, this is theCUBE coverage here at RE:Invent 2022. I'm John Furrier, your host of theCUBE. Thanks for watching. (upbeat music)

Published Date : Nov 23 2022


Gunnar Hellekson & Adnan Ijaz | AWS re:Invent 2022


 

>>Hello everyone. Welcome to theCUBE's coverage of AWS re:Invent 22. I'm John Furrier, host of theCUBE. Got some great coverage here talking about software supply chain and sustainability in the cloud. We've got a great conversation. Gunnar Hellekson, vice president and general manager of the Red Hat Enterprise Linux business unit at Red Hat, thanks for coming on. And Adnan Ijaz, director of product management of commercial software services, AWS. Gentlemen, thanks for joining me today. >>Oh, it's a pleasure. >>You know, the hottest topic coming out of cloud-native developer communities is supply chain and software sustainability. This is a huge issue. As open source continues to power away and fund and grow this next generation modern development environment, you know, supply chain and sustainability is a huge discussion, because you gotta check things out, where, what's in the code. Okay, open source is great, but now we gotta commercialize it. This is the topic. Gunnar, let's get started with you. What are you seeing here, and what are some of the things that you're seeing around the sustainability piece of it? Because, you know, containers, Kubernetes, we're seeing that runtime really dominate this new abstraction layer, cloud scale. What's your thoughts? >>Yeah, so it's interesting, you know, Red Hat's been doing this for 20 years, right? Making open source safe to consume in the enterprise. And there was a time when, in order to do that, you needed to have a long-term life cycle and you needed to be very good at remediating security vulnerabilities. And that was kind of the bar that you had to climb over. Nowadays, with the number of vulnerabilities coming through, what people are most worried about is the provenance of the software, and making sure that it has been vetted and it's been safe, and that things that you get from your vendor should be more secure than things that you've just downloaded off of GitHub, for example, right? And that's a place where Red Hat's very comfortable living, right? >>Because we've been doing it for 20 years. I think there's another aspect to this supply chain question as well, especially with the pandemic. You know, these supply chains have been jammed up. The actual physical supply chains have been jammed up. And the two of these issues actually come together, right? Because as we go through the pandemic, we've had these digital transformation efforts, which are in large part people creating software in order to better manage their physical supply chain problems. And so as part of that digital transformation, you have another supply chain problem, which is the software supply chain problem, right? And so these two things kind of merge as people are trying to improve the performance of transportation systems, logistics, et cetera. Ultimately, both supply chain problems actually boil down to a software problem. >>It's very interesting. Well, that is interesting. I wanna just follow up on that real quick if you don't mind. Because if you think about the convergence of the software and physical world, you know, that's IoT, and also hybrid cloud kind of plays into that at scale, this opens up more surface area for attacks, especially when you're under a lot of pressure.
This is where, you know, you have a surface area on the physical side and you have constraints there. And obviously the pandemic causes problems, but now you've got the software side. How are you guys handling that? Can you just share a little bit more of how you guys are looking at that with Red Hat? What's the customer challenge? Obviously, you know, skills gaps is one, but that's a convergence at the same time. More security problems. >>Yeah, yeah, that's right. And certainly, if we just look at security vulnerabilities themselves, the volume of security vulnerabilities has gone up considerably as more people begin using the software. And as the software becomes more important to critical infrastructure, more eyeballs are on it. And so we're uncovering more problems, which is okay, that's how the world works. And so certainly the number of remediations required every year has gone up. But also the customer expectations, as I mentioned before, have changed, right? People want to be able to show their auditors and their regulators that, no, in fact, I can show the provenance of the software that I'm using. I didn't just download something random off the internet. I actually have, you know, adults paying attention to how the software gets put together. >>And it's still, honestly, very early days. As an industry, I think we're very good at managing, identifying and remediating vulnerabilities in the aggregate. We're pretty good at that. I think things are less clear when we talk about the management of that supply chain, proving the provenance, and creating a resilient supply chain for software. We have lots of tools, but we don't really have lots of shared expectations. And so it's gonna be interesting over the next few years; I think we're gonna have more rules come out. I see NIST has already published some of them. And as these new rules come out, the whole industry is gonna have to pull together and really rally around some of this shared understanding, so we can all have shared expectations and we can all speak the same language when we're talking about this problem. >>That's awesome. Adnan, Amazon Web Services is obviously the largest cloud platform out there. You know, the pandemic, even post-pandemic, some of these supply chain issues, whether it's physical or software, you're also an outlet for that. So if someone can't buy hardware or something physical, they can always get the cloud. You guys have great network, compute and whatnot, and you've got thousands of ISVs across the globe. How are you helping customers with this supply chain problem? Because whether it's, you know, my networking gear is delayed and I'm gonna go to the cloud and get help there, or whether it's knowing the workloads and what's going on inside them with respect to open source. Cause you've got open source, which is kind of an external forcing function, you've got AWS, and you've got, you know, physical compute, storage, networking, et cetera. How are you guys helping customers with the supply chain challenge, which could be an opportunity? >>Yeah, thanks John. I think there are multiple layers to that.
At the most basic level, we are helping customers by abstracting away all these data center constructs that they would have to worry about if they were running their own data centers. They would have to figure out the networking gear, you talk about, you know, having the right compute, the right physical hardware. So by moving to the cloud, at least they're delegating that problem to AWS and letting us manage it, making sure that we have an instance available for them whenever they want it, and if they wanna scale it, the capacity is there for them to use. So we kind of give them space to work on the second part of the problem, which is building their own supply chain solutions. And we work with all kinds of customers here at AWS from all different industry segments: automotive, retail, manufacturing. >>And you know, you see that the complexity of the supply chain with all those moving pieces, like hundreds and thousands of moving pieces, it's very daunting. And then on the other hand, customers need more and better services. So you need to move fast. So you need to build your agility in the supply chain itself. And that is where, you know, Red Hat and AWS come together, where we can enable customers to build their supply chain solutions on a platform like Red Hat Enterprise Linux (RHEL) or Red Hat OpenShift on AWS; we call it ROSA. And the benefit there is that you can actually use the services that are relevant for supply chain solutions, like Amazon Managed Blockchain, you know, SageMaker. So you can actually build predictive analytics, you can improve forecasting, you can make sure that you have solutions that help you identify where you can cut costs. And so those are some of the ways we are helping customers, you know, figure out how they actually wanna deal with the supply chain challenges that we're running into in today's world. >>Yeah, and you know, you mentioned sustainability. Outside of software supply chain sustainability, you know, as people move to the cloud, we've reported on SiliconANGLE here in theCUBE that it's better to have the sustainability with the cloud, because then the data centers aren't using all that energy too. So there's also all kinds of sustainability advantages, Gunnar, because this is kind of how your relationship with Amazon's expanded. You mentioned ROSA, which is Red Hat OpenShift Service on AWS. This is interesting because one of the biggest discussions is the skills gap, but we were also talking about the fact that the humans are a huge part of the talent value. In other words, the humans still need to be involved, and having that relationship with managed services and Red Hat, this piece becomes one of those things that's not talked about much, which is that the talent is increasing in value, the humans, and now you've got managed services on the cloud that have got scale and human interactions. Can you share, you know, how you guys are working together on this piece? Cuz this is interesting, cuz this kind of brings up the relationship of that operator or developer. >>Yeah, yeah. So I think about this in a few dimensions. First is that it's difficult to find a customer who is not talking about automation at some level right now. And obviously you can automate the processes and the physical infrastructure that you already have using tools like Ansible, right?
But I think that combining it with the elasticity of a solution like AWS, so you combine the automation with that kind of elasticity, and converting a lot of the capital expenses into operating expenses, that's a great way actually to save labor, right? So instead of racking hard drives, you can have somebody do something a little more valuable, right? And so, okay, that gives you a platform, and then what do you do with that platform? >>And if you've got your systems automated and you've got this kind of elastic infrastructure underneath you, what you do on top of it is really interesting. So a great example of this is the collaboration that we had with running the RHEL workstation on AWS. So you might think, well, why would anybody wanna run a workstation on a cloud? That doesn't make a whole lot of sense, unless you consider how complex it is to set up. The use case here is industrial workstations, right? So it's animators, people doing computational fluid dynamics, things like this. So these are industries that are extremely data heavy. Their workstations have very large hardware requirements, often with accelerated GPUs and things like this. That is an extremely expensive thing to install on premise anywhere. And if the pandemic taught us anything, it's that if you have a bunch of very expensive talent and they all have to work from home, it is very difficult to go provide them with, you know, several tens of thousands of dollars worth of workstation equipment. >>And so combine the RHEL workstation with the AWS infrastructure, and now all that workstation computational infrastructure is available on demand, and available right next to the considerable amount of data that they're analyzing or animating or working on. So it's really interesting; it's an idea that was actually born with the pandemic. Yeah. And it's kind of a combination of everything that we're talking about, right? It's the supply chain challenges of the customer, it's the lack of talent, making sure that people are being put to their best and highest use. And it's also having this kind of elastic, I think, opex-heavy infrastructure as opposed to a capex-heavy infrastructure. >>That's a great example. I think that illustrates to me what I love about cloud right now, which is that you can put stuff in the cloud and then flex what you need when you need it in the cloud, rather than either ingress or egress data. You just get more versatility around the workload needs, whether it's more compute or more storage or other high-level services. This is kind of where this next-gen cloud is going. This is where customers want to go once their workloads are up and running. How do you simplify all this, and how do you guys look at this from a joint customer perspective? Because that example I think will be something that all companies will be working on, which is put it in the cloud and flex to whatever the workload needs and put it closer to the compute. I wanna put it there. If I wanna leverage more storage and networking, well, I'll do that too. It's not one thing. It's gotta flex around. How are you guys simplifying this? >>Yeah, I'll just give my point of view and then I'm very curious to hear what Adnan has to say about it, but I think about it in a few dimensions, right?
So technically, any solution that Adnan's team and my team wanna put together needs to be technically coherent, right? The things need to work well together. But that's not even most of the job. Most of the job is actually ensuring operational consistency and operational simplicity, so that the day-to-day operations of these things work well together. And then also all the way to things like support and even acquisition, right? Making sure that all the contracts work together, right? So when Adnan and I think about places of working together, it's very rare that we're just looking at a technical collaboration. It's actually a holistic collaboration across support and acquisition, as well as all the engineering that we have to do. >>And Adnan, your view on how you're simplifying it with Red Hat for your joint customers, making the collaboration work? >>Yeah, Gunnar covered it well. I think the benefit here is that Red Hat has been the leading Linux distribution provider, so they have a lot of experience. AWS has been the leading cloud provider. So we have both our own points of view, our own learning from our respective sets of customers. So the way we try to simplify and bring these things together is working closely. In fact, I sometimes joke internally that if you see Gunnar's team and my team talking to each other on a call, you cannot really tell who belongs to which team. Because we're always figuring out, okay, how do we simplify the discount experience? How do we simplify programs? How do we simplify go to market? How do we simplify the product pieces? So it's really bringing our learning and our perspective to the table and then really figuring out how do we actually help customers make progress. ROSA that we talked about is a great example of that, you know; together we figured out, hey, there is a need for customers to have this capability in AWS and we went out and built it. So those are just some of the examples in how both teams are working together to simplify the experience, make it complete, make it more coherent. >>Great, that's awesome. The next question is really around how you help organizations with the sustainability piece, how to support them, simplifying it. But first, before we get into that, what is the core problem around this sustainability discussion we're talking about here, supply chain sustainability? What is the core challenge? Can you both share your thoughts on what that problem is and what the solution looks like, and then we can get into advice? >>Yeah. Well, from my point of view, I think, you know, one of the lessons of the last three years is every organization is kind of taking a careful look at how resilient it is. Or rather, I should say, every organization learned exactly how resilient it was, right? And that comes from both the physical challenges and the logistics challenges that everyone had, the talent challenges you mentioned earlier, and of course the software challenges, you know, as everyone kind of embarks on this digital transformation journey that we've all been talking about. And I think, so I really frame it as resilience, right? And resilience, at bottom, is really about ensuring that you have options and that you have choices. The more choices you have, the more options you have, the more resilient you and your organization is going to be.
And so I know that's how I approach the market. I'm pretty sure that's exactly how Adnan is approaching the market: ensuring that we are providing as many options as possible to customers, so that they can assemble the right pieces to create a solution that works for their particular set of challenges, their unique set of challenges and unique context. Adnan, does that sound about right to you? >>Yeah, I think you covered it well. I can speak to another aspect of sustainability, which is becoming increasingly top of mind for our customers: how do they build products and services and solutions, whether it's supply chain or anything else, which are sustainable, which are for the long-term good of the planet. And I think that is where we have also been very intentional and focused in how we design our data centers, how we actually build our cooling systems, so that those are energy efficient. You know, we are on track to power all our operations with renewable energy by 2025, which is five years ahead of our initial commitment. And perhaps the most obvious example of all of this is our work with Arm processors, Graviton3, where, you know, we are building our own chip to make sure that we are designing energy efficiency into the process. And you know, the Arm Graviton3 processor chips are about 60% more energy efficient compared to comparable instances. So all those things we are also working on, making sure that whatever our customers build on our platform is long-term sustainable. So that's another dimension of how we are working that into our platform. >>That's awesome. This is a great conversation. You know, the supply chain is on both sides, physical and software. You're starting to see them come together in great conversations, and certainly moving workloads to the cloud and running them more efficiently will help on the sustainability side, in my opinion. Of course, you guys talked about that and we've covered it, but now you start getting into how to refactor, and this is a big conversation we've been having lately: as you not just lift and shift but re-platform and refactor, customers are seeing great advantages on this. So I have to ask you guys, how are you helping customers and organizations support sustainability and simplify the complex environment that has a lot of potential integrations? Obviously APIs help of course, but that's kind of the baseline. What's the advice that you give customers? Cause you know, it can look complex and it becomes complex, but there's an answer here. What's your thoughts? >>Yeah. Whenever I get questions like this from customers, the first thing I guide them to is what we talked about earlier, this notion of consistency and how important that is. One way to solve the problem is to create an entirely new operational model, an entirely new acquisition model, and an entirely new stack of technologies in order to be more sustainable. That is probably not in the cards for most folks. What they want to do is take their existing estate and introduce sustainability into the work that they are already doing. They don't need to build another silo in order to create sustainability, right?
And so there have to be some common threads, there have to be some common platforms across the existing estate and your more sustainable estate, right? >>And so things like Red Hat Enterprise Linux can provide this kind of common, not just technical substrate, but a common operational substrate on which you can build these solutions. If you have a common platform on which you are building solutions, whether it's RHEL or whether it's OpenShift or any of our other platforms, that creates options for you underneath. So that in some cases maybe you need to run things on premise, some things you need to run in the cloud, but you don't have to profoundly change how you work when you're moving from one place to another. >>And Adnan, what's your thoughts on the simplification? >>Yeah, I think that when you talk about replatforming and refactoring, it is a daunting undertaking, you know, especially in today's fast-paced world. But the good news is you don't have to do it by yourself. Customers don't have to do it on their own. You know, together AWS and Red Hat, we have our rich partner ecosystem; AWS has over a hundred thousand partners that can help you take that journey, the transformation journey. And within AWS, and working with our partners like Red Hat, we make sure that we have, in my mind, really three big pillars that you have to have to make sure that customers can successfully re-platform and refactor their applications to the modern cloud architecture. You need to have the rich set of services and tools that meet their different scenarios, different use cases, because no one size fits all. You have to have the right programs, because sometimes customers need those incentives, they need that help in the first step. And last but not least, they need training. So we try to cover all of that as we work with our customers, work with our partners, and that is where, you know, together we try to help customers take that step, which is a challenging step to take. >>Yeah. You know, it's great to talk to you guys, both leaders in your field. Obviously Red Hat's well-storied history; I remember the days back when I was provisioning, loading OSes on hardware with CDs, if you remember those days, Gunnar. But now with high-level services, if you look at this year's re:Invent, and this is kind of my final question for the segment, then we'll get your reaction to it: last year we talked about higher-level services. I sat down with Adam Selipsky, we talked about that. If you look at what's happened this year, you're starting to see people talk about their environment as their cloud. So Amazon has the gift of the CapEx, all that investment, and people can operate on top of it. They're calling that environment their cloud. Okay? For the first time we're seeing this new dynamic where it's like they have a cloud, but Amazon's the CapEx and they're operating it. So you're starting to see the operational visibility, Gunnar, around how to operate this environment. And it's not hybrid this or that, it's just cloud. This is kind of an inflection point. Do you guys agree with that, or have a reaction to that statement? Because I think this is kind of the next-gen, supercloud-like capability. We're building the cloud. It's now an environment. It's not talking about private cloud, this cloud, it's all cloud. What's your reaction?
>>Yeah, I think it's very natural. I mean, we used words like hybrid cloud, multi-cloud, and I guess supercloud is what the kids are saying now, right? It's all describing the same phenomenon, right? Which is being able to take advantage of lots of different infrastructure options, but still having something that creates some commonality among them so that you can manage them effectively, right? So that you can have kind of uniform compliance across your estate, so that you can make the best use of your talent across the estate. I mean, it's a very natural thing. >>They're calling it cloud; the estate is the cloud. >>Yeah. So fine, if it means that we no longer have to argue about what's multi-cloud and what's hybrid cloud, I think that's great. Let's just call it cloud. >>And what's your reaction? Cuz this is kind of the next-gen benefits of higher-level services combined with amazing, you know, compute and resource at the infrastructure level. What's your view on that? >>Yeah, I think the construct of a unified environment makes sense for customers who have all these use cases which require it, like for instance, if you are doing some edge computing and you're running AWS Outposts or, you know, Wavelength and these things. And it is fair for customers to think that, hey, this is one environment, the same set of tooling that they wanna build that works across all their different environments. That is why we work with partners like Red Hat, so that customers who are running Red Hat Enterprise Linux on premises and who are running in AWS get the same level of support, get the same level of security features, all of that. So from that sense, it actually makes sense for us to build these capabilities in a way that customers don't have to worry about, okay, now I'm actually in the AWS data center versus I'm running Outposts on premises. It is all one. They just use the same set of CLI command line APIs and all of that. So in that sense, it actually helps customers have that unification, so that consistency of experience helps their workforce be more productive, versus figuring out, okay, what do I do, which tool do I use, where? >>Adnan, you just nailed it. This is about supply chain sustainability, moving the workloads into a cloud environment. You mentioned Wavelength; this conversation's gonna continue. We haven't even talked about the edge yet. This is something that's gonna be all about operating these workloads at scale with the cloud services. So thanks for sharing that, and we'll pick up that edge piece later. But for re:Invent right now, this is really the key conversation: how to bake the sustainable supply chain work into a complex environment, making it simpler. And so thanks for sharing your insights here on theCUBE. >>Thanks. Thanks for having us. >>Okay, this is theCUBE's coverage of AWS re:Invent 22. I'm John Furrier, your host. Thanks for watching.
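Hellekson's point about pairing automation with elastic, opex-style capacity is easy to make concrete. The interview mentions Ansible for automating existing infrastructure; the sketch below uses boto3 in Python instead, purely to keep the examples in one language, and every identifier in it (region, AMI ID, instance type, tags) is a placeholder rather than anything taken from the conversation:

```python
# Minimal sketch of "elastic instead of racking hardware": start capacity when a
# job or remote workstation is needed, terminate it when the work is done.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumption: region

def launch_workstation(ami_id: str, instance_type: str = "g4dn.xlarge") -> str:
    """Launch a single GPU-capable instance and return its ID."""
    resp = ec2.run_instances(
        ImageId=ami_id,              # e.g. a RHEL-based image you maintain
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "purpose", "Value": "on-demand-workstation"}],
        }],
    )
    return resp["Instances"][0]["InstanceId"]

def release_workstation(instance_id: str) -> None:
    """Terminate the instance so it stops accruing cost."""
    ec2.terminate_instances(InstanceIds=[instance_id])

if __name__ == "__main__":
    iid = launch_workstation(ami_id="ami-0123456789abcdef0")  # placeholder AMI
    print("launched", iid)
    # ... do the work, then:
    release_workstation(iid)
```

The point of the sketch is the shape of the workflow (provision on demand, pay while it runs, tear it down after), not any particular service choice; a real deployment would layer the operational consistency Hellekson describes, with Ansible, RHEL images, or OpenShift, on top of it.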

Published Date : Nov 3 2022


Chris Grusz, AWS | AWS Marketplace Seller Conference 2022


 

>>Hello, and welcome back to theCUBE's live coverage here in Seattle of the AWS Marketplace Seller Conference. Now, part of a really big move and news: the Amazon Partner Network combines with AWS Marketplace to form one organization, the Amazon Partner Organization, APO, where the efficiencies, the next iteration, as they say in Amazon language, where they make things better, simpler, faster for customers, is happening. We're here with Chris Grusz, who's the general manager, worldwide leader of ISV alliances and marketplace, which includes all the channel partners and the buyer and seller relationships, all now under one partner organization, bringing together years of work. Yes, if you work with AWS and are a partner and, or sell with them, it's all kind of coming together, kind of in a new way for the next generation. Chris, congratulations on the new role and the reorg. >>Thank you. Yeah, it's very exciting. We think it, as we say, invents and simplifies the process of how we work with our partners, and we're really optimistic so far. The feedback's been great. And I think it's just gonna get even better as we kind of work out the final details. >>This is huge news because, one, we've been very close to the partners that we've been working with and talking to; we cover them. We cover the news, the startups, from startups, channel partners, big ISVs, big and small, from the dorm room to the board room. You guys have great relationships. So marketplace, the future of procurement, how software will be bought, implemented and deployed, has also changed. So you've got the confluence of two worlds coming together, growth in the ecosystem. Yep. Next-gen cloud on the horizon for AWS and the customers, as digital transformation goes from lift and shift to refactoring businesses. Yep. This is really a seminal moment. Can you share what you talked about on the keynote stage here, around why this is happening now? What's the guiding principle? What's the North Star? Why, and what's the big news? >>Yeah. And so, you know, there are a lot of reasons why we pulled the two teams together, but a lot of it kind of gets centered around co-sell. And so if you take a look at marketplace, where we started off, it was really a machine image business, and it was a great self-service model, and we were working with ISVs that wanted to have this new delivery mechanism, which at the time was Amazon Machine Images, and you fast forward, we started adding more product types like SaaS and containers. And the experience that we saw was that customers would use marketplace up to a certain limit from a self-service perspective. But then invariably, they wanted to buy at a quantity discount, they wanted to get an enterprise discount, and we couldn't do that through marketplace. And so they would exit us and go do a direct deal with an ISV.
And so to remedy that, we launched private offers, you know, four years ago. And private offers now allowed ISVs to do these larger deals, but do 'em all through marketplace. And so they could start off doing self-service business, and then as a customer graduated up to buying for a full department or an organization, they could now use private offers to execute that larger agreement. And as we started to do more and more private offers, it really coincided with a lot of the initiatives that were going on within the Amazon Partner Network at the time around co-sell. And so we started to launch programs like ISV Accelerate that really focused on our co-sell relationship with ISVs. And what we found was that marketplace private offers became this awesome way to automate how we co-sell with ISVs. And so we kinda had these two organizations that were parallel. We said, you know what, this is gonna be better together. If we put them together, it's gonna invent and simplify, and we can use marketplace private offers as part of that co-sell experience and really feed that automation layer for all of our ISVs as they interact with AWS. >>Well, I gotta give you props; you and Mona's work on stage, you guys did a great job, and it reminds me of the humble nature of AWS and Amazon. I used to talk to Andy Jassy about this all the time. It reminds me of 2013 here right now, because you're in that mode where Amazon re:Invent was in 2013, where you knew it was breaking out, but it was kind of small, "we haven't made it yet." But you guys are doing billions of dollars in transactions. And this event is really, I think, the beginning of what we're seeing as the changeover in securing and deploying applications in the cloud, because there's a lot of nuanced things I want to get your reaction on. One, I heard: making your product as an ISV more native to AWS's stack. That was one major callout. I heard the other one was, hey, if you're a channel partner, you can play too. And by the way, there's more choice. There's a lot going on here that's about to kind of explode in a good way for customers. Buyers get more access to assemble their solutions, and you've got all kinds of business logic, compensation, integration, and scale. This is like unprecedented. >>Yeah, it's exciting to see what's going on. I mean, I think we kind of saw the tipping point probably about two years ago. You know, prior to that, we would be working with ISVs and customers and it was really much more of an evangelism role, where we were just getting people to try it. Just list a product, we think this is gonna be a good idea. And if you're a buyer, just try out a private offer, try out a self-service subscription. And what's happened now is there's no longer a lot of that convincing that needs to happen. It's really become accepted. And so a lot of the conversations I have now with ISVs, it's not about "should I do marketplace," it's "how do I do it better, and how do I really leverage marketplace as part of my co-sell initiatives, as part of my go-to-market strategy?"
And so you've really passed this tipping point where marketplaces are now becoming very accepted ways to buy third-party software. And so that's really exciting. And we see that we can really enhance that experience, you know, and what we saw on the machine image side is we had this awesome integrated experience where you would buy it, it was tied right into the EC2 control plane, and you could go from buying to deploying in one single motion. SaaS is a little bit different: we can do all the buying in a very simple motion, but then deploying it, there's a whole bunch of other stuff that our customers have to do. And so we see all kinds of ways that we can simplify that. You know, recently we launched the ability to put third-party solutions out of marketplace into Control Tower, which is how we deploy all of our landing zones for AWS. And now it's like, instead of having to go wire that up as you're adding new AWS environments, why not just use that third-party solution that you've already integrated and have it there as you span those landing zones through Control Tower? >>Again, back to humble nature: you guys have dominated the infrastructure-as-a-service layer. You kind of mentioned it, you didn't really highlight it other than saying you're doing pretty good on the IaaS, or the technology partners, or infrastructure as you guys call it. Okay, I can see how the control plane is great for those customers. But outside that, when you get into CRM, you mentioned ERP, these business apps, these horizontals and verticals have data; they're gonna have SageMaker, they're gonna have edge, they might have, you know, other services that are coming online from Amazon. How do I, as an ISV, get my stuff in there? And how do I succeed? And what are you doing to make that better? Cause I know it's kind of new, but not new. >>No, it's not. I mean, that's one of the things that we've really invested in: how do we make it really easy to list in marketplace? And, you know, again, when we first started, it was a big, huge spreadsheet that you had to fill out. It was very cumbersome, and we've really automated all those aspects. So now we've exposed an API, as an example. So you can go straight out of your own build process: you might have your own CI/CD pipeline, and then you have a build step at the end, and now you can have that execute a marketplace update from your build script, right across that API, all the way over to AWS Marketplace. So it's taking, effectively, a CI/CD pipeline from an ISV and extending it all the way to AWS and then eventually to a customer, because now it's just an automated supply chain for that software coming into their environment. And we see that being super powerful. There's no more manual steps along the way. >>Yeah, I wanna dig into that, because you made a comment and I want you to clarify it here on theCUBE. Some have said, even us on theCUBE: oh, marketplace, it's just a website, a catalog. Feels old school, feels like a 1995 database, I'm kind of just, you know, saying that, no offense. And now you're saying you're now looking at this and implementing more of an API-based approach. Why is that relevant? I know the answer, you're already set up with APIs, but explain the transition from the mindset of "it's a website, buy stuff on a catalog" to a full-blown API layer of services. >>Absolutely. Well, when you look at all AWS services, you know, our customers will interface with them through a console initially, but when they're using them in production, it's all about APIs. And marketplace, as you mentioned, did start off as a website. And so we've kind of taken the opposite approach. We've got this great website experience, which is great for demand gen and, you know, highlighting those listings. But what we want to do is really have this API service layer that you're interfacing with, so that an ISV effectively is not even in our marketplace website; they're interfacing over APIs to do a variety of their high-value functions, whether it's listings or private offers. We now have that all available through APIs, and the same thing on the buyer side.
So it's integrating directly into their AWS environment, and then they can view all their third-party spend within things like our cost management suites. They can look at things like Cost Explorer, see third-party software right next to first-party software, and have that all integrated in this nice, seamless way for the customer. >>That's a nice cloud-native kind of experience. I think that's a huge advantage. I'm gonna track that closer. We're gonna follow that. I think that's gonna be the killer feature. All right, now let's get to the killer feature and the business logic. Okay, all partners wanna know: what's in it for me? How do I make more cash? How do I compensate my salespeople? Are you guys gonna compete with me? Give me leads. Can I get MDF, market development funds? So take me through how you're thinking about supporting the partners that are leaning in, that, you know, the parachute will open when they jump outta the plane, they're gonna land safely with you. MDF, marketing, leads. What are you doing to support the partners to help them serve their customers? >>It's interesting. Marketplace has become much more of an accepted way to buy; you know, our customers are really defaulting to that as the way to go get that third-party software. So we've had some industry analysts do some studies, and in those studies they interviewed a whole cohort of ISVs across various categories within marketplace, whether it was security or networking or even line-of-business software. And what they found is that on average, our ISVs will see a 24% increased close rate by using marketplace, right? So when I go talk to a CRO and say, do you want to close more deals? Yes, right. And we've got data to show that. We're also finding that on average, when an ISV sells through marketplace, they're seeing an 80% uplift in the actual deal size. And so if your ASP is a hundred K, 180K is a heck of a lot better, right? So we're seeing increased deal sizes by going through marketplace. And then the third thing that we've seen that's a value prop for ISVs is speed of closure. On average, what we're finding is that our ISVs are closing deals 40% faster by using marketplace. So if you've got a 10-month sales cycle, shaving four months off of a sales cycle means you're bringing deals in in an earlier calendar year, an earlier quarter, and for ISVs, getting that cash flow early is very important. So those are great metrics that we're seeing, and, you know, we think that they're only gonna improve. >>And from startups, who don't have a lot of cash, to ISVs that are rich and doing well, with good go-to-market funding, you've got the range of partners. And you know, the next startup, the next Figma, could be in that batch of startups. Exactly. You don't know; the game is changing. The next brand could be one of those batch of startups. What's the message to the startup community? >>I mean, marketplace in a lot of ways becomes a leveling effect, right? Because, you know, if you look at pre-marketplace, if you were a startup, you were having to go generate sales, have a sales force, go compete, you know, kind of hand to hand with these largest ISVs. Marketplace is really kind of leveling that, because now you can both list in marketplace. You have the same advantage of putting that directly in the AWS bill, taking advantage of all the management and governance features that we offer, all the automation that we bring to the table. And so... >>A lot of joint selling. >>And joint selling, right? When it goes through marketplace, you know, it's gonna feed into a number of our APN programs like ISV Accelerate; our sales teams are gonna get recognized for those deals. And so, you know, it brings nice co-sell behavior to how we work with our field sales teams together. It brings nice automation that, pre-marketplace, they would have had to go build, and that was a heavy lift that really now becomes just kind of table stakes for any kind of ISV selling to an AWS customer. >>Well, you know, I'm a big fan of the marketplace. I always have been, even from the early days. I saw this as a procurement game changer. It makes total sense. It's so obvious. Not obvious to everyone, but there's a lot of moving parts behind the scenes, behind the curtain, so to speak, that you're handling. What's your message to the audience out there, both the buyers and the sellers, about what your mission is, what you wake up every day thinking about, and what's your promise to them and what you're gonna work on? Cause it's not easy. You're building an operating model. That's not a website, it's a full-on cloud service. What's your promise, and what's your goals? >>You know, ultimately what we're trying to do from an AWS Marketplace perspective is provide that selection experience to the AWS customer, right? There's the infamous flywheel that Jeff put together that had the concepts of why Amazon is successful, and one of the concepts he points to is the concept of selection. And what we mean by that is, if you come to Amazon, it's effectively the everything store. And when you come across to AWS, AWS Marketplace becomes that selection experience. And so that's what we're trying to do: provide whatever our AWS customers wanna buy, whatever form factor, whatever software type, whatever data type, it's gonna be available in AWS Marketplace for consumption. And that ultimately helps our customers, because now they can get whatever technologies they need to use alongside AWS.
>>And I wanna give you props too. You answered the hard question on stage. I've asked Andy Jassy this on theCUBE when he was the CEO, and Adam Selipsky last year, I asked him the same question, and the answer has been consistent. We have some solutions that people want from AWS end to end, but in your ecosystem you want people to compete, yes, and build a product, and they mostly point to things like Snowflake, New Relic, other people that compete with Amazon services. You guys want that, you encourage that. You're ratifying that same statement. >>Absolutely, right. Again, it feeds into that selection experience, right? If a customer wants something, we wanna make sure it's gonna be a great experience. And so a lot of these ISVs are building on top of AWS; we wanna make sure that they're successful. And, you know, while we have a number of our first-party services, we have a variety of third-party technologies that run very well on AWS. And ultimately the customer's gonna make their decision. We're customer obsessed, and if they want to go with a third-party product, we're absolutely gonna support them in every way and shape we can, and make sure that's a successful experience for our customers. >>I know you referenced two studies; check out the website, it's got buyer and seller surveys on there for both. I don't want to get into that; I want to just end on one kind of final note. You got a lot of successful buyers and a lot of successful sellers. The word billions, yes, with an S, was in the slide. Can you say the number, how many billions are sold through the marketplace? And the buyer experience future, what's those two things? >>Yeah. So we went on record at re:Invent last year, so it's approaching its birthday, but it was the first year in our 10-year history that we announced how much was actually being sold through the marketplace. And, you know, we are now selling billions of dollars through our marketplace, and that's with an S, so you can assume it's at least two, but it's a large number and it's growing very quickly. >>Yeah, can't disclose, you know. >>But it's been a very healthy part of our business, and you know, we look at the experience that we saw. >>There's a lot of headroom. I mean, you have infrastructure nailed down, and that'll keep getting better, but you have basically growth upside with these other categories. What's the hot categories? >>You know, we started off with infrastructure-related products and we've kind of hit critical mass there, right? There's very few ISVs left that are in that infrastructure-related space that are not in our marketplace. And what's happened now is our customers are saying, well, I've been buying infrastructure products for years. I'm gonna buy everything. I wanna buy my line-of-business software, I wanna buy my vertical solutions, I wanna buy my data, and I wanna buy all my services alongside of that. And so there's tons of upside. We're seeing all of these either horizontal business applications coming to our marketplace, or vertical-specific solutions, which, you know, when we first designed our marketplace, we weren't sure would ever happen. We're starting to see that actually really accelerate, because customers are now just defaulting to buying everything through their marketplace. >>Chris, thanks for coming on theCUBE. I know we went a little extra long there; we wanted to get that clarification on the new role. New organization, great reorg, it makes a lot of sense. Next level, next gen. Thanks for coming on theCUBE. Okay. >>Thank you for the opportunity. >>All right, covering the new big news here of AWS Marketplace and the AWS Partner Network coming together under one coherent organization, serving buyers and sellers: billions sold, and the future of how people are gonna be buying software, deploying it, managing it, operating it. It's all happening in the marketplace. This is the big trend. It's theCUBE here in Seattle with more coverage at the AWS Marketplace Seller Conference, after the short break.
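Grusz's description of driving listing updates from a build step maps onto the AWS Marketplace Catalog API. The sketch below is an assumption-heavy illustration rather than a definitive integration: the entity identifier is a placeholder, the change type and details payload vary by product type, and the exact schema should be taken from the Catalog API documentation rather than from this example.

```python
# Rough sketch: a CI/CD job publishing a listing change through the
# AWS Marketplace Catalog API after a successful build.
# Entity ID and details payload are placeholders.
import json
import time
import boto3

catalog = boto3.client("marketplace-catalog", region_name="us-east-1")

def update_listing(entity_id: str, description: str) -> str:
    """Submit a change set that updates a product's short description."""
    resp = catalog.start_change_set(
        Catalog="AWSMarketplace",
        ChangeSet=[{
            "ChangeType": "UpdateInformation",     # varies by product type
            "Entity": {
                "Type": "SaaSProduct@1.0",         # assumption: a SaaS listing
                "Identifier": entity_id,
            },
            "Details": json.dumps({"ShortDescription": description}),
        }],
    )
    return resp["ChangeSetId"]

def wait_for_change(change_set_id: str) -> str:
    """Poll until the change set finishes applying."""
    while True:
        status = catalog.describe_change_set(
            Catalog="AWSMarketplace", ChangeSetId=change_set_id
        )["Status"]
        if status in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return status
        time.sleep(30)

if __name__ == "__main__":
    cs_id = update_listing("prod-xxxxxxxxxxxxx", "Build 1234: updated description")
    print("change set", cs_id, "finished with status", wait_for_change(cs_id))
```

Run from the final stage of a pipeline, a step like this is what turns the listing itself into part of the automated software supply chain described above; on the buyer side, the resulting spend then shows up alongside first-party usage in the billing and Cost Explorer views Grusz mentions.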

Published Date : Sep 21 2022


Sirisha Kadamalakalva, DataRobot | AWS Marketplace Seller Conference 2022


 

>> Welcome back to theCUBE's coverage here in Seattle for the AWS Marketplace Seller Conference, the combination of the Amazon partner network with the marketplace under the AWS partner organization, the APO. I'm John Furrier, host of theCUBE, bringing you all the action and what it all means. Our next guest is Sirisha Kadamalakalva, chief strategy officer at DataRobot. Great to have you. Thanks for coming on. >> Thank you, John. Great to be here. >> So DataRobot, obviously in the big data business, and data is the big theme here. A lot of companies are in the marketplace selling data solutions. I just ran into a Snowflake person, I ran into another data analytics company, lots of data everywhere. You're seeing security, you're seeing insights, a lot more going on with data than ever before. It's one of the most popular categories in the marketplace. Talk about DataRobot, what you guys are doing. What's your product in there? >> Absolutely, John. So we are an artificial intelligence, machine learning platform company. We have been around for 10 years; this year marks our 10th anniversary, and we provide a platform for data scientists and also citizen data scientists, essentially wannabe data scientists on the business side, to rapidly experiment with data, get insights, and then productionize ML models. So the entire workflow that goes into identifying the data that you need for machine learning, then building models on top of that and operationalizing them. >> How big is the company, roughly employee count? What's the number in general? >> In general, about a thousand employees. And we have customers all over the world. Our biggest verticals are financial services, insurance, manufacturing, healthcare, pharma, all the highly regulated industries, and our tech presence is also growing. We have people spread across multiple geographies, and I can't disclose a customer number, but needless to say, we have hundreds of customers across the >> World. A lot of customers. Yeah. You guys are well known in the industry; we've been following some of the recent news lately as well. Obviously data's exploding. What in the marketplace are you guys offering? What's the pitch when someone hits the marketplace and wants to buy DataRobot? >> The pitch is, if you're looking to get real value from your data science investments and your data, then you have DataRobot that you can download from the AWS marketplace. You can do a free trial and essentially get value from data in a matter of minutes, not the months or quarters that are generally associated with AI and ML. And after that, if you want to purchase, it's a private offer in the marketplace, so you need to call a DataRobot representative. But AWS marketplace offers a fantastic distribution channel for us. >> Yeah. I mean, one of the things I heard Chris say, who's now heading up the marketplace and the partner network, was the streamlining, a lot of the benefits for the sellers and for the buyers to have a great experience. Clearly we see this as a macro trend that's only gonna get stronger in terms of self-service buying and bundling, having the console on AWS for low level services like infrastructure. But now you've got other business applications that analytics applies to, and you're seeing that work. Now, he said things in the keynote that I wanna get your reaction to, like, we're gonna make this more like a CI/CD pipeline, we're gonna have more native services built into AWS.
What that means to me is, it sounds like, oh, if I have a solution like DataRobot, it can be more native to AWS level services. How do you see that working out for you? Does that play well for your strategy and your customers? What's resonating with the >> Customers. It plays extremely well with the strategy. I call this a win-win-win strategy: a win for DataRobot, a win for customers, and a win for AWS, which is our partner. It's a win for DataRobot because the number of eyeballs that look at AWS marketplace is significantly higher than the doors that we can go knock on, so it's a distribution multiplier for us. And the integration into AWS services that you're talking about is very important, because in this day and age we need to be interoperable with the services the cloud players offer, whether it is SageMaker or Redshift; we support all of those. And it's a win for customers because cloud customers are a very important, growing buyer persona for DataRobot. They already have pre-committed spend with AWS, and they can use those spend dollars to procure DataRobot, so it eases their procurement life cycle as >> Well. It's a force multiplier on the revenue side, correct? As well as on the business front: cost of sales goes down per order dollar. Correct. This is goodness. >> It's definitely, sorry, just to finish my thought on the win for the partner, for AWS. It's a great win for them because they're getting the consumption from the partner side, to your point on the force multiplier. Absolutely, it is a force multiplier on the revenue side, and it's great for customers and us, because we have seen that the deal size increases when there is a cloud commit that we can draw down for our customers, the procurement cycle shortens, and we have multiple constituencies within the customers working together in a very seamless fashion. >> How has procurement going through AWS helped your customers? What specific things are you seeing that are popping out as benefits to the customer? >> So from a procurement standpoint, we are early in our marketplace journey. We got listed about a year ago, but the amount of revenue that has gone through the marketplace is pretty significant for DataRobot. I think just this quarter we got about 20 to 30 transactions that went through AWS marketplace, and that is significant within just a year of us operating on the marketplace. And procurement becomes easier for our customers because they trust AWS, and we can put our legal paperwork through the AWS machine as well, which we haven't done yet. But if we do that, that'll be a further force multiplier, because the less friction there is, the better. >> I like how you say that it's a machine. And if you think about the benefits too, one of the things that I see happening, and I'd love to get your thoughts because I think this is what's happening here: infrastructure services, I get that, IaaS, done, hardware, I'm oversimplifying, but all the goodness. But as customers have business apps and vertical market solutions, you've got more AI involved. You need more data that's specialized for that use case, or you need a business application. You don't hear words like, let's provision that app.
I mean, you provision hardware and infrastructure, but the new net of cloud native is that you just turn on the apps. So you're seeing the wave of building apps by composing Lego blocks, if you will. It seems like customers are starting to assemble the solution, almost like deploying a service, just pressing a button, and it happens. This seems to be where the business apps are going. >> Yeah, absolutely, I agree. For us, we are a data science platform, and being very close to the data that the customers have is very important. If the customer's data is in Redshift, we are close to there. So being very close to the hyperscaler ecosystem in that entire CI/CD pipeline, and also the data platform pipeline, is very important. >> You know, what's interesting is the data is such a big part of it. I mean, DevOps, infrastructure as code, has been the movement for a decade. Throw security in there, it's DevSecOps. That is the developer now; they're running essentially what used to be IT. Now the new ops is security and data. You see those teams really level up to be high velocity: data meshes, semantic layers. These are words I'm hearing in the industry around the big waves of data, having this mesh, having it connected. So you're starting to see data availability become more pervasive, and we see this as powering this next gen data science revolution, where the business person is now the data science person. >> Exactly. That is what DataRobot does best. We were founded with the vision that we wanted to democratize access to AI within enterprises. It shouldn't be restricted to a small group of people. Don't get me wrong, data scientists also love DataRobot, they use DataRobot. But the mission is to enable many hundreds of people within an organization to use data science, like how you use Tableau on a regular basis, how you use Microsoft Excel on a regular basis. We want to democratize AI, and when you want to democratize AI, you need to democratize access to data, which could be stored in data marketplaces, which could be stored in data warehouses, and push all the intelligence that we grab from that data into the ERP, into the apps layer. Because at the end of the day, business users and customers consume predictions through the applications layer. >> You know, it's interesting, you mentioned trying not to offend data scientists. It's actually a rising tide; the tsunami of data is actually making that population bigger too. Correct. You also have data engineering, which has come out of the woodwork. We've covered it a lot on theCUBE; we call it data as code, kind of a spoof on infrastructure as code. But the reality is that there's a lot more data engineering. I call that the smallest population, those are the alphas, the alpha geeks: hardcore data operating systems kind of education. Data science is the big pool that's growing, and then the users are the new data science practitioners. Correct? Exactly. So that's kind of the landscape. You see that picture too, right? >> For sure. I mean, we have presence in all of those, right? Data engineers are very important. Data scientists are core users of DataRobot. Like, how can you develop thousands and hundreds of thousands of models without having to hand code?
If you have to hand code, it takes months and years to solve one problem for one customer in one location, and look how fast the macroeconomic conditions are moving. And data engineers are very important because at the end of the day, yes, you create the model, but you need to operationalize that model. You need to monitor that model for data drift, you need to monitor how the model is performing, and you need to productionize the insights that you gain. For that, engineering effort is very important behind the scenes. And the users, at the end of the day, they are the ones who consume the predictions. >> Yeah. I mean, the volume and the scale and scope of the data requires a lot of automation as well. Correct. 'Cause you add that on top of it, you gotta have a platform that's gonna do the heavy lifting. >> Correct. Exactly. We call it an augmented platform. It augments data scientists by eliminating the tedious work that they don't want to do in their everyday life, some of which is feature engineering, right? It's very high value add work; however, it takes multiple iterations to understand which features in your data actually impact the outcome. >> This is where the SaaS platform as a service has evolved, and we call that supercloud, right? This new model where people can scale it out and up: a horizontally scalable cloud, but vertically integrated into the applications. It's an integrator's dilemma, not so much an innovator's dilemma, as we say on theCUBE. So I have to ask you, I'm a buyer, I'm gonna come to the marketplace, I want DataRobot. Why should they buy DataRobot, what's in it for them? What are the key features of DataRobot for a company to hit the subscribe, buy button? >> Absolutely. Do you want to scale your data science to multiple projects? Do you want to be ahead of your competition? Do you want to make AI real? That is our pitch. We are not about doing data science for the sake of data science; we are about generating business value out of data science. And we have done it for hundreds of customers in multiple different verticals across the world, whether it is investment banks or regional banks or insurance companies or healthcare companies, we have provided real value out of data for them. And we have the know-how in how to solve, whether it is your supply chain forecasting problem, your demand forecasting problem, or your foreign exchange trading problem, how to solve all these use cases with AI, with DataRobot. So if you want to be in the business of using your data and being ahead of your competitors, DataRobot is your tool of choice. >> Sure. Great to have you on theCUBE. As a strategy officer, you gotta look at the chess board, right? And we're kind of in the mid game. I call it the cloud opening game, that's happened; now we're in the mid game of cloud computing, where you're seeing a lot of refactoring opportunities, where technology and data are the key to success, being secure and operationally scalable, et cetera, et cetera. What's the key right now for the ecosystem? As a strategist, look at the chessboard for DataRobot. Obviously marketplace is an important strategy and bet for DataRobot. What else do you see for your company to be successful? And you can share that with customers watching. >> Yeah. For us, we are in the intelligence layer; the layer below us is the data layer, and the layer above us is the applications and the engagement layer.
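As an illustration of the feature-engineering iteration Sirisha describes above, here is a minimal, generic sketch of scoring which features actually move the outcome. It uses scikit-learn's permutation importance on synthetic data; it is not DataRobot's API, and the dataset, model, and feature names are stand-ins.

```python
# Generic sketch of the "which features actually impact the outcome" loop.
# Not DataRobot's API; just an illustration using scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Score each feature by how much shuffling it degrades held-out accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(enumerate(result.importances_mean), key=lambda p: p[1], reverse=True)
for idx, score in ranked[:5]:
    print(f"feature_{idx}: importance={score:.4f}")
```

In an augmented platform, loops like this would run automatically across many candidate feature sets instead of being hand-tuned each iteration.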
Interoperability and ecosystem are important for every company, but for DataRobot it's extra important because we are in that middle layer of intelligence. We have to integrate with all the different data warehouses out there, enable our customers to pull the data out in a very fast way, and then showcase all the predictions in their tool of choice. And from a chessboard perspective, I like your phrase that we are in the mid cycle of the cloud revolution. Every cloud player has a data science platform, whether it is a simple one or a more complex one, whether it has been around for quite some time or it's a latent feature. And it is important for us that we have a complementary value proposition with all of them, because at the end of the day we want to maximize our customers' choice. DataRobot wants to be a neutral platform supporting all the different vendors out there from a complementary standpoint, because you don't want to have vendor lock-in for your customers. So you create models in SageMaker, for example, and you monitor those in DataRobot, or you create models in DataRobot and monitor those in AWS, so that you provide a very flexible >> That's a solution architecture. Correct? Exactly. You have to provide a very flexible tech stack for your customers. >> Yeah. That's the choice. It's all good. Thank you for coming on theCUBE and sharing the DataRobot story. I really appreciate it. >> Thank you very much for the opportunity. >> Okay. Breaking it all down with the partners here at the marketplace. It's the future, obviously, where people are gonna buy; the buyers and sellers coming together, the partner network and marketplace, the big news here at the AWS Marketplace Seller Conference. I'm John Furrier with theCUBE. We'll be right back with more coverage after this short break.

Published Date : Sep 21 2022



Digging into HeatWave ML Performance


 

(upbeat music) >> Hello everyone. This is Dave Vellante. We're diving into the deep end with AMD and Oracle on the topic of MySQL HeatWave performance, and we want to explore the important issues around machine learning. As applications become more data intensive and machine intelligence continues to evolve, workloads are seeing a major shift where data and AI are being infused into applications. Having a database that simplifies the convergence of transaction and analytics data, without the need to context switch and move data out of and into different data stores, and that eliminates the need to perform extensive ETL operations, is becoming an industry trend that customers are demanding. At the same time, workloads are becoming more automated and intelligent. To explore these issues further, we're happy to have back in theCUBE Nipun Agarwal, who's the Senior Vice President of MySQL HeatWave, and Kumaran Siva, who's the Corporate Vice President, Strategic Business Development at AMD. Gents, hello again. Welcome back. >> Hello. Hi Dave. >> Thank you, Dave. >> Okay. Nipun, obviously machine learning has become a must-have for analytics offerings. It's integrated into MySQL HeatWave. Why did you take this approach and not the specialized database approach, as many competitors do, right tool for the right job? >> Right. So there are a lot of customers of MySQL who have the need to run machine learning on the data which is stored in the MySQL database. In the past, customers would need to extract the data out of MySQL and take it to a specialized service for running machine learning. Now, the reasons we decided to incorporate machine learning inside the database are multiple. One, customers don't need to move the data, and if they don't need to move the data, it is more secure because it's protected by the same access control mechanisms as the rest of the data. There is no need for customers to manage multiple services. But in addition to that, when we run the machine learning inside the database, customers are able to leverage the same service, the same hardware which has been provisioned for OLTP and analytics, and use machine learning capabilities at no additional charge. So from a customer's perspective, they get the benefit that it is a single database, they don't need to manage multiple services, and it is offered at no additional charge. And then there is another aspect, which is based on the IP, the work we have done: it is also significantly faster than what customers would get by having a separate service. >> Just to follow up on that, how are you seeing customers use HeatWave's machine learning capabilities today? How is that evolving? >> Right. So one of the things which customers very often want to do is to train their models based on the data. Now, data in a database, or in a transaction database, changes quite rapidly. So we have introduced support for auto machine learning as a part of HeatWave ML, and what it does is fully automate the process of training. This is something which is very important to database users, very important to MySQL users, that they don't really want to hire data scientists or specialists for doing training. So that's the first part: training in HeatWave ML is fully automated. It doesn't require the user to provide any specific parameters, just the source data and the task which they want to train for. The second aspect is that the training is really fast.
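A minimal sketch of what that in-database workflow can look like from a Python client, assuming a provisioned MySQL HeatWave cluster. The sys.ML_TRAIN, sys.ML_MODEL_LOAD, and sys.ML_PREDICT_TABLE routine names are recalled from the HeatWave ML documentation and the exact signatures should be verified for your release; the connection details and table names are placeholders.

```python
# Sketch of driving HeatWave ML from a Python client. Routine names are from
# memory of the HeatWave ML docs; check exact signatures for your release.
# Host, credentials, and table names are placeholders.
import mysql.connector

conn = mysql.connector.connect(host="heatwave-host", user="admin",
                               password="***", database="ml_data")
cur = conn.cursor()

# Train: fully automated -- just the source table, the target column, and the task.
cur.execute("CALL sys.ML_TRAIN('ml_data.loans_train', 'defaulted', "
            "JSON_OBJECT('task', 'classification'), @model)")

# Load the trained model and run batch inference inside the database;
# the data never leaves MySQL.
cur.execute("CALL sys.ML_MODEL_LOAD(@model, NULL)")
cur.execute("CALL sys.ML_PREDICT_TABLE('ml_data.loans_new', @model, "
            "'ml_data.loans_scored')")
conn.commit()
cur.close()
conn.close()
```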
Because the training is really fast, the benefit is that customers can retrain quite often. They can make sure that the model is up to date with any changes which have been made to their transaction database, and as a result of the models being up to date, the accuracy of the predictions is high. So that's the first aspect, which is training. The second aspect is inference, which customers run once they have the models trained. And the third thing, which has perhaps been the most sought-after request from MySQL customers, is the ability to provide explanations. HeatWave ML provides explanations for any model which has been generated or trained by HeatWave ML. So these are the three capabilities: training, inference and explanations. And this whole process is completely automated; it doesn't require a specialist or a data scientist. >> Yeah, that's nice. I mean, training is obviously very popular today. As I've said, inference I think is going to explode in the coming decade. And then of course, explainable AI is a very important issue. Kumaran, what are the relevant capabilities of the AMD chips that are used in OCI to support HeatWave ML? Are they different from, say, the specs for HeatWave in general? >> So, actually they aren't. And this is one of the key features of this architecture, or this implementation, that is really exciting. With HeatWave ML, you're using the same CPU. And by the way, it's not a GPU, it's a CPU, for all three of the functions that Nipun just talked about: inference, training and explanation, all done on CPU. You know, bigger picture, with the capabilities we bring here we're really providing a balance between the CPU cores, memory and the networking. What that allows you to do is feed the CPU cores appropriately. And within the cores, we have these AVX extensions: with the Zen 2 and Zen 3 cores we had AVX2, and then with the Zen 4 core coming out we're going to have AVX-512. With that balance of being able to bring in the data, utilize the high memory bandwidth, and then use the computation to its maximum, we're able to provide enough AI processing that we are able to get the job done, and then we're built to fit into that larger pipeline that we build out here with HeatWave. >> Got it. Nipun, you know, you and I, every time we have a conversation, we've got to talk benchmarks. So you've done machine learning benchmarks with HeatWave. You might even be the first in the industry to publish, you know, transparent, open ML benchmarks on GitHub. I wouldn't know for sure, but I've not seen that as common. Can you describe the benchmarks and the data sets that you used here? >> Sure. So what we did was we took a bunch of open data sets for two categories of tasks: classification and regression. We took about a dozen data sets for classification and about six for regression. To give an example, the kind of data sets we used for classification are like the airlines data set, Higgs, census, bank, right? So these are open data sets. And what we did on these data sets was a comparison of what it would take to train using HeatWave ML, and the other service we compared with is Redshift ML. There were two observations. One is that with HeatWave ML, the user does not need to provide any tuning parameters. HeatWave ML, using AutoML, fully generates a trained model; it figures out the right algorithms,
the right features, the right hyperparameters, and so on, right? So there is no need for any manual intervention; not so the case with Redshift ML. The second thing is the performance, right? So the performance of HeatWave ML, aggregated over these 12 data sets for classification and the six data sets for regression: on average, it is 25 times faster than Redshift ML. And note that Redshift ML in turn involves SageMaker, right? So on average, HeatWave ML provides 25 times better performance for training, and the other point to note is that there is no need for any human intervention; it's fully automated. But in the case of Redshift ML, many of these data sets did not even complete in the set duration. If you look at price performance, one of the things again I want to highlight is that because AMD does pretty well on all kinds of workloads, users are able to use the same cluster for analytics, for OLTP, or for machine learning. So there is no additional cost for customers to run HeatWave ML if they have provisioned HeatWave. But assuming a user is provisioning a HeatWave cluster only to run HeatWave ML, even in that case the price performance advantage of HeatWave ML over Redshift ML is 97 times, right? So 25 times faster at 1% of the cost compared to Redshift ML. And all these scripts and all this information is available on GitHub for customers to try, to modify, and to see what advantages they would get on their workloads. >> Every time I hear these numbers, I shake my head. I mean, they're just so overwhelming. And so we'll see how the competition responds when, and if, they respond. But thank you for sharing those results. Kumaran, can you elaborate on how the specs that you talked about earlier contribute to HeatWave ML's benchmark results? I'm particularly interested in scalability; typically things degrade as you push the system harder. What are you seeing? >> No, I think it's good. Look, those numbers just blow my head too. That's crazy good performance. So look, from an AMD perspective, we have really built an architecture. If you think about the chiplet architecture to begin with, it is fundamentally kind of scaling by design, right? And one of the things that we've done here is been able to work with the HeatWave team and the HeatWave ML team to, within the CPU package itself, scale up to make very efficient use of all of the cores, and then of course work with them on how you go between nodes. So you can have these very large systems that can run ML very, very efficiently. So it's really building on the building blocks of the chiplet architecture and how scaling happens there. >> Yeah. So you're saying it's near linear scaling, essentially? >> So, let Nipun comment on that. >> Yeah. >> Is it... So, how about as cluster sizes grow, Nipun? >> Right. >> What happens there? >> So one of the design points for HeatWave is a scale-out architecture, right? As you said, as we add more data or increase the size of the data, or we add more nodes to the cluster, we want the performance to scale. So we show that we have a near linear scale factor, or near linear scalability, for SQL workloads, and in the case of HeatWave ML as well.
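As a side note on how per-dataset results like these can be rolled up into a single "N times faster" figure, here is a small illustration using a geometric mean. The numbers below are invented for the sketch; the actual published results and scripts are the ones on GitHub that Nipun references.

```python
# Illustrative only: aggregating per-dataset speedups (competitor training time
# divided by HeatWave training time) into one headline number with a geometric
# mean. The ratios below are invented, not the published benchmark results.
from math import prod

speedups = {            # hypothetical per-dataset training-time ratios
    "airlines": 31.0,
    "bank": 18.5,
    "census": 27.2,
    "higgs": 22.4,
}
geo_mean = prod(speedups.values()) ** (1 / len(speedups))
print(f"aggregate speedup ~{geo_mean:.1f}x across {len(speedups)} datasets")
```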
As users add more nodes to the cluster, so as the size of the cluster grows, the performance of HeatWave ML improves. I was giving you the example that HeatWave ML is 25 times faster compared to Redshift ML; well, that was on a cluster size of two. If you increase the cluster size of HeatWave ML to a larger number, I think the number is 16, the performance advantage over Redshift ML increases from 25 times faster to 45 times faster. So what that means is that on a cluster size of 16 nodes, HeatWave ML is 45 times faster for training these, again, dozen data sets. So this shows that HeatWave ML scales well with the computation. >> So you're saying adding nodes offsets any management complexity that you would think of as getting in the way. Is that right? >> Right. So one is the management complexity, which is why, with the features we provide, customers can scale up or scale down very easily. The second aspect is, okay, what gives us this advantage of scalability, or how are we able to scale? Now, the techniques which we use for HeatWave ML scalability are a bit different from what we use for SQL processing. In the case of HeatWave ML, there are really a couple of trade-offs which we have to be careful about. One is the accuracy, because we want to provide better performance for machine learning without compromising on the accuracy. Accuracy would require more synchronization if you have multiple threads, but if you have too much synchronization, that can slow down the degree of parallelism that we get. So we have to strike a fine balance. What we do is that in HeatWave ML there are different phases of training, like algorithm selection, feature selection, hyperparameter tuning, and each of these phases is analyzed. For instance, one of the techniques we use is that if you're trying to figure out the optimal hyperparameters to be used, we start with the search space, and then each of the VMs gets a part of the search space, and then we synchronize only when needed, right? So these are some of the techniques which we have developed over the years, and there are actually research publications filed on this. This is what we do to achieve good scalability. And what that means to the customer is that if they have some amount of training time and they want to make it better, they can just provision a larger cluster and they will get better performance. >> Got it. Thank you. Kumaran, when I think of machine learning, machine intelligence, AI, I think GPU, but you're not using GPUs. So how are you able to get this type of performance, or price performance, without using GPUs? >> Yeah, definitely. So yeah, that's a good point. You think about what is going on here, and you consider the whole pipeline that Nipun has just described in terms of how you get your training, your algorithms, and using the MySQL pieces of it to get to the point where the AI can be effective. In that process, what happens is you have to have a lot of memory for transactions; a lot of memory bandwidth comes into play. And then bringing all that data together and feeding the actual complex that does the AI calculations, that in itself could be the bottleneck, right? And you can have multiple bottlenecks along the way. And I think what you see in the AMD architecture, in EPYC, for this use case, is the balance.
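A toy sketch of the scale-out idea Nipun describes above, where each worker gets a slice of the hyperparameter search space and synchronization happens only when results are gathered. This is the general pattern, not Oracle's implementation; the objective function and grid are placeholders.

```python
# Toy sketch: partition a hyperparameter search space across workers and
# synchronize only once, when collecting each worker's local best.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def evaluate(params):
    depth, lr = params
    # Stand-in for "train a candidate model and return its validation score".
    return -(depth - 6) ** 2 - (lr - 0.1) ** 2, params

def best_of_slice(search_slice):
    # Each worker scans its own slice independently, with no synchronization.
    return max(evaluate(p) for p in search_slice)

if __name__ == "__main__":
    grid = list(product(range(2, 12), [0.01, 0.05, 0.1, 0.2, 0.3]))
    workers = 4
    slices = [grid[i::workers] for i in range(workers)]  # partition the space
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # The only synchronization point: gathering the per-worker results.
        score, params = max(pool.map(best_of_slice, slices))
    print("best params:", params, "score:", score)
```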
And the fact that you are able to do the pre-processing, the AI, and then the post-processing all kind of seamlessly together, that has a huge value. That goes back to what Nipun was saying about using the same infrastructure: it gets you better TCO, but it also gets you better performance, and that's because of the fact that you're bringing the data to the computation. So the computation in this case is not strictly the bottleneck; it's really about how you pull together what you need to do the AI computation. And that's probably the more common case. So you're going to start, or at least start, to see this, especially for inference applications. But in this case we're doing inference, explanation, and training, all using the CPU in the same OCI infrastructure. >> Interesting. Now Nipun, is the secret sauce for HeatWave ML performance different than what we've discussed before, you and I, with HeatWave generally? Is there some engine additive that you're putting in? >> Right. Yes, the secret sauce is indeed different. Just the way I was saying that for SQL processing the reason we get very good performance and price performance is because we have come up with new algorithms which help the SQL processing scale out, similarly for HeatWave ML we have come up with new IP, new algorithms. One example is that we use meta-learned proxy models; that's the technique we use for automating the training process. Think of these meta-learned proxy models as using machine learning for machine learning training. This is IP which we developed, and again, we have published the results and the techniques. Having these kinds of techniques is what gives us better performance. Similarly, another thing which we use is adaptive sampling: you can have a large data set, but we intelligently sample to figure out how we can train on a small subset without compromising on the accuracy. So yes, there are many techniques that we have developed specifically for machine learning, which is what gives us the better performance, better price performance, and also better scalability. >> What about MySQL Autopilot? Is there anything that differs from HeatWave ML that is relevant? >> Okay, interesting you should ask. Think of MySQL Autopilot as an application using machine learning. MySQL Autopilot uses machine learning to automate various aspects of the database service. For instance, if you want to figure out the right partitioning scheme to partition the data in memory, we use machine learning techniques to figure out the best column, based on the user's workload, to partition the data in memory. Or given a workload, if you want to figure out what is the right cluster size to provision, that's something we use MySQL Autopilot for. And I want to highlight that we are not aware of any other database service which provides this level of machine learning based automation which customers get with MySQL Autopilot. >> Hmm. Interesting. Okay. Last question for both of you. What are you guys working on next? What can customers expect from this collaboration, specifically in this space? Maybe Nipun, you can start and then Kumaran can bring us home. >> Sure. So there are two things we are working on.
One is, based on the feedback we have gotten from customers, we are going to keep making the machine learning capabilities richer in HeatWave ML. That's one dimension. And the second thing, which Kumaran was alluding to earlier, is that we are looking at the next generation of processors coming from AMD, and we will be seeing how we can benefit more from these processors, whether it's the size of the L3 cache, the memory bandwidth, the network bandwidth, and such, or the newer features, and make sure that we leverage all the greatness which the new generation of processors will offer. >> It's like an engineering playground. Kumaran, let's give you the final word. >> No, that's great. Look, with the Zen 4 CPU cores, we're also bringing in AVX-512 instruction capability. Now, our implementation is a little different; it was in Rome and Milan too, where we use a double-pumped implementation. What that means is we take two cycles to do these instructions, but the key thing there is we don't lower the speed of the CPU, so there are no noisy neighbor effects, and it's something that OCI and HeatWave have taken full advantage of. As we go out in time and we see the Zen 4 core, we see up to 96 CPU cores, and that's going to work really well. So we're collaborating closely with OCI and with the HeatWave team here to make sure that we can take advantage of that. And we're also going to upgrade the memory subsystem to get to 12 channels of DDR5. So there should be a fairly significant boost in absolute performance, but more importantly, or just as importantly, in TCO value for the customers, the end customers who are going to adopt this great service. >> I love the relentless innovation, guys. Thanks so much for your time. We're going to have to leave it there. Appreciate it. >> Thank you, David. >> Thank you, David. >> Okay. Thank you for watching this special presentation on theCUBE, your leader in enterprise and emerging tech coverage.
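For a rough sense of what the 12-channel DDR5 upgrade implies, here is the back-of-the-envelope arithmetic. The DDR5-4800 transfer rate is an assumption for illustration, not a figure quoted in the interview, and real platform speeds may differ.

```python
# Rough arithmetic for the "12 channels of DDR5" point. The DDR5-4800 data
# rate is an assumption for illustration only.
channels = 12
transfers_per_sec = 4800e6        # assumed DDR5-4800 (MT/s)
bytes_per_transfer = 8            # 64-bit channel
bw_gbs = channels * transfers_per_sec * bytes_per_transfer / 1e9
print(f"theoretical peak ~{bw_gbs:.0f} GB/s per socket")   # roughly 460 GB/s
```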

Published Date : Sep 14 2022



Raghu Raghuram, VMware | VMware Explore 2022


 

>> Okay, welcome back everyone. This is theCUBE's coverage of VMware Explore '22, formerly VMworld. We've been here since 2010, covering VMworld from 2010 to now, 2022, and it's VMware Explore. We're here with the CEO, Raghu Raghuram. Welcome back to theCUBE. Great to see you in person. >> Yeah. Great to be here in person. >> Dave and I are proud to say that we've been here for 12 straight years of covering VMware's annual conference. And thank you. We've seen the change and the growth over time, and you know, I won't say it's a pinch-me moment, but it's more of a moment of: there's the VMware that's grown into the cloud after your famous deal with Andy Jassy in 2016. We've been watching what has been a real sea change in VMware since taking that legacy core business and straightening out the cloud strategy in 2016, and since then an acceleration of cloud native direction under your leadership at VMware. Now you're the CEO. Take us through that, because this is where we are right now, at the pinnacle of VMware 2.0, or cloud native VMware, as you point out in your keynote. Take us through that history real quick, 'cause I think it's important to know that you've been the architect of a lot of this change, and it's working. >> Yeah, definitely. We are super excited because, like I said, it's working. The history is pretty simple. I mean, we tried running our own cloud, vCloud Air, and vCloud Air didn't work so well. Right. And at that time, customers really gave us strong feedback that the hybrid they wanted was us and Amazon together. So that's what we went back and did, and the Andy Jassy announcement, et cetera. And then subsequently, as we continued to build it out, once that happened, we were able to go work with Satya and Microsoft and others to get the thing built out all over. Then the next question was, okay, hey, that's great for the workloads that are running on vSphere, what's the story for workloads that are gonna be cloud native and benefit a lot from being cloud native? So that's when we went the Tanzu route and the Kubernetes route; we did a couple of acquisitions, and that started paying off now with the Tanzu portfolio. And last but not least, once customers have this distributed portfolio, increasingly everything is becoming multi-cloud. How do you manage and connect and secure it? So that's what you start seeing: you saw the management announcement, networking and security, and everything else that's cooking. And you'll see more stuff there. >> You know, we've been talking about supercloud. It's kinda like multi-cloud on steroids, a little bit different pivot on it. And we're seeing some use cases. >> No, no, it's very great. It's pretty close to what we talk about. >> Awesome. I mean, and we're seeing this kind of alignment in the industry. It's kind of open. But I have to ask you, when did you have the moment where you said multicloud is the game changer? Because you guys had hybrid, which was really early as well. When was that, Raghu? When did you have the moment where you said, hey, multicloud is what's happening, that's what we're doubling down on? >> I mean, if you think about the evolution of the cloud players, right, Microsoft really started picking up around the 2018 timeframe. I mean, I'm talking about Azure, right? >> In a big way. >> Yeah. In a big way. Right.
When that happened and then Google got really serious, it became pretty clear that this was gonna be looking more like the old database market than it looked like a single player cloud market. Right. Equally sticky, but very strong players all with lots of IP creation capability. So that's when we said, okay, from a supplier side, this is gonna become multi. And from a customer side that has always been their desire. Right. Which is, Hey, I don't want to get locked into anybody. I want to do multiple things. And the cloud vendors also started leveraging that OnPrem. Microsoft said, Hey, if you're a windows customer, your licensing is gonna be better off if you go to Azure. Right. Oracle did the same thing. So it just became very clear. >>I am, I have gone make you laugh. I always go back to the software mainframe because I, I think you were here. Right. I mean, you're, you're almost 20 years in. Yeah. And I, the reason I appreciate that is because, well, that's technically very challenging. How do you make virtualization overhead virtually non-existent how do you run any workload? Yeah. How do you recover from, I mean, that's was not trivial. Yeah. Okay. So what's the technical, you know, analog today, the real technical challenge. When you think about cross cloud services. >>Yeah. I mean, I think it's different for each of these layers, right? So as I was alluding to for management, I mean, you can go each one of them by themselves, there is one way of Mo doing multi-cloud, which is multiple clouds. Right. You could say, look, I'm gonna build a great product for AWS. And then I'm gonna build a great product for Azure. I'm gonna build a great product for Google. That's not what aria is. Aria is a true multi-cloud, which means it pulls data in from multiple places. Right? So there are two or three, there are three things that aria has done. That's I think is super interesting. One is they're not trying to take all the data and bring it in. They're trying to federate the data sources. And secondly, they're doing it in real time and they're able to construct this graph of a customer's cloud resources. >>Right. So to keep the graph constructed and pulling data, federating data, I think that's a very interesting concept. The second thing that, like I said is it's a real time because in the cloud, a container might come and go like that. Like that is a second technical challenge. The third it's not as much a technical challenge, but I really like what they have done for the interface they've used GraphQL. Right? So it's not about if you remember in the old world, people talk about single pan or glass, et cetera. No, this is nothing to do with pan or glass. This is a data model. That's a graph and a query language that's suited for that. So you can literally think of whatever you wanna write. You can write and express it in GraphQL and pull all sorts of management applications. You can say, Hey, I can look at cost. I can look at metrics. I can look at whatever it is. It's not five different types of applications. It's one, that's what I think had to do it at scale is the other problem. And, and >>The, the technical enable there is just it's good software. It's a protocol. It's >>No, no, it's, it's, it's it's software. It's a data model. And it's the Federation architecture that they've got, which is open. Right. You can pull in data from Datadog, just as well as from >>Pretty >>Much anything data from VR op we don't care. Right? >>Yeah. Yeah. 
So rego, I have to ask you, I'm glad you like the Supercloud cuz you know, we, we think multi-cloud still early, but coming fast. I mean, everyone has multiple clouds, but spanning this idea of spanning across has interesting sequences. Do you data, do you do computer both and a lot of good things happening. Kubernetes been containers, all that good stuff. Okay. How do you see the first rev of multi-cloud evolving? Like is it what happens? What's the sequence, what's the order of operations for a client standpoint? Customer standpoint of, of multicloud or Supercloud because we think we're seeing it as a refactoring of something like snowflake, they're a data base, they're a data warehouse on the cloud. They, they say data cloud they'd they like they'll tell us no, you, we're not a data. We're not a data warehouse. We're data cloud. Okay. You're a data warehouse refactored for the CapEx from Amazon and cooler, newer things. Yeah, yeah, yeah. That's a behavior change. Yeah. But it's still a data warehouse. Yeah. How do you see this multi-cloud environment? Refactoring? Is there something that you see that might be different? That's the same if you know what I'm saying? Like what's what, what's the ne the new thing that's happening with multi-cloud, that's different than just saying I'm I'm doing SAS on the cloud. >>Yeah. So I would say, I would point to a, a couple of things that are different. Firstly, my, the answer depends on which category you are in. Like the category that snowflake is in is very different than Kubernetes or >>Something or Mongo DB, right? >>Yeah. Or Mongo DB. So, so it is not appropriate to talk about one multi-cloud approach across data and compute and so, so on and so forth. So I'll talk about the spaces that we play. Right. So step one, for most customers is two application architectures, right? The cloud native architecture and an enterprise native architecture and tying that together either through data or through networks or through et cetera. So that's where most of the customers are. Right. And then I would say step two is to bring these things together in a more, in a closer fashion and that's where we are going. And that is why you saw the cloud universal announcement and that's already, you've seen the Tansu announcement, et cetera. So it's really, the step one was two distinct clouds. That is just two separate islands. >>So the other thing that we did, that's really what my, the other thing that I'd like to get to your reaction on, cause this is great. You're like a masterclass in the cube here. Yeah, totally is. We see customers becoming super clouds because they're getting the benefit of, of VMware, AWS. And so if I'm like a media company or insurance company, if I have scale, if I continue to invest in, in cloud native development, I do all these things. I'm gonna have a da data scale advantage, possibly agile, which means I can build apps and functionality very quick for customers. I might become my own cloud within the vertical. Exactly. And so I could then service other people in the insurance vertical if I'm the insurance company with my technology and create a separate power curve that never existed before. Cause the CapEx is off the table, it's operating expense. Yep. That runs into the income statement. Yep. This is a fundamental business model shift and an advantage of this kind of scenario. >>And that's why I don't think snowflakes, >>What's your reaction to that? 
Cuz that's something that, that is not really, talk's highly nuanced and situational. But if Goldman Sachs builds the biggest cloud on the planet for financial service for their own benefit, why wouldn't they >>Exactly. >>And they're >>Gonna build it. They sort of hinted at it that when they were up on stage on AWS, right. That is just their first big step. I'm pretty sure over time they would be using other clouds. Think >>They already are on >>Prem. Yeah. On prem. Exactly. They're using VMware technology there. Right? I mean think about it, AWS. I don't know how many billions of dollars they're spending on AWS R and D Microsoft is doing the same thing. Google's doing the same thing we are doing. Not as much as them that you're doing oral chair. Yeah. If you are a CIO, you would be insane not to take advantage of all of this IP that's getting created and say, look, I'm just gonna bet on one. Doesn't make any sense. Right. So that's what you're seeing. And then >>I think >>The really smart companies, like you talked about would say, look, I will do something for my industry that uses these underlying clouds as the substrate, but encapsulates my IP and my operating model that I then offer to other >>Partners. Yeah. And their incentive for differentiation is scale. Yeah. And capability. And that's a super cloud. That's a, or would be say it environment. >>Yeah. But this is why this, >>It seems like the same >>Game, but >>This, I mean, I think it environment is different than >>Well, I mean it advantage to help the business, the old day service, you >>Said snowflake guys out the marketing guys. So you, >>You said snowflake data warehouse. See, I don't think it's in data warehouse. It's not, that's like saying, you >>Know, I, over >>VMware is a virtualization company or service now is a help desk tool. I, this is the change. Yes. That's occurring. Yes. And that you're enabling. So take the Goldman Sachs example. They're gonna run OnPrem. They're gonna use your infrastructure to do selfer. They're gonna build on AWS CapEx. They're gonna go across clouds and they're gonna need some multi-cloud services. And that's your opportunity. >>Exactly. That's that's really, when you, in the keynote, I talked about cloud universal. Right? So think of a future where we can go to a customer and say, Mr. Customer buy thousand scores, a hundred thousand cores, whatever capacity you can use it, any which way you want on any application platform. Right. And it could be OnPrem. It could be in the cloud, in the cloud of their choice in multiple clouds. And this thing can be fungible and they can tie it to the right services. If they like SageMaker they could tie it to Sage or Aurora. They could tie it to Aurora, cetera, et cetera. So I think that's really the foundation that we are setting. Well, I think, I >>Mean, you're building a cloud across clouds. I mean, that's the way I look at it. And, and that's why it's, to me, the, the DPU announcement, the project Monterey coming to fruition is so important. Yeah. Because if you don't have that, if you're not on that new Silicon curve yep. You're gonna be left behind. Oh, >>Absolutely. It allows us to build things that you would not otherwise be able to do, >>Not to pat ourselves on the back Ragu. But we, in what, 2013 day we said, feel >>Free. >>We, we said with Lou Tucker when OpenStack was crashing. Yeah. Yeah. And then Kubernetes was just a paper. We said, this could be the interoperability layer. Yeah. You got it. 
And you could have inter clouding cuz there was no clouding. I was gonna riff on inter networking. But if you remember inter networking during the OSI model, TCP and IP were hardened after the physical data link layer was taken care of. So that enabled an entire new industry that was open, open interconnect. Right. So we were saying inter clouding. So what you're kind of getting at with cross cloud is you're kind of creating this routing model if you will. Not necessarily routing, but like connection inter clouding, we called it. I think it's kinda a terrible name. >>What you said about Kubernetes is super critical. It is turning out to be the infrastructure API so long. It has been an infrastructure API for a certain cluster. Right. But if you think about what we said about VSE eight with VSE eight Kubernetes becomes the data center API. Now we sort of glossed over the point of the keynote, but you could do operations storage, anything that you can do on vSphere, you can do using a Kubernetes API. Yeah. And of course you can do all the containers in the Kubernetes clusters and et cetera, is what you could always do. Now you could do that on a VMware environment. OnPrem, you could do that on EKS. Now Kubernetes has become the standard programming model for infrastructure across. It >>Was the great equalizer. Yeah. You, we used to say Amazon turned the data center through an API. It turns, turns of like a lot of APIs and a lot of complexity. Right. And Kubernetes changed. >>Well, the role, the role of defacto standards played a lot into the T C P I P revolution before it became a standard standard. What the question Raghu, as you look at, we had submit on earlier, we had tutorial on as well. What's the disruptive enabler from a defacto. What in your mind, what should, because Kubernetes became kind of defacto, even though it was in the CNCF and in an open source open, it wasn't really standard standard. There's no like standards, body, but what de facto thing has to happen in your mind's eye around making inter clouding or connecting clouds in a, in a way that's gonna create extensibility and growth. What do you see as a de facto thing that the industry should rally around? Obviously Kubernetes is one, is there something else that you see that's important for in an open way that the industry can discuss and, and get behind? >>Yeah. I mean, there are things like identity, right? Which are pretty critical. There is connectivity and networking. So these are all things that the industry can rally around. Right. And that goes along with any modern application infrastructure. So I would say those are the building blocks that need to happen on the data side. Of course there are so many choices as well. So >>How about, you know, security? I think about, you know, when after stuck net, the, the whole industry said, Hey, we have to do a better job of collaborating. And then when you said identity, it just sort of struck me. But then a lot of people tried to sort of monetize private reporting and things like that. So you do you see a movement within the technology industry to do a better job of collaborating to, to solve the acute, you know, security problems? >>Yeah. I think the customer pressure and government pressure right. Causes that way. Yeah. Even now, even in our current universe, you see, there is a lot of behind the scenes collaboration amongst the security teams of all of the tech companies that is not widely seen or known. Right. 
For example, my CISO knows the AWS CISO or the Microsoft CISO, and they all talk and share the right information about vulnerabilities, attacks, and so on and so forth. So there's already a certain amount of collaboration happening, and that'll only increase. >>You know, I was somewhat surprised I didn't hear more in your keynote about security. Is that just because you had such a strong multi-cloud message that you wanted to get across? Because your security story is very strong and deep when you get into the DPU side of things: the separation of resources and the encryption, end to end. >>Well, we have a phenomenal security story. Yeah, a real security story. And yes, I'll plead guilty to the fact that in the keynote you only have so much time. But what we are doing with NSX, and you will hear about some NSX projects if you have time to go to some of the sessions, there's one called Project Northstar and another called Project Watch, I think it's called, is gonna strengthen the security story even more. >>We think security and data are gonna be a big part of it. As CEO, I have to ask you, now that you're the CEO: first of all, I'd love to talk product with you, because we just had a great conversation. We want to kind of read the tea leaves and ask pointed questions, because we're putting the puzzle together in real time here with the audience. But as CEO, now you have a lot of discussions around the business. You've got the Broadcom thing happening, you've got the rename here, you've got multi-cloud, all good stuff happening. Dave and I were chatting before we came on this morning about the marketplace, about financial valuations and EBITDA numbers. You have so much strategic goodwill and investment in the oven right now, with the multi-year investments in cloud native on a trajectory, and you've got economies of scale there. It's just now coming out to be harvested, with more behind it. As you come into the Broadcom, or the new world wave that's coming, how do you talk about that value? Because you can't really put a number on it yet; there are some customers, but it's not like sales numbers. How do you make the argument to the PE-type folks out there, the EBITDA folks, and then all the strategic value? What's the conversation like, if you can share any? I know it's obviously a public company with all the things going on, but how do you talk about strategic value to numbers folks? >>Yeah, I mean, we are not talking to PE guys at all. The only conversation we have is helping Broadcom. >>Yeah, but the number people who are looking at the numbers, EBITDA, kind of... >>Yeah. I mean, you'd be surprised. For example, even with Broadcom, they look at the business holistically: what are the prospects of this business becoming a franchise that is durable and could drive a lot of value? So that's how they look at it, holistically. It's not number driven. >>They do? They look at that? >>Yeah, absolutely. So I think it's a misperception to say, hey, it's a numbers-driven conversation. It's a business-driven conversation, and Hock's been public about it. He says, look, I look at businesses: can they be leaders in their market?
Because leaders get, as we all know, a disproportionate share of the economic value. Is it a durable franchise that's gonna last 10 years or more? Obviously with technology changes in between, but 10 years or more. >>Or 10, and you've got your internal VMware talent, customers, and partners. >>Yeah, significant competitive advantage. So that's really where the conversation starts, and the numbers fall out of it. >>Got it. >>There's a track record too, that culture >>That VMware has. You've always had an engineering culture that's turned ideas and problems into products that have been very successful. >>Well, they have a different engineering culture. They're chips; you guys are software. >>Right, you guys know software. >>Yeah, I mean, they've been very successful with Broadcom, the standalone networking company, since they took it over. There's a lot of amazing innovation going on there. >>Yeah. Not that I'm smiling, but I want to kind of poke at this question and see if I get an answer out of you: when you talk to Hock Tan, does he feel like he bought a lot more than he thought, or does he know it's all here? >>Over the last two months they've been going through a very deliberate process of digging into each business, and he certainly feels like he got a phenomenal asset base. He said that to me even today after the keynote: the amazing amount of product capability that he's seeing in every one of our businesses. And that's been the constant frame. >>Well, congratulations on that. I've heard Hock talk about the shift to merchant silicon from custom silicon. But I wanted to ask you, when you look at things like AWS Nitro and Graviton and Trainium, and the advantage that AWS has with custom silicon, and you see Google and Microsoft and Alibaba following suit: would it benefit you to have custom silicon for the DPU, to have a tighter integration, or do you feel like, with the relationships that you have, that doesn't buy you anything? >>Yeah, I mean, we have pretty strong, in fact fantastic, relationships with NVIDIA and Intel and AMD. >>Pensando and AMD now. >>Yeah. I mean, we've been working with the Pensando team in their previous incarnations for years, when they were at Cisco, and the same thing with the Mellanox team as well as the original NVIDIA teams, and with Intel the collaboration goes right from the get-go of the company. So we don't feel a need for any of that. I mean, it's clear those cloud folks are going towards a vertical integration model in select portions of their stack, like you talked about, but there is always room for a horizontal integration model, and that's what we are a part of. So there'll be a number of DPU vendors, there'll be a number of CPU vendors, there'll be a number of storage vendors, et cetera, et cetera. And we think that is goodness, an alternative model compared to a vertically integrated one. >>And there are trade-offs, right? It's not one or the other. I used to talk to Al Shugart about this all the time. If you're vertically integrated, there may be some cost advantages, but then you've got flexibility advantages if you're using what the industry is building. Those are the trade-offs. >>Raghu, what are you excited about right now?
You've got a lot going on. Obviously a great event, the branding's good, love the graphics. I was kind of nervous about the name change; I liked VMworld, but you know, I'm kind of liking it. >>It doesn't readily roll off the tongue. >>I know, I had everyone miscue this morning already and say VMware Explorer. >>You pay Laura a fine. >>Yeah, a quarter in the curse jar for whatever I did wrong. I don't believe it, only a small mistake, and that's because the thing wasn't on. Okay, anyway, what's on your plate? What are some of the milestones you can share for your employees, your customers, and your partners out there watching who might wanna know what's next in the whole Broadcom-VMware situation? Is there a timeline? Can you talk publicly about what people can expect? >>Yeah, we talk all the time in the company about that, because even if there is no news, you need to talk about where we are. This is such a big transaction, and employees need to know where we are at every minute of the day. So we definitely talk about that, and we definitely talk about that with customers too. And where we are is that all the processes are on track. There is a regulatory track going on, and like I alluded to a few minutes ago, Broadcom is doing what they call the discovery phase of the integration planning, where they learn about the business. And then once that is done, they'll figure out what the operating model is. What Broadcom has said publicly is that the acquisition will close in their fiscal 23, which starts in November of this year and runs through October of next year. >>So... >>Anywhere in that window, okay, as to where it is in that window. >>All right, Raghu, thank you so much for taking valuable time out of your conference schedule here for theCUBE. Dave and I both really appreciate it, and we appreciate your friendship. Congratulations on the success as CEO; we've been following your trials, tribulations, and endeavors for many years, and it's been great to chat with you. >>Yeah, it's been great to chat with you, not just today but over a period of time, and you guys do great work with this. >>Yeah, and you guys are making all the right calls at VMware. All right, more coverage. I'm John Furrier with Dave Vellante. This is day one of three days of wall-to-wall coverage here in Moscone West, theCUBE's coverage of VMware Explore 22. We'll be right back.
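To ground the "with vSphere 8, Kubernetes becomes the data center API" point from the conversation above, here is a minimal sketch, assuming a cluster that exposes a VirtualMachine custom resource (as vSphere with Tanzu's VM Service does) and the official Kubernetes Python client. The CRD group, version, and spec field names shown are assumptions that can vary by release; the point is only that a VM is declared and submitted the same way any other Kubernetes object would be, whether the cluster runs on-prem on vSphere or as a managed service like EKS.

```python
# A minimal sketch of "Kubernetes as the data center API": instead of calling a
# vCenter-specific endpoint, a VM is declared as a Kubernetes custom resource and
# submitted through the standard Kubernetes client. Assumes a cluster that exposes
# a VirtualMachine CRD (e.g. vSphere with Tanzu's VM Service); the exact
# group/version and spec fields below are assumptions and may differ per release.
from kubernetes import client, config

vm_manifest = {
    "apiVersion": "vmoperator.vmware.com/v1alpha1",  # assumed CRD group/version
    "kind": "VirtualMachine",
    "metadata": {"name": "demo-vm", "namespace": "team-a"},
    "spec": {
        "className": "best-effort-small",  # VM class (CPU/memory shape), assumed name
        "imageName": "ubuntu-22.04",       # VM image published to the namespace, assumed name
        "powerState": "poweredOn",
        "storageClass": "vsan-default",    # assumed storage class name
    },
}


def main() -> None:
    config.load_kube_config()              # same kubeconfig flow as any other cluster
    api = client.CustomObjectsApi()
    api.create_namespaced_custom_object(
        group="vmoperator.vmware.com",
        version="v1alpha1",
        namespace="team-a",
        plural="virtualmachines",
        body=vm_manifest,
    )
    print("VirtualMachine submitted through the Kubernetes API")


if __name__ == "__main__":
    main()
```

The same pattern extends to storage and networking objects, which is what makes the Kubernetes API a common programming model for infrastructure rather than just a container scheduler.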

Published Date : Aug 30 2022
