Image Title

Search Results for Madrona Venture:

Jon Turow, Madrona Venture Group | CloudNativeSecurityCon 23


 

(upbeat music) >> Hello and welcome back to theCUBE. We're here in Palo Alto, California. I'm your host, John Furrier with a special guest here in the studio. As part of our Cloud Native SecurityCon Coverage we had an opportunity to bring in Jon Turow who is the partner at Madrona Venture Partners formerly with AWS and to talk about machine learning, foundational models, and how the future of AI is going to be impacted by some of the innovation around what's going on in the industry. ChatGPT has taken the world by storm. A million downloads, fastest to the million downloads there. Before some were saying it's just a gimmick. Others saying it's a game changer. Jon's here to break it down, and great to have you on. Thanks for coming in. >> Thanks John. Glad to be here. >> Thanks for coming on. So first of all, I'm glad you're here. First of all, because two things. One, you were formerly with AWS, got a lot of experience running projects at AWS. Now a partner at Madrona, a great firm doing great deals, and they had this future at modern application kind of thesis. Now you are putting out some content recently around foundational models. You're deep into computer vision. You were the IoT general manager at AWS among other things, Greengrass. So you know a lot about data. You know a lot about some of this automation, some of the edge stuff. You've been in the middle of all these kind of areas that now seem to be the next wave coming. So I wanted to ask you what your thoughts are of how the machine learning and this new automation wave is coming in, this AI tools are coming out. Is it a platform? Is it going to be smarter? What feeds AI? What's your take on this whole foundational big movement into AI? What's your general reaction to all this? >> So, thanks, Jon, again for having me here. Really excited to talk about these things. AI has been coming for a long time. It's been kind of the next big thing. Always just over the horizon for quite some time. And we've seen really compelling applications in generations before and until now. Amazon and AWS have introduced a lot of them. My firm, Madrona Venture Group has invested in some of those early players as well. But what we're seeing now is something categorically different. That's really exciting and feels like a durable change. And I can try and explain what that is. We have these really large models that are useful in a general way. They can be applied to a lot of different tasks beyond the specific task that the designers envisioned. That makes them more flexible, that makes them more useful for building applications than what we've seen before. And so that, we can talk about the depths of it, but in a nutshell, that's why I think people are really excited. >> And I think one of the things that you wrote about that jumped out at me is that this seems to be this moment where there's been a multiple decades of nerds and computer scientists and programmers and data thinkers around waiting for AI to blossom. And it's like they're scratching that itch. Every year is going to be, and it's like the bottleneck's always been compute power. And we've seen other areas, genome sequencing, all kinds of high computation things where required high forms computing. But now there's no real bottleneck to compute. You got cloud. And so you're starting to see the emergence of a massive acceleration of where AI's been and where it needs to be going. Now, it's almost like it's got a reboot. It's almost a renaissance in the AI community with a whole nother macro environmental things happening. Cloud, younger generation, applications proliferate from mobile to cloud native. It's the perfect storm for this kind of moment to switch over. Am I overreading that? Is that right? >> You're right. And it's been cooking for a cycle or two. And let me try and explain why that is. We have cloud and AWS launch in whatever it was, 2006, and offered more compute to more people than really was possible before. Initially that was about taking existing applications and running them more easily in a bigger scale. But in that period of time what's also become possible is new kinds of computation that really weren't practical or even possible without that vast amount of compute. And so one result that came of that is something called the transformer AI model architecture. And Google came out with that, published a paper in 2017. And what that says is, with a transformer model you can actually train an arbitrarily large amount of data into a model, and see what happens. That's what Google demonstrated in 2017. The what happens is the really exciting part because when you do that, what you start to see, when models exceed a certain size that we had never really seen before all of a sudden they get what we call emerging capabilities of complex reasoning and reasoning outside a domain and reasoning with data. The kinds of things that people describe as spooky when they play with something like ChatGPT. That's the underlying term. We don't as an industry quite know why it happens or how it happens, but we can measure that it does. So cloud enables new kinds of math and science. New kinds of math and science allow new kinds of experimentation. And that experimentation has led to this new generation of models. >> So one of the debates we had on theCUBE at our Supercloud event last month was, what's the barriers to entry for say OpenAI, for instance? Obviously, I weighed in aggressively and said, "The barriers for getting into cloud are high because all the CapEx." And Howie Xu formerly VMware, now at ZScaler, he's an AI machine learning guy. He was like, "Well, you can spend $100 million and replicate it." I saw a quote that set up for 180,000 I can get this other package. What's the barriers to entry? Is ChatGPT or OpenAI, does it have sustainability? Is it easy to get into? What is the market like for AI? I mean, because a lot of entrepreneurs are jumping in. I mean, I just read a story today. San Francisco's got more inbound migration because of the AI action happening, Seattle's booming, Boston with MIT's been working on neural networks for generations. That's what we've found the answer. Get off the neural network, Boston jump on the AI bus. So there's total excitement for this. People are enthusiastic around this area. >> You can think of an iPhone versus Android tension that's happening today. In the iPhone world, there are proprietary models from OpenAI who you might consider as the leader. There's Cohere, there's AI21, there's Anthropic, Google's going to have their own, and a few others. These are proprietary models that developers can build on top of, get started really quickly. They're measured to have the highest accuracy and the highest performance today. That's the proprietary side. On the other side, there is an open source part of the world. These are a proliferation of model architectures that developers and practitioners can take off the shelf and train themselves. Typically found in Hugging face. What people seem to think is that the accuracy and performance of the open source models is something like 18 to 20 months behind the accuracy and performance of the proprietary models. But on the other hand, there's infinite flexibility for teams that are capable enough. So you're going to see teams choose sides based on whether they want speed or flexibility. >> That's interesting. And that brings up a point I was talking to a startup and the debate was, do you abstract away from the hardware and be software-defined or software-led on the AI side and let the hardware side just extremely accelerate on its own, 'cause it's flywheel? So again, back to proprietary, that's with hardware kind of bundled in, bolted on. Is it accelerator or is it bolted on or is it part of it? So to me, I think that the big struggle in understanding this is that which one will end up being right. I mean, is it a beta max versus VHS kind of thing going on? Or iPhone, Android, I mean iPhone makes a lot of sense, but if you're Apple, but is there an Apple moment in the machine learning? >> In proprietary models, here does seem to be a jump ball. That there's going to be a virtuous flywheel that emerges that, for example, all these excitement about ChatGPT. What's really exciting about it is it's really easy to use. The technology isn't so different from what we've seen before even from OpenAI. You mentioned a million users in a short period of time, all providing training data for OpenAI that makes their underlying models, their next generation even better. So it's not unreasonable to guess that there's going to be power laws that emerge on the proprietary side. What I think history has shown is that iPhone, Android, Windows, Linux, there seems to be gravity towards this yin and yang. And my guess, and what other people seem to think is going to be the case is that we're going to continue to see these two poles of AI. >> So let's get into the relationship with data because I've been emerging myself with ChatGPT, fascinated by the ease of use, yes, but also the fidelity of how you query it. And I felt like when I was doing writing SQL back in the eighties and nineties where SQL was emerging. You had to be really a guru at the SQL to get the answers you wanted. It seems like the querying into ChatGPT is a good thing if you know how to talk to it. Labeling whether your input is and it does a great job if you feed it right. If you ask a generic questions like Google. It's like a Google search. It gives you great format, sounds credible, but the facts are kind of wrong. >> That's right. >> That's where general consensus is coming on. So what does that mean? That means people are on one hand saying, "Ah, it's bullshit 'cause it's wrong." But I look at, I'm like, "Wow, that's that's compelling." 'Cause if you feed it the right data, so now we're in the data modeling here, so the role of data's going to be critical. Is there a data operating system emerging? Because if this thing continues to go the way it's going you can almost imagine as you would look at companies to invest in. Who's going to be right on this? What's going to scale? What's sustainable? What could build a durable company? It might not look what like what people think it is. I mean, I remember when Google started everyone thought it was the worst search engine because it wasn't a portal. But it was the best organic search on the planet became successful. So I'm trying to figure out like, okay, how do you read this? How do you read the tea leaves? >> Yeah. There are a few different ways that companies can differentiate themselves. Teams with galactic capabilities to take an open source model and then change the architecture and retrain and go down to the silicon. They can do things that might not have been possible for other teams to do. There's a company that that we're proud to be investors in called RunwayML that provides video accelerated, sorry, AI accelerated video editing capabilities. They were used in everything, everywhere all at once and some others. In order to build RunwayML, they needed a vision of what the future was going to look like and they needed to make deep contributions to the science that was going to enable all that. But not every team has those capabilities, maybe nor should they. So as far as how other teams are going to differentiate there's a couple of things that they can do. One is called prompt engineering where they shape on behalf of their own users exactly how the prompt to get fed to the underlying model. It's not clear whether that's going to be a durable problem or whether like Google, we consumers are going to start to get more intuitive about this. That's one. The second is what's called information retrieval. How can I get information about the world outside, information from a database or a data store or whatever service into these models so they can reason about them. And the third is, this is going to sound funny, but attribution. Just like you would do in a news report or an academic paper. If you can state where your facts are coming from, the downstream consumer or the human being who has to use that information actually is going to be able to make better sense of it and rely better on it. So that's prompt engineering, that's retrieval, and that's attribution. >> So that brings me to my next point I want to dig in on is the foundational model stack that you published. And I'll start by saying that with ChatGPT, if you take out the naysayers who are like throwing cold water on it about being a gimmick or whatever, and then you got the other side, I would call the alpha nerds who are like they can see, "Wow, this is amazing." This is truly NextGen. This isn't yesterday's chatbot nonsense. They're like, they're all over it. It's that everybody's using it right now in every vertical. I heard someone using it for security logs. I heard a data center, hardware vendor using it for pushing out appsec review updates. I mean, I've heard corner cases. We're using it for theCUBE to put our metadata in. So there's a horizontal use case of value. So to me that tells me it's a market there. So when you have horizontal scalability in the use case you're going to have a stack. So you publish this stack and it has an application at the top, applications like Jasper out there. You're seeing ChatGPT. But you go after the bottom, you got silicon, cloud, foundational model operations, the foundational models themselves, tooling, sources, actions. Where'd you get this from? How'd you put this together? Did you just work backwards from the startups or was there a thesis behind this? Could you share your thoughts behind this foundational model stack? >> Sure. Well, I'm a recovering product manager and my job that I think about as a product manager is who is my customer and what problem he wants to solve. And so to put myself in the mindset of an application developer and a founder who is actually my customer as a partner at Madrona, I think about what technology and resources does she need to be really powerful, to be able to take a brilliant idea, and actually bring that to life. And if you spend time with that community, which I do and I've met with hundreds of founders now who are trying to do exactly this, you can see that the stack is emerging. In fact, we first drew it in, not in January 2023, but October 2022. And if you look at the difference between the October '22 and January '23 stacks you're going to see that holes in the stack that we identified in October around tooling and around foundation model ops and the rest are organically starting to get filled because of how much demand from the developers at the top of the stack. >> If you look at the young generation coming out and even some of the analysts, I was just reading an analyst report on who's following the whole data stacks area, Databricks, Snowflake, there's variety of analytics, realtime AI, data's hot. There's a lot of engineers coming out that were either data scientists or I would call data platform engineering folks are becoming very key resources in this area. What's the skillset emerging and what's the mindset of that entrepreneur that sees the opportunity? How does these startups come together? Is there a pattern in the formation? Is there a pattern in the competency or proficiency around the talent behind these ventures? >> Yes. I would say there's two groups. The first is a very distinct pattern, John. For the past 10 years or a little more we've seen a pattern of democratization of ML where more and more people had access to this powerful science and technology. And since about 2017, with the rise of the transformer architecture in these foundation models, that pattern has reversed. All of a sudden what has become broader access is now shrinking to a pretty small group of scientists who can actually train and manipulate the architectures of these models themselves. So that's one. And what that means is the teams who can do that have huge ability to make the future happen in ways that other people don't have access to yet. That's one. The second is there is a broader population of people who by definition has even more collective imagination 'cause there's even more people who sees what should be possible and can use things like the proprietary models, like the OpenAI models that are available off the shelf and try to create something that maybe nobody has seen before. And when they do that, Jasper AI is a great example of that. Jasper AI is a company that creates marketing copy automatically with generative models such as GPT-3. They do that and it's really useful and it's almost fun for a marketer to use that. But there are going to be questions of how they can defend that against someone else who has access to the same technology. It's a different population of founders who has to find other sources of differentiation without being able to go all the way down to the the silicon and the science. >> Yeah, and it's going to be also opportunity recognition is one thing. Building a viable venture product market fit. You got competition. And so when things get crowded you got to have some differentiation. I think that's going to be the key. And that's where I was trying to figure out and I think data with scale I think are big ones. Where's the vulnerability in the stack in terms of gaps? Where's the white space? I shouldn't say vulnerability. I should say where's the opportunity, where's the white space in the stack that you see opportunities for entrepreneurs to attack? >> I would say there's two. At the application level, there is almost infinite opportunity, John, because almost every kind of application is about to be reimagined or disrupted with a new generation that takes advantage of this really powerful new technology. And so if there is a kind of application in almost any vertical, it's hard to rule something out. Almost any vertical that a founder wishes she had created the original app in, well, now it's her time. So that's one. The second is, if you look at the tooling layer that we discussed, tooling is a really powerful way that you can provide more flexibility to app developers to get more differentiation for themselves. And the tooling layer is still forming. This is the interface between the models themselves and the applications. Tools that help bring in data, as you mentioned, connect to external actions, bring context across multiple calls, chain together multiple models. These kinds of things, there's huge opportunity there. >> Well, Jon, I really appreciate you coming in. I had a couple more questions, but I will take a minute to read some of your bios for the audience and we'll get into, I won't embarrass you, but I want to set the context. You said you were recovering product manager, 10 plus years at AWS. Obviously, recovering from AWS, which is a whole nother dimension of recovering. In all seriousness, I talked to Andy Jassy around that time and Dr. Matt Wood and it was about that time when AI was just getting on the radar when they started. So you guys started seeing the wave coming in early on. So I remember at that time as Amazon was starting to grow significantly and even just stock price and overall growth. From a tech perspective, it was pretty clear what was coming, so you were there when this tsunami hit. >> Jon: That's right. >> And you had a front row seat building tech, you were led the product teams for Computer Vision AI, Textract, AI intelligence for document processing, recognition for image and video analysis. You wrote the business product plan for AWS IoT and Greengrass, which we've covered a lot in theCUBE, which extends out to the whole edge thing. So you know a lot about AI/ML, edge computing, IOT, messaging, which I call the law of small numbers that scale become big. This is a big new thing. So as a former AWS leader who's been there and at Madrona, what's your investment thesis as you start to peruse the landscape and talk to entrepreneurs as you got the stack? What's the big picture? What are you looking for? What's the thesis? How do you see this next five years emerging? >> Five years is a really long time given some of this science is only six months out. I'll start with some, no pun intended, some foundational things. And we can talk about some implications of the technology. The basics are the same as they've always been. We want, what I like to call customers with their hair on fire. So they have problems, so urgent they'll buy half a product. The joke is if your hair is on fire you might want a bucket of cold water, but you'll take a tennis racket and you'll beat yourself over the head to put the fire out. You want those customers 'cause they'll meet you more than halfway. And when you find them, you can obsess about them and you can get better every day. So we want customers with their hair on fire. We want founders who have empathy for those customers, understand what is going to be required to serve them really well, and have what I like to call founder-market fit to be able to build the products that those customers are going to need. >> And because that's a good strategy from an emerging, not yet fully baked out requirements definition. >> Jon: That's right. >> Enough where directionally they're leaning in, more than in, they're part of the product development process. >> That's right. And when you're doing early stage development, which is where I personally spend a lot of my time at the seed and A and a little bit beyond that stage often that's going to be what you have to go on because the future is going to be so complex that you can't see the curves beyond it. But if you have customers with their hair on fire and talented founders who have the capability to serve those customers, that's got me interested. >> So if I'm an entrepreneur, I walk in and say, "I have customers that have their hair on fire." What kind of checks do you write? What's the kind of the average you're seeing for seed and series? Probably seed, seed rounds and series As. >> It can depend. I have seen seed rounds of double digit million dollars. I have seen seed rounds much smaller than that. It really depends on what is going to be the right thing for these founders to prove out the hypothesis that they're testing that says, "Look, we have this customer with her hair on fire. We think we can build at least a tennis racket that she can use to start beating herself over the head and put the fire out. And then we're going to have something really interesting that we can scale up from there and we can make the future happen. >> So it sounds like your advice to founders is go out and find some customers, show them a product, don't obsess over full completion, get some sort of vibe on fit and go from there. >> Yeah, and I think by the time founders come to me they may not have a product, they may not have a deck, but if they have a customer with her hair on fire, then I'm really interested. >> Well, I always love the professional services angle on these markets. You go in and you get some business and you understand it. Walk away if you don't like it, but you see the hair on fire, then you go in product mode. >> That's right. >> All Right, Jon, thank you for coming on theCUBE. Really appreciate you stopping by the studio and good luck on your investments. Great to see you. >> You too. >> Thanks for coming on. >> Thank you, Jon. >> CUBE coverage here at Palo Alto. I'm John Furrier, your host. More coverage with CUBE Conversations after this break. (upbeat music)

Published Date : Feb 2 2023

SUMMARY :

and great to have you on. that now seem to be the next wave coming. It's been kind of the next big thing. is that this seems to be this moment and offered more compute to more people What's the barriers to entry? is that the accuracy and the debate was, do you that there's going to be power laws but also the fidelity of how you query it. going to be critical. exactly how the prompt to get So that brings me to my next point and actually bring that to life. and even some of the analysts, But there are going to be questions Yeah, and it's going to be and the applications. the radar when they started. and talk to entrepreneurs the head to put the fire out. And because that's a good of the product development process. that you can't see the curves beyond it. What kind of checks do you write? and put the fire out. to founders is go out time founders come to me and you understand it. stopping by the studio More coverage with CUBE

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
AmazonORGANIZATION

0.99+

JonPERSON

0.99+

AWSORGANIZATION

0.99+

JohnPERSON

0.99+

John FurrierPERSON

0.99+

Andy JassyPERSON

0.99+

2017DATE

0.99+

January 2023DATE

0.99+

Jon TurowPERSON

0.99+

OctoberDATE

0.99+

18QUANTITY

0.99+

MITORGANIZATION

0.99+

$100 millionQUANTITY

0.99+

Palo AltoLOCATION

0.99+

10 plus yearsQUANTITY

0.99+

iPhoneCOMMERCIAL_ITEM

0.99+

GoogleORGANIZATION

0.99+

twoQUANTITY

0.99+

October 2022DATE

0.99+

hundredsQUANTITY

0.99+

MadronaORGANIZATION

0.99+

AppleORGANIZATION

0.99+

Madrona Venture PartnersORGANIZATION

0.99+

January '23DATE

0.99+

two groupsQUANTITY

0.99+

Matt WoodPERSON

0.99+

Madrona Venture GroupORGANIZATION

0.99+

180,000QUANTITY

0.99+

October '22DATE

0.99+

JasperTITLE

0.99+

Palo Alto, CaliforniaLOCATION

0.99+

six monthsQUANTITY

0.99+

2006DATE

0.99+

million downloadsQUANTITY

0.99+

Five yearsQUANTITY

0.99+

SQLTITLE

0.99+

last monthDATE

0.99+

two polesQUANTITY

0.99+

firstQUANTITY

0.99+

Howie XuPERSON

0.99+

VMwareORGANIZATION

0.99+

thirdQUANTITY

0.99+

20 monthsQUANTITY

0.99+

GreengrassORGANIZATION

0.99+

Madrona Venture GroupORGANIZATION

0.98+

secondQUANTITY

0.98+

OneQUANTITY

0.98+

SupercloudEVENT

0.98+

RunwayMLTITLE

0.98+

San FranciscoLOCATION

0.98+

ZScalerORGANIZATION

0.98+

yesterdayDATE

0.98+

oneQUANTITY

0.98+

FirstQUANTITY

0.97+

CapExORGANIZATION

0.97+

eightiesDATE

0.97+

ChatGPTTITLE

0.96+

Dr.PERSON

0.96+

SiliconANGLE News | Swami Sivasubramanian Extended Version


 

(bright upbeat music) >> Hello, everyone. Welcome to SiliconANGLE News breaking story here. Amazon Web Services expanding their relationship with Hugging Face, breaking news here on SiliconANGLE. I'm John Furrier, SiliconANGLE reporter, founder, and also co-host of theCUBE. And I have with me, Swami, from Amazon Web Services, vice president of database, analytics, machine learning with AWS. Swami, great to have you on for this breaking news segment on AWS's big news. Thanks for coming on and taking the time. >> Hey, John, pleasure to be here. >> You know- >> Looking forward to it. >> We've had many conversations on theCUBE over the years, we've watched Amazon really move fast into the large data modeling, SageMaker became a very smashing success, obviously you've been on this for a while. Now with ChatGPT OpenAI, a lot of buzz going mainstream, takes it from behind the curtain inside the ropes, if you will, in the industry to a mainstream. And so this is a big moment, I think, in the industry, I want to get your perspective, because your news with Hugging Face, I think is another tell sign that we're about to tip over into a new accelerated growth around making AI now application aware, application centric, more programmable, more API access. What's the big news about, with AWS Hugging Face, you know, what's going on with this announcement? >> Yeah. First of all, they're very excited to announce our expanded collaboration with Hugging Face, because with this partnership, our goal, as you all know, I mean, Hugging Face, I consider them like the GitHub for machine learning. And with this partnership, Hugging Face and AWS, we'll be able to democratize AI for a broad range of developers, not just specific deep AI startups. And now with this, we can accelerate the training, fine tuning and deployment of these large language models, and vision models from Hugging Face in the cloud. And the broader context, when you step back and see what customer problem we are trying to solve with this announcement, essentially if you see these foundational models, are used to now create like a huge number of applications, suggest like tech summarization, question answering, or search image generation, creative, other things. And these are all stuff we are seeing in the likes of these ChatGPT style applications. But there is a broad range of enterprise use cases that we don't even talk about. And it's because these kind of transformative, generative AI capabilities and models are not available to, I mean, millions of developers. And because either training these elements from scratch can be very expensive or time consuming and need deep expertise, or more importantly, they don't need these generic models, they need them to be fine tuned for the specific use cases. And one of the biggest complaints we hear is that these models, when they try to use it for real production use cases, they are incredibly expensive to train and incredibly expensive to run inference on, to use it at a production scale. So, and unlike web search style applications, where the margins can be really huge, here in production use cases and enterprises, you want efficiency at scale. That's where Hugging Face and AWS share our mission. And by integrating with Trainium and Inferentia, we're able to handle the cost efficient training and inference at scale, I'll deep dive on it. And by teaming up on the SageMaker front, now the time it takes to build these models and fine tune them is also coming down. So that's what makes this partnership very unique as well. So I'm very excited. >> I want to get into the time savings and the cost savings as well on the training and inference, it's a huge issue, but before we get into that, just how long have you guys been working with Hugging Face? I know there's a previous relationship, this is an expansion of that relationship, can you comment on what's different about what's happened before and then now? >> Yeah. So, Hugging Face, we have had a great relationship in the past few years as well, where they have actually made their models available to run on AWS, you know, fashion. Even in fact, their Bloom Project was something many of our customers even used. Bloom Project, for context, is their open source project which builds a GPT-3 style model. And now with this expanded collaboration, now Hugging Face selected AWS for that next generation office generative AI model, building on their highly successful Bloom Project as well. And the nice thing is, now, by direct integration with Trainium and Inferentia, where you get cost savings in a really significant way, now, for instance, Trn1 can provide up to 50% cost to train savings, and Inferentia can deliver up to 60% better costs, and four x more higher throughput than (indistinct). Now, these models, especially as they train that next generation generative AI models, it is going to be, not only more accessible to all the developers, who use it in open, so it'll be a lot cheaper as well. And that's what makes this moment really exciting, because we can't democratize AI unless we make it broadly accessible and cost efficient and easy to program and use as well. >> Yeah. >> So very exciting. >> I'll get into the SageMaker and CodeWhisperer angle in a second, but you hit on some good points there. One, accessibility, which is, I call the democratization, which is getting this in the hands of developers, and/or AI to develop, we'll get into that in a second. So, access to coding and Git reasoning is a whole nother wave. But the three things I know you've been working on, I want to put in the buckets here and comment, one, I know you've, over the years, been working on saving time to train, that's a big point, you mentioned some of those stats, also cost, 'cause now cost is an equation on, you know, bundling whether you're uncoupling with hardware and software, that's a big issue. Where do I find the GPUs? Where's the horsepower cost? And then also sustainability. You've mentioned that in the past, is there a sustainability angle here? Can you talk about those three things, time, cost, and sustainability? >> Certainly. So if you look at it from the AWS perspective, we have been supporting customers doing machine learning for the past years. Just for broader context, Amazon has been doing ML the past two decades right from the early days of ML powered recommendation to actually also supporting all kinds of generative AI applications. If you look at even generative AI application within Amazon, Amazon search, when you go search for a product and so forth, we have a team called MFi within Amazon search that helps bring these large language models into creating highly accurate search results. And these are created with models, really large models with tens of billions of parameters, scales to thousands of training jobs every month and trained on large model of hardware. And this is an example of a really good large language foundation model application running at production scale, and also, of course, Alexa, which uses a large generator model as well. And they actually even had a research paper that showed that they are more, and do better in accuracy than other systems like GPT-3 and whatnot. So, and we also touched on things like CodeWhisperer, which uses generative AI to improve developer productivity, but in a responsible manner, because 40% of some of the studies show 40% of this generated code had serious security flaws in it. This is where we didn't just do generative AI, we combined with automated reasoning capabilities, which is a very, very useful technique to identify these issues and couple them so that it produces highly secure code as well. Now, all these learnings taught us few things, and which is what you put in these three buckets. And yeah, like more than 100,000 customers using ML and AI services, including leading startups in the generative AI space, like stability AI, AI21 Labs, or Hugging Face, or even Alexa, for that matter. They care about, I put them in three dimension, one is around cost, which we touched on with Trainium and Inferentia, where we actually, the Trainium, you provide to 50% better cost savings, but the other aspect is, Trainium is a lot more power efficient as well compared to traditional one. And Inferentia is also better in terms of throughput, when it comes to what it is capable of. Like it is able to deliver up to three x higher compute performance and four x higher throughput, compared to it's previous generation, and it is extremely cost efficient and power efficient as well. >> Well. >> Now, the second element that really is important is in a day, developers deeply value the time it takes to build these models, and they don't want to build models from scratch. And this is where SageMaker, which is, even going to Kaggle uses, this is what it is, number one, enterprise ML platform. What it did to traditional machine learning, where tens of thousands of customers use StageMaker today, including the ones I mentioned, is that what used to take like months to build these models have dropped down to now a matter of days, if not less. Now, a generative AI, the cost of building these models, if you look at the landscape, the model parameter size had jumped by more than thousand X in the past three years, thousand x. And that means the training is like a really big distributed systems problem. How do you actually scale these model training? How do you actually ensure that you utilize these efficiently? Because these machines are very expensive, let alone they consume a lot of power. So, this is where SageMaker capability to build, automatically train, tune, and deploy models really concern this, especially with this distributor training infrastructure, and those are some of the reasons why some of the leading generative AI startups are actually leveraging it, because they do not want a giant infrastructure team, which is constantly tuning and fine tuning, and keeping these clusters alive. >> It sounds like a lot like what startups are doing with the cloud early days, no data center, you move to the cloud. So, this is the trend we're seeing, right? You guys are making it easier for developers with Hugging Face, I get that. I love that GitHub for machine learning, large language models are complex and expensive to build, but not anymore, you got Trainium and Inferentia, developers can get faster time to value, but then you got the transformers data sets, token libraries, all that optimized for generator. This is a perfect storm for startups. Jon Turow, a former AWS person, who used to work, I think for you, is now a VC at Madrona Venture, he and I were talking about the generator AI landscape, it's exploding with startups. Every alpha entrepreneur out there is seeing this as the next frontier, that's the 20 mile stairs, next 10 years is going to be huge. What is the big thing that's happened? 'Cause some people were saying, the founder of Yquem said, "Oh, the start ups won't be real, because they don't all have AI experience." John Markoff, former New York Times writer told me that, AI, there's so much work done, this is going to explode, accelerate really fast, because it's almost like it's been waiting for this moment. What's your reaction? >> I actually think there is going to be an explosion of startups, not because they need to be AI startups, but now finally AI is really accessible or going to be accessible, so that they can create remarkable applications, either for enterprises or for disrupting actually how customer service is being done or how creative tools are being built. And I mean, this is going to change in many ways. When we think about generative AI, we always like to think of how it generates like school homework or arts or music or whatnot, but when you look at it on the practical side, generative AI is being actually used across various industries. I'll give an example of like Autodesk. Autodesk is a customer who runs an AWS and SageMaker. They already have an offering that enables generated design, where designers can generate many structural designs for products, whereby you give a specific set of constraints and they actually can generate a structure accordingly. And we see similar kind of trend across various industries, where it can be around creative media editing or various others. I have the strong sense that literally, in the next few years, just like now, conventional machine learning is embedded in every application, every mobile app that we see, it is pervasive, and we don't even think twice about it, same way, like almost all apps are built on cloud. Generative AI is going to be part of every startup, and they are going to create remarkable experiences without needing actually, these deep generative AI scientists. But you won't get that until you actually make these models accessible. And I also don't think one model is going to rule the world, then you want these developers to have access to broad range of models. Just like, go back to the early days of deep learning. Everybody thought it is going to be one framework that will rule the world, and it has been changing, from Caffe to TensorFlow to PyTorch to various other things. And I have a suspicion, we had to enable developers where they are, so. >> You know, Dave Vellante and I have been riffing on this concept called super cloud, and a lot of people have co-opted to be multicloud, but we really were getting at this whole next layer on top of say, AWS. You guys are the most comprehensive cloud, you guys are a super cloud, and even Adam and I are talking about ISVs evolving to ecosystem partners. I mean, your top customers have ecosystems building on top of it. This feels like a whole nother AWS. How are you guys leveraging the history of AWS, which by the way, had the same trajectory, startups came in, they didn't want to provision a data center, the heavy lifting, all the things that have made Amazon successful culturally. And day one thinking is, provide the heavy lifting, undifferentiated heavy lifting, and make it faster for developers to program code. AI's got the same thing. How are you guys taking this to the next level, because now, this is an opportunity for the competition to change the game and take it over? This is, I'm sure, a conversation, you guys have a lot of things going on in AWS that makes you unique. What's the internal and external positioning around how you take it to the next level? >> I mean, so I agree with you that generative AI has a very, very strong potential in terms of what it can enable in terms of next generation application. But this is where Amazon's experience and expertise in putting these foundation models to work internally really has helped us quite a bit. If you look at it, like amazon.com search is like a very, very important application in terms of what is the customer impact on number of customers who use that application openly, and the amount of dollar impact it does for an organization. And we have been doing it silently for a while now. And the same thing is true for like Alexa too, which actually not only uses it for natural language understanding other city, even national leverages is set for creating stories and various other examples. And now, our approach to it from AWS is we actually look at it as in terms of the same three tiers like we did in machine learning, because when you look at generative AI, we genuinely see three sets of customers. One is, like really deep technical expert practitioner startups. These are the startups that are creating the next generation models like the likes of stability AIs or Hugging Face with Bloom or AI21. And they generally want to build their own models, and they want the best price performance of their infrastructure for training and inference. That's where our investments in silicon and hardware and networking innovations, where Trainium and Inferentia really plays a big role. And we can nearly do that, and that is one. The second middle tier is where I do think developers don't want to spend time building their own models, let alone, they actually want the model to be useful to that data. They don't need their models to create like high school homeworks or various other things. What they generally want is, hey, I had this data from my enterprises that I want to fine tune and make it really work only for this, and make it work remarkable, can be for tech summarization, to generate a report, or it can be for better Q&A, and so forth. This is where we are. Our investments in the middle tier with SageMaker, and our partnership with Hugging Face and AI21 and co here are all going to very meaningful. And you'll see us investing, I mean, you already talked about CodeWhisperer, which is an open preview, but we are also partnering with a whole lot of top ISVs, and you'll see more on this front to enable the next wave of generated AI apps too, because this is an area where we do think lot of innovation is yet to be done. It's like day one for us in this space, and we want to enable that huge ecosystem to flourish. >> You know, one of the things Dave Vellante and I were talking about in our first podcast we just did on Friday, we're going to do weekly, is we highlighted the AI ChatGPT example as a horizontal use case, because everyone loves it, people are using it in all their different verticals, and horizontal scalable cloud plays perfectly into it. So I have to ask you, as you look at what AWS is going to bring to the table, a lot's changed over the past 13 years with AWS, a lot more services are available, how should someone rebuild or re-platform and refactor their application of business with AI, with AWS? What are some of the tools that you see and recommend? Is it Serverless, is it SageMaker, CodeWhisperer? What do you think's going to shine brightly within the AWS stack, if you will, or service list, that's going to be part of this? As you mentioned, CodeWhisperer and SageMaker, what else should people be looking at as they start tinkering and getting all these benefits, and scale up their ups? >> You know, if we were a startup, first, I would really work backwards from the customer problem I try to solve, and pick and choose, bar, I don't need to deal with the undifferentiated heavy lifting, so. And that's where the answer is going to change. If you look at it then, the answer is not going to be like a one size fits all, so you need a very strong, I mean, granted on the compute front, if you can actually completely accurate it, so unless, I will always recommend it, instead of running compute for running your ups, because it takes care of all the undifferentiated heavy lifting, but on the data, and that's where we provide a whole variety of databases, right from like relational data, or non-relational, or dynamo, and so forth. And of course, we also have a deep analytical stack, where data directly flows from our relational databases into data lakes and data virus. And you can get value along with partnership with various analytical providers. The area where I do think fundamentally things are changing on what people can do is like, with CodeWhisperer, I was literally trying to actually program a code on sending a message through Twilio, and I was going to pull up to read a documentation, and in my ID, I was actually saying like, let's try sending a message to Twilio, or let's actually update a Route 53 error code. All I had to do was type in just a comment, and it actually started generating the sub-routine. And it is going to be a huge time saver, if I were a developer. And the goal is for us not to actually do it just for AWS developers, and not to just generate the code, but make sure the code is actually highly secure and follows the best practices. So, it's not always about machine learning, it's augmenting with automated reasoning as well. And generative AI is going to be changing, and not just in how people write code, but also how it actually gets built and used as well. You'll see a lot more stuff coming on this front. >> Swami, thank you for your time. I know you're super busy. Thank you for sharing on the news and giving commentary. Again, I think this is a AWS moment and industry moment, heavy lifting, accelerated value, agility. AIOps is going to be probably redefined here. Thanks for sharing your commentary. And we'll see you next time, I'm looking forward to doing more follow up on this. It's going to be a big wave. Thanks. >> Okay. Thanks again, John, always a pleasure. >> Okay. This is SiliconANGLE's breaking news commentary. I'm John Furrier with SiliconANGLE News, as well as host of theCUBE. Swami, who's a leader in AWS, has been on theCUBE multiple times. We've been tracking the growth of how Amazon's journey has just been exploding past five years, in particular, past three. You heard the numbers, great performance, great reviews. This is a watershed moment, I think, for the industry, and it's going to be a lot of fun for the next 10 years. Thanks for watching. (bright music)

Published Date : Feb 22 2023

SUMMARY :

Swami, great to have you on inside the ropes, if you And one of the biggest complaints we hear and easy to program and use as well. I call the democratization, the Trainium, you provide And that means the training What is the big thing that's happened? and they are going to create this to the next level, and the amount of dollar impact that's going to be part of this? And generative AI is going to be changing, AIOps is going to be John, always a pleasure. and it's going to be a lot

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

SwamiPERSON

0.99+

Amazon Web ServicesORGANIZATION

0.99+

Jon TurowPERSON

0.99+

John MarkoffPERSON

0.99+

AWSORGANIZATION

0.99+

JohnPERSON

0.99+

AmazonORGANIZATION

0.99+

John FurrierPERSON

0.99+

40%QUANTITY

0.99+

AutodeskORGANIZATION

0.99+

50%QUANTITY

0.99+

Madrona VentureORGANIZATION

0.99+

20 mileQUANTITY

0.99+

Hugging FaceORGANIZATION

0.99+

FridayDATE

0.99+

second elementQUANTITY

0.99+

more than 100,000 customersQUANTITY

0.99+

AI21ORGANIZATION

0.99+

tens of thousandsQUANTITY

0.99+

first podcastQUANTITY

0.99+

three tiersQUANTITY

0.98+

SiliconANGLEORGANIZATION

0.98+

twiceQUANTITY

0.98+

Bloom ProjectTITLE

0.98+

oneQUANTITY

0.98+

SageMakerORGANIZATION

0.98+

Hugging FaceTITLE

0.98+

AlexaTITLE

0.98+

firstQUANTITY

0.98+

GitHubORGANIZATION

0.98+

one modelQUANTITY

0.98+

up to 50%QUANTITY

0.97+

ChatGPTTITLE

0.97+

FirstQUANTITY

0.97+

more than thousand XQUANTITY

0.97+

amazon.comORGANIZATION

0.96+

tens of billionsQUANTITY

0.96+

OneQUANTITY

0.96+

up to 60%QUANTITY

0.96+

one frameworkQUANTITY

0.96+

YquemORGANIZATION

0.94+

three thingsQUANTITY

0.94+

InferentiaORGANIZATION

0.94+

CodeWhispererTITLE

0.93+

fourQUANTITY

0.92+

three setsQUANTITY

0.92+

threeQUANTITY

0.92+

TwilioORGANIZATION

0.92+

Luis Ceze, OctoML | Amazon re:MARS 2022


 

(upbeat music) >> Welcome back, everyone, to theCUBE's coverage here live on the floor at AWS re:MARS 2022. I'm John Furrier, host for theCUBE. Great event, machine learning, automation, robotics, space, that's MARS. It's part of the re-series of events, re:Invent's the big event at the end of the year, re:Inforce, security, re:MARS, really intersection of the future of space, industrial, automation, which is very heavily DevOps machine learning, of course, machine learning, which is AI. We have Luis Ceze here, who's the CEO co-founder of OctoML. Welcome to theCUBE. >> Thank you very much for having me in the show, John. >> So we've been following you guys. You guys are a growing startup funded by Madrona Venture Capital, one of your backers. You guys are here at the show. This is a, I would say small show relative what it's going to be, but a lot of robotics, a lot of space, a lot of industrial kind of edge, but machine learning is the centerpiece of this trend. You guys are in the middle of it. Tell us your story. >> Absolutely, yeah. So our mission is to make machine learning sustainable and accessible to everyone. So I say sustainable because it means we're going to make it faster and more efficient. You know, use less human effort, and accessible to everyone, accessible to as many developers as possible, and also accessible in any device. So, we started from an open source project that began at University of Washington, where I'm a professor there. And several of the co-founders were PhD students there. We started with this open source project called Apache TVM that had actually contributions and collaborations from Amazon and a bunch of other big tech companies. And that allows you to get a machine learning model and run on any hardware, like run on CPUs, GPUs, various GPUs, accelerators, and so on. It was the kernel of our company and the project's been around for about six years or so. Company is about three years old. And we grew from Apache TVM into a whole platform that essentially supports any model on any hardware cloud and edge. >> So is the thesis that, when it first started, that you want to be agnostic on platform? >> Agnostic on hardware, that's right. >> Hardware, hardware. >> Yeah. >> What was it like back then? What kind of hardware were you talking about back then? Cause a lot's changed, certainly on the silicon side. >> Luis: Absolutely, yeah. >> So take me through the journey, 'cause I could see the progression. I'm connecting the dots here. >> So once upon a time, yeah, no... (both chuckling) >> I walked in the snow with my bare feet. >> You have to be careful because if you wake up the professor in me, then you're going to be here for two hours, you know. >> Fast forward. >> The average version here is that, clearly machine learning has shown to actually solve real interesting, high value problems. And where machine learning runs in the end, it becomes code that runs on different hardware, right? And when we started Apache TVM, which stands for tensor virtual machine, at that time it was just beginning to start using GPUs for machine learning, we already saw that, with a bunch of machine learning models popping up and CPUs and GPU's starting to be used for machine learning, it was clear that it come opportunity to run on everywhere. >> And GPU's were coming fast. >> GPUs were coming and huge diversity of CPUs, of GPU's and accelerators now, and the ecosystem and the system software that maps models to hardware is still very fragmented today. So hardware vendors have their own specific stacks. So Nvidia has its own software stack, and so does Intel, AMD. And honestly, I mean, I hope I'm not being, you know, too controversial here to say that it kind of of looks like the mainframe era. We had tight coupling between hardware and software. You know, if you bought IBM hardware, you had to buy IBM OS and IBM database, IBM applications, it all tightly coupled. And if you want to use IBM software, you had to buy IBM hardware. So that's kind of like what machine learning systems look like today. If you buy a certain big name GPU, you've got to use their software. Even if you use their software, which is pretty good, you have to buy their GPUs, right? So, but you know, we wanted to help peel away the model and the software infrastructure from the hardware to give people choice, ability to run the models where it best suit them. Right? So that includes picking the best instance in the cloud, that's going to give you the right, you know, cost properties, performance properties, or might want to run it on the edge. You might run it on an accelerator. >> What year was that roughly, when you were going this? >> We started that project in 2015, 2016 >> Yeah. So that was pre-conventional wisdom. I think TensorFlow wasn't even around yet. >> Luis: No, it wasn't. >> It was, I'm thinking like 2017 or so. >> Luis: Right. So that was the beginning of, okay, this is opportunity. AWS, I don't think they had released some of the nitro stuff that the Hamilton was working on. So, they were already kind of going that way. It's kind of like converging. >> Luis: Yeah. >> The space was happening, exploding. >> Right. And the way that was dealt with, and to this day, you know, to a large extent as well is by backing machine learning models with a bunch of hardware specific libraries. And we were some of the first ones to say, like, know what, let's take a compilation approach, take a model and compile it to very efficient code for that specific hardware. And what underpins all of that is using machine learning for machine learning code optimization. Right? But it was way back when. We can talk about where we are today. >> No, let's fast forward. >> That's the beginning of the open source project. >> But that was a fundamental belief, worldview there. I mean, you have a world real view that was logical when you compare to the mainframe, but not obvious to the machine learning community. Okay, good call, check. Now let's fast forward, okay. Evolution, we'll go through the speed of the years. More chips are coming, you got GPUs, and seeing what's going on in AWS. Wow! Now it's booming. Now I got unlimited processors, I got silicon on chips, I got, everywhere >> Yeah. And what's interesting is that the ecosystem got even more complex, in fact. Because now you have, there's a cross product between machine learning models, frameworks like TensorFlow, PyTorch, Keras, and like that and so on, and then hardware targets. So how do you navigate that? What we want here, our vision is to say, folks should focus, people should focus on making the machine learning models do what they want to do that solves a value, like solves a problem of high value to them. Right? So another deployment should be completely automatic. Today, it's very, very manual to a large extent. So once you're serious about deploying machine learning model, you got a good understanding where you're going to deploy it, how you're going to deploy it, and then, you know, pick out the right libraries and compilers, and we automated the whole thing in our platform. This is why you see the tagline, the booth is right there, like bringing DevOps agility for machine learning, because our mission is to make that fully transparent. >> Well, I think that, first of all, I use that line here, cause I'm looking at it here on live on camera. People can't see, but it's like, I use it on a couple couple of my interviews because the word agility is very interesting because that's kind of the test on any kind of approach these days. Agility could be, and I talked to the robotics guys, just having their product be more agile. I talked to Pepsi here just before you came on, they had this large scale data environment because they built an architecture, but that fostered agility. So again, this is an architectural concept, it's a systems' view of agility being the output, and removing dependencies, which I think what you guys were trying to do. >> Only part of what we do. Right? So agility means a bunch of things. First, you know-- >> Yeah explain. >> Today it takes a couple months to get a model from, when the model's ready, to production, why not turn that in two hours. Agile, literally, physically agile, in terms of walk off time. Right? And then the other thing is give you flexibility to choose where your model should run. So, in our deployment, between the demo and the platform expansion that we announced yesterday, you know, we give the ability of getting your model and, you know, get it compiled, get it optimized for any instance in the cloud and automatically move it around. Today, that's not the case. You have to pick one instance and that's what you do. And then you might auto scale with that one instance. So we give the agility of actually running and scaling the model the way you want, and the way it gives you the right SLAs. >> Yeah, I think Swami was mentioning that, not specifically that use case for you, but that use case generally, that scale being moving things around, making them faster, not having to do that integration work. >> Scale, and run the models where they need to run. Like some day you want to have a large scale deployment in the cloud. You're going to have models in the edge for various reasons because speed of light is limited. We cannot make lights faster. So, you know, got to have some, that's a physics there you cannot change. There's privacy reasons. You want to keep data locally, not send it around to run the model locally. So anyways, and giving the flexibility. >> Let me jump in real quick. I want to ask this specific question because you made me think of something. So we're just having a data mesh conversation. And one of the comments that's come out of a few of these data as code conversations is data's the product now. So if you can move data to the edge, which everyone's talking about, you know, why move data if you don't have to, but I can move a machine learning algorithm to the edge. Cause it's costly to move data. I can move computer, everyone knows that. But now I can move machine learning to anywhere else and not worry about integrating on the fly. So the model is the code. >> It is the product. >> Yeah. And since you said, the model is the code, okay, now we're talking even more here. So machine learning models today are not treated as code, by the way. So do not have any of the typical properties of code that you can, whenever you write a piece of code, you run a code, you don't know, you don't even think what is a CPU, we don't think where it runs, what kind of CPU it runs, what kind of instance it runs. But with machine learning model, you do. So what we are doing and created this fully transparent automated way of allowing you to treat your machine learning models if you were a regular function that you call and then a function could run anywhere. >> Yeah. >> Right. >> That's why-- >> That's better. >> Bringing DevOps agility-- >> That's better. >> Yeah. And you can use existing-- >> That's better, because I can run it on the Artemis too, in space. >> You could, yeah. >> If they have the hardware. (both laugh) >> And that allows you to run your existing, continue to use your existing DevOps infrastructure and your existing people. >> So I have to ask you, cause since you're a professor, this is like a masterclass on theCube. Thank you for coming on. Professor. (Luis laughing) I'm a hardware guy. I'm building hardware for Boston Dynamics, Spot, the dog, that's the diversity in hardware, it's tends to be purpose driven. I got a spaceship, I'm going to have hardware on there. >> Luis: Right. >> It's generally viewed in the community here, that everyone I talk to and other communities, open source is going to drive all software. That's a check. But the scale and integration is super important. And they're also recognizing that hardware is really about the software. And they even said on stage, here. Hardware is not about the hardware, it's about the software. So if you believe that to be true, then your model checks all the boxes. Are people getting this? >> I think they're starting to. Here is why, right. A lot of companies that were hardware first, that thought about software too late, aren't making it. Right? There's a large number of hardware companies, AI chip companies that aren't making it. Probably some of them that won't make it, unfortunately just because they started thinking about software too late. I'm so glad to see a lot of the early, I hope I'm not just doing our own horn here, but Apache TVM, the infrastructure that we built to map models to different hardware, it's very flexible. So we see a lot of emerging chip companies like SiMa.ai's been doing fantastic work, and they use Apache TVM to map algorithms to their hardware. And there's a bunch of others that are also using Apache TVM. That's because you have, you know, an opening infrastructure that keeps it up to date with all the machine learning frameworks and models and allows you to extend to the chips that you want. So these companies pay attention that early, gives them a much higher fighting chance, I'd say. >> Well, first of all, not only are you backable by the VCs cause you have pedigree, you're a professor, you're smart, and you get good recruiting-- >> Luis: I don't know about the smart part. >> And you get good recruiting for PhDs out of University of Washington, which is not too shabby computer science department. But they want to make money. The VCs want to make money. >> Right. >> So you have to make money. So what's the pitch? What's the business model? >> Yeah. Absolutely. >> Share us what you're thinking there. >> Yeah. The value of using our solution is shorter time to value for your model from months to hours. Second, you shrink operator, op-packs, because you don't need a specialized expensive team. Talk about expensive, expensive engineers who can understand machine learning hardware and software engineering to deploy models. You don't need those teams if you use this automated solution, right? Then you reduce that. And also, in the process of actually getting a model and getting specialized to the hardware, making hardware aware, we're talking about a very significant performance improvement that leads to lower cost of deployment in the cloud. We're talking about very significant reduction in costs in cloud deployment. And also enabling new applications on the edge that weren't possible before. It creates, you know, latent value opportunities. Right? So, that's the high level value pitch. But how do we make money? Well, we charge for access to the platform. Right? >> Usage. Consumption. >> Yeah, and value based. Yeah, so it's consumption and value based. So depends on the scale of the deployment. If you're going to deploy machine learning model at a larger scale, chances are that it produces a lot of value. So then we'll capture some of that value in our pricing scale. >> So, you have direct sales force then to work those deals. >> Exactly. >> Got it. How many customers do you have? Just curious. >> So we started, the SaaS platform just launched now. So we started onboarding customers. We've been building this for a while. We have a bunch of, you know, partners that we can talk about openly, like, you know, revenue generating partners, that's fair to say. We work closely with Qualcomm to enable Snapdragon on TVM and hence our platform. We're close with AMD as well, enabling AMD hardware on the platform. We've been working closely with two hyperscaler cloud providers that-- >> I wonder who they are. >> I don't know who they are, right. >> Both start with the letter A. >> And they're both here, right. What is that? >> They both start with the letter A. >> Oh, that's right. >> I won't give it away. (laughing) >> Don't give it away. >> One has three, one has four. (both laugh) >> I'm guessing, by the way. >> Then we have customers in the, actually, early customers have been using the platform from the beginning in the consumer electronics space, in Japan, you know, self driving car technology, as well. As well as some AI first companies that actually, whose core value, the core business come from AI models. >> So, serious, serious customers. They got deep tech chops. They're integrating, they see this as a strategic part of their architecture. >> That's what I call AI native, exactly. But now there's, we have several enterprise customers in line now, we've been talking to. Of course, because now we launched the platform, now we started onboarding and exploring how we're going to serve it to these customers. But it's pretty clear that our technology can solve a lot of other pain points right now. And we're going to work with them as early customers to go and refine them. >> So, do you sell to the little guys, like us? Will we be customers if we wanted to be? >> You could, absolutely, yeah. >> What we have to do, have machine learning folks on staff? >> So, here's what you're going to have to do. Since you can see the booth, others can't. No, but they can certainly, you can try our demo. >> OctoML. >> And you should look at the transparent AI app that's compiled and optimized with our flow, and deployed and built with our flow. That allows you to get your image and do style transfer. You know, you can get you and a pineapple and see how you look like with a pineapple texture. >> We got a lot of transcript and video data. >> Right. Yeah. Right, exactly. So, you can use that. Then there's a very clear-- >> But I could use it. You're not blocking me from using it. Everyone's, it's pretty much democratized. >> You can try the demo, and then you can request access to the platform. >> But you get a lot of more serious deeper customers. But you can serve anybody, what you're saying. >> Luis: We can serve anybody, yeah. >> All right, so what's the vision going forward? Let me ask this. When did people start getting the epiphany of removing the machine learning from the hardware? Was it recently, a couple years ago? >> Well, on the research side, we helped start that trend a while ago. I don't need to repeat that. But I think the vision that's important here, I want the audience here to take away is that, there's a lot of progress being made in creating machine learning models. So, there's fantastic tools to deal with training data, and creating the models, and so on. And now there's a bunch of models that can solve real problems there. The question is, how do you very easily integrate that into your intelligent applications? Madrona Venture Group has been very vocal and investing heavily in intelligent applications both and user applications as well as enablers. So we say an enable of that because it's so easy to use our flow to get a model integrated into your application. Now, any regular software developer can integrate that. And that's just the beginning, right? Because, you know, now we have CI/CD integration to keep your models up to date, to continue to integrate, and then there's more downstream support for other features that you normally have in regular software development. >> I've been thinking about this for a long, long, time. And I think this whole code, no one thinks about code. Like, I write code, I'm deploying it. I think this idea of machine learning as code independent of other dependencies is really amazing. It's so obvious now that you say it. What's the choices now? Let's just say that, I buy it, I love it, I'm using it. Now what do I got to do if I want to deploy it? Do I have to pick processors? Are there verified platforms that you support? Is there a short list? Is there every piece of hardware? >> We actually can help you. I hope we're not saying we can do everything in the world here, but we can help you with that. So, here's how. When you have them all in the platform you can actually see how this model runs on any instance of any cloud, by the way. So we support all the three major cloud providers. And then you can make decisions. For example, if you care about latency, your model has to run on, at most 50 milliseconds, because you're going to have interactivity. And then, after that, you don't care if it's faster. All you care is that, is it going to run cheap enough. So we can help you navigate. And also going to make it automatic. >> It's like tire kicking in the dealer showroom. >> Right. >> You can test everything out, you can see the simulation. Are they simulations, or are they real tests? >> Oh, no, we run all in real hardware. So, we have, as I said, we support any instances of any of the major clouds. We actually run on the cloud. But we also support a select number of edge devices today, like ARMs and Nvidia Jetsons. And we have the OctoML cloud, which is a bunch of racks with a bunch Raspberry Pis and Nvidia Jetsons, and very soon, a bunch of mobile phones there too that can actually run the real hardware, and validate it, and test it out, so you can see that your model runs performant and economically enough in the cloud. And it can run on the edge devices-- >> You're a machine learning as a service. Would that be an accurate? >> That's part of it, because we're not doing the machine learning model itself. You come with a model and we make it deployable and make it ready to deploy. So, here's why it's important. Let me try. There's a large number of really interesting companies that do API models, as in API as a service. You have an NLP model, you have computer vision models, where you call an API and then point in the cloud. You send an image and you got a description, for example. But it is using a third party. Now, if you want to have your model on your infrastructure but having the same convenience as an API you can use our service. So, today, chances are that, if you have a model that you know that you want to do, there might not be an API for it, we actually automatically create the API for you. >> Okay, so that's why I get the DevOps agility for machine learning is a better description. Cause it's not, you're not providing the service. You're providing the service of deploying it like DevOps infrastructure as code. You're now ML as code. >> It's your model, your API, your infrastructure, but all of the convenience of having it ready to go, fully automatic, hands off. >> Cause I think what's interesting about this is that it brings the craftsmanship back to machine learning. Cause it's a craft. I mean, let's face it. >> Yeah. I want human brains, which are very precious resources, to focus on building those models, that is going to solve business problems. I don't want these very smart human brains figuring out how to scrub this into actually getting run the right way. This should be automatic. That's why we use machine learning, for machine learning to solve that. >> Here's an idea for you. We should write a book called, The Lean Machine Learning. Cause the lean startup was all about DevOps. >> Luis: We call machine leaning. No, that's not it going to work. (laughs) >> Remember when iteration was the big mantra. Oh, yeah, iterate. You know, that was from DevOps. >> Yeah, that's right. >> This code allowed for standing up stuff fast, double down, we all know the history, what it turned out. That was a good value for developers. >> I could really agree. If you don't mind me building on that point. You know, something we see as OctoML, but we also see at Madrona as well. Seeing that there's a trend towards best in breed for each one of the stages of getting a model deployed. From the data aspect of creating the data, and then to the model creation aspect, to the model deployment, and even model monitoring. Right? We develop integrations with all the major pieces of the ecosystem, such that you can integrate, say with model monitoring to go and monitor how a model is doing. Just like you monitor how code is doing in deployment in the cloud. >> It's evolution. I think it's a great step. And again, I love the analogy to the mainstream. I lived during those days. I remember the monolithic propriety, and then, you know, OSI model kind of blew it. But that OSI stack never went full stack, and it only stopped at TCP/IP. So, I think the same thing's going on here. You see some scalability around it to try to uncouple it, free it. >> Absolutely. And sustainability and accessibility to make it run faster and make it run on any deice that you want by any developer. So, that's the tagline. >> Luis Ceze, thanks for coming on. Professor. >> Thank you. >> I didn't know you were a professor. That's great to have you on. It was a masterclass in DevOps agility for machine learning. Thanks for coming on. Appreciate it. >> Thank you very much. Thank you. >> Congratulations, again. All right. OctoML here on theCube. Really important. Uncoupling the machine learning from the hardware specifically. That's only going to make space faster and safer, and more reliable. And that's where the whole theme of re:MARS is. Let's see how they fit in. I'm John for theCube. Thanks for watching. More coverage after this short break. >> Luis: Thank you. (gentle music)

Published Date : Jun 24 2022

SUMMARY :

live on the floor at AWS re:MARS 2022. for having me in the show, John. but machine learning is the And that allows you to get certainly on the silicon side. 'cause I could see the progression. So once upon a time, yeah, no... because if you wake up learning runs in the end, that's going to give you the So that was pre-conventional wisdom. the Hamilton was working on. and to this day, you know, That's the beginning of that was logical when you is that the ecosystem because that's kind of the test First, you know-- and scaling the model the way you want, not having to do that integration work. Scale, and run the models So if you can move data to the edge, So do not have any of the typical And you can use existing-- the Artemis too, in space. If they have the hardware. And that allows you So I have to ask you, So if you believe that to be true, to the chips that you want. about the smart part. And you get good recruiting for PhDs So you have to make money. And also, in the process So depends on the scale of the deployment. So, you have direct sales How many customers do you have? We have a bunch of, you know, And they're both here, right. I won't give it away. One has three, one has four. in Japan, you know, self They're integrating, they see this as it to these customers. Since you can see the booth, others can't. and see how you look like We got a lot of So, you can use that. But I could use it. and then you can request But you can serve anybody, of removing the machine for other features that you normally have It's so obvious now that you say it. So we can help you navigate. in the dealer showroom. you can see the simulation. And it can run on the edge devices-- You're a machine learning as a service. know that you want to do, I get the DevOps agility but all of the convenience it brings the craftsmanship for machine learning to solve that. Cause the lean startup No, that's not it going to work. You know, that was from DevOps. double down, we all know the such that you can integrate, and then, you know, OSI on any deice that you Professor. That's great to have you on. Thank you very much. Uncoupling the machine learning Luis: Thank you.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Luis CezePERSON

0.99+

QualcommORGANIZATION

0.99+

LuisPERSON

0.99+

2015DATE

0.99+

JohnPERSON

0.99+

John FurrierPERSON

0.99+

Boston DynamicsORGANIZATION

0.99+

two hoursQUANTITY

0.99+

NvidiaORGANIZATION

0.99+

2017DATE

0.99+

JapanLOCATION

0.99+

Madrona Venture CapitalORGANIZATION

0.99+

AMDORGANIZATION

0.99+

oneQUANTITY

0.99+

AmazonORGANIZATION

0.99+

threeQUANTITY

0.99+

IBMORGANIZATION

0.99+

OneQUANTITY

0.99+

AWSORGANIZATION

0.99+

fourQUANTITY

0.99+

2016DATE

0.99+

University of WashingtonORGANIZATION

0.99+

TodayDATE

0.99+

PepsiORGANIZATION

0.99+

BothQUANTITY

0.99+

yesterdayDATE

0.99+

FirstQUANTITY

0.99+

bothQUANTITY

0.99+

SecondQUANTITY

0.99+

todayDATE

0.99+

SiMa.aiORGANIZATION

0.99+

OctoMLTITLE

0.99+

OctoMLORGANIZATION

0.99+

IntelORGANIZATION

0.98+

one instanceQUANTITY

0.98+

DevOpsTITLE

0.98+

Madrona Venture GroupORGANIZATION

0.98+

SwamiPERSON

0.98+

MadronaORGANIZATION

0.98+

about six yearsQUANTITY

0.96+

SpotORGANIZATION

0.96+

The Lean Machine LearningTITLE

0.95+

firstQUANTITY

0.95+

theCUBEORGANIZATION

0.94+

ARMsORGANIZATION

0.94+

pineappleORGANIZATION

0.94+

Raspberry PisORGANIZATION

0.92+

TensorFlowTITLE

0.89+

SnapdragonORGANIZATION

0.89+

about three years oldQUANTITY

0.89+

a couple years agoDATE

0.88+

two hyperscaler cloud providersQUANTITY

0.88+

first onesQUANTITY

0.87+

one ofQUANTITY

0.85+

50 millisecondsQUANTITY

0.83+

Apache TVMORGANIZATION

0.82+

both laughQUANTITY

0.82+

three major cloud providersQUANTITY

0.81+

Cracking the Code: Lessons Learned from How Enterprise Buyers Evaluate New Startups


 

(bright music) >> Welcome back to the CUBE presents the AWS Startup Showcase The Next Big Thing in cloud startups with AI security and life science tracks, 15 hottest growing startups are presented. And we had a great opening keynote with luminaries in the industry. And now our closing keynote is to get a deeper dive on cracking the code in the enterprise, how startups are changing the game and helping companies change. And they're also changing the game of open source. We have a great guest, Katie Drucker, Head of Business Development, Madrona Venture Group. Katie, thank you for coming on the CUBE for this special closing keynote. >> Thank you for having me, I appreciate it. >> So one of the topics we talked about with Soma from Madrona on the opening keynote, as well as Ali from Databricks is how startups are seeing success faster. So that's the theme of the Cloud speed, agility, but the game has changed in the enterprise. And I want to really discuss with you how growth changes and growth strategy specifically. They talk, go to market. We hear things like good sales to enterprise sales, organic, freemium, there's all kinds of different approaches, but at the end of the day, the most successful companies, the ones that might not be known that just come out of nowhere. So the economics are changing and the buyers are thinking differently. So let's explore that topic. So take us through your view 'cause you have a lot of experience. But first talk about your role at Madrona, what you do. >> Absolutely all great points. So my role at Madrona, I think I have personally one of the more enviable jobs and that my job is to... I get the privilege of working with all of these fantastic entrepreneurs in our portfolio and doing whatever we can as a firm to harness resources, knowledge, expertise, connections, to accelerate their growth. So my role in setting up business development is taking a look at all of those tools in the tool chest and partnering with the portfolio to make it so. And in our portfolio, we have a wide range of companies, some rely on enterprise sales, some have other go to markets. Some are direct to consumer, a wide range. >> Talk about the growth strategies that you see evolving because what's clear with the pandemic. And as we come out of it is that there are growth plays happening that don't look a little bit differently, more obvious now because of the Cloud scale, we're seeing companies like Databricks, like Snowflake, like other companies that have been built on the cloud or standalone. What are some of the new growth techniques, or I don't want to say growth hacking, that is a pejorative term, but like just a way for companies to quickly describe their value to an enterprise buyer who's moving away from the old RFP days of vendor selection. The game has changed. So take us through how you see secret key and unlocking that new equation of how to present value to an enterprise and how you see enterprises evaluating startups. >> Yes, absolutely. Well, and that's got a question, that's got a few components nestled in what I think are some bigger trends going on. AWS of course brought us the Cloud first. I think now the Cloud is more and more a utility. And so it's incumbent upon thinking about how an enterprise 'cause using the Cloud is going to go up the value stack and partner with its cloud provider and other service providers. I think also with that agility of operations, you have thinning, if you will, the systems of record and a lot of new entrance into this space that are saying things like, how can we harness AIML and other emerging trends to provide more value directly around work streams that were historically locked into those systems of record? And then I think you also have some price plans that are far more flexible around usage based as opposed to just flat subscription or even these big clunky annual or multi-year RFP type stuff. So all of those trends are really designed in ways that favor the emerging startup. And I think if done well, and in partnership with those underlying cloud providers, there can be some amazing benefits that the enterprise realizes an opportunity for those startups to grow. And I think that's what you're seeing. I think there's also this emergence of a buyer that's different than the CIO or the site the CISO. You have things with low code, no code. You've got other buyers in the organization, other line of business executives that are coming to the table, making software purchase decisions. And then you also have empowered developers that are these citizen builders and developer buyers and personas that really matter. So lots of inroads in places for a startup to reach in the enterprise to make a connection and to bring value. That's a great insight. I want to ask that just if you don't mind follow up on that, you mentioned personas. And what we're seeing is the shift happens. There's new roles that are emerging and new things that are being reconfigured or refactored if you will, whether it's human resources or AI, and you mentioned ML playing a role in automation. These are big parts of the new value proposition. How should companies posture to the customer? Because I don't want to say pivot 'cause that means it's not working but mostly extending our iterating around their positioning because as new things have not yet been realized, it might not be operationalized in a company or maybe new things need to be operationalized, it's a new solution for that. Positioning the value is super important and a lot of companies often struggle with that, but also if they get it right, that's the key. What's your feeling on startups in their positioning? So people will dismiss it like, "Oh, that's marketing." But maybe that's important. What's your thoughts on the great positioning question? >> I've been in this industry a long time. And I think there are some things that are just tried and true, and it is not unique to tech, which is, look, you have to tell a story and you have to reach the customer and you have to speak to the customer's need. And what that means is, AWS is a great example. They're famous for the whole concept of working back from the customer and thinking about what that customer's need is. I think any startup that is looking to partner or work alongside of AWS really has to embody that very, very customer centric way of thinking about things, even though, as we just talked about those personas are changing who that customer really is in the enterprise. And then speaking to that value proposition and meeting that customer and creating a dialogue with them that really helps to understand not only what their pain points are, but how you were offering solves those pain points. And sometimes the customer doesn't realize that that is their pain point and that's part of the education and part of the way in which you engage that dialogue. That doesn't change a lot, just generation to generation. I think the modality of how we have that dialogue, the methods in which we choose to convey that change, but that basic discussion is what makes us human. >> What's your... Great, great, great insight. I want to ask you on the value proposition question again, the question I often get, and it's hard to answer is am I competing on value or am I competing on commodity? And depending on where you're in the stack, there could be different things like, for example, land is getting faster, smaller, cheaper, as an example on Amazon. That's driving down to low cost high value, but it shifts up the stack. You start to see in companies this changing the criteria for how to evaluate. So an enterprise might be struggling. And I often hear enterprises say, "I don't know how to pick who I need. I buy tools, I don't buy many platforms." So they're constantly trying to look for that answer key, if you will, what's your thoughts on the changing requirements of an enterprise? And how to do vendor selection. >> Yeah, so obviously I don't think there's a single magic bullet. I always liked just philosophically to think about, I think it's always easier and frankly more exciting as a buyer to want to buy stuff that's going to help me make more revenue and build and grow as opposed to do things that save me money. And just in a binary way, I like to think which side of the fence are you sitting on as a product offering? And the best ways that you can articulate that, what opportunities are you unlocking for your customer? The problems that you're solving, what kind of growth and what impact is that going to lead to, even if you're one or two removed from that? And again, that's not a new concept. And I think that the companies that have that squarely in mind when they think about their go-to market strategy, when they think about the dialogue they're having, when they think about the problems that they're solving, find a much faster path. And I think that also speaks to why we're seeing so many explosion in the line of business, SAS apps that are out there. Again, that thinning of the systems of record, really thinking about what are the scenarios and work streams that we can have happened that are going to help with that revenue growth and unlocking those opportunities. >> What's the common startup challenge that you see when they're trying to do business development? Usually they build the product first, product led value, you hear that a lot. And then they go, "Okay, we're ready to sell, hire a sales guy." That seems to be shifting away because of the go to markets are changing. What are some of the challenges that startups have? What are some that you're seeing? >> Well, and I think the point that you're making about the changes are really almost a result of the trends that we're talking about. The sales organization itself is becoming... These work streams are becoming instrumented. Data is being collected, insights are being derived off of those things. So you see companies like Clary or Highspot or two examples or tutorial that are in our portfolio that are looking at that action and making the art of sales and marketing far more sophisticated overall, which then leads to the different growth hacking and the different insights that are driven. I think the common mistakes that I see across the board, especially with earlier stage startups, look you got to find product market fit. I think that's always... You start with a thesis or a belief and a passion that you're building something that you think the market needs. And it's a lot of dialogue you have to have to make sure that you do find that. I think once you find that another common problem that I see is leading with an explanation of technology. And again, not focusing on the buyer or the... Sorry, the buyer about solving a problem and focusing on that problem as opposed to focusing on how cool your technology is. Those are basic and really, really simple. And then I think setting a set of expectations, especially as it comes to business development and partnering with companies like AWS. The researching that you need to adequately meet the demand that can be turned on. And then I'm sure you heard about from Databricks, from an organization like AWS, you have to be pragmatic. >> Yeah, Databricks gone from zero a software sales a few years ago to over a billion. Now it looks like a Snowflake which came out of nowhere and they had a great product, but built on Amazon, they became the data cloud on top of Amazon. And now they're growing just whole new business models and new business development techniques. Katie, thank you for sharing your insight here. The CUBE's closing keynote. Thanks for coming on. >> Appreciate it, thank you. >> Okay, Katie Drucker, Head of Business Development at Madrona Venture Group. Premier VC in the Seattle area and beyond they're doing a lot of cloud action. And of course they know AWS very well and investing in the ecosystem. So great, great stuff there. Next up is Peter Wagner partner at Wing.VX. Love this URL first of all 'cause of the VC domain extension. But Peter is a long time venture capitalist. I've been following his career. He goes back to the old networking days, back when the internet was being connected during the OSI days, when the TCP IP open systems interconnect was really happening and created so much. Well, Peter, great to see you on the CUBE here and congratulations with success at Wing VC. >> Yeah, thanks, John. It's great to be here. I really appreciate you having me. >> Reason why I wanted to have you come on. First of all, you had a great track record in investing over many decades. You've seen many waves of innovation, startups. You've seen all the stories. You've seen the movie a few times, as I say. But now more than ever, enterprise wise it's probably the hottest I've ever seen. And you've got a confluence of many things on the stack. You were also an early seed investor in Snowflake, well-regarded as a huge success. So you've got your eye on some of these awesome deals. Got a great partner over there has got a network experience as well. What is the big aha moment here for the industry? Because it's not your classic enterprise startups anymore. They have multiple things going on and some of the winners are not even known. They come out of nowhere and they connect to enterprise and get the lucrative positions and can create a moat and value. Like out of nowhere, it's not the old way of like going to the airport and doing an RFP and going through the stringent requirements, and then you're in, you get to win the lucrative contract and you're in. Not anymore, that seems to have changed. What's your take on this 'cause people are trying to crack the code here and sometimes you don't have to be well-known. >> Yeah, well, thank goodness the game has changed 'cause that old thing was (indistinct) So I for one don't miss it. There was some modernization movement in the enterprise and the modern enterprise is built on data powered by AI infrastructure. That's an agile workplace. All three of those things are really transformational. There's big investments being made by enterprises, a lot of receptivity and openness to technology to enable all those agendas, and that translates to good prospects for startups. So I think as far as my career goes, I've never seen a more positive or fertile ground for startups in terms of penetrating enterprise, it doesn't mean it's easy to do, but you have a receptive audience on the other side and that hasn't necessarily always been the case. >> Yeah, I got to ask you, I know that you're a big sailor and your family and Franks Lubens also has a boat and sailing metaphor is always good to have 'cause you got to have a race that's being run and they have tactics. And this game that we're in now, you see the successes, there's investment thesises, and then there's also actually bets. And I want to get your thoughts on this because a lot of enterprises are trying to figure out how to evaluate startups and starts also can make the wrong bet. They could sail to the wrong continent and be in the wrong spot. So how do you pick the winners and how should enterprises understand how to pick winners too? >> Yeah, well, one of the real important things right now that enterprise is facing startups are learning how to do and so learning how to leverage product led growth dynamics in selling to the enterprise. And so product led growth has certainly always been important consumer facing companies. And then there's a few enterprise facing companies, early ones that cracked the code, as you said. And some of these examples are so old, if you think about, like the ones that people will want to talk about them and talk about Classy and want to talk about Twilio and these were of course are iconic companies that showed the way for others. But even before that, folks like Solar Winds, they'd go to market model, clearly product red, bottom stuff. Back then we didn't even have those words to talk about it. And then some of the examples are so enormous if think about them like the one right in front of your face, like AWS. (laughing) Pretty good PLG, (indistinct) but it targeted builders, it targeted developers and flipped over the way you think about enterprise infrastructure, as a result some how every company, even if they're harnessing relatively conventional sales and marketing motion, and you think about product led growth as a way to kick that motion off. And so it's not really an either word even more We might think OPLJ, that means there's no sales keep one company not true, but here's a way to set the table so that you can very efficiently use your sales and marketing resources, only have the most attractive targets and ones that are really (indistinct) >> I love the product led growth. I got to ask you because in the networking days, I remember the term inevitability was used being nested in a solution that they're just going to Cisco off router and a firewall is one you can unplug and replace with another vendor. Cisco you'd have to go through no switching costs were huge. So when you get it to the Cloud, how do you see the competitiveness? Because we were riffing on this with Ali, from Databricks where the lock-in might be value. The more value provider is the lock-in. Is their nestedness? Is their intimate ability as a competitive advantage for some of these starts? How do you look at that? Because startups, they're using open source. They want to have a land position in an enterprise, but how do they create that sustainable competitive advantage going forward? Because again, this is what you do. You bet on ones that you can see that could establish a model whatever we want to call it, but a competitive advantage and ongoing nested position. >> Sometimes it has to do with data, John, and so you mentioned Snowflake a couple of times here, a big part of Snowflake's strategy is what they now call the data cloud. And one of the reasons you go there is not to just be able to process data, to actually get access to it, exchange with the partners. And then that of course is a great reason for the customers to come to the Snowflake platform. And so the more data it gets more customers, it gets more data, the whole thing start spinning in the right direction. That's a really big example, but all of these startups that are using ML in a fundamental way, applying it in a novel way, the data modes are really important. So getting to the right data sources and training on it, and then putting it to work so that you can see that in this process better and doing this earlier on that scale. That's a big part of success. Another company that I work with is a good example that I call (indistinct) which works in sales technology space, really crushing it in terms of building better sales organizations both at performance level, in terms of the intelligence level, and just overall revenue attainment using ML, and using novel data sources, like the previously lost data or phone calls or Zoom calls as you already know. So I think the data advantages are really big. And smart startups are thinking through it early. >> It's interest-- >> And they're planning by the way, not to ramble on too much, but they're betting that PLG strategy. So their land option is designed not just to be an interesting way to gain usage, but it's also a way to gain access to data that then enables the expand in a component. >> That is a huge call-out point there, I was going to ask another question, but I think that is the key I see. It's a new go to market in a way. product led with that kind of approach gets you a beachhead and you get a little position, you get some data that is a cloud model, it means variable, whatever you want to call it variable value proposition, value proof, or whatever, getting that data and reiterating it. So it brings up the whole philosophical question of okay, product led growth, I love that with product led growth of data, I get that. Remember the old platform versus a tool? That's the way buyers used to think. How has that changed? 'Cause now almost, this conversation throws out the whole platform thing, but isn't like a platform. >> It looks like it's all. (laughs) you can if it is a platform, though to do that you can reveal that later, but you're looking for adoption, so if it's down stock product, you're looking for adoption by like developers or DevOps people or SOEs, and they're trying to solve a problem, and they want rapid gratification. So they don't want to have an architectural boomimg, placed in front of them. And if it's up stock product and application, then it's a user or the business or whatever that is, is adopting the application. And again, they're trying to solve a very specific problem. You need instant and immediate obvious time and value. And now you have a ticket to the dance and build on that and maybe a platform strategy can gradually take shape. But you know who's not in this conversation is the CIO, it's like, "I'm always the last to know." >> That's the CISO though. And they got him there on the firing lines. CISOs are buying tools like it's nobody's business. They need everything. They'll buy anything or you go meet with sand, they'll buy it. >> And you make it sound so easy. (laughing) We do a lot of security investment if only (indistinct) (laughing) >> I'm a little bit over the top, but CISOs are under a lot of pressure. I would talk to the CISO at Capital One and he was saying that he's on Amazon, now he's going to another cloud, not as a hedge, but he doesn't want to focus development teams. So he's making human resource decisions as well. Again, back to what IT used to be back in the old days where you made a vendor decision, you built around it. So again, clouds play that way. I see that happening. But the question is that I think you nailed this whole idea of cross hairs on the target persona, because you got to know who you are and then go to the market. So if you know you're a problem solving and the lower in the stack, do it and get a beachhead. That's a strategy, you can do that. You can't try to be the platform and then solve a problem at the same time. So you got to be careful. Is that what you were getting at? >> Well, I think you just understand what you're trying to achieve in that line of notion. And how those dynamics work and you just can't drag it out. And they could make it too difficult. Another company I work with is a very strategic cloud data platform. It's a (indistinct) on systems. We're not trying to foist that vision though (laughs) or not adopters today. We're solving some thorny problems with them in the short term, rapid time to value operational needs in scale. And then yeah, once they found success with (indistinct) there's would be an opportunity to be increasing the platform, and an obstacle for those customers. But we're not talking about that. >> Well, Peter, I appreciate you taking the time and coming out of a board meeting, I know that you're super busy and I really appreciate you making time for us. I know you've got an impressive partner in (indistinct) who's a former Sequoia, but Redback Networks part of that company over the years, you guys are doing extremely well, even a unique investment thesis. I'd like you to put the plug in for the firm. I think you guys have a good approach. I like what you guys are doing. You're humble, you don't brag a lot, but you make a lot of great investments. So could you take them in to explain what your investment thesis is and then how that relates to how an enterprise is making their investment thesis? >> Yeah, yeah, for sure. Well, the concept that I described earlier that the modern enterprise movement as a workplace built on data powered by AI. That's what we're trying to work with founders to enable. And also we're investing in companies that build the products and services that enable that modern enterprise to exist. And we do it from very early stages, but with a longterm outlook. So we'll be leading series and series, rounds of investment but staying deeply involved, both operationally financially throughout the whole life cycle of the company. And then we've done that a bunch of times, our goal is always the big independent public company and they don't always make it but enough for them to have it all be worthwhile. An interesting special case of this, and by the way, I think it intersects with some of startup showcase here is in the life sciences. And I know you were highlighting a lot of healthcare websites and deals, and that's a vertical where to disrupt tremendous impact of data both new data availability and new ways to put it to use. I know several of my partners are very focused on that. They call it bio-X data. It's a transformation all on its own. >> That's awesome. And I think that the reason why we're focusing on these verticals is if you have a cloud horizontal scale view and vertically specialized with machine learning, every vertical is impacted by data. It's so interesting that I think, first start, I was probably best time to be a cloud startup right now. I really am bullish on it. So I appreciate you taking the time Peter to come in again from your board meeting, popping out. Thanks for-- (indistinct) Go back in and approve those stock options for all the employees. Yeah, thanks for coming on. Appreciate it. >> All right, thank you John, it's a pleasure. >> Okay, Peter Wagner, Premier VC, very humble Wing.VC is a great firm. Really respect them. They do a lot of great investing investments, Snowflake, and we have Dave Vellante back who knows a lot about Snowflake's been covering like a blanket and Sarbjeet Johal. Cloud Influencer friend of the CUBE. Cloud commentator and cloud experience built clouds, runs clouds now invests. So V. Dave, thanks for coming back on. You heard Peter Wagner at Wing VC. These guys have their roots in networking, which networking back in the day was, V. Dave. You remember the internet Cisco days, remember Cisco, Wellfleet routers. I think Peter invested in Arrow Point, remember Arrow Point, that was about in the 495 belt where you were. >> Lynch's company. >> That was Chris Lynch's company. I think, was he a sales guy there? (indistinct) >> That was his first big hit I think. >> All right, well guys, let's wrap this up. We've got a great program here. Sarbjeet, thank you for coming on. >> No worries. Glad to be here todays. >> Hey, Sarbjeet. >> First of all, really appreciate the Twitter activity lately on the commentary, the observability piece on Jeremy Burton's launch, Dave was phenomenal, but Peter was talking about this dynamic and I think ties this cracking the code thing together, which is there's a product led strategy that feels like a platform, but it's also a tool. In other words, it's not mutually exclusive, the old methods thrown out the window. Land in an account, know what problem you're solving. If you're below the stack, nail it, get data and go from there. If you're a process improvement up the stack, you have to much more of a platform longer-term sale, more business oriented, different motions, different mechanics. What do you think about that? What's your reaction? >> Yeah, I was thinking about this when I was listening to some of the startups pitching, if you will, or talking about what they bring to the table in this cloud scale or cloud era, if you will. And there are tools, there are applications and then they're big monolithic platforms, if you will. And then they're part of the ecosystem. So I think the companies need to know where they play. A startup cannot be platform from the get-go I believe. Now many aspire to be, but they have to start with tooling. I believe in, especially in B2B side of things, and then go into the applications, one way is to go into the application area, if you will, like a very precise use cases for certain verticals and stuff like that. And other parties that are going into the platform, which is like horizontal play, if you will, in technology. So I think they have to understand their age, like how old they are, how new they are, how small they are, because when their size matter when you are procuring as a big business, procuring your technology vendors size matters and the economic viability matters and their proximity to other windows matter as well. So I think we'll jump into that in other discussions later, but I think that's key, as you said. >> I would agree with that. I would phrase it in my mind, somewhat differently from Sarbjeet which is you have product led growth, and that's your early phase and you get product market fit, you get product led growth, and then you expand and there are many, many examples of this, and that's when you... As part of your team expansion strategy, you're going to get into the platform discussion. There's so many examples of that. You take a look at Ali Ghodsi today with what's happening at Databricks, Snowflake is another good example. They've started with product led growth. And then now they're like, "Okay, we've got to expand the team." Okta is another example that just acquired zero. That's about building out the platform, versus more of a point product. And there's just many, many examples of that, but you cannot to your point, very hard to start with a platform. Arm did it, but that was like a one in a million chance. >> It's just harder, especially if it's new and it's not operationalized yet. So one of the things Dave that we've observed the Cloud is some of the best known successes where nobody's not known at all, database we've been covering from the beginning 'cause we were close to that movement when they came out of Berkeley. But they still were misunderstood and they just started generating revenue in only last year. So again, only a few years ago, zero software revenue, now they're approaching a billion dollars. So it's not easy to make these vendor selections anymore. And if you're new and you don't have someone to operate it or your there's no department and the departments changing, that's another problem. These are all like enterprisey problems. What's your thoughts on that, Dave? >> Well, I think there's a big discussion right now when you've been talking all day about how should enterprise think about startups and think about most of these startups they're software companies and software is very capital efficient business. At the same time, these companies are raising hundreds of millions, sometimes over a billion dollars before they go to IPO. Why is that? A lot of it's going to promotion. I look at it as... And there's a big discussion going on but well, maybe sales can be more efficient and more direct and so forth. I really think it comes down to the golden rule. Two things really mattered in the early days in the startup it's sales and engineering. And writers should probably say engineering and sales and start with engineering. And then you got to figure out your go to market. Everything else is peripheral to those two and you don't get those two things right, you struggle. And I think that's what some of these successful startups are proving. >> Sarbjeet, what's your take on that point? >> Could you repeat the point again? Sorry, I lost-- >> As cloud scale comes in this whole idea of competing, the roles are changing. So look at IOT, look at the Edge, for instance, you got all kinds of new use cases that no one actually knows is a problem to solve. It's just pure opportunity. So there's no one's operational I could have a product, but it don't know we can buy it yet. It's a problem. >> Yeah, I think the solutions have to be point solutions and the startups need to focus on the practitioners, number one, not the big buyers, not the IT, if you will, but the line of business, even within that sphere, like just focus on the practitioners who are going to use that technology. I talked to, I think it wasn't Fiddler, no, it was CoreLogics. I think that story was great today earlier in how they kind of struggle in the beginning, they were trying to do a big bang approach as a startup, but then they almost stumbled. And then they found their mojo, if you will. They went to Don the market, actually, that's a very classic theory of disruption, like what we study from Harvard School of Business that you go down the market, go to the non-consumers, because if you're trying to compete head to head with big guys. Because most of the big guys have lot of feature and functionality, especially at the platform level. And if you're trying to innovate in that space, you have to go to the practitioners and solve their core problems and then learn and expand kind of thing. So I think you have to focus on practitioners a lot more than the traditional oracle buyers. >> Sarbjeet, we had a great thread last night in Twitter, on observability that you started. And there's a couple of examples there. Chaos searches and relatively small company right now, they just raised them though. And they're part of this star showcase. And they could've said, "Hey, we're going to go after Splunk." But they chose not to. They said, "Okay, let's kind of disrupt the elk stack and simplify that." Another example is a company observed, you've mentioned Jeremy Burton's company, John. They're focused really on SAS companies. They're not going after initially these complicated enterprise deals because they got to get it right or else they'll get churn, and churn is that silent killer of software companies. >> The interesting other company that was on the showcase was Tetra Science. I don't know if you noticed that one in the life science track, and again, Peter Wagner pointed out the life science. That's an under recognized in the press vertical that's exploding. Certainly during the pandemic you saw it, Tetra science is an R&D cloud, Dave, R&D data cloud. So pharmaceuticals, they need to do their research. So the pandemic has brought to life, this now notion of tapping into data resources, not just data lakes, but like real deal. >> Yeah, you and Natalie and I were talking about that this morning and that's one of the opportunities for R&D and you have all these different data sources and yeah, it's not just about the data lake. It's about the ecosystem that you're building around them. And I see, it's really interesting to juxtapose what Databricks is doing and what Snowflake is doing. They've got different strategies, but they play a part there. You can see how ecosystems can build that system. It's not one company is going to solve all these problems. It's going to really have to be connections across these various companies. And that's what the Cloud enables and ecosystems have all this data flowing that can really drive new insights. >> And I want to call your attention to a tweet Sarbjeet you wrote about Splunk's earnings and they're data companies as well. They got Teresa Carlson there now AWS as the president, working with Doug, that should change the game a little bit more. But there was a thread of the neath there. Andy Thry says to replies to Dave you or Sarbjeet, you, if you're on AWS, they're a fine solution. The world doesn't just revolve around AWS, smiley face. Well, a lot of it does actually. So (laughing) nice point, Andy. But he brings up this thing and Ali brought it up too, Hybrid now is a new operating system for what now Edge does. So we got Mobile World Congress happening this month in person. This whole Telco 5G brings up a whole nother piece of the Cloud puzzle. Jeff Barr pointed out in his keynote, Dave. Guys, I want to get your reaction. The Edge now is... I'm calling it the super Edge because it's not just Edge as we know it before. You're going to have these pops, these points of presence that are going to have wavelength as your spectrum or whatever they have. I think that's the solution for Azure. So you're going to have all this new cloud power for low latency applications. Self-driving delivery VR, AR, gaming, Telemetry data from Teslas, you name it, it's happening. This is huge, what's your thoughts? Sarbjeet, we'll start with you. >> Yeah, I think Edge is like bound to happen. And for many reasons, the volume of data is increasing. Our use cases are also expanding if you will, with the democratization of computer analysis. Specialization of computer, actually Dave wrote extensively about how Intel and other chip players are gearing up for that future if you will. Most of the inference in the AI world will happen in the field close to the workloads if you will, that can be mobility, the self-driving car that can be AR, VR. It can be healthcare. It can be gaming, you name it. Those are the few use cases, which are in the forefront and what alarm or use cases will come into the play I believe. I've said this many times, Edge, I think it will be dominated by the hyperscalers, mainly because they're building their Metro data centers now. And with a very low latency in the Metro areas where the population is, we're serving the people still, not the machines yet, or the empty areas where there is no population. So wherever the population is, all these big players are putting their data centers there. And I think they will dominate the Edge. And I know some Edge lovers. (indistinct) >> Edge huggers. >> Edge huggers, yeah. They don't like the hyperscalers story, but I think that's the way were' going. Why would we go backwards? >> I think you're right, first of all, I agree with the hyperscale dying you look at the top three clouds right now. They're all in the Edge, Hardcore it's a huge competitive battleground, Dave. And I think the missing piece, that's going to be uncovered at Mobile Congress. Maybe they'll miss it this year, but it's the developer traction, whoever wins the developer market or wins the loyalty, winning over the market or having adoption. The applications will drive the Edge. >> And I would add the fourth cloud is Alibaba. Alibaba is actually bigger than Google and they're crushing it as well. But I would say this, first of all, it's popular to say, "Oh not everything's going to move into the Cloud, John, Dave, Sarbjeet." But the fact is that AWS they're trend setter. They are crushing it in terms of features. And you'd look at what they're doing in the plumbing with Annapurna. Everybody's following suit. So you can't just ignore that, number one. Second thing is what is the Edge? Well, the edge is... Where's the logical place to process the data? That's what the Edge is. And I think to your point, both Sarbjeet and John, the Edge is going to be won by developers. It's going to be one by programmability and it's going to be low cost and really super efficient. And most of the data is going to stay at the Edge. And so who is in the best position to actually create that? Is it going to be somebody who was taking an x86 box and throw it over the fence and give it a fancy name with the Edge in it and saying, "Here's our Edge box." No, that's not what's going to win the Edge. And so I think first of all it's huge, it's wide open. And I think where's the innovation coming from? I agree with you it's the hyperscalers. >> I think the developers as John said, developers are the kingmakers. They build the solutions. And in that context, I always talk about the skills gravity, a lot of people are educated in certain technologies and they will keep using those technologies. Their proximity to that technology is huge and they don't want to learn something new. So as humans we just tend to go what we know how to use it. So from that front, I usually talk with consumption economics of cloud and Edge. It has to focus on the practitioners. And in this case, practitioners are developers because you're just cooking up those solutions right now. We're not serving that in huge quantity right now, but-- >> Well, let's unpack that Sarbjeet, let's unpack that 'cause I think you're right on the money on that. The consumption of the tech and also the consumption of the application, the end use and end user. And I think the reason why hyperscalers will continue to dominate besides the fact that they have all the resource and they're going to bring that to the Edge, is that the developers are going to be driving the applications at the Edge. So if you're low latency Edge, that's going to open up new applications, not just the obvious ones I did mention, gaming, VR, AR, metaverse and other things that are obvious. There's going to be non-obvious things that are going to be huge that are going to come out from the developers. But the Cloud native aspect of the hyperscalers, to me is where the scales are tipping, let me explain. IT was built to build a supply resource to the businesses who were writing business applications. Mostly driven by IBM in the mainframe in the old days, Dave, and then IT became IT. Telcos have been OT closed, "This is our thing, that's it." Now they have to open up. And the Cloud native technologies is the fastest way to value. And I think that paths, Sarbjeet is going to be defined by this new developer and this new super Edge concept. So I think it's going to be wide open. I don't know what to say. I can't guess, but it's going to be creative. >> Let me ask you a question. You said years ago, data's new development kit, does low code and no code to Sarbjeet's point, change the equation? In other words, putting data in the hands of those OT professionals, those practitioners who have the context. Does low-code and no-code enable, more of those protocols? I know it's a bromide, but the citizen developer, and what impact does that have? And who's in the best position? >> Well, I think that anything that reduces friction to getting stuff out there that can be automated, will increase the value. And then the question is, that's not even a debate. That's just fact that's going to be like rent, massive rise. Then the issue comes down to who has the best asset? The software asset that's eating the world or the tower and the physical infrastructure. So if the physical infrastructure aka the Telcos, can't generate value fast enough, in my opinion, the private equity will come in and take it over, and then refactor that business model to take advantage of the over the top software model. That to me is the big stare down competition between the Telco world and this new cloud native, whichever one yields in valley is going to blink first, if you say. And I think the Cloud native wins this one hands down because the assets are valuable, but only if they enable the new model. If the old model tries to hang on to the old hog, the old model as the Edge hugger, as Sarbjeet says, they'll just going to slowly milk that cow dry. So it's like, it's over. So to me, they have to move. And I think this Mobile World Congress day, we will see, we will be looking for that. >> Yeah, I think that in the Mobile World Congress context, I think Telcos should partner with the hyperscalers very closely like everybody else has. And they have to cave in. (laughs) I usually say that to them, like the people came in IBM tried to fight and they cave in. Other second tier vendors tried to fight the big cloud vendors like top three or four. And then they cave in. okay, we will serve our stuff through your cloud. And that's where all the buyers are congregating. They're going to buy stuff along with the skills gravity, the feature proximity. I've got another term I'll turn a coin. It matters a lot when you're doing one thing and you want to do another thing when you're doing all this transactional stuff and regular stuff, and now you want to do data science, where do you go? You go next to it, wherever you have been. Your skills are in that same bucket. And then also you don't have to write a new contract with a new vendor, you just go there. So in order to serve, this is a lesson for startups as well. You need to prepare yourself for being in the Cloud marketplaces. You cannot go alone independently to fight. >> Cloud marketplace is going to replace procurement, for sure, we know that. And this brings up the point, Dave, we talked about years ago, remember on the CUBE. We said, there's going to be Tier two clouds. I used that word in quotes cause nothing... What does it even mean Tier two. And we were talking about like Amazon, versus Microsoft and Google. We set at the time and Alibaba but they're in China, put that aside for a second, but the big three. They're going to win it all. And they're all going to be successful to a relative terms, but whoever can enable that second tier. And it ended up happening, Snowflake is that example. As is Databricks as is others. So Google and Microsoft as fast as they can replicate the success of AWS by enabling someone to build their business on their cloud in a way that allows the customer to refactor their business will win. They will win most of the lion's share my opinion. So I think that applies to the Edge as well. So whoever can come in and say... Whichever cloud says, "I'm going to enable the next Snowflake, the next enterprise solution." I think takes it. >> Well, I think that it comes back... Every conversation coming back to the data. And if you think about the prevailing way in which we treated data with the exceptions of the two data driven companies in their quotes is as we've shoved all the data into some single repository and tried to come up with a single version of the truth and it's adjudicated by a centralized team, with hyper specialized roles. And then guess what? The line of business, there's no context for the business in that data architecture or data Corpus, if you will. And then the time it takes to go from idea for a data product or data service commoditization is way too long. And that's changing. And the winners are going to be the ones who are able to exploit this notion of leaving data where it is, the point about data gravity or courting a new term. I liked that, I think you said skills gravity. And then enabling the business lines to have access to their own data teams. That's exactly what Ali Ghodsi, he was saying this morning. And really having the ability to create their own data products without having to go bow down to an ivory tower. That is an emerging model. All right, well guys, I really appreciate the wrap up here, Dave and Sarbjeet. I'd love to get your final thoughts. I'll just start by saying that one of the highlights for me was the luminary guests size of 15 great companies, the luminary guests we had from our community on our keynotes today, but Ali Ghodsi said, "Don't listen to what everyone's saying in the press." That was his position. He says, "You got to figure out where the puck's going." He didn't say that, but I'm saying, I'm paraphrasing what he said. And I love how he brought up Sky Cloud. I call it Sky net. That's an interesting philosophy. And then he also brought up that machine learning auto ML has got to be table stakes. So I think to me, that's the highlight walk away. And the second one is this idea that the enterprises have to have a new way to procure and not just the consumption, but some vendor selection. I think it's going to be very interesting as value can be proved with data. So maybe the procurement process becomes, here's a beachhead, here's a little bit of data. Let me see what it can do. >> I would say... Again, I said it was this morning, that the big four have given... Last year they spent a hundred billion dollars more on CapEx. To me, that's a gift. In so many companies, especially focusing on trying to hang onto the legacy business. They're saying, "Well not everything's going to move to the Cloud." Whatever, the narrative should change to, "Hey, thank you for that gift. We're now going to build value on top of the Cloud." Ali Ghodsi laid that out, how Databricks is doing it. And it's clearly what Snowflake's new with the data cloud. It basically a layer that abstracts all that underlying complexity and add value on top. Eventually going out to the Edge. That's a value added model that's enabled by the hyperscalers. And that to me, if I have to evaluate where I'm going to place my bets as a CIO or IT practitioner, I'm going to look at who are the ones that are actually embracing that investment that's been made and adding value on top in a way that can drive my data-driven, my digital business or whatever buzzword you want to throw on. >> Yeah, I think we were talking about the startups in today's sessions. I think for startups, my advice is to be as close as you can be to hyperscalers and anybody who awards them, they will cave in at the end of the day, because that's where the whole span of gravity is. That's what the innovation gravity is, everybody's gravitating towards that. And I would say quite a few times in the last couple of years that the rate of innovation happening in a non-cloud companies, when I talk about non-cloud means are not public companies. I think it's like diminishing, if you will, as compared to in cloud, there's a lot of innovation. The Cloud companies are not paying by power people anymore. They have all sophisticated platforms and leverage those, and also leverage the marketplaces and leverage their buyers. And the key will be how you highlight yourself in that cloud market place if you will. It's like in a grocery store where your product is placed and you have to market around it, and you have to have a good story telling team in place as well after you do the product market fit. I think that's a key. I think just being close to the Cloud providers, that's the way to go for startups. >> Real, real quick. Each of you talk about what it takes to crack the code for the enterprise in the modern era now. Dave, we'll start with you. What's it take? (indistinct) >> You got to have it be solving a problem that is 10X better at one 10th a cost of anybody else, if you're a small company, that rule number one. Number two is you obviously got to get product market fit. You got to then figure out. And I think, and again, you're in your early phases, you have to be almost processed builders, figure out... Your KPIs should all be built around retention. How do I define customer success? How do I keep customers and how do I make them loyal so that I know that my cost of acquisition is going to be at least one-third or lower than my lifetime value of that customer? So you've got to nail that. And then once you nail that, you've got to codify that process in the next phase, which really probably gets into your platform discussion. And that's really where you can start to standardize and scale and figure out your go to market and the relationship between marketing spend and sales productivity. And then when you get that, then you got to move on to figure out your Mot. Your Mot might just be a brand. It might be some secret sauce, but more often than not though, it's going to be the relationship that you build. And I think you've got to think about those phases and in today's world, you got to move really fast. Sarbjeet, real quick. What's the secret to crack the code? >> I think the secret to crack the code is partnership and alliances. As a small company selling to the bigger enterprises, the vendors size will be one of the big objections. Even if they don't say it, it's on the back of their mind, "What if these guys disappear tomorrow what would we do if we pick this technology?" And another thing is like, if you're building on the left side, which is the developer side, not on the right side, which is the operations or production side, if you will, you have to understand the sales cycles are longer on the right side and left side is easier to get to, but that's why we see a lot more startups. And on the left side of your DevOps space, if you will, because it's easier to sell to practitioners and market to them and then show the value correctly. And also understand that on the left side, the developers are very know how hungry, on the right side people are very cost-conscious. So understanding the traits of these different personas, if you will buyers, it will, I think set you apart. And as Dave said, you have to solve a problem, focus on practitioners first, because you're small. You have to solve political problems very well. And then you can expand. >> Well, guys, I really appreciate the time. Dave, we're going to do more of these, Sarbjeet we're going to do more of these. We're going to add more community to it. We're going to add our community rooms next time. We're going to do these quarterly and try to do them as more frequently, we learned a lot and we still got a lot more to learn. There's a lot more contribution out in the community that we're going to tap into. Certainly the CUBE Club as we call it, Dave. We're going to build this actively around Cloud. This is another 20 years. The Edge brings us more life with Cloud, it's really exciting. And again, enterprise is no longer an enterprise, it's just the world now. So great companies here, the next Databricks, the next IPO. The next big thing is in this list, Dave. >> Hey, John, we'll see you in Barcelona. Looking forward to that. Sarbjeet, I know in a second half, we're going to run into each other. So (indistinct) thank you John. >> Trouble has started. Great talking to you guys today and have fun in Barcelona and keep us informed. >> Thanks for coming. I want to thank Natalie Erlich who's in Rome right now. She's probably well past her bedtime, but she kicked it off and emceeing and hosting with Dave and I for this AW startup showcase. This is batch two episode two day. What do we call this? It's like a release so that the next 15 startups are coming. So we'll figure it out. (laughs) Thanks for watching everyone. Thanks. (bright music)

Published Date : Jun 24 2021

SUMMARY :

on cracking the code in the enterprise, Thank you for having and the buyers are thinking differently. I get the privilege of working and how you see enterprises in the enterprise to make a and part of the way in which the criteria for how to evaluate. is that going to lead to, because of the go to markets are changing. and making the art of sales and they had a great and investing in the ecosystem. I really appreciate you having me. and some of the winners and the modern enterprise and be in the wrong spot. the way you think about I got to ask you because And one of the reasons you go there not just to be an interesting and you get a little position, it's like, "I'm always the last to know." on the firing lines. And you make it sound and then go to the market. and you just can't drag it out. that company over the years, and by the way, I think it intersects the time Peter to come in All right, thank you Cloud Influencer friend of the CUBE. I think, was he a sales guy there? Sarbjeet, thank you for coming on. Glad to be here todays. lately on the commentary, and the economic viability matters and you get product market fit, and the departments changing, And then you got to figure is a problem to solve. and the startups need to focus on observability that you started. So the pandemic has brought to life, that's one of the opportunities to a tweet Sarbjeet you to the workloads if you They don't like the hyperscalers story, but it's the developer traction, And I think to your point, I always talk about the skills gravity, is that the developers but the citizen developer, So if the physical You go next to it, wherever you have been. the customer to refactor And really having the ability to create And that to me, if I have to evaluate And the key will be how for the enterprise in the modern era now. What's the secret to crack the code? And on the left side of your So great companies here, the So (indistinct) thank you John. Great talking to you guys It's like a release so that the

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavePERSON

0.99+

KatiePERSON

0.99+

JohnPERSON

0.99+

Natalie ErlichPERSON

0.99+

MicrosoftORGANIZATION

0.99+

SarbjeetPERSON

0.99+

GoogleORGANIZATION

0.99+

Katie DruckerPERSON

0.99+

AmazonORGANIZATION

0.99+

Peter WagnerPERSON

0.99+

TelcosORGANIZATION

0.99+

PeterPERSON

0.99+

NataliePERSON

0.99+

Ali GhodsiPERSON

0.99+

AWSORGANIZATION

0.99+

IBMORGANIZATION

0.99+

Teresa CarlsonPERSON

0.99+

Jeff BarrPERSON

0.99+

AlibabaORGANIZATION

0.99+

AndyPERSON

0.99+

CiscoORGANIZATION

0.99+

Andy ThryPERSON

0.99+

BarcelonaLOCATION

0.99+

AliPERSON

0.99+

RomeLOCATION

0.99+

Madrona Venture GroupORGANIZATION

0.99+

Jeremy BurtonPERSON

0.99+

Redback NetworksORGANIZATION

0.99+

MadronaORGANIZATION

0.99+

Jeremy BurtonPERSON

0.99+

DatabricksORGANIZATION

0.99+

TelcoORGANIZATION

0.99+

Dave VellantePERSON

0.99+

DougPERSON

0.99+

WellfleetORGANIZATION

0.99+

Harvard School of BusinessORGANIZATION

0.99+

Last yearDATE

0.99+

BerkeleyLOCATION

0.99+

Ted Kummert, UiPath | The Release Show: Post Event Analysis


 

>> Narrator: From around the globe it's theCUBE! With digital coverage of UiPath Live, the release show. Brought to you by UiPath. >> Hi everybody this is Dave Valenti, welcome back to our RPA Drill Down. Ted Kummert is here he is Executive Vice President for Products and Engineering at UiPath. Ted, thanks for coming on, great to see you. >> Dave, it's great to be here, thanks so much. >> Dave your background is pretty interesting, you started as a Silicon Valley Engineer, they pulled you out, you did a huge stint at Microsoft. You got experience in SAS, you've got VC chops with Madrona. And at Microsoft you saw it all, the NT, the CE Space, Workflow, even MSN you did stuff with MSN, and then the all important data. So I'm interested in what attracted you to UiPath? >> Yeah Dave, I feel super fortunate to have worked in the industry in this span of time, it's been an amazing journey, and I had a great run at Microsoft it was fantastic. You mentioned one experience in the middle there, when I first went to the server business, the enterprise business, I owned our Integration and Workflow products, and I would say that's the first I encountered this idea. Often in the software industry there are ideas that have been around for a long time, and what we're doing is refining how we're delivering them. And we had ideas we talked about in terms of Business Process Management, Business Activity Monitoring, Workflow. The ways to efficiently able somebody to express the business process in a piece of software. Bring systems together, make everybody productive, bring humans into it. These were the ideas we talked about. Now in reality there were some real gaps. Because what happened in the technology was pretty different from what the actual business process was. And so lets fast forward then, I met Madrona Venture Group, Seattle based Venture Capital Firm. We actually made a decision to participate in one of UiPath's fundraising rounds. And that's the first I really came encountered with the company and had to have more than an intellectual understanding of RPA. 'Cause when I first saw it, I said "oh, I think that's desktop automation" I didn't look very close, maybe that's going to run out of runway, whatever. And then I got more acquainted with it and figured out "Oh, there's a much bigger idea here". And the power is that by really considering the process and the implementation from the humans work in, then you have an opportunity really to automate the real work. Not that what we were doing before wasn't significant, this is just that much more powerful. And that's when I got really excited. And then the companies statistics and growth and everything else just speaks for itself, in terms of an opportunity to work, I believe, in one of the most significant platforms going in the enterprise today, and work at one of the fastest growing companies around. It was like almost an automatic decision to decide to come to the company. >> Well you know, you bring up a good point you think about software historically through our industry, a lot of it was 'okay here's this software, now figure out how to map your processes to make it all work' and today the processes, especially you think about this pandemic, the processes are unknown. And so the software really has to be adaptable. So I'm wondering, and essentially we're talking about a fundamental shift in the way we work. And is there really a fundamental shift going on in how we write software and how would you describe that? >> Well there certainly are, and in a way that's the job of what we do when we build platforms for the enterprises, is try and give our customers a new way to get work done, that's more efficient and helps them build more powerful applications. And that's exactly what RPA does, the efficiency, it's not that this is the only way in software to express a lot of this, it just happens to be the quickest. You know in most ways. Especially as you start thinking about initiatives like our StudioX product to what we talk about as enabling citizen developers. It's an expression that allows customers to just do what they could have done otherwise much more quickly and efficient. And the value on that is always high, certainly in an unknown era like this, it's even more valuable, there are specific processes we've been helping automate in the healthcare, in financial services, with things like SBA Loan Processing, that we weren't thinking about six months ago, or they weren't thinking about six months ago. We're all thinking about how we're reinventing the way we work as individuals and corporations because of what's going on with the coronavirus crisis, having a platform like this that gives you agility and mapping the real work to what your computer state and applications all know how to do, is even more valuable in a climate like that. >> What attracted us originally to UiPath, we knew Bobby Patrick CMO, he said "Dave, go download a copy, go build some automations and go try it with some other companies". So that really struck us as wow, this is actually quite simple. Yet at the same time, and so you've of course been automating all these simple tasks, but now you've got real aspiration, you're glomming on to this term of Hyperautomation, you've made some acquisitions, you've got a vision, that really has taken you beyond 'paving the cow path' I sometimes say, of all these existing processes. It's really trying to discover new processes and opportunities for automation, which you would think after 50 or whatever years we've been in this industry, we'd have attacked a lot of it, but wow, seems like we have a long way to go. Again, especially what we're learning through this pandemic. Your thoughts on that? >> Yeah, I'd say Hyperautomation. It's actually a Gartner term, it's not our term. But there is a bigger idea here, built around the core automation platform. So let's talk for a second just what's not about the core platform and then what Hyperautomation really means around that. And I think of that as the bookends of how do I discover and plan, how do I improve my ability to do more automations, and find the real opportunities that I have. And how do I measure and optimize? And that's a lot of what we delivered in 20.4 as a new capability. So let's talk about discover and plan. One aspect of that is the wisdom of the crowd. We have a product we call Automation Hub that is all about that. Enabling people who have ideas, they're the ones doing the work, they have the observation into what efficiencies can be. Enabling them to either with our Ask Capture Utility capture that and document that, or just directly document that. And then, people across the company can then collaborate eventually moving on building the best ideas out of that. So there's capturing the crowd, and then there's a more scientific way of capturing actually what the opportunities are. So we've got two products we introduced. One is process mining, and process mining is about going outside in from the, let's call it the larger processes, more end to end processes in the enterprise. Things like order-to-cash and procure-to-pay, helping you understand by watching the events, and doing the analytics around that, where your bottle necks, where are you opportunities. And then task mining said "let's watch an individual, or group of individuals, what their tasks are, let's watch the log of events there, let's apply some machine learning processing to that, and say here's the repetitive things we've found." And really helping you then scientifically discover what your opportunities are. And these ideas have been along for a long time, process mining is not new. But the connection to an automation platform, we think is a new and powerful idea, and something we plan to invest a lot in going forward. So that's the first bookend. And then the second bookend is really about attaching rich analytics, so how do I measure it, so there's operationally how are my robots doing? And then there's everything down to return on investment. How do I understand how they are performing, verses what I would have spent if I was continuing to do them the old way. >> Yeah that's big 'cause (laughing) the hero reports for the executives to say "hey, this is actually working" but at the same time you've got to take a systems view. You don't want to just optimize one part of the system at the detriment to others. So you talk about process mining, which is kind of discovering the backend systems, ERP and the like, where the task mining it sounds like it's more the collaboration and front end. So that whole system thinking, really applies, doesn't it? >> Yeah. Very much so. Another part of what we talked about then, in the system is, how do we capture the ideas and how do we enable more people to build these automations? And that really gets down to, we talk about it in our company level vision, is a robot for every person. Every person should have a digital assistant. It can help you with things you do less frequently, it can help you with things you do all the time to do your job. And how do we help you create those? We've released a new tool we call StudioX. So for our RPA Developers we have Studio, and StudioX is really trying to enable a citizen developer. It's not unlike the art that we saw in Business Intelligence there was the era where analytics and reporting were the domain of experts, and they produced formalized reports that people could consume. But the people that had the questions would have to work with them and couldn't do the work themselves. And then along comes ClickView and Tableau and Power BI enabling the self services model, and all of a sudden people could do that work themselves, and that enabled powerful things. We think the same arch happens here, and StudioX is really our way of enabling that, citizen developer with the ideas to get some automation work done on their own. >> Got a lot in this announcement, things like document understanding, bring your own AI with AI fabric, how are you able to launch so many products, and have them fit together, you've made some acquisitions. Can you talk about the architecture that enables you to do that? >> Yeah, it's clearly in terms of ambition, and I've been there for 10 weeks, but in terms of ambition you don't have to have been there when they started the release after Forward III in October to know that this is the most ambitious thing that this company has ever done from a release perspective. Just in terms of the surface area we're delivering across now as an organization, is substantive. We talk about 1,000 feature improvements, 100's of discreet features, new products, as well as now our automation cloud has become generally available as well. So we've had muscle building over this past time to become world class at offering SAS, in addition to on-premises. And then we've got this big surface area, and architecture is a key component of how you can do this. How do you deliver efficiently the same software on-premises and in the cloud? Well you do that by having the right architecture and making the right bets. And certainly you look forward, how are companies doing this today? It's really all about Cloud-Native Platform. But it's about an architecture such that we can do that efficiently. So there is a lot about just your technical strategy. And then it's just about a ton of discipline and customer focus. It keeps you focused on the right things. StudioX was a great example of we were led by customers through a lot of what we actually delivered, a couple of the major features in it, certainly the out of box templates, the studio governance features, came out of customer suggestions. I think we had about 100 that we have sitting in the backlog, a lot of which we've already done, and really being disciplined and really focused on what customers are telling. So make sure you have the right technical strategy and architecture, really follow your customers, and really stay disciplined and focused on what matters most as you execute on the release. >> What can we learn from previous examples, I think about for instance SQL Server, you obviously have some knowledge in it, it started out pretty simple workloads and then at the time we all said "wow, it's a lot more powerful to come from below that it is, if a Db2, or an Oracle sort of goes down market", Microsoft proved that, obviously built in the robustness necessary, is there a similar metaphor here with regard to things like governance and security, just in terms of where UiPath started and where you see it going? >> Well I think the similarities have more to do with we have an idea of a bigger platform that we're now delivering against. In the database market, that was, we started, SQL Server started out as more of just a transactional database product, and ultimately grew to all of the workloads in the data platform, including transaction for transactional apps, data warehousing and as well as business intelligence. I see the same analogy here of thinking more broadly of the needs, and what the ability of an integrated platform, what it can do to enable great things for customers, I think that's a very consistent thing. And I think another consistent thing is know who you are. SQL Server knew exactly who it had to be when it entered the database market. That it was going to set a new benchmark on simplicity, TCO, and that was going to be the way it differentiated. In this case, we're out ahead of the market, we have a vision that's broader than a lot of the market is today. I think we see a lot of people coming in to this space, but we see them building to where we were, and we're out ahead. So we are operating from a leadership position, and I'm not going to tell you one's easier that the other, and both you have to execute with great urgency. But we're really executing out ahead, so we've got to keep thinking about, and there's no one's tail lights to follow, we have to be the ones really blazing the trail on what all of this means. >> I want to ask you about this incorporation of existing systems. Some markets they take off, it's kind of a one shot deal, and the market just embeds. I think you guys have bigger aspirations than that, I look at it like a service now, misunderstood early on, built the platform and now really is fundamental part of a lot of enterprises. I also look at things like EDW, which again, you have some experience in. In my view it failed to live up to a lot of it's promises even though it delivered a lot of value. You look at some of the big data initiatives, you know EDW still plugs in, it's the system of record, okay that's fine. How do you see RPA evolving? Are we going to incorporate, do we have to embrace existing business process systems? Or is this largely a do-over in your opinion? >> Well I think it's certainly about a new way of building automation, and it's starting to incorporate and include the other ways, for instance in the current release we added support for long running workflow, it was about human workflow based scenarios, now the human is collaborating with the robot, and we built those capabilities. So I do see us combining some of the old and new way. I think one of the most significant things here, is also that impact that AI and ML based technologies and skills can have on the power of the automations that we deliver. We've certainly got a surface area that, I think about our AI and ML strategy in two parts, that we are building first class first party skills, that we're including in the platform, and then we're building a platform for third parties and customers to bring their what their data science teams have delivered, so those can also be a part of our ecosystem, and part of automations. And so things like document understanding, how do I easily extract data from more structured, semi-structured and completely unstructured documents, accurately? And include those in my automations. Computer vision which gives us an ability to automate at a UI level across other types of systems than say a Windows and a browser base application. And task mining is built on a very robust, multi layer ML system, and the innovation opportunity that I think just consider there, you know continue there. You think it's a macro level if there's aspects of machine learning that are about captured human knowledge, well what exactly is an automation that captured where you're capturing a lot of human knowledge. The impact of ML and AI are going to be significant going out into the future. >> Yeah, I want to ask you about them, and I think a lot of people are just afraid of AI, as a separate thing and they have to figure out how to operationalize it. And I think companies like UiPath are really in a position to embed UI into applications AI into applications everywhere, so that maybe those folks that haven't climbed on the digital bandwagon, who are now with this pandemic are realizing "wow, we better accelerate this" they can actually tap machine intelligence through your products and others as well. Your thoughts on that sort of narrative? >> Yeah, I agree with that point of view, it's AI and ML is still maturing discipline across the industry. And you have to build new muscle, and you build new muscle and data science, and it forces you to think about data and how you manage your data in a different way. And that's a journey we've been on as a company to not only build our first party skills, but also to build the platform. It's what's given us the knowledge that to help us figure out, well what do we need to include here so our customers can bring their skills, actually to our platform, and I do think this is a place where we're going to see the real impact of AI and ML in a broader way. Based on the kind of apps it is and the kind of skills we can bring to bear. >> Okay last question, you're ten weeks in, when you're 50, 100, 200 weeks in, what should we be watching, what do you want to have accomplished? >> Well we're listening, we're obviously listening closely to our customers, right now we're still having a great week, 'cause there's nothing like shipping new software. So right now we're actually thinking deeply about where we're headed next. We see there's lots of opportunities and robot for every person, and that initiative, and so we're launched a bunch of important new capabilities there, and we're going to keep working with the market to understand how we can, how we can add additional capability there. We've just got the GA of our automation cloud, I think you should expect more and more services in our automation cloud going forward. I think this area we talked about, in terms of AI and ML and those technologies, I think you should expect more investment and innovation there from us and the community, helping our customers, and I think you will also see us then, as we talked about this convergence of the ways we bring together systems through integrate and build business process, I think we'll see a convergence into the platform of more of those methods. I look ahead to the next releases, and want to see us making some very significant releases that are advancing all of those things, and continuing our leadership in what we talk about now as the Hyperautomation platform. >> Well Ted, lot of innovation opportunities and of course everybody's hopping on the automation bandwagon. Everybody's going to want a piece of your RPA hide, and you're in the lead, we're really excited for you, we're excited to have you on theCUBE, so thanks very much for all your time and your insight. Really appreciate it. >> Yeah, thanks Dave, great to spend this time with you. >> All right thank you for watching everybody, this is Dave Velanti for theCUBE, and our RPA Drill Down Series, keep it right there we'll be right back, right after this short break. (calming instrumental music)

Published Date : May 21 2020

SUMMARY :

Brought to you by UiPath. great to see you. Dave, it's great to the NT, the CE Space, Workflow, the company and had to have more than an a fundamental shift in the way we work. and mapping the real work Yet at the same time, and find the real ERP and the like, And how do we help you create those? how are you able to and making the right bets. and I'm not going to tell you one's easier and the market just embeds. and include the other ways, and I think a lot of people and it forces you to think and I think you will also see us then, and of course everybody's hopping on the great to spend this time with you. and our RPA Drill Down Series,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Ted KummertPERSON

0.99+

DavePERSON

0.99+

Dave ValentiPERSON

0.99+

Dave VelantiPERSON

0.99+

10 weeksQUANTITY

0.99+

TedPERSON

0.99+

MicrosoftORGANIZATION

0.99+

Madrona Venture GroupORGANIZATION

0.99+

ten weeksQUANTITY

0.99+

100QUANTITY

0.99+

OctoberDATE

0.99+

UiPathORGANIZATION

0.99+

MSNORGANIZATION

0.99+

SeattleLOCATION

0.99+

SQL ServerTITLE

0.99+

50QUANTITY

0.99+

SQL ServerTITLE

0.99+

firstQUANTITY

0.99+

first bookendQUANTITY

0.99+

two partsQUANTITY

0.98+

MadronaORGANIZATION

0.98+

Venture Capital FirmORGANIZATION

0.98+

second bookendQUANTITY

0.98+

bothQUANTITY

0.98+

200 weeksQUANTITY

0.98+

SQLTITLE

0.98+

OneQUANTITY

0.98+

todayDATE

0.98+

oneQUANTITY

0.98+

two productsQUANTITY

0.98+

TableauTITLE

0.98+

OracleORGANIZATION

0.97+

one experienceQUANTITY

0.97+

Power BITITLE

0.97+

about 100QUANTITY

0.97+

WindowsTITLE

0.96+

EDWORGANIZATION

0.96+

GartnerORGANIZATION

0.96+

ClickViewTITLE

0.95+

CE SpaceORGANIZATION

0.94+

one partQUANTITY

0.94+

100'sQUANTITY

0.94+

Executive Vice PresidentPERSON

0.92+

six months agoDATE

0.92+

Forward IIITITLE

0.91+

coronavirus crisisEVENT

0.91+

first partyQUANTITY

0.91+

SASORGANIZATION

0.86+

One aspectQUANTITY

0.86+

UiPathPERSON

0.86+

Bobby Patrick CMOPERSON

0.83+

one shotQUANTITY

0.83+

20.4QUANTITY

0.81+

StudioXTITLE

0.81+

WorkflowORGANIZATION

0.8+

first classQUANTITY

0.79+

StudioXORGANIZATION

0.79+

HubORGANIZATION

0.78+

theCUBEORGANIZATION

0.78+

HyperautomationORGANIZATION

0.77+

UiPath LiveTITLE

0.77+

about 1,000 feature improvementsQUANTITY

0.74+

about six months agoDATE

0.73+

pandemicEVENT

0.7+

secondQUANTITY

0.66+

StudioTITLE

0.66+

NTORGANIZATION

0.65+

SBAORGANIZATION

0.61+

Silicon ValleyLOCATION

0.55+