Emilia A'Bell, Platform9
(Gentle music)
>> Hello and welcome to theCUBE here in Palo Alto, California. I'm John Furrier, joined by Platform9's Emilia A'Bell, the Chief Revenue Officer, really digging into the conversation around Kubernetes, cloud native, and the journey to this next generation cloud. Emilia, thanks for coming in and joining me today.
>> Thank you, thank you. Great pleasure to be here.
>> So, CRO, Chief Revenue Officer. So you're mainly in charge of serving the customers, making sure they're happy with the solution you guys have.
>> That's right.
>> And this market must be pretty exciting.
>> Oh, it's very exciting, and we are seeing a lot of new use cases coming up all the time. So part of my job is to obtain new customers, but then of course service our existing customers, and then there's a constant evolution. Nothing is standing still right now.
>> We've had all your co-founders on the show here, and we've kind of talked about the trends and where you guys have come from, where you guys are going now. And it's interesting, if you look at the cloud native market, the scale is still huge. You're seeing now this next wave of AI coming on, which I call the real web three in my mind, in terms of the next experiences, and it really still points to data infrastructure scale. These next gen apps are coming. And so that's being built on the previous generation of DevSecOps.
>> Right.
>> And so a lot of enterprises are having to grow up really, really fast.
>> Right.
>> And figure out, okay, I've got to have scale, I've got large scale data, I've got horizontal scalability, I've got to apply machine learning now, the new software engineering practice. And then, oh, by the way, I've got the Kubernetes clusters I've got to manage.
>> Right.
>> I've got containers, whether it's the security problems. This is a really complicated but important area of build out right now in the marketplace.
>> Right. What are you seeing?
>> So it's really important that the infrastructure is not the hindrance in these cases. One of our customers is in fact a large AI company. I met with them yesterday and asked them, you know, why are you giving that to us? You've got really smart engineers. They can run and create the infrastructure, you know, in a custom way that you want it. And they said, we've got to focus on what's core to our business. There's plenty of work to do just on delivering the AI capabilities. We can't get bogged down in the infrastructure. We don't want to have people running the engine, we want them driving the car. We want them creating value on top of that. So they can't have the infrastructure being the bottleneck for them.
>> It's interesting, the AI companies — that's their value proposition to their customers, that they don't want the technical talent
>> Right.
>> working on, you know, non-differentiated heavy lifting things.
>> Right.
>> And automate those and scale it up. Can you talk about the problem that you guys are solving? Because there's a lot going on here.
>> Yeah.
>> You can look at all aspects of the DevOps scale. There's a lot of little problems, some big problems. What are you guys focusing on? What's the bullseye for Platform9?
>> Okay, so the bullseye is that Kubernetes infrastructure is really hard, right? It's really hard to create and run. So we introduce a time-to-market efficiency. Let's get this up and running, and let's get you into production and producing results for your customers fast.
But at the same time, let's reduce your cost and complexity and increase reliability. So...
>> And what are some of the things that they're having problems with that are breaking? Is it more updates on code? Is it the size of the clusters they have? Is it more operational? What are some of the things that kind of get them to call you guys up? What's the main thing?
>> It's the operations. It's all operations. So what happens is that if you look at a Kubernetes platform, it's made up of many, many components. And that's where it gets complex. It's not just Kubernetes. There are load balancers, networking, there's observability. All these things have to operate together. And all the piece parts have to be upgraded and maintained. The integrations need to work, and you need to have probes into the system to predict where problems could be coming from. So the operational part of it is complex. So you need to be observing not only your clusters and the health of the clusters and the nodes and so on, but the health of the platform itself.
>> We're going to get Peter Frey on here afterwards to talk about some of the technical issues on deployments. But what's the big decision for the customer? Because there are kind of two schools of thought. One is, I'm going to build my own and have my team build it, or I'm going to go with a partner,
>> Right.
>> say Platform9. What are the trade-offs there? Because it seems to me that there's a certain area where it's core competency, but I can outsource it or partner with it and work with Platform9 versus trying to take it all on internally,
>> Right.
>> which requires more cost. So there's a line where customers have to figure out that piece.
>> Right.
>> What's your view on that? Because I'm hearing more people saying, hey, I want to focus my people on solutions, the app side, not so much the ops.
>> Right.
>> What's the trade-off? How do you talk about it?
>> It's a really interesting question, because most companies think they have two options. It's either a DIY option, and they love that — engineers love playing with the new and the latest — and then they think the other option is going to the public cloud and having it semi-managed for them. And you get very different results out of those. So in DIY you get flexibility, because you get to choose your infrastructure, but then you've got all the complexities of the DIY piece. You've got to not only choose all your components, but you've got to keep them working. Now, if you go to the public cloud option, you lose flexibility, because a lot of those choices are made for you, but you gain agility, because quite frankly it's really easy to spin up clusters. So where we sit is in the middle: we bring the agility and the flexibility, because we bring the control plane that allows you to spin up clusters and lifecycle manage them very quickly. So the agility's there, but you can do it on the infrastructure of your choice. And in the DIY culture, one of the hardest things to do actually is to convince them they don't have to do it themselves. They can focus on higher value activities, which are more focused on delivering outcomes to their customers.
>> So you provide the solution that allows them to feel like they're building it themselves.
>> Correct.
>> And get the scale and speed and the efficiencies of the ops side. So it's kind of the best of both worlds. It's not a full outsource.
>> Right, right.
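Emilia's point about probing the health of the clusters, the nodes, and the platform itself is the kind of check an operations team ends up automating. Purely as an illustration, and not a description of Platform9's actual tooling, here is a minimal sketch using the open-source Kubernetes Python client to surface node readiness and the state of platform components:

```python
# Minimal health-probe sketch using the official Kubernetes Python client.
# Assumes a reachable kubeconfig; thresholds and namespaces are illustrative.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
v1 = client.CoreV1Api()

# Node health: a node is healthy when its "Ready" condition is True.
for node in v1.list_node().items:
    ready = next(
        (c.status for c in node.status.conditions if c.type == "Ready"), "Unknown"
    )
    print(f"node={node.metadata.name} ready={ready}")

# Platform health: check the control-plane and add-on pods themselves
# (CNI, DNS, ingress, observability), not just application workloads.
for pod in v1.list_namespaced_pod(namespace="kube-system").items:
    if pod.status.phase != "Running":
        print(f"unhealthy platform pod: {pod.metadata.name} phase={pod.status.phase}")
```

A managed control plane of the kind Emilia describes would run checks like these continuously across every cluster it manages and feed them into alerting and auto-ticketing.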
>> You're bringing them in to make their jobs easier.
>> Right, that's right. So they get choices.
>> Yeah.
>> They get choices on how they build it, and then we run and operate it for them. But they have all the observability. The benefit is that if we are managing their operations — and most of our customers choose the managed operations piece of it — then if something goes wrong, we fix it, and they get told, oh, by the way, you had a problem, we've dealt with it. In the other model, they've got to create all that observability themselves, they've got to get ahead of the issues themselves, and then they've got to raise tickets to whoever they need to raise tickets to. Whereas we have things like auto ticket generation and so on. Look, just drive the car, let us worry about the engine and all of that. Let us deal with that. You can choose whatever you want about the engine, but let us manage it for you.
>> What do you say to folks out there that may have a need for Platform9? What are the signals inside their company that they should be calling you guys up and leaning in with Platform9?
>> Right.
>> Is it more sprawl on clusters? Is it more errors? Is it more tickets? Is it more hassle? What are some of the signs? If someone's watching this, saying, hey, I have an issue with this.
>> I would say, if there are operational inefficiencies, if you can't get things to market fast enough because you are building this and it's just taking too long, if you're spending way too much time operationally on the infrastructure, then you are not using your resources where they would best be used. And that is delivering services to the customer.
>> We had your co-founder Madhura on for International Women's Day, and she was talking about how they love to solve complex problems on the engineering team at Platform9. It's going to get pretty complex with the edge emerging.
>> Indeed.
>> And cloud native on-premises distributed computing.
>> Indeed.
>> That's essentially what it is. That's kind of the core DNA of the team.
>> Yeah.
>> How does that translate to the customers? Because IT seems to be, okay, I have virtual machines, they were great, now I've got to scale up and convert over, transform to containers, Kubernetes.
>> Right.
>> And then large scale applications.
>> Right. So when it comes to edge, it gets complex pretty fast, because it's highly distributed. So how do you have standardization and governance across all the different edge locations? What we bring into play is an ability to, at each edge location, provision from bare metal up, all the way up to the application. So let's say you have thousands of stores and you want to modernize those stores. Rather than having a server sent somewhere to have an image loaded up, then shipping it out, then sending a technician to the store and implementing it all there — forget all that. That's just a ridiculous waste of time. So what we've done is we've created the ability where the server can just be sent to the store. You can get your barista or your chef to just plug it in, right? You don't need to send any technical person over there. As long as we have access to it, we provision the whole thing from bare metal up, and then we can maintain it according to the standards that are needed and upgrade accordingly. And that gives standardization across all your stores, or edge locations, or 5G towers, or whatever it is — distribution centers. And we can create nice governance and good standardization, which allows them to innovate fast as well.
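To make the zero-touch flow Emilia describes concrete, here is a purely hypothetical sketch of what an operator-side loop could look like. The endpoint, payload fields, and node states are all invented for illustration; Platform9's actual APIs are not shown or described in this conversation.

```python
# Hypothetical zero-touch edge provisioning loop; every name here is illustrative.
import time
import requests

MGMT_API = "https://edge-mgmt.example.com/api"  # hypothetical control-plane endpoint
STORE_PROFILE = {
    "os_image": "minimal-linux-lts",
    "kubernetes_version": "1.27",
    "addons": ["cni", "monitoring", "ingress"],
    "apps": ["point-of-sale", "inventory-sync"],
}

def provision_new_edges():
    # Devices phone home once someone plugs them in; anything still
    # unprovisioned gets the standard store profile applied remotely.
    nodes = requests.get(f"{MGMT_API}/edge-nodes", params={"state": "unprovisioned"}).json()
    for node in nodes:
        requests.post(f"{MGMT_API}/edge-nodes/{node['id']}/provision", json=STORE_PROFILE)
        print(f"provisioning {node['serial']} at {node['location']}")

if __name__ == "__main__":
    while True:
        provision_new_edges()
        time.sleep(60)
```

The design point is that a standard declarative profile, not a technician on site, carries all the knowledge: the same definition is applied to every store, which is what gives the standardization and governance she mentions.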
>> So this is a real opportunity for you guys.
>> Yeah.
>> This is an advantage from your expertise.
>> Yes.
>> The edge piece — dropping in a box, self-provisioning.
>> That's right. So yeah.
>> Can people do that? What's the...
>> No, actually it's very difficult to do. From my understanding, we're the only people that can provision it from bare metal up, right? So if anyone has a different story, I'd love to hear about that. But that's my understanding today.
>> That's a good value proposition. So talk about the value to the customer. What kind of scope do you have? Can you scope some of the customer environments you have, from...
>> Sure.
>> From, you know, small to large. Give us an idea of the order of magnitude.
>> Yeah, so small customers may have 20 clusters or something like that — 20 nodes, I beg your pardon. Our large customers — we are scaling one particular distributed environment from 2,200 nodes to 10,000 nodes by the end of this year, and to 26,000 nodes next year. We have another customer that's scaling up to 10,000 nodes this year as well. So we have some very large scale, but some smaller ones too. And we're happy to work with either end.
>> Okay, so pretend I'm a customer. I've really got pain with Kubernetes. I can't hire enough people. I want to keep my focus elsewhere. What's the pitch?
>> Okay. So skill shortage is something that everyone is facing right now. And if you've got a skill shortage, it's going to be really hard to hire if you are competing against really high-salary-offering companies that are out there. So the pitch is, let us do it for you. We have a team of excellent — probably the best — Kubernetes engineers on the planet. We will create your environment for you. We will get it up and running. We will allow you to just consume the platform, and we'll run it for you. We'll have SLAs and uptimes guaranteed, and you can just focus on delivering the software and the value needed to your customers.
>> What are some of the testimonials that you get from people? Just anecdotally, what do they say? Oh my god, you guys saved...
>> Yeah.
>> Our butts.
>> Yeah.
>> This is amazing. We just shipped our code out much faster.
>> Yeah.
>> What are some of the things that you hear?
>> So the number one thing I hear is, it just works, right? We don't have to worry about it, it just works. So that's really great feedback that we get. The other thing I hear is, if we do have issues, your team is amazing — they fix things, they're proactive, we really enjoy working with you. So from that perspective, that's great. But the other side of it is we hear things like, if we were to do that ourselves, it would've taken us six to 12 months to build, and you guys have just saved us six to 12 months. The other thing that we hear is, with the same two engineers we started with on, you know, a hundred nodes, we're now running thousands of nodes. We have not had to increase the size of the team to expand and scale exponentially.
>> Awesome. What's next for you guys? What's on your plate?
>> Yeah.
>> With CRO, what are some of the goals you have?
>> Yeah, so growth, of course — as a CRO, you don't get away from that. We've actually got some very exciting initiatives coming up. One of the things that we are seeing a lot of demand for is in the area of virtualization: bringing virtual machines onto the cloud native infrastructure using Kubernetes technology. That provides an excellent stepping stone for those who are in the virtualization world and can't move to containers — they can't refactor their applications and workloads fast enough. So just bring your virtual machine and put it onto the container infrastructure. We're seeing a lot of demand for that, because it provides an excellent stepping stone. Why not use Kubernetes to orchestrate the virtual world? And then we've got some really interesting cost optimization coming.
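The interview doesn't name the mechanism, but the standard open-source way to run virtual machines on Kubernetes is KubeVirt, which models a VM as a custom resource that the cluster schedules like any other workload. As a hedged sketch of that general pattern — the manifest fields and disk image are illustrative, and this is not a statement about Platform9's product — a VM definition can be created through the Kubernetes API like so:

```python
# Sketch: defining a KubeVirt-style VirtualMachine through the Kubernetes API.
# Assumes KubeVirt is installed in the cluster; names and the image are illustrative.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

vm_manifest = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "legacy-app-vm"},
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "devices": {"disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]},
                    "resources": {"requests": {"memory": "2Gi"}},
                },
                "volumes": [
                    {
                        "name": "rootdisk",
                        # A container image that carries the VM disk; demo image only.
                        "containerDisk": {"image": "quay.io/kubevirt/cirros-container-disk-demo"},
                    }
                ],
            }
        },
    },
}

custom.create_namespaced_custom_object(
    group="kubevirt.io",
    version="v1",
    namespace="default",
    plural="virtualmachines",
    body=vm_manifest,
)
```

The appeal of this stepping-stone approach is exactly what Emilia describes: the VM keeps running unchanged, while scheduling, upgrades, and governance move onto the same Kubernetes control plane that manages the containerized workloads around it.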
>> So a lot of migration kind of thinking around VMs, then.
>> Oh, tremendous. The VM world is just massively bigger than the container world right now, so you can't ignore that. So we are providing, basically, the evolution — the journey — for the customers to utilize the greatest of technologies without having to do it in a way that just breaks the bank, or where they can't get there fast enough. So we provide those stepping stones for them. Yeah.
>> Emilia, thank you for coming on and sharing.
>> Thank you.
>> The update on Platform9. Congratulations on the big accounts you have, and...
>> Thank you.
>> And the world could get more complex, which means...
>> Indeed.
>> ...more customers.
>> Thank you, thank you, John. Appreciate that. Thank you.
>> I'm John Furrier. You're watching Platform9 and theCUBE Conversations here. Thanks for watching. (gentle music)
ENTITIES
Entity | Category | Confidence |
---|---|---|
Amelia | PERSON | 0.99+ |
Amelia Bell | PERSON | 0.99+ |
John | PERSON | 0.99+ |
six | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
yesterday | DATE | 0.99+ |
Emilia A'Bell | PERSON | 0.99+ |
John Furry | PERSON | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
Peter Frey | PERSON | 0.99+ |
12 months | QUANTITY | 0.99+ |
International Women's Day | EVENT | 0.99+ |
two engineers | QUANTITY | 0.99+ |
two options | QUANTITY | 0.99+ |
20 clusters | QUANTITY | 0.99+ |
next year | DATE | 0.99+ |
two schools | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
this year | DATE | 0.98+ |
today | DATE | 0.98+ |
20 nodes | QUANTITY | 0.97+ |
each edge | QUANTITY | 0.96+ |
Kubernetes | ORGANIZATION | 0.96+ |
thousands of stores | QUANTITY | 0.93+ |
end of this year | DATE | 0.93+ |
2200 nodes | QUANTITY | 0.93+ |
Cube | ORGANIZATION | 0.93+ |
10,000 nodes | QUANTITY | 0.93+ |
Kubernetes | TITLE | 0.92+ |
both worlds | QUANTITY | 0.91+ |
up to 10,000 nodes | QUANTITY | 0.88+ |
thousands of nodes | QUANTITY | 0.87+ |
Edge | TITLE | 0.84+ |
26,000 nodes | QUANTITY | 0.81+ |
Ed me Hora | PERSON | 0.8+ |
Platform nine | TITLE | 0.75+ |
hundred nodes | QUANTITY | 0.69+ |
DevSecOps | TITLE | 0.68+ |
Platform nine | ORGANIZATION | 0.68+ |
one thing | QUANTITY | 0.62+ |
wave | EVENT | 0.57+ |
Chief Revenue Officer | PERSON | 0.57+ |
nine | QUANTITY | 0.56+ |
CRO | PERSON | 0.54+ |
three | QUANTITY | 0.53+ |
nine | OTHER | 0.52+ |
DevOps | TITLE | 0.5+ |
next | EVENT | 0.49+ |
platform nine | OTHER | 0.49+ |
Cube | TITLE | 0.39+ |
Closing Panel | Generative AI: Riding the Wave | AWS Startup Showcase S3 E1
(mellow music)
>> Hello everyone, welcome to theCUBE's coverage of the AWS Startup Showcase. This is the closing panel session on AI and machine learning: the top startups generating generative AI on AWS. It's a great panel. This is going to be the experts talking about riding the wave in generative AI. We've got Ankur Mehrotra, who's the director and general manager of AI and machine learning at AWS; Clem Delangue, co-founder and CEO of Hugging Face; and Ori Goshen, who's the co-founder and CEO of AI21 Labs. Ori is dialing in from Tel Aviv, and the rest are coming in here on theCUBE. Appreciate you coming on for this closing session for the Startup Showcase.
>> Thanks for having us.
>> Thank you for having us.
>> Thank you.
>> I'm super excited to have you all on. Hugging Face was recently in the news with the AWS relationship, so congratulations. Open source, open science, really driving the machine learning. And we've got AI21 Labs' access to the LLMs, generating huge-scale live applications, commercial applications, coming to the market, all powered by AWS. So everyone, congratulations on all your success, and thank you for headlining this panel. Let's get right into it. AWS is powering this wave here. We're seeing a lot of push here from applications. Ankur, set the table for us on AI machine learning. It's not new, it's been going on for a while. The past three years have seen significant advancements, but there's been a lot of work done in AI machine learning. Now it's released to the public. Everybody's super excited and now says, "Oh, the future's here!" It's kind of been going on for a while and baking. Now it's kind of coming out. What's your view here? Let's get it started.
>> Yes, thank you. So, yeah, as you may be aware, Amazon has been investing in machine learning research and development for quite some time now. And we've used machine learning to innovate and improve user experiences across different Amazon products, whether it's Alexa or Amazon.com. But we've also brought in our expertise to extend what we are doing in the space and add more generative AI technology to our AWS products and services, starting with CodeWhisperer, which is an AWS service that we announced a few months ago — you can think of it as a coding companion as a service, which uses generative AI models underneath. And so this is a service that customers who have no machine learning expertise can just use. We also are talking to customers, and we see a lot of excitement about generative AI, and customers who want to build these models themselves, who have the talent and the expertise and resources. For them, AWS has a number of different options and capabilities they can leverage, such as our custom silicon, Trainium and Inferentia, as well as distributed machine learning capabilities that we offer as part of SageMaker, which is an end-to-end machine learning development service. At the same time, many of our customers tell us that they're not interested in training and building these generative AI models from scratch, given they can be expensive and can require specialized talent and skills to build. And so for those customers, we are also making it super easy to bring existing generative AI models into their machine learning development environment within SageMaker. So we recently announced our partnership with Hugging Face, where we are making it super easy for customers to bring those models into their SageMaker development environment for fine-tuning and deployment. And then we are also partnering with other proprietary model providers such as AI21 and others, where we're making these generative AI models available within SageMaker for our customers to use. So our approach here is to really provide customers options and choices and help them accelerate their generative AI journey.
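For a sense of what "bringing a Hugging Face model into SageMaker for deployment" looks like in practice, here is a minimal sketch using the SageMaker Python SDK's Hugging Face integration. The model ID, instance type, and framework versions are illustrative assumptions and need to match what your account, region, and the SDK actually support:

```python
# Sketch: deploying an open Hugging Face model to a SageMaker endpoint.
# Assumes the sagemaker SDK and an execution role; version pins are illustrative.
import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()  # or an IAM role ARN with SageMaker permissions

hub = {
    "HF_MODEL_ID": "distilbert-base-uncased-finetuned-sst-2-english",  # example model
    "HF_TASK": "text-classification",
}

model = HuggingFaceModel(
    env=hub,
    role=role,
    transformers_version="4.26",  # use a supported framework combination
    pytorch_version="1.13",
    py_version="py39",
)

predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
print(predictor.predict({"inputs": "This panel on generative AI was genuinely useful."}))

predictor.delete_endpoint()  # clean up to avoid idle endpoint charges
```

The same pattern extends to fine-tuning with a HuggingFace estimator before deployment; the point of the partnership Ankur describes is that the container, serving stack, and scaling are handled for you.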
>> Ankur, thank you for setting the table there. Clem and Ori, I want to get your take, because riding the wave is the theme of this session, and to me, being in California, I imagine the big surf, the big waves, the big talent out there. This is like alpha geeks, alpha coders — developers are really leaning into this. You're seeing massive uptake from the smartest people. Whether they're young or have been around, they're coming in with their kind of surfboards, (chuckles) if you will. These early adopters have been on this for a while; now the waves are hitting. This is a big wave, everyone sees it. What are some of those early adopter devs doing? What are some of the use cases you're seeing right out of the gate? And what does this mean for the folks that are going to come in and get on this wave? Can you guys share your perspective on this? Because you're seeing the best talent now leaning into this.
>> Yeah, absolutely. I mean, from Hugging Face's vantage point, it's not even a wave, it's a tidal wave, or maybe even the tide itself. Because actually what we are seeing is that AI and machine learning is not something that you add to your products. It's very much a new paradigm for all technology. For the past 15, 20 years there was one way to build software and to build technology, which was writing a million lines of code, very rule-based, and then you get your product. Now what we are seeing is that every single product, every single feature, every single company is starting to adopt AI to build the next generation of technology. And that works both to make the existing use cases better — if you think of search, if you think of social networks, if you think of SaaS — but it's also creating completely new capabilities that weren't possible with the previous paradigm. Now AI can generate text, it can generate images, it can describe your image, it can do so many new things that weren't possible before.
>> It's going to really make the developers really productive, right? I mean, you're seeing the developer uptake strong, right?
>> Yes, we have over 15,000 companies using Hugging Face now, and it keeps accelerating. I really think that maybe in three to five years, there's not going to be any company not using AI. It's going to be really the default way to build all technology.
>> Ori, weigh in on this. APIs, the cloud. Now I'm a developer, I want to have live applications, I want the commercial applications on this. What's your take? Weigh in here.
>> Yeah, first, I absolutely agree. I mean, we're in the midst of a technology shift here. I think not a lot of people realize how big this is going to be. Just the number of possibilities is endless, and I think hard to imagine. And I don't think it's just the use cases. I think we can think of it as two separate categories. We'll see companies and products enhancing their offerings with these new AI capabilities, but we'll also see new companies that are AI-first, that kind of reimagine certain experiences. They build something that wasn't possible before. And that's why I think it's actually extremely exciting times.
And maybe more philosophically, I think these large language models and large transformer-based models are now helping people express their thoughts, bridging from our thinking to a creative digital asset at a speed we've never imagined before. I can write something down and get a piece of text, or an image, or code. So I'll start by saying it's hard to imagine all the possibilities right now, but it's certainly big. And if I had to bet, I would say it's probably at least as big as the mobile revolution we've seen in the last 20 years.
>> Yeah, this is the biggest. I mean, it's been compared to the Enlightenment Age — I saw the Wall Street Journal had a recent story on this. We've been saying that this is probably going to be bigger than all inflection points combined in the tech industry, given what transformation is coming. I guess I want to ask you guys about the early adopters. We've been hearing on these interviews and throughout the industry that there's already a set of big companies out there that have a lot of data and they're already there, they're kind of tinkering. Kind of reminds me of the old hyperscaler days where they were building their own scale, and they're eating glass, spitting nails out, you know, they're hardcore. Then you've got everybody else kind of saying at the board level, "Hey team, how do I leverage this?" How do you see those two things coming together? You've got the fast followers coming in behind the early adopters. What's it like for the second wave coming in? What are those conversations for those developers like?
>> I mean, I think for me, the important switch for companies is to change their mindset from being a traditional software company to being an AI or machine learning company. And that means investing: hiring machine learning engineers, machine learning scientists, infrastructure, team members who are working on how to put these models into production, team members who are able to optimize models, specialize models, customize models for the company's specific use cases. So it's really changing this mindset of how you build technology and optimizing your company's organization around that. Things are moving so fast that I think it's now too late for low-hanging fruit or small adjustments. I think it's important to realize that if you want to be good at this, and if you really want to surf this wave, you need massive investments. If there are surfers listening, with this analogy of the wave, right, when there are waves, it's not enough to just stand there and make small adjustments. You need to position yourself aggressively, paddle like crazy, and that's how you get into the waves. So that's what companies, in my opinion, need to do right now.
>> Ori, what's your take on the generative models out there? We hear a lot about foundation models. What's your experience running end-to-end applications for large foundation models? Any insights you can share with the app developers out there who are looking to get in?
>> Yeah, I think first of all, it starts to create an economy where it probably doesn't make sense for every company to create their own foundation models. You can basically start by using an existing foundation model, either open source or a proprietary one, and start deploying it for your needs. And then comes the second round, when you start the optimization process.
You bootstrap, whether it's a demo, or a small feature, or introducing a new capability within your product, and then start collecting data. That data, and particularly the human feedback data, helps you constantly improve the model, so you create this data flywheel. And I think we're now entering an era where customers have a lot of different choices for how they want to start their generative AI endeavor. And it's a good thing that there's a variety of choices. And the really amazing thing here is that every industry, any company you speak with — it could be something very traditional like industrial, financial, or medical, really any company — people are now starting to imagine what the possibilities are, and seriously think about their strategy for adopting this generative AI technology. And I think in that sense, foundation models actually enabled this to become scalable. So the barrier to entry became lower; now the adoption can actually accelerate.
>> There's a lot of integration in this new wave, which is a little bit different. Before it was very monolithic, hardcore, very brittle. Now there's a lot more integration, you see a lot more data coming together. I have to ask you guys, as developers come in and grow — I mean, when I went to college and you were a software engineer, I got a degree in computer science and software engineering, all you did was code, (chuckles) you coded. Now, isn't it like everyone's a machine learning engineer at this point? Because that will ultimately be the science. So, (chuckles) you've got open source, you've got open software, you've got the communities. Swami called you guys the GitHub of machine learning — Hugging Face is the GitHub of machine learning — mainly because that's where people are going to code. So essentially, machine learning is computer science. What's your reaction to that?
>> Yes, my co-founder Julien at Hugging Face has been saying this for quite a while now, for over three years: that software engineering as we know it today is actually a subset of machine learning, instead of the other way around. People would call us crazy a few years ago when we said that. But now we are realizing that you can actually code with machine learning — machine learning is generating code. And we are starting to see that every software engineer can leverage machine learning through open models, through APIs, through different technology stacks. So yeah, it's not crazy anymore to think that maybe in a few years there will be more people doing AI and machine learning — however you call it, right? Maybe you'll still call them software engineers, maybe you'll call them machine learning engineers — but there might be more of these people in a couple of years than there are software engineers today.
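Clem's point that any software engineer can now leverage machine learning through open models is easy to make concrete. As a minimal sketch, assuming the open-source transformers library and a small publicly hosted model (both illustrative choices, not the panel's), a working text generator is only a few lines:

```python
# Minimal sketch: using an open model via the transformers library.
# The model choice is illustrative; many open models work through this same interface.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Generative AI on the cloud is", max_new_tokens=30)
print(result[0]["generated_text"])
```

The same pipeline interface covers classification, translation, summarization, and more, which is part of why the barrier to entry Clem and Ori describe has dropped so sharply.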
>> I bring this up somewhat tongue in cheek as well, because, Ankur, infrastructure as code is what made cloud great, right? That's kind of the DevOps movement. But here the shift is so massive, there will be a game-changing philosophy around coding. Machine learning as code — you're starting to see CodeWhisperer, you guys have had coding companions for a while on AWS. So this is a paradigm shift. How is the cloud playing into this for you guys? Because to me, I've been riffing on some interviews where it's like, okay, you've got the cloud going next level. This is an example of that, where there is a DevOps-like moment happening with machine learning, whether you call it coding or whatever. It's writing code on its own. Can you guys comment on what this means on top of the cloud? What comes out of the scale? What comes out of the benefit here?
>> Absolutely, so-
>> Well first-
>> Oh, go ahead.
>> Yeah, so I think as far as scale is concerned, customers are really relying on the cloud to make sure that the applications they build can scale along with the needs of their business. But there's another aspect to it, which is that until a few years ago, John, what we saw was that machine learning was a data-scientist-heavy activity. There were data scientists who were taking the data and training models. And then as machine learning found its way more and more into production and actual usage, we saw MLOps become a thing, and MLOps engineers become more involved in the process. And now we are seeing, as machine learning is being used to solve more business-critical problems, even legal and compliance teams get involved. We are seeing business stakeholders more engaged. So, more and more, machine learning is becoming an activity that's not just performed by data scientists, but by a team — a group of people with different skills. And for them, we at AWS are focused on providing the best tools and services for these different personas to be able to do their jobs and really complete that end-to-end machine learning story. So that's where, whether it's tools related to MLOps, or even for folks who cannot code or don't know any machine learning — for example, we launched SageMaker Canvas last year, which is a UI-based tool that data analysts and business analysts can use to build machine learning models. So overall, the spectrum of personas who can get involved in the machine learning process is expanding, and the cloud is playing a big role in that process.
>> Ori, Clem, can you guys weigh in too? 'Cause this is just another abstraction layer of scale. What does it mean for you guys as you look forward to your customers and the use cases that you're enabling?
>> Yes, I think what's important is that the AI companies and providers and the cloud work together. That's how you make a seamless experience and actually reduce the barrier to entry for this technology. So that's what we've been super happy to do with AWS for the past few years. We actually announced not too long ago that we are doubling down on our partnership with AWS. We're excited to have many, many customers on our shared product, the Hugging Face deep learning container on SageMaker. And we are working really closely with the Inferentia team and the Trainium team to release some more exciting stuff in the coming weeks and months. So I think when you have an ecosystem where AWS and the AI providers, the AI startups, can work hand in hand, it's to the benefit of the customers and the companies, because it makes it orders of magnitude easier for them to adopt this new paradigm of building technology with AI.
>> Ori, this is a scale-on-reasoning point too. The data's out there, and making sense out of it, making it reason, getting comprehension, having it make decisions is next, isn't it? And you need scale for that.
>> Yes. Just a comment about the infrastructure side. I think really the purpose is to streamline these technologies and make them much more accessible. And I predict that in the next few years we'll see more and more tooling that makes this technology much simpler to consume.
And I think that plays a very important role. There are so many aspects, like monitoring the models and the kinds of outputs they produce, and containing and running them in a production environment. There's so much there to build on; the infrastructure side will play a very significant role.
>> All right, that's awesome stuff. I'd love to change gears a little bit and get a little philosophy here around AI and how it's going to transform things, if you guys don't mind. There have been a lot of conversations, on theCUBE here as well as in some industry areas, where it's like, okay, all the heavy lifting is automated away with machine learning and AI — the complexity, some efficiencies, it's horizontal and scalable across all industries. Ankur, good point there. Everyone's going to use it for something. And a lot of stuff gets brought to the table with large language models and other things. But the key ingredient will be proprietary data or human input, or some sort of AI whisperer kind of role, or prompt engineering, people are saying. So with that being said, some are saying it's automating intelligence, and that creativity will be unleashed from this. If the heavy lifting goes away and AI can fill the void, that shifts the value to the intellect or the input. And so that means data's got to come together, interact, fuse, and understand each other. This is kind of new. I mean, old-school AI was, okay, you've got a big model, you provisioned it over a long time, very expensive. Now it's all free-flowing. Can you guys comment on where you see this going, with this freeform data flowing everywhere, heavy lifting, and then specialization?
>> Yeah, I think-
>> Go ahead.
>> Yeah, so what we are seeing with these large language models, or generative models, is that they're really good at creating stuff. But I think it's also important to recognize their limitations. They're not as good at reasoning and logic. And I think now we're seeing great enthusiasm, which I think is justified. And the next phase will be how to make these systems more reliable: how to inject more reasoning capabilities into these models, or augment them with other mechanisms that actually perform more reasoning, so we can achieve more reliable results. We want to be able to count on these models to perform critical tasks, whether they're medical tasks or legal tasks. We really want to offload a lot of the intelligence to these systems. And then we'll have to make sure these are reliable, and we'll have to make sure we get some sort of explainability, so that we can understand the process behind the generated results that we receive. So I think this is the next phase of systems that are based on these generative models.
>> Clem, what's your view on this? Obviously you're at an open community — open source has been around, it's got a great track record, a proven model. I'm assuming creativity's going to come out of the woodwork, and if we can automate open source contribution, and relationships, and onboarding more developers, there's going to be an unleashing of creativity.
>> Yes, it's been so exciting on the open source front. We all know BERT, BLOOM, GPT-J, T5, Stable Diffusion, and so on — the previous and the current generation of open source models that are on Hugging Face. It has been accelerating in the past few months. So I'm super excited about ControlNet right now, which is really having a lot of impact — it's kind of a way to control the generation of images.
I'm also super excited about Flan-UL2, which is a new model that has recently been released and is open source. So yeah, it's really fun to see the ecosystem coming together. Open source has been the basis for traditional software, with open source programming languages, of course, but also all the great open source that we've gotten over the years. So we're happy to see that the same thing is happening for machine learning and AI, and hopefully it can help a lot of companies reduce the barrier to entry a little bit. So yeah, it's going to be exciting to see how it evolves in the next few years in that respect.
>> I think the developer productivity angle that's been talked about a lot in the industry will be accelerated significantly. I think security will be enhanced by this. I think in general, applications are going to transform at a radical, accelerated, incredible rate. So I think it's not a big wave, it's the water, right? I mean, (chuckles) it's the new thing. My final question for you guys, if you don't mind — I'd love to get each of you to answer the question I'm going to ask, which is this: there are a lot of conversations around data. Data infrastructure is obviously involved in this. And the common thread I'm hearing is that every company that looks at this is asking themselves, if we don't rebuild our company, if we don't start thinking about rebuilding our business model around AI, we might be dinosaurs, we might be extinct. And it reminds me of that scene in Moneyball when, at the end, it's like, if we're not building the model around your model, every company will be out of business. What's your advice to companies out there that are having those kinds of moments, where it's like, okay, this is real, this is next gen, this is happening — I'd better start thinking and putting plans in motion to refactor my business, 'cause it's happening. Business transformation is happening in the cloud, and this kind of puts an exclamation point on it, with AI as the next step function. A big increase in value. So it's an opportunity for leaders. Ankur, we'll start with you. What's your advice for folks out there thinking about this? Do they put their toe in the water? Do they jump right into the deep end? What's your advice?
>> Yeah, John, so we talk to a lot of customers, and customers are excited about what's happening in the space, but they often ask us, "Hey, where do we start?" So we always advise our customers to do a lot of proofs of concept, understand where they can drive the biggest ROI, and then leverage existing tools and services to move fast and scale, and try not to reinvent the wheel where it doesn't need to be. That's basically our advice to customers.
>> Got it. Ori, what's your advice to folks who are scratching their heads going, "I'd better jump in here. How do I get started?" What's your advice?
>> So I actually think you need to think about it really economically, both on the opportunity side and the challenges. There's a lot of opportunity for many companies to gain revenue upside by building these new generative features and capabilities. On the other hand, of course, incorporating these capabilities could affect the COGS. So I think you really need to think carefully about both of these sides, and also understand clearly whether an initiative is an effort toward cost reduction, where the ROI is pretty clear, or a revenue amplifier, where there's, again, a lot of different opportunity.
So I think once you think about this in a structured way and map the different initiatives, that's probably a good way to start, and a good way to start thinking about these endeavors.
>> Awesome. Clem, what's your take on this? What's your advice, for folks out there?
>> Yes, all of these are very good advice already. Something that you said before, John, that I disagree with a little bit: a lot of people are talking about the data moat and proprietary data. Actually, when you look at some of the organizations that have been building the best models, they don't have specialized or unique access to data. So I'm not sure that's so important today. I think what's important for companies — and it's been the same for the previous generation of technology — is their ability to build better technology faster than others. And in this new paradigm, that means being able to build machine learning faster than others, and better. So that's how, in my opinion, you should approach this: how can you evolve your company, your teams, your products, so that in the long run you are able to build machine learning better and faster than your competitors? If you manage to put yourself in that situation, then that's when you'll be able to differentiate yourself, to really be impactful and get results. That's really hard to do. It's something really different, because machine learning and AI is a different paradigm than traditional software. So this is going to be challenging, but I think if you manage to nail it, then the future is going to be very interesting for your company.
>> That's a great point. Thanks for calling that out. This all reminds me of the cloud days early on. If you went to the cloud early, you took advantage of it when the pandemic hit. If you weren't native in the cloud, you got hamstrung by that, you were flat-footed. So just get in there. (laughs) Get in the cloud, get into AI, you're going to be good. Thanks for calling that out. Final parting comments: what's the most exciting thing going on right now for you guys? Ori, Clem, what's the most exciting thing on your plate right now that you'd like to share with folks?
>> I mean, for me it's just the diversity of use cases and the really creative ways companies are leveraging this technology. Every day I speak with about two or three customers, and I'm continuously surprised by the creative ideas. The future of what can be achieved here is really exciting. And I'm also amazed by the pace at which things move in this industry. There's not a dull moment. So, definitely exciting times.
>> Clem, what are you most excited about right now?
>> For me, it's all the new open source models that have been released in the past few weeks, and that will keep being released in the next few weeks. I'm also super excited about more and more companies getting into this capability of chaining different models and different APIs. I think that's a very, very interesting development, because it creates new capabilities, new possibilities, new functionalities that weren't possible before. You can plug an API into an open source embedding model, with, say, a transcription model. So that's also very exciting. This capability of having more interoperable machine learning will, I think, open up a lot of interesting things in the future.
>> Clem, congratulations on your success at Hugging Face. Please pass that on to your team. Ori, congratulations on your success, and keep going — it's just day one.
I mean, it's just the beginning. It's not even scratching the surface. Ankur, I'll give you the last word. What are you excited about at AWS? More cloud goodness coming here with AI. Give us the final word.
>> Yeah, so as both Clem and Ori said, the research in this space is moving really, really fast, so we are excited about that. But we are also excited to see the speed at which enterprises and other AWS customers are applying machine learning to solve real business problems, and the kind of results they're seeing. So when they come back to us and tell us about the improvement in their business metrics and overall customer experience that they're driving — when they're seeing real business results — that's what keeps us going and inspires us to continue inventing on their behalf.
>> Gentlemen, thank you so much for this awesome, high-impact panel. Ankur, Clem, Ori, congratulations on all your success. We'll see you around. Thanks for coming on. Generative AI, riding the wave — it's a tidal wave, it's the water, it's all happening. All great stuff. This is season three, episode one of the AWS Startup Showcase closing panel. This is the AI/ML episode, the top startups building generative AI on AWS. I'm John Furrier, your host. Thanks for watching. (mellow music)
ENTITIES
Entity | Category | Confidence |
---|---|---|
Ankur Mehrotra | PERSON | 0.99+ |
John | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Clem | PERSON | 0.99+ |
Ori Goshen | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
California | LOCATION | 0.99+ |
Ori | PERSON | 0.99+ |
Clem Delangue | PERSON | 0.99+ |
Hugging Face | ORGANIZATION | 0.99+ |
Julien | PERSON | 0.99+ |
Ankur | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Tel Aviv | LOCATION | 0.99+ |
three | QUANTITY | 0.99+ |
Ankur | ORGANIZATION | 0.99+ |
second round | QUANTITY | 0.99+ |
AI21 Labs | ORGANIZATION | 0.99+ |
two separate categories | QUANTITY | 0.99+ |
Amazon.com | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
two things | QUANTITY | 0.99+ |
first | QUANTITY | 0.98+ |
over 15,000 companies | QUANTITY | 0.98+ |
Both | QUANTITY | 0.98+ |
five years | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
over three years | QUANTITY | 0.98+ |
three customers | QUANTITY | 0.98+ |
each | QUANTITY | 0.98+ |
Trainium | ORGANIZATION | 0.98+ |
today | DATE | 0.98+ |
Alexa | TITLE | 0.98+ |
Stable Diffusion | ORGANIZATION | 0.97+ |
Swami | PERSON | 0.97+ |
Inferentia | ORGANIZATION | 0.96+ |
GPT-J | ORGANIZATION | 0.96+ |
SageMaker | TITLE | 0.96+ |
AI21 Labs | ORGANIZATION | 0.95+ |
Riding the Wave | TITLE | 0.95+ |
ControlNet | ORGANIZATION | 0.94+ |
one way | QUANTITY | 0.94+ |
a million lines | QUANTITY | 0.93+ |
Startup Showcase | EVENT | 0.92+ |
few months ago | DATE | 0.92+ |
second wave | EVENT | 0.91+ |
theCUBE | ORGANIZATION | 0.91+ |
few years ago | DATE | 0.91+ |
CodeWhisperer | TITLE | 0.9+ |
AI21 | ORGANIZATION | 0.89+ |
Adam Wenchel & John Dickerson, Arthur | AWS Startup Showcase S3 E1
(upbeat music)
>> Welcome everyone to theCUBE's presentation of the AWS Startup Showcase: AI Machine Learning Top Startups Building Generative AI on AWS. This is season 3, episode 1 of the ongoing series covering the exciting startups from the AWS ecosystem, talking about AI and machine learning. I'm your host, John Furrier. I'm joined by two great guests here: Adam Wenchel, who's the CEO of Arthur, and the Chief Scientist of Arthur, John Dickerson. We'll talk about how they help people build better LLM AI systems and get them into the market faster. Gentlemen, thank you for coming on.
>> Yeah, thanks for having us, John.
>> Well, I've got to say, I've got to temper my enthusiasm, because the last few months' explosion of interest in LLMs, with ChatGPT, has opened everybody's eyes to the reality that this is going next gen. This is it, this is the moment, this is the point we're going to look back on and say, this is the time when AI really hit the scene for real applications. So, Large Language Models, also known as LLMs, foundation models, and generative AI are all booming. This is where all the alpha developers are going. This is where everyone's focusing their business model transformations. This is where developers are seeing action. So it's all happening, the wave is here. So I've got to ask you guys, what are you seeing right now? You're in the middle of it, it's hitting you guys right on. You're at the front end of this massive wave.
>> Yeah, John, I don't think you have to temper your enthusiasm at all. I mean, what we're seeing every single day is everything from existing enterprise customers coming in with new ways that they're rethinking business processes they've been running for many years, which they can now do in an entirely different way, as well as all manner of new companies popping up, applying LLMs to everything from generating code and SQL statements to generating health transcripts and legal briefs. Everything you can imagine. And when you actually sit down and look at these systems and the demos we get of them, the hype is definitely justified. It's pretty amazing what they can do. Even just internally, back in January, about a month ago, we built an Arthur chatbot so customers could ask technical questions — rather than read our product documentation, they could just ask this LLM a particular question and get an answer. At the time it was state of the art, but just last week we decided to rebuild it, because the tooling has changed so much. We've completely rebuilt it; it's now way better, built on an entirely different stack. The tooling has undergone a full generation's worth of change in six weeks, which is crazy. So it just tells you how much energy is going into this and how fast it's evolving right now.
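Adam doesn't describe the Arthur chatbot's actual stack, and as he notes, the tooling shifts week to week. Purely as a hedged sketch of the common retrieval-augmented pattern behind this kind of documentation assistant, using the LangChain APIs of that era (module paths and method names have since changed across releases, and the file name, model, and question are illustrative), it might look something like this:

```python
# Hedged sketch of a retrieval-augmented docs chatbot, in the style of
# early-2023 LangChain; assumes an OpenAI API key and the faiss dependency.
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA

# 1. Load and chunk the product documentation.
docs = TextLoader("product_docs.txt").load()
chunks = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# 2. Index the chunks so relevant passages can be retrieved per question.
index = FAISS.from_documents(chunks, OpenAIEmbeddings())

# 3. Answer questions by stuffing retrieved passages into an LLM prompt.
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",
    retriever=index.as_retriever(),
)
print(qa.run("How do I enable model monitoring?"))
```

Swapping the embedding model, vector store, or LLM is a one-line change in this pattern, which is part of why a team can rebuild a bot like this on a different stack so quickly.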
>> John, weigh in as the chief scientist. I mean, you must be blown away — talk about a kid in the candy store. You must be super busy to begin with, but the change, the acceleration — can you scope the kind of change you're seeing and be specific about the areas where you're seeing movement and highly accelerated change?
>> Yeah, definitely. And it is very, very exciting, actually. Thinking back to when ChatGPT was announced — that was the night our company was throwing an event at NeurIPS, which is maybe the biggest machine learning conference out there. And the hype when that happened was palpable, and it was just shocking to see how well it performed. And then obviously over the last few months since then, as LLMs have continued to enter the market, we've seen use cases for them, like Adam mentioned, all over the place. Some things I'm excited about in this space are the use of LLMs and, more generally, foundation models to redesign traditional operations-research-style problems — logistics problems, auctions, decisioning problems. So moving beyond the already amazing use cases, like creating marketing content, into more core integration with a lot of the bread-and-butter companies and tasks that drive the American ecosystem. And I think we're just starting to see some of that. In the next 12 months, I think we're going to see a lot more. If I had to make other predictions, I think we're going to continue seeing a lot of work on managing inference-time costs via shrinking models or distillation. And I don't know exactly when, but at some point we're going to be seeing lots of these very large scale models operating on the edge as well. So the time scales are extremely compressed; like Adam mentioned, 12 months from now is hard to say.
>> We were talking on theCUBE prior to this session here — we had a CUBE conversation — and then the Wall Street Journal just picked up on the same theme, which is that the printing press moment created the Enlightenment stage of history. Here we're in a whole other phase of automating intellect: efficiency, doing heavy lifting, the creative class coming back, a whole other level of reality around the corner that's being hyped up. The question is, is this justified? Is there really a breakthrough here, or is this just another result of continued progress with AI? Can you guys weigh in? Because there are two schools of thought. There's the "Oh my God, we're entering a new enlightenment tech phase, the equivalent of the printing press, in all areas." Then there's, "Ah, it's just AI (indistinct) inch by inch." What's your guys' opinion?
>> Yeah, I think on the one hand, when you're down in the weeds of building AI systems all day, every day, like we are, it's easy to look at this as incremental progress. We have customers who've been building on foundation models since we started the company four years ago, particularly in computer vision for classification tasks, starting with pre-trained models, things like that. So that part of it doesn't feel really new. But what does feel new is when you apply these things to language, with all the breakthroughs in computational efficiency, algorithmic improvements, things like that. When you actually sit down and interact with ChatGPT, or one of the other systems out there that's built on top of LLMs, it really is breathtaking — the level of understanding that they have, and how quickly you can accelerate your development efforts and get an actual working system in place that solves a really important real-world problem and makes people way faster and way more efficient. So I do think there's definitely something there. It's more than just incremental improvement. This feels like a real trajectory inflection point for the adoption of AI.
>> John, what's your take on this? As people come into the field, I'm seeing a lot of people move from, hey, I've been coding in Python, I've been doing some development, I've been a software engineer, I'm a computer science student, I'm coding in C++, old-school, OG systems person.
Where do they come in? Where's the focus, where's the action? Where are the breakthroughs? Where are people jumping in and rolling up their sleeves and getting dirty with this stuff? >> Yeah, all over the place. And it's funny you mentioned students — in a different life I wore a university professor hat, and so I'm very, very familiar with the teaching aspects of this. And I will say, toward Adam's point, this really is a leap forward, in that techniques like a co-pilot, for example — everybody's using them right now and they really do accelerate the way that we develop. When I think about the areas where people are really, really focusing right now, tooling is certainly one of them. Like you and I were chatting about LangChain right before this interview started; two or three people can sit down and create an amazing set of pipes that connect different aspects of the LLM ecosystem. Two, I would say, is in engineering. So like distributed training might be one, or just understanding better ways to even be able to train large models, understanding better ways to then distill them or run them. So there's this heavy interaction now between engineering and what I might call traditional machine learning from 10 years ago, where you had to know a lot of math, you had to know calculus very well, things like that. Now you also need to be, again, a very strong engineer, which is exciting. >> I interviewed Swami when he talked about the news. He's the head of Amazon's machine learning and AI, when they made the Hugging Face announcement. And I reminded him how Amazon was easy to get into if you were developing a startup back in 2007, 2008, and that the language models had that similar problem. It took a lot of setup and a lot of expense to get provisioned up; now it's easy. So this is the next wave of innovation. So how do you guys see that from where we are right now? Are we at that point where it's that moment where it's that cloud-like experience for LLMs and large language models? >> Yeah, go ahead John. >> I think the answer is yes. We see a number of large companies that are training these and serving these, some of which are being co-interviewed in this episode. I think we're at that. Like, you can hit one of these with a simple, single line of Python, hitting an API; you can boot this up in seconds if you want. It's easy. >> Got it. >> So I (audio cuts out). >> Well let's take a step back and talk about the company. You guys being featured here on the Showcase. Arthur, what drove you to start the company? How'd this all come together? What's the origination story? Obviously you got big customers. How'd you get started? What are you guys doing? How do you make money? Give a quick overview. >> Yeah, I think John and I come at it from slightly different angles, but for myself, I have been a part of a number of technology companies. I joined Capital One, they acquired my last company, and shortly after I joined, they asked me to start their AI team. And so even though I've been doing AI for a long time, I started my career back at DARPA. It was the first time I was really working at scale in AI at an organization where there were hundreds of millions of dollars in revenue at stake with the operation of these models, and they were impacting millions of people's financial livelihoods. And so it just got me hyper-focused on these issues around making sure that your AI worked well and it worked well for your company and it worked well for the people who were being affected by it.
At the time when I was doing this, 2016, 2017, 2018, there just wasn't any tooling out there to support this production management and model monitoring phase of the life cycle. And so we basically left to start the company that I wanted. And John has his own story. I'll let you share that one, John. >> Go ahead John, you're up. >> Yeah, so I'm coming at this from a different world. So I'm on leave now from a tenured role in academia where I was leading a large lab focusing on the intersection of machine learning and economics. And so questions like fairness, or the response to dynamism in the underlying environment, have been around for quite a long time in that space. And so I've been thinking very deeply about some of those more R&D-style questions, as well as having deployed some automation code across a couple of different industries, some in online advertising, some in the healthcare space and so on, where concerns of, again, fairness come to bear. And so Adam and I connected to understand the space of what that might look like in the 2018, 2019 realm from a quantitative and from a human-centered point of view. And so booted things up from there. >> Yeah, bringing that applied engineering and R&D into the Capital One DNA that he had at scale. I could see that fit. I got to ask you now, next step, as you guys move out and think about LLMs and the recent AI news around the generative models and the foundational models like ChatGPT, how should we be looking at that news? And everyone watching might be thinking the same thing. I know at the board level companies are like, we should refactor our business, this is the future. It's that kind of moment, and the tech team's like, okay, boss, how do we do this again? Or are they prepared? How should we be thinking? How should people watching be thinking about LLMs? >> Yeah, I think they really are transformative. And so, I mean, we're seeing companies all over the place. Everything from large tech companies to a lot of our large enterprise customers are launching significant projects at core parts of their business. And so, yeah, if you're serious about becoming an AI-native company, which most leading companies are, then this is a trend that you need to be taking seriously. And we're seeing the adoption rate. It's funny, I would say AI adoption in the broader business world really started, let's call it four or five years ago, and it was a relatively slow adoption rate, but I think all that investment in scaling the maturity curve has paid off, because the rate at which people are adopting and deploying systems based on this is tremendous. I mean, this has all just happened in the last few months and we're already seeing people get systems into production. So, now there's a lot of things you have to guarantee in order to put these in production in a way that basically adds to your business and doesn't cause more headaches than it solves. And so that's where we help customers: how do you put these out there in a way that they're going to represent your company well, they're going to perform well, they're going to do their job and do it properly. >> So in the use case, as a customer, as I think about this, there's workflows. They might have had an ML AI ops team that's around IT. Their inference engines are out there. They probably don't have visibility on, say, how much it costs; they're kicking the tires.
When you look at the deployment, there's a cost piece, there's a workflow piece, there's fairness, you mentioned, John — what should I be thinking about if I'm going to be deploying stuff into production? I got to think about those things. What's your opinion? >> Yeah, I'm happy to dive in on that one. So monitoring in general is extremely important once you have one of these LLMs in production, and there have been some changes versus traditional monitoring, which LLMs have really accelerated, that we can dive deeper into. But a lot of the bread and butter things you should be looking out for remain just as important as they are for what you might call traditional machine learning models. So the underlying environment of data streams, the way users interact with these models — these are all changing over time. And so any performance metrics that you care about, traditional ones like accuracy, if you can define that for an LLM, ones around, for example, fairness or bias, if that is a concern for your particular use case, and so on — those need to be tracked. Now there are some interesting changes that LLMs are bringing along as well. So most ML models in production that we see are relatively static, in the sense that they're not getting flipped more than maybe once a day or once a week, or they're just set once and then not changed ever again. With LLMs, there's this ongoing value alignment, or collection of preferences from users, that is often constantly updating the model. And so that opens up all sorts of vectors for, I won't say attack, but for problems to arise in production. Like users might learn to use your system in a different way and thus change the way those preferences are getting collected, and thus change your system in ways that you never intended. So maybe that went through governance already internally at the company and now it's totally, totally changed, and it's through no fault of your own, but you need to be watching over that for sure. >> Talk about reinforcement learning from human feedback. How's that factoring into the LLMs? Is that part of it? Should people be thinking about that? Is that a component that's important? >> It certainly is, yeah. So this is one of the big tweaks that happened with InstructGPT, which is the base model behind ChatGPT and has since gone on to be used all over the place. So value alignment through RLHF, like you mentioned, I think is a very interesting space to get into, and it's one that you need to watch over. Like, you're asking humans for feedback over outputs from a model and then you're updating the model with respect to that human feedback. And now you've thrown humans into the loop here in a way that is just going to complicate things. And it certainly helps in many ways. You can ask humans to — let's say that you're deploying an internal chat bot at an enterprise — you could ask humans to align that LLM behind the chatbot to, say, company values. And so you're collecting feedback about these company values, and that's going to scoot that chatbot that you're running internally more toward the kind of language that you'd like to use internally on like a Slack channel or something like that. Watching over that model, I think in that specific case, that's a compliance and HR issue as well. So while it is part of the greater LLM stack, you can also view that as an independent bit to watch over. >> Got it, and these are important factors. When people see the Bing news, they freak out about how it's doing great.
Then it goes off the rails, it goes big, fails big. (laughing) So with these models, people see that — is that human interaction, or is that feedback, is that not accepting it? How do people understand how to take that input in and how to build the right apps around LLMs? This is a tough question. >> Yeah, for sure. So some of the examples that you'll see online where these chatbots go off the rails are obviously humans trying to break the system, but some of them clearly aren't. And that's because these are large statistical models and we don't know what's going to pop out of them all the time. And even if you're doing as much in-house testing at the big companies like the Coheres and the OpenAIs of the world, to try to prevent things like toxicity or racism or other sorts of bad content that might lead to bad PR, you're never going to catch all of these possible holes in the model itself. And so, again, it's very, very important to keep watching over that while it's in production. >> On the business model side, how are you guys doing? What's the approach? How do you guys engage with customers? Take a minute to explain the customer engagement. What do they need? What do you need? How's that work? >> Yeah, I can talk a little bit about that. So it's really easy to get started. It's literally a matter of just handing out an API key and people can get started. We also offer versions that can be installed on-prem, because we find a lot of our customers have models that deal with very sensitive data. So you can run it in your cloud account or use our cloud version. And so yeah, it's pretty easy to get started with this stuff. We find people start using it a lot of times during the validation phase, 'cause that way they can start baselining model performance, they can do champion challenger, they can really kind of baseline the performance of, maybe, the different foundation models they're considering. And so it's a really helpful tool for understanding differences in the way these models perform. And then from there they can just flow that into their production inferencing, so that as these systems are out there, you have really kind of real time monitoring for anomalies and for all sorts of weird behaviors, as well as that continuous feedback loop that helps you make your product better, and observability, and you can run all sorts of aggregated reports to really understand what's going on with these models when they're out there deciding. I should also add that we just today have another way to adopt Arthur, and that is we are in the AWS Marketplace, and so we are available there just to make it that much easier to use your cloud credits, skip the procurement process, and get up and running really quickly. >> And that's great 'cause Amazon's got SageMaker, which handles a lot of privacy stuff, all kinds of cool things, or you can get down and dirty. So I got to ask on the next one, production is a big deal, getting stuff into production. What have you guys learned that you could share with folks watching? Is there a cost issue? I got to monitor, obviously you brought that up, we talked about even the reinforcement issues, all these things are happening. What are the big learnings that you could share for people that are going to put these into production — to watch out for, to plan for, or be prepared for, hope for the best, plan for the worst? What's your advice? >> I can give a couple opinions there and I'm sure Adam has some too.
Well, yeah, the big one from my side is, again, I had mentioned this earlier, it's just the input data streams, because humans are also exploring how they can use these systems to begin with. It's really, really hard to predict the type of inputs you're going to be seeing in production. Especially — we always talk about chatbots, but really any generative text task like this. Let's say you're taking in news articles and summarizing them or something like that; it's very hard to get a good sampling even of the set of news articles, in such a way that you can really predict what's going to pop out of that model. So to me, it's — adversarial maybe isn't the word that I would use, but it's an unnatural, shifting input distribution of prompts that you might see for these models. That's certainly one. And then the second one that I would talk about is, it can be hard to understand the costs, the inference time costs, behind these LLMs. So the pricing on these is always changing as the models change size — it might go up, it might go down based on model size, based on energy cost and so on — but you're pricing per token or per thousand tokens, and that I think can be difficult for some clients to wrap their heads around. Again, you don't know how these systems are going to be used, after all, so it can be tough. And so again, that's another metric that really should be tracked. >> Yeah, and there's a lot of trade off choices in there with like, how many tokens do you want at each step in the sequence, and based on, you have (indistinct) and you reject these tokens, and so based on how your system's operating, that can make the cost highly variable. And that's if you're using like an API version that you're paying per token. A lot of people also choose to run these internally, and as John mentioned, the inference time on these is significantly higher than a traditional classification model — even an NLP classification model or a tabular data model — like orders of magnitude higher. And so you really need to understand how that, as you're constantly iterating on these models and putting out new versions and new features in these models, how that's affecting the overall scale of that inference cost, because you can use a lot of computing power very quickly with these products. >> Yeah, scale, performance, price all come together. I got to ask while we're here on the secret sauce of the company, if you had to describe to people out there watching, what's the secret sauce of the company? What's the key to your success? >> Yeah, so John leads our research team and they've had a number of really cool results. I think AI, as much as it's been hyped for a while — commercial AI at least is really in its infancy. And so the way we're able to pioneer new ways to think about performance for computer vision, NLP, LLMs is probably the thing that I'm proudest about. John and his team publish papers all the time at NeurIPS and other places. But I think it's really being able to define what performance means for basically any kind of model type and give people really powerful tools to understand that on an ongoing basis. >> John, secret sauce, how would you describe it? You got all the action happening all around you. >> Yeah, well, I do appreciate Adam talking me up like that. No, I. (all laughing) >> Furrier: Props to you. >> I would also say a couple of other things here. So we have a very strong engineering team, and so I think some early hires there really set the standard at a very high bar that we've maintained as we've grown.
And I think that's really paid dividends as scalability becomes even more of a challenge in these spaces, right? And so that's not just scalability when it comes to LLMs, that's scalability when it comes to millions of inferences per day, that kind of thing, as well in traditional ML models. And I think that's, compared to potential competitors, that's really... Well, it's made us able to just operate more efficiently and pass that along to the client. >> Yeah, and I think the infancy comment is really important because it's the beginning. There really is a long journey ahead. A lot of change coming, like I said, it's a huge wave. So I'm sure you guys got a lot of planning at the foundation even for your own company, so I appreciate the candid response there. Final question for you guys is, what should the top things be for a company in 2023? If I'm going to set the agenda and I'm a customer moving forward, putting the pedal to the metal, so to speak, what are the top things I should be prioritizing, or I need to do, to be successful with AI in 2023? >> Yeah, I think, so number one, as we talked about — we've been talking about this the entire episode — things are changing so quickly, and the opportunities for business transformation and really disrupting different applications, different use cases, is almost — I don't think we've even fully comprehended how big it is. And so really digging into your business and understanding where you can apply these new sets of foundation models — that's a top priority. The interesting thing is I think there's another force at play, which is the macroeconomic conditions, and a lot of places are having to work harder to justify budgets. So in the past, a couple years ago maybe, they had a blank check to spend on AI and AI development at a lot of large enterprises, limited primarily by the amount of talent they could scoop up. Nowadays these expenditures are getting scrutinized more. And so one of the things that we really help our customers with is really calculating the ROI on these things. And so if you have models out there performing and you have a new version that you can put out that lifts the performance by 3%, how many tens of millions of dollars does that mean in business benefit? Or if I want to go get approval from the CFO to spend a few million dollars on this new project, how can I bake in from the beginning the tools to really show the ROI along the way? Because I think with these systems, when done well, for a software project the ROI can be pretty spectacular. Like we see over a hundred percent ROI in the first year on some of these projects. And so, I think in 2023, you just need to be able to show what you're getting for that spend. >> It's a needle moving moment. You see it all the time with some of these aha moments, or like, whoa, blown away. John, I want to get your thoughts on this, because one of the things that comes up a lot for companies that I talk to — the ones in what I'd call the second wave coming in, maybe not the front wave of adopters — is talent and team building. You mentioned some of the hires you got were game changing for you guys and set the bar high. As you move the needle, new developers are going to need to come in.
What's your advice, given that you've been a professor, you've seen students — I know a lot of computer science people want to shift, they might not be yet skilled in AI, but they're proficient in programming, so that's going to be another opportunity with open source and everything that's happening. How do you talk to that next level of talent that wants to come into this market to supplement teams and be on teams, lead teams? Any advice you have for people who want to build their teams, and people who are out there and want to be a coder in AI? >> Yeah, I have advice, and this actually works for what it would take to be a successful AI company in 2023 as well, which is, just don't be afraid to iterate really quickly with these tools. The space is still being explored on what they can be used for. A lot of the tasks that they're used for now, right, like creating marketing content using machine learning, are not new things to do. It just works really well now. And so I'm excited to see what the next year brings in terms of folks from outside of core computer science — other engineers or physicists or chemists or whatever — who are learning how to use these increasingly easy to use tools to leverage LLMs for tasks that I think none of us have really thought about before. So that's really, really exciting. And so toward that I would say iterate quickly. Build things on your own, build demos, show them to friends, host them online, and you'll learn along the way and you'll have something to show for it. And also you'll help us explore that space. >> Guys, congratulations with Arthur. Great company, great picks and shovels opportunities out there for everybody. Iterate fast, get in quickly and don't be afraid to iterate. Great advice, and thank you for coming on and being part of the AWS showcase, thanks. >> Yeah, thanks for having us on John. Always a pleasure. >> Yeah, great stuff. Adam Wenchel, John Dickerson with Arthur. Thanks for coming on theCUBE. I'm John Furrier, your host. Generative AI and AWS. Keep it right there for more action with theCUBE. Thanks for watching. (upbeat music)
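To make the production-monitoring and inference-cost points from this conversation concrete, here is a minimal sketch of the kind of per-request bookkeeping John and Adam describe: tracking token spend against an assumed price per thousand tokens and flagging shifts in the incoming prompt distribution. The price, drift threshold, and class name are illustrative assumptions only — they are not Arthur's product, APIs, or actual rates.

```python
import statistics
from collections import deque

# Illustrative assumptions only: the price and threshold are made up,
# and a real deployment would track far richer metrics (toxicity, bias,
# task-specific accuracy proxies) than prompt length alone.
PRICE_PER_1K_TOKENS = 0.02   # assumed $/1K tokens, not a real quote
DRIFT_Z_THRESHOLD = 3.0      # flag prompts far outside the baseline

class LLMUsageMonitor:
    """Tracks estimated spend and a crude drift signal on prompt length."""

    def __init__(self, baseline_window: int = 1000):
        self.prompt_lengths = deque(maxlen=baseline_window)
        self.total_tokens = 0

    def record(self, prompt_tokens: int, completion_tokens: int) -> dict:
        tokens = prompt_tokens + completion_tokens
        self.total_tokens += tokens

        drifted = False
        if len(self.prompt_lengths) >= 30:  # wait for a baseline first
            mean = statistics.fmean(self.prompt_lengths)
            stdev = statistics.pstdev(self.prompt_lengths) or 1.0
            drifted = abs(prompt_tokens - mean) / stdev > DRIFT_Z_THRESHOLD
        self.prompt_lengths.append(prompt_tokens)

        return {
            "request_cost_usd": tokens / 1000 * PRICE_PER_1K_TOKENS,
            "running_cost_usd": self.total_tokens / 1000 * PRICE_PER_1K_TOKENS,
            "prompt_drift_flag": drifted,
        }

# Example: one request with a 250-token prompt and an 800-token completion.
monitor = LLMUsageMonitor()
print(monitor.record(prompt_tokens=250, completion_tokens=800))
```

The shape matters more than the numbers: both the inputs and the cost profile keep moving after launch, so this kind of bookkeeping has to run continuously alongside the model rather than as a one-time validation step.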
Opening Panel | Generative AI: Hype or Reality | AWS Startup Showcase S3 E1
(light airy music) >> Hello, everyone, welcome to theCUBE's presentation of the AWS Startup Showcase, AI and machine learning: "Top Startups Building Generative AI on AWS." This is season three, episode one of the ongoing series covering the exciting startups from the AWS ecosystem, talking about AI and machine learning. We have three great guests: Bratin Saha, Vice President of Machine Learning and AI Services at Amazon Web Services; Tom Mason, the CTO of Stability AI; and Aidan Gomez, CEO and co-founder of Cohere. Two practitioners doing startups, and AWS. Gentlemen, thank you for opening up this session, this episode. Thanks for coming on. >> Thank you. >> Thank you. >> Thank you. >> So the topic is hype versus reality. And I think we're all on the same page — the reality is great, the hype is great, but the reality's here. I want to get into it. Generative AI's got all the momentum, it's going mainstream, it's kind of come out from behind the ropes, it's now mainstream. We saw the success of ChatGPT, which opened up everyone's eyes, but there's so much more going on. Let's jump in and get your early perspectives on what people should be talking about right now. What are you guys working on? We'll start with AWS. What's the big focus right now for you guys as you come into this market that's highly active, highly hyped up, but people see value right out of the gate? >> You know, we have been working on generative AI for some time. In fact, last year we released Code Whisperer, which is about using generative AI for software development, and a number of customers are using it and getting real value out of it. So generative AI is now something that's mainstream, that can be used by enterprise users. And we have also been partnering with a number of other companies. So, you know, stability.ai, we've been partnering with them a lot. We want to be partnering with other companies as well, and seeing how we do three things: you know, first is providing the most efficient infrastructure for generative AI. And that is where, you know, things like Trainium, things like Inferentia, things like SageMaker come in. And then next is the set of models, and then the third is the kind of applications, like Code Whisperer and so on. So, you know, it's early days yet, but clearly there's a lot of amazing capabilities that will come out, and something that, you know, our customers are starting to pay a lot of attention to. >> Tom, talk about your company and what your focus is and why the Amazon Web Services relationship's important for you? >> So yeah, we're primarily committed to making incredible open source foundation models, and obviously Stable Diffusion's been our kind of first big model there, which we trained all on AWS. We've been working with them over the last year and a half to develop, obviously, a big cluster, and bring all that compute to training these models at scale, which has been a really successful partnership. And we're excited to take it further this year as we develop the commercial strategy of the business and build out, you know, the ability for enterprise customers to come and get all the value from these models that we think they can get. So we're really excited about the future. We've got a hugely exciting pipeline for this year with new modalities and video models and wonderful things, and trying to solve images once and for all and get the kind of general value and value proposition correct for customers. So it's a really exciting time and very honored to be part of it.
It's great to see some of your customers doing so well out there. Congratulations to your team. Appreciate that. Aidan, let's get into what you guys do. What does Cohere do? What are you excited about right now? >> Yeah, so Cohere builds large language models, which are the backbone of applications like ChatGPT and GPT-3. We're extremely focused on solving the issues with adoption for enterprise. So it's great that you can make a super flashy demo for consumers, but it takes a lot to actually get it into billion user products and large global enterprises. So about six months ago, we released our command models, which are some of the best that exist for large language models. And in December, we released our multilingual text understanding models, and that's on over a hundred different languages, and it's trained on, you know, authentic data directly from native speakers. And so we're super excited to continue pushing this into enterprise and solving those barriers for adoption, making this transformation a reality. >> Just real quick, while I got you there, on the new products coming out: where are we in the progress? People see some of the new stuff out there right now. There's so much more headroom. Can you just scope out in your mind what that looks like? Like from a headroom standpoint? Okay, we see ChatGPT. "Oh yeah, it writes my papers for me, does some homework for me." I mean okay, yawn, maybe people say that, (Aidan chuckles) or people are excited, or people are blown away. I mean, it's helped theCUBE out, it helps me, you know, speed up a little bit on my write-ups, but it's not always perfect. >> Yeah, at the moment it's like a writing assistant, right? And it's still super early in the technology's trajectory. I think it's fascinating and it's interesting, but its impact is still really limited. I think in the next year, like within the next eight months, we're going to see some major changes. You've already seen the very first hints of that with stuff like Bing Chat, where you augment these dialogue models with an external knowledge base. So now the models can be kept up to date to the millisecond, right? Because they can search the web and they can see events that happened a millisecond ago. But that's still limited in the sense that, when you ask the question, what can these models actually do? Well, they can just write text back at you. That's the extent of what they can do. And so the real project, the real effort, that I think we're all working towards is actually taking action. So what happens when you give these models the ability to use tools, to use APIs? What can they do when they can actually effect change out in the real world, beyond just streaming text back at the user? I think that's the really exciting piece. >> Okay, so I wanted to tee that up early in the segment 'cause I want to get into the customer applications. We're seeing early adopters come in, using the technology because they have a lot of data, they have a lot of large language model opportunities, and then there's a big fast follower wave coming behind it. I call that the people who are going to jump in the pool early and get into it. They might not be advanced. Can you guys share what customer applications are being used with large language and vision models today, and how they're using it to transform on the early adopter side, and how is that a tell sign of what's to come?
You know, one of the things we have been seeing, both with the text models that Aidan talked about as well as the vision models that stability.ai does, Tom, is customers are really using it to change the way they interact with information. You know, one example of a customer that we have is someone who's kind of using that to query customer conversations and ask questions like, you know, "What was the customer issue? How did we solve it?" And trying to get those kinds of insights that were previously much harder to get. And then of course software is a big area. You know, generating software, making that, you know, just deploying it in production. Those have been really big areas that we have seen customers start to do. You know, looking at documentation — like instead of, you know, searching for stuff and so on, you know, you just have an interactive way in which you can just look at the documentation for a product. You know, all of this goes to where we need to take the technology. One of which is, you know, the models have to be there, but they have to work reliably in a production setting at scale, with privacy, with security, and, you know, making sure all of this is happening is going to be really key. That is what, you know, we at AWS are looking to do, which is work with partners like Stability and others, and in the open source, and really take all of these and make them available at scale to customers, where they work reliably. >> Tom, Aidan, what are your thoughts on this? Where are customers landing on these first use cases, or the set of low-hanging fruit use cases or applications? >> Yeah, so I think like the first group of adopters that really found product market fit were the copywriting companies. So one great example of that is HyperWrite. Another one is Jasper. And so for Cohere, that's the tip of the iceberg — like there's a very long tail of usage from a bunch of different applications. HyperWrite is one of our customers; they help beat writer's block by drafting blog posts, emails, and marketing copy. We also have a global audio streaming platform, which is using us to power a search engine that can comb through podcast transcripts in a bunch of different languages. Then a global apparel brand, which is using us to transform how they interact with their customers through a virtual assistant, and two dozen global news outlets who are using us for news summarization. So really, these large language models, they can be deployed all over the place, into every single industry sector. Language is everywhere. It's hard to think of any company on Earth that doesn't use language. So it's, very, very- >> We're doing it right now. We got the language coming in. >> Exactly. >> We'll transcribe this puppy. All right. Tom, on your side, what do you see the- >> Yeah, we're seeing some amazing applications of it, and you know, I guess that's partly been because of the growth in the open source community, and some of these applications have come from there that are then triggering this secondary wave of innovation, which is coming a lot from, you know, controllability and explainability of the model. But we've got companies like, you know, Jasper, which Aidan mentioned, who are using Stable Diffusion for image generation in blog creation, content creation. We've got Lensa, you know, which exploded, and is built on top of Stable Diffusion for fine tuning, so people can bring themselves and their pets and, you know, everything into the models.
So we've now got fine tuned Stable Diffusion at scale, which has democratized, you know, that process, which is really fun to see. Lensa, you know, exploded — you know, I think it was the fastest growing app in the App Store at one point. And lots of other examples like NightCafe and Lexica and Playground. So seeing lots of cool applications. >> So many applications, we'll probably be a customer for all you guys. We'll definitely talk after. But the challenges are there for people adopting — they want to get into what you guys see as the challenges that turn into opportunities. How do you see the customers adopting generative AI applications? For example, we have massive amounts of transcripts, timed up to all the videos. I don't even know what to do. Do I just, do I code to my API there? So, everyone has this problem, every vertical has these use cases. What are the challenges for people getting into this and adopting these applications? Is it figuring out what to do first? Or is it a technical setup? Do they stand up stuff, or do they just go to Amazon? What do you guys see as the challenges? >> I think, you know, the first thing is coming up with where you think you're going to reimagine your customer experience by using generative AI. You know, we talked about Ada, and Tom talked about a number of these ones, and you know, you pick up one or two of these to get started. And then once you have them, you know, we have models — and we'll have more models on AWS, these large language models that Aidan was talking about. Then you go in and start using these models and testing them out and seeing whether they fit your use case or not. In many situations, like you said, John, our customers want to say, "You know, I know you've trained these models on a lot of publicly available data, but I want to be able to customize it for my use cases. Because, you know, there's some knowledge that I have created and I want to be able to use that." And then in many cases, and I think Aidan mentioned this, you know, you need these models to be up to date. Like you can't have it going stale. And in those cases, you augment it with a knowledge base. You know, you have to make sure that these models are not hallucinating. And so you need to be able to do the right kind of responsible AI checks. So, you know, you start with a particular use case, and there are a lot of them. Then, you know, you can come to AWS, and then look at one of the many models we have, and you know, we are going to have more models for other modalities as well. And then, you know, play around with the models. We have a playground kind of thing where you can test these models on some data, and then you can probably, you will probably want to bring your own data, customize it to your own needs, do some of the testing to make sure that the model is giving the right output, and then just deploy it. And you know, we have a lot of tools. >> Yeah. >> To make this easy for our customers. >> How should people think about large language models? Because do they think about it as something that they tap into with their IP or their data? Or is it a large language model that they apply into their system? Is the interface that way? What's the interaction look like? >> In many situations, you can use these models out of the box. But typically, in most other situations, you will want to customize it with your own data or with your own expectations. So the typical use case would be, you know, these models are exposed through APIs.
So the typical use case would be, you know, you're using these APIs a little bit for testing and getting familiar, and then there will be an API that will allow you to train this model further on your data. So you use that API, you know, make sure you augment it with the knowledge base. So then you use those APIs to customize the model and then just deploy it in an application. You know, like Tom was mentioning, a number of companies that are using these models. So once you have it, then you know, you again use an endpoint API and use it in an application. >> All right, I love the example. I want to ask Tom and Aidan, because like most of my experience with Amazon Web Services in 2007, I would stand up EC2, put my code on there, play around, and if it didn't work out, I'd shut it down. Is that a similar dynamic we're going to see with the machine learning, where developers just kind of log in and stand up infrastructure and play around and then have a cloud-like experience? >> So I can go first. So I mean, we obviously, with AWS, work really closely with the SageMaker team — a fantastic platform there for ML training and inference. And you know, going back to your point earlier, you know, where the data is, is hugely important for companies. Many companies bringing their models to their data in AWS, on-premise for them, is hugely important. Having the models be, you know, open source makes them explainable and transparent to the adopters of those models. So, you know, we are really excited to work with the SageMaker team over the coming year to bring companies to that platform and make the most of our models. >> Aidan, what's your take on developers? Do they just need to have a team in place, if they want to interface with you guys? Let's say, can they start learning? What do they got to do to set up? >> Yeah, so I think for Cohere, our product makes it much, much easier for people to get started and start building; it solves a lot of the productionization problems. But of course with SageMaker, like Tom was saying, I think that lowers the barrier even further, because it solves problems like data privacy. So I want to underline what Bratin was saying earlier around, when you're fine tuning or when you're using these models, you don't want your data being incorporated into someone else's model. You don't want it being used for training elsewhere. And so the ability to solve for enterprises that data privacy and that security guarantee has been hugely important for Cohere, and that's very easy to do through SageMaker. >> Yeah. >> But the barriers for using this technology are coming down super quickly. And so for developers, it's just becoming completely intuitive. I love this — there's this quote from Andrej Karpathy. He was saying like, "It really wasn't on my 2022 list of things to happen that English would become, you know, the most popular programming language." And so the barrier is coming down- >> Yeah. >> Super quickly and it's exciting to see. >> It's going to be awesome for all the companies here, and then we'll do more. We're probably going to see an explosion of startups — we're already seeing that; the maps, ecosystem maps, the landscape maps are happening. So this is happening, and I'm convinced it's not yesterday's chat bot, it's not yesterday's AI Ops. It's a whole other ballgame. So I have to ask you guys the final question before we kick off the company's showcasing here. How do you guys gauge success of generative AI applications?
Is there a lens to look through and say, okay, how do I see success? It could be just getting a win, or is it a bigger picture? Bratin, we'll start with you. How do you gauge success for generative AI? >> You know, ultimately it's about bringing business value to our customers, and making sure that those customers are able to reimagine their experiences by using generative AI. Now the way to get there is, of course, to deploy those models in a safe, effective manner, and ensuring that all of the robustness and the security guarantees and the privacy guarantees are all there. And we want to make sure that this transitions from something that's great demos to actual at-scale products, which means making them work reliably all of the time, not just some of the time. >> Tom, what's your gauge for success? >> Look, I think we're seeing a completely new form of ways to interact with data, to make data intelligent, and directly to bring in new revenue streams into business. So if businesses can use our models to leverage that and generate completely new revenue streams and ultimately bring incredible new value to their customers, then that's fantastic. And we hope we can power that revolution. >> Aidan, what's your take? >> Yeah, reiterating Bratin and Tom's point, I think that value in the enterprise and value in market is like a huge, you know, it's the goal that we're striving towards. I also think that, you know, the value to consumers and actual users, and the transformation of the surface area of technology to create experiences like ChatGPT that are magical — and it's the first time in human history we've been able to talk to something compelling that's not a human. I think that in itself is just extraordinary and so exciting to see. >> It really brings up a whole other category of markets. B2B, B2C, it's B2D, business to developer. Because I think this is kind of the big trend — the consumers have to win. The developers coding the apps, it's a whole other sea change. Reminds me of how everyone used the "Moneyball" movie as an example during the big data wave — then, you know, the value of data. There's a scene in "Moneyball" at the end, where Billy Beane's getting the offer from the Red Sox, and the Red Sox owner says, "If every team's not rebuilding their teams based upon your model, they'll be dinosaurs." I think that's the same with AI here. Every company will need to think about their business model and how they operate with AI. So it'll be a great run. >> Completely agree. >> It'll be a great run. >> Yeah. >> Aidan, Tom, thank you so much for sharing about your experiences at your companies, and congratulations on your success — and it's just the beginning. And Bratin, thanks for coming on representing AWS. And thank you, appreciate what you do. Thank you. >> Thank you, John. Thank you, Aidan. >> Thank you John. >> Thanks so much. >> Okay, let's kick off season three, episode one. I'm John Furrier, your host. Thanks for watching. (light airy music)
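As a rough illustration of the workflow Bratin walks through above — kick the tires on a hosted model through its API, customize it on your own data, then call the resulting endpoint from your application — here is a hedged Python sketch. The base URL, payload fields, and job-polling shape are hypothetical placeholders for illustration; they are not the actual APIs of AWS, Cohere, or Stability AI, each of which has its own SDK, schema, and authentication.

```python
import time
import requests

# Hypothetical endpoints for illustration only; every real provider
# ships its own SDK, payload schema, and authentication scheme.
BASE_URL = "https://api.example-llm-provider.com/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def try_base_model(prompt: str) -> str:
    """Step 1: kick the tires on the hosted base model."""
    resp = requests.post(f"{BASE_URL}/generate", headers=HEADERS,
                         json={"model": "base-model", "prompt": prompt})
    resp.raise_for_status()
    return resp.json()["text"]

def fine_tune(training_file_id: str) -> str:
    """Step 2: customize the model on your own data and wait for the job."""
    job = requests.post(f"{BASE_URL}/fine-tunes", headers=HEADERS,
                        json={"base_model": "base-model",
                              "training_file": training_file_id}).json()
    while job["status"] not in ("succeeded", "failed"):
        time.sleep(30)  # poll the hypothetical job until it finishes
        job = requests.get(f"{BASE_URL}/fine-tunes/{job['id']}",
                           headers=HEADERS).json()
    return job["fine_tuned_model"]

def call_endpoint(model_id: str, prompt: str) -> str:
    """Step 3: use the customized model behind an endpoint in your app."""
    resp = requests.post(f"{BASE_URL}/generate", headers=HEADERS,
                         json={"model": model_id, "prompt": prompt})
    resp.raise_for_status()
    return resp.json()["text"]
```

In practice you would swap these placeholder calls for the provider's own SDK, run the same prompts through the base and fine-tuned models side by side — the champion/challenger baselining discussed earlier — and only then wire the winning endpoint into the production application.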
Nancy Wang & Kate Watts | International Women's Day
>> Hello everyone. Welcome to theCUBE's coverage of International Women's Day. I'm John Furrier, host of theCUBE. We've been profiling the leaders in the technology world, women in technology from developers to the boardroom, everything in between. We have two great guests remoting in from Malaysia. Nancy Wang is the general manager of AWS Data Protection, also a CUBE alumni, and founder and board chair of Advancing Women in Tech, awit.org. And of course Kate Watts, who's the executive director of Advancing Women in Tech. So it's awit.org. Nancy, Kate, thanks for coming all the way across remotely from Malaysia. >> Of course, we're coming to you as fast as our internet bandwidth will allow us. And you know, I'm just thrilled today that you get to see a whole other aspect of my life, right? Because typically we talk about AWS, and here we're talking about a topic near and dear to my heart. >> Well, Nancy, I love the fact that you're spending a lot of time taking that empowerment out to help the industry and helping with the advancement of women in tech. Kate, you're the executive director — it's a 501(c)(3) nonprofit dedicated to accelerating the careers of women and underrepresented groups in tech. Can you talk about the organization? >> Yes, I can. So Advancing Women in Tech was founded in 2017 in order to fix some of the pathway problems that we're seeing on the rise to leadership in the industry. And so we specifically focus on supporting mid-level women in technical roles to get into higher positions. We do that in a few different ways: through mentorship programs, through building technical skills, and by connecting people to a supportive community. So you have your peer network, and then a vertical sort of relationship to help you navigate the next steps in your career. So to date we've served about 40,000 individuals globally, and we're just looking to expand our reach and impact and be able to better support women in the industry. >> Nancy, talk about the creation, the origination story. How'd this all come together? Obviously the momentum — everyone in the industry's been focused on this for a long time. Where did AWIT come from? Advancing Women in Technology, that's the acronym. Advancing Women in Technology.org, where'd it come from? What's the origination story? >> Yeah, so AWIT really originated from this desire that I had, to Kate's point, around — well, if you look around, right, and you know, don't take my word for it, right? Look at stats, look at news reports, or just frankly go on your LinkedIn and see how many women and members of underrepresented groups are in senior technical leadership roles out in the companies whose names we all know. And so that was my case back in 2016. And so when I first got the idea, back then I was actually at Google, just another large tech company in the valley, right? It was about how do we get more role models, how do we get more, for example, women into leadership roles so they can bring up the next generation, right? And so this is actually part of a longer speech that I'm about to give on Wednesday as part of the US State Department speaker program. In fact, that's why Kate and I are here in Malaysia right now, working with over 200 women entrepreneurs from all over Southeast Asia, including Malaysia, the Philippines, Vietnam, Borneo — you know, so many countries where having more women entrepreneurs can help raise the GDP, right, and that fits within our overall mission of getting more women into top leadership roles in tech.
You know, I was talking to Teresa Carlson — she came on the program as well for this next season we're going to do. And she mentioned the difference between US progress and international. And she's saying, as much as the numbers are still bad here, it's worse outside the United States and needs to get better. Can you comment on the global aspect? You brought that up. I think it's super important to highlight that it's not just one area, it's a global evolution. >> Absolutely, so let me start, and I'd love to actually have Kate talk about our current programs and all of the international groups that we're working with. So as Teresa aptly mentioned, there is so much work to be done, not just in the US and North America where tech nonprofits will typically focus, but rather, if you think about the one-to-N model, right? For example, when I was doing the product market fit workshop for the US State Department, I had women dialing in from rice fields, right? So let me just pause there for a moment. They were holding their cell phones up near towers, near trees, just so that they can get a few minutes of time with me to do a workshop on how to accelerate their business. So if you don't call that the desire to propel oneself or accelerate oneself, I'm not sure what is, right? And so it's really that passion that drove me to spend the next week and a half here working with local entrepreneurs, working with policy makers, so we can take advantage and really leverage that passion that people have, right? To accelerate more business globally. And so that's why, you know, Kate will be leading our contingent with the United Nations Women group, right? That is focused on women's economic empowerment, because that's super important, right? One aspect, for sure, is getting more directors, you know, vice presidents into companies like Google and Amazon. But another is also how do you encourage more women around the world to start businesses, right? To reach economic freedom and independence, right? To overcome some of the maybe social barriers to becoming a leader in their own country. >> Yes, and if I think about our own programs and our model of being very intentional about supporting the learning, development and skills of women and members of underrepresented groups, we focused very much on providing global access to a number of our programs. For instance, our product management certification on Coursera, or engineering management, or our upcoming women founders accelerator. We provide both access that you can get from anywhere, and then also very intentional programming that connects people into the networks, to be able to further their networks and what they've learned through the skills online, so. >> Yeah, and something Kate just told me recently is these courses that Kate's mentioning, right? She was instrumental in working with the American Council on Education, so that our learners can actually get up to six college credits for taking these courses on product management, engineering management, on cloud product management. And most recently we had one of our very first organic testimonials from a women's tech bootcamp in Nigeria, right? So if you think about the worldwide impact of these upskilling courses — which frankly in the US we might take for granted — around the world, as I mentioned, there are women dialing in from rice paddies, from, you know, for example, outside of, you know, corporate buildings, in order to access this content.
Can you think about the idea of — oh sorry, go ahead. >> Go ahead, no, go ahead Kate. >> I was going to say, if you can't see it, you can't become it. And so we are very intentional about ensuring that we're spotlighting the expertise of women, and we are broadcasting that everywhere, so that anybody coming up can gain the skills and the networks to be able to succeed in this industry. >> We'll make sure we get those links so we can promote them. Obviously we feel the same way getting the word out. I think a couple things I'd like to ask you guys, 'cause I think you hit a great point. One is the economic advantage — the numbers prove that diverse teams perform better, number one, that's clear. So good point there. But I want to get your thoughts on the entrepreneurial equation. You mentioned founders and startups, and there's also different makeups in different countries. It's not like the big corporations; sometimes it's smaller businesses in certain areas, and different cultures have different business sizes and business types. How do you guys see that factoring in outside the United States, say with the big tech companies? Okay, yeah, it's easier to lower the access, get an education and then stay with them — in other countries, is it the same or is it more diverse in terms of business? >> So what really actually got us started with the US State Department was around our work with women founders. And I'd love for Kate to actually share her experience working with AWS startups in that capacity. But frankly, you know, we looked at the content and the mentor programs that we were providing women who wanted to be executives, and, you know, quickly realized a lot of those same skills, such as finding customers, right, scaling your product and building channels, can also apply to women founders, not just executives. And so early supporters of our efforts came from firms such as Madrona up in Seattle, Emergence Ventures, Decibel Ventures in, you know, the Bay Area, and a few others that we're working with right now. Right, they believed in the mission and really helped us scale out what is now our existing platform and offerings for women founders. >> Those are great firms by the way. And they also are very founder friendly and also understand the global workforce. I mean, that's a whole other dimension. Okay, what's your reaction to all that? >> Yes, we have been very intentional about taking the product expertise and the learnings of women in our network. We first worked with AWS startups to support the development of the curriculum for the recent accelerator for women founders that was held last spring. And so we were able to support 25 founders, and also brought in the expertise of about 20 or 30 women from Advancing Women in Tech to be able to be the lead instructors and mentors for that. And so we have really realized that with this network and this individual sort of focus on product expertise and building strong teams, we can take that information and bring it to folks everywhere. And so there is very much the intentionality of allowing founders, allowing individuals, to take the lessons and bring them to their individual circumstances and the cultures in which they are operating. But the product sense is a skill that we can support the development of, and we're proud to do so. >> That's awesome. Nancy, I want to ask you — we normally talk about data storage and AWS cloud greatness and goodness; here it's different. You also work full-time at AWS, and you're the founder, or the chairman, of this great organization.
How do you balance both? And are they getting behind you on this? Amazon is getting behind you on this? >> Well, as I say, it's always easier to negotiate on the way in. But jokes aside, I have to say the leadership has been tremendously supportive. If you think about, for example, my leaders, Wayne Duso, who's also been on the show multiple times, Bill Vaas, who's also been on the show multiple times, you know, they're both founders and also operators, entrepreneurs at heart. So they understand that it is important, right? For all of us, it's really incumbent on all of us who are in positions to do so, to create a pathway for more people to be in leadership roles, for more people to be successful entrepreneurs. So, no, I mean if you just looked at LinkedIn, they're always amplifying my posts so they reach more audiences. And frankly they're rooting for us back home in the US while we're in Malaysia this week. >> That's awesome. And I think that's a good culture to have, that empowerment, and I think that's very healthy. What's next for you guys? What's on the agenda? Take us through the activities. I know that you got a ton of things happening. You got your event out there, which is why you're out there. There's a bunch of other activities. I think you guys call it the Advancing Women in Tech week. >> Yes, this week we are having a week of programming that you can check out at Advancing Women in Tech.org. That is spotlighting the expertise of a number of women in our space. So it is three days of programming, Tuesday, Wednesday and Thursday if you are in the US, so the seventh through the ninth, but available globally. We are also going to be in New York next week for the event at the UN and are looking to continue to support our mentorship programs and also our work supporting women founders throughout the year. >> All right. I have to ask you guys, if you don't mind, to get a little market data so you can share with us here at theCUBE. What are you hearing this year that's different in the conversation space around the topics, the interests? Obviously I've seen massive amounts of global acceleration around conversations, more video, things like this, more stories are scaling, a lot more LinkedIn activity. It just seems like it's a lot different this year. Can you guys share any kind of current trends you're seeing relative to the conversations and topics being discussed across the community? >> Well, I think from a needle moving perspective, right? I think due to the efforts of wonderful organizations, including theCUBE, for spotlighting all of these awesome women, right? Trailblazing women and the nonprofits, the government entities that we work with, there's definitely more emphasis on creating access and creating pathways. So that's probably one thing that you're seeing is more women, more investors posting about their activities. Number two, from a global trend perspective, right? The rise of women in security. I noticed that on your agenda today, you had Lena Smart, who's a good friend of mine, chief information security officer at MongoDB, right? She and I are actually quite involved in helping founders, especially early stage founders, in the security space. And so globally from a pure technical perspective, right? There are, right, more and more regulations around data privacy, data sovereignty, right? For example, India's in a few weeks about to get their first data protection regulation there locally. 
So all of that is giving rise to yet another wave of opportunity and we want women founders uniquely positioned to take advantage of that opportunity. >> I love it. Kate, reaction to that? I mean founders, more pathways it sounds like a neural network, it sounds like AI enabled. >> Yes, and speaking of AI, with the rise of that we are also hearing from many community members the importance of continuing to build their skills upskill learn to be able to keep up with the latest trends. There's a lot of people wondering what does this mean for my own career? And so they're turning to organizations like Advancing Women in Tech to find communities to both learn the latest information, but also build their networks so that they are able to move forward regardless of what the industry does. >> I love the work you guys are doing. It's so impressive. I think the economic angle is new it's more amplified this year. It's always kind of been there and continues to be. What do you guys hope for by next year this time what do you hope to see different from a needle moving perspective, to use your word Nancy, for next year? What's the visual output in your mind? >> I want to see real effort made towards 50-50 representation in all tech leadership roles. And I'd like to see that happen by 2050. >> Kate, anything on your end? >> I love that. I'm going to go a little bit more touchy-feely. I want everybody in our space to understand that the skills that they build and that the networks they have carry with them regardless of wherever they go. And so to be able to really lean in and learn and continue to develop the career that you want to have. So whether that be at a large organization or within your own business, that you've got the potential to move forward on that within you. >> Nancy, Kate, thank you so much for your contribution. I'll give you the final word. Put a plug in for the organization. What are you guys looking for? Any kind of PSA you want to share with the folks watching? >> Absolutely, so if you're in a position to be a mentor, join as a mentor, right? Help elevate and accelerate the next generation of women leaders. If you're an investor help us invest in more women started companies, right? Women founded startups and lastly, if you are women looking to accelerate your career, come join our community. We have resources, we have mentors and who we have investors who are willing to come in on the ground floor and help you accelerate your business. >> Great work. Thank you so much for participating in our International Women's Day 23 program and we'd look to keep this going quarterly. We'll see you next year, next time. Thanks for coming on. Appreciate it. >> Thanks so much John. >> Thank you. >> Okay, women leaders here. >> Nancy: Thanks for having us >> All over the world, coming together for a great celebration but really highlighting the accomplishments, the pathways the investment, the mentoring, everything in between. It's theCUBE. Bring as much as we can. I'm John Furrier, your host. Thanks for watching.
Lena Smart & Tara Hernandez, MongoDB | International Women's Day
(upbeat music) >> Hello and welcome to theCube's coverage of International Women's Day. I'm John Furrier, your host of "theCUBE." We've got great two remote guests coming into our Palo Alto Studios, some tech athletes, as we say, people that've been in the trenches, years of experience, Lena Smart, CISO at MongoDB, Cube alumni, and Tara Hernandez, VP of Developer Productivity at MongoDB as well. Thanks for coming in to this program and supporting our efforts today. Thanks so much. >> Thanks for having us. >> Yeah, everyone talk about the journey in tech, where it all started. Before we get there, talk about what you guys are doing at MongoDB specifically. MongoDB is kind of gone the next level as a platform. You have your own ecosystem, lot of developers, very technical crowd, but it's changing the business transformation. What do you guys do at Mongo? We'll start with you, Lena. >> So I'm the CISO, so all security goes through me. I like to say, well, I don't like to say, I'm described as the ones throat to choke. So anything to do with security basically starts and ends with me. We do have a fantastic Cloud engineering security team and a product security team, and they don't report directly to me, but obviously we have very close relationships. I like to keep that kind of church and state separate and I know I've spoken about that before. And we just recently set up a physical security team with an amazing gentleman who left the FBI and he came to join us after 26 years for the agency. So, really starting to look at the physical aspects of what we offer as well. >> I interviewed a CISO the other day and she said, "Every day is day zero for me." Kind of goofing on the Amazon Day one thing, but Tara, go ahead. Tara, go ahead. What's your role there, developer productivity? What are you focusing on? >> Sure. Developer productivity is kind of the latest description for things that we've described over the years as, you know, DevOps oriented engineering or platform engineering or build and release engineering development infrastructure. It's all part and parcel, which is how do we actually get our code from developer to customer, you know, and all the mechanics that go into that. It's been something I discovered from my first job way back in the early '90s at Borland. And the art has just evolved enormously ever since, so. >> Yeah, this is a very great conversation both of you guys, right in the middle of all the action and data infrastructures changing, exploding, and involving big time AI and data tsunami and security never stops. Well, let's get into, we'll talk about that later, but let's get into what motivated you guys to pursue a career in tech and what were some of the challenges that you faced along the way? >> I'll go first. The fact of the matter was I intended to be a double major in history and literature when I went off to university, but I was informed that I had to do a math or a science degree or else the university would not be paid for. At the time, UC Santa Cruz had a policy that called Open Access Computing. This is, you know, the late '80s, early '90s. And anybody at the university could get an email account and that was unusual at the time if you were, those of us who remember, you used to have to pay for that CompuServe or AOL or, there's another one, I forget what it was called, but if a student at Santa Cruz could have an email account. And because of that email account, I met people who were computer science majors and I'm like, "Okay, I'll try that." 
That seems good. And it was a little bit of a struggle for me, a lot I won't lie, but I can't complain with how it ended up. And certainly once I found my niche, which was development infrastructure, I found my true love and I've been doing it for almost 30 years now. >> Awesome. Great story. Can't wait to ask a few questions on that. We'll go back to that late '80s, early '90s. Lena, your journey, how you got into it. >> So slightly different start. I did not go to university. I had to leave school when I was 16, got a job, had to help support my family. Worked a bunch of various jobs till I was about 21 and then computers became more, I think, I wouldn't say they were ubiquitous, but they were certainly out there. And I'd also been saving up every penny I could earn to buy my own computer and bought an Amstrad 1640, 20 meg hard drive. It rocked. And kind of took that apart, put it back together again, and thought that could be money in this. And so basically just teaching myself about computers any job that I got. 'Cause most of my jobs were like clerical work and secretary at that point. But any job that had a computer in front of that, I would make it my business to go find the guy who did computing 'cause it was always a guy. And I would say, you know, I want to learn how these work. Let, you know, show me. And, you know, I would take my lunch hour and after work and anytime I could with these people and they were very kind with their time and I just kept learning, so yep. >> Yeah, those early days remind me of the inflection point we're going through now. This major C change coming. Back then, if you had a computer, you had to kind of be your own internal engineer to fix things. Remember back on the systems revolution, late '80s, Tara, when, you know, your career started, those were major inflection points. Now we're seeing a similar wave right now, security, infrastructure. It feels like it's going to a whole nother level. At Mongo, you guys certainly see this as well, with this AI surge coming in. A lot more action is coming in. And so there's a lot of parallels between these inflection points. How do you guys see this next wave of change? Obviously, the AI stuff's blowing everyone away. Oh, new user interface. It's been called the browser moment, the mobile iPhone moment, kind of for this generation. There's a lot of people out there who are watching that are young in their careers, what's your take on this? How would you talk to those folks around how important this wave is? >> It, you know, it's funny, I've been having this conversation quite a bit recently in part because, you know, to me AI in a lot of ways is very similar to, you know, back in the '90s when we were talking about bringing in the worldwide web to the forefront of the world, right. And we tended to think in terms of all the optimistic benefits that would come of it. You know, free passing of information, availability to anyone, anywhere. You just needed an internet connection, which back then of course meant a modem. >> John: Not everyone had though. >> Exactly. But what we found in the subsequent years is that human beings are what they are and we bring ourselves to whatever platforms that are there, right. And so, you know, as much as it was amazing to have this freely available HTML based internet experience, it also meant that the negatives came to the forefront quite quickly. And there were ramifications of that. And so to me, when I look at AI, we're already seeing the ramifications to that. 
Yes, are there these amazing, optimistic, wonderful things that can be done? Yes. >> Yeah. >> But we're also human and the bad stuff's going to come out too. And how do we- >> Yeah. >> How do we as an industry, as a community, you know, understand and mitigate those ramifications so that we can benefit more from the positive than the negative. So it is interesting that it comes kind of full circle in really interesting ways. >> Yeah. The underbelly takes place first, gets it in the early adopter mode. Normally industries with, you know, money involved arbitrage, no standards. But we've seen this movie before. Is there hope, Lena, that we can have a more secure environment? >> I would hope so. (Lena laughs) Although depressingly, we've been in this well for 30 years now and we're, at the end of the day, still telling people not to click links on emails. So yeah, that kind of still keeps me awake at night a wee bit. The whole thing about AI, I mean, it's, obviously I am not an expert by any stretch of the imagination in AI. I did read (indistinct) book recently about AI and that was kind of interesting. And I'm just trying to teach myself as much as I can about it to the extent of even buying the "Dummies Guide to AI." Just because, it's actually not a dummies guide. It's actually fairly interesting, but I'm always thinking about it from a security standpoint. So it's kind of my worst nightmare and the best thing that could ever happen in the same dream. You know, you've got this technology where I can ask it a question and you know, it spits out generally a reasonable answer. And my team are working on with Mark Porter our CTO and his team on almost like an incubation of AI link. What would it look like from MongoDB? What's the legal ramifications? 'Cause there will be legal ramifications even though it's the wild, wild west just now, I think. Regulation's going to catch up to us pretty quickly, I would think. >> John: Yeah, yeah. >> And so I think, you know, as long as companies have a seat at the table and governments perhaps don't become too dictatorial over this, then hopefully we'll be in a good place. But we'll see. I think it's a really interest, there's that curse, we're living in interesting times. I think that's where we are. >> It's interesting just to stay on this tech trend for a minute. The standards bodies are different now. Back in the old days there were, you know, IEEE standards, ITF standards. >> Tara: TPC. >> The developers are the new standard. I mean, now you're seeing open source completely different where it was in the '90s to here beginning, that was gen one, some say gen two, but I say gen one, now we're exploding with open source. You have kind of developers setting the standards. If developers like it in droves, it becomes defacto, which then kind of rolls into implementation. >> Yeah, I mean I think if you don't have developer input, and this is why I love working with Tara and her team so much is 'cause they get it. If we don't have input from developers, it's not going to get used. There's going to be ways of of working around it, especially when it comes to security. If they don't, you know, if you're a developer and you're sat at your screen and you don't want to do that particular thing, you're going to find a way around it. You're a smart person. >> Yeah. >> So. >> Developers on the front lines now versus, even back in the '90s, they're like, "Okay, consider the dev's, got a QA team." 
Everything was Waterfall, now it's Cloud, and developers are on the front lines of everything. Tara, I mean, this is where the standards are being met. What's your reaction to that? >> Well, I think it's outstanding. I mean, you know, like I was at Netscape and part of the crowd that released the browser as open source and we founded mozilla.org, right. And that was, you know, in many ways kind of the birth of the modern open source movement beyond what we used to have, what was basically free software foundation was sort of the only game in town. And I think it is so incredibly valuable. I want to emphasize, you know, and pile onto what Lena was saying, it's not just that the developers are having input on a sort of company by company basis. Open source to me is like a checks and balance, where it allows us as a broader community to be able to agree on and enforce certain standards in order to try and keep the technology platforms as accessible as possible. I think Kubernetes is a great example of that, right. If we didn't have Kubernetes, that would've really changed the nature of how we think about container orchestration. But even before that, Linux, right. Linux allowed us as an industry to end the Unix Wars and as someone who was on the front lines of that as well and having to support 42 different operating systems with our product, you know, that was a huge win. And it allowed us to stop arguing about operating systems and start arguing about software or not arguing, but developing it in positive ways. So with, you know, with Kubernetes, with container orchestration, we all agree, okay, that's just how we're going to orchestrate. Now we can build up this huge ecosystem, everybody gets taken along, right. And now it changes the game for what we're defining as business differentials, right. And so when we talk about crypto, that's a little bit harder, but certainly with AI, right, you know, what are the checks and balances that as an industry and as the developers around this, that we can in, you know, enforce to make sure that no one company or no one body is able to overly control how these things are managed, how it's defined. And I think that is only for the benefit in the industry as a whole, particularly when we think about the only other option is it gets regulated in ways that do not involve the people who actually know the details of what they're talking about. >> Regulated and or thrown away or bankrupt or- >> Driven underground. >> Yeah. >> Which would be even worse actually. >> Yeah, that's a really interesting, the checks and balances. I love that call out. And I was just talking with another interview part of the series around women being represented in the 51% ratio. Software is for everybody. So that we believe that open source movement around the collective intelligence of the participants in the industry and independent of gender, this is going to be the next wave. You're starting to see these videos really have impact because there are a lot more leaders now at the table in companies developing software systems and with AI, the aperture increases for applications. And this is the new dynamic. What's your guys view on this dynamic? How does this go forward in a positive way? Is there a certain trajectory you see? For women in the industry? 
>> I mean, I think some of the states are trying to, again, from the government angle, some of the states are trying to force women into the boardroom, for example, California, which can be no bad thing, but I don't know, sometimes I feel a bit iffy about all this kind of forced- >> John: Yeah. >> You know, making, I don't even know how to say it properly so you can cut this part of the interview. (John laughs) >> Tara: Well, and I think that they're >> I'll say it's not organic. >> No, and I think they're already pulling it out, right. It's already been challenged so they're in the process- >> Well, this is the open source angle, Tara, you are getting at it. The change agent is open, right? So to me, the history of the proven model is openness drives transparency drives progress. >> No, it's- >> If you believe that to be true, this could have another impact. >> Yeah, it's so interesting, right. Because if you look at McKinsey Consulting or Boston Consulting or some of the other, I'm blocking on all of the names. There has been a decade or more of research that shows that a non homogeneous employee base, be it gender or ethnicity or whatever, generates more revenue, right? There's dollar signs that can be attached to this, but it's not enough for all companies to want to invest in that way. And it's not enough for all, you know, venture firms or investment firms to grant that seed money or do those seed rounds. I think it's getting better very slowly, but socialization is a much harder thing to overcome over time. Particularly, when you're not just talking about one country like the United States in our case, but around the world. You know, tech centers now exist all over the world, including places that even 10 years ago we might not have expected like Nairobi, right. Which I think is amazing, but you have to factor in the cultural implications of that as well, right. So yes, the openness is important and we have, it's important that we have those voices, but I don't think it's a panacea solution, right. It's just one more piece. I think honestly that one of the most important opportunities has been with Cloud computing and Cloud's been around for a while. So why would I say that? It's because if you think about like everybody holds up the Steve Jobs, Steve Wozniak, back in the '70s, or Sergey and Larry for Google, you know, you had to have access to enough credit card limit to go to Fry's and buy your servers and then access to somebody like Susan Wojcicki to borrow the garage or whatever. But there was still a certain amount of upfrontness that you had to be able to commit to, whereas now, and we've, I think, seen a really good evidence of this being able to lease server resources by the second and have development platforms that you can do on your phone. I mean, for a while I think Africa, that the majority of development happened on mobile devices because there wasn't a sufficient supply chain of laptops yet. And that's no longer true now as far as I know. But like the power that that enables for people who would otherwise be underrepresented in our industry instantly opens it up, right? And so to me that's I think probably the biggest opportunity that we've seen from an industry on how to make more availability in underrepresented representation for entrepreneurship. >> Yeah. >> Something like AI, I think that's actually going to take us backwards if we're not careful. >> Yeah. >> Because of we're reinforcing that socialization. >> Well, also the bias. 
A lot of people are commenting that the biases inherently built into the large language models are also a problem. Lena, I want you to weigh in on this too, because I think the skills question comes up here and I've been advocating that you don't need the pedigree, college pedigree, to get into certain jobs, you mentioned Cloud computing. I mean, it's been around for, you'd think, a long time, but not really when you really think about it. The ability to level up, okay, if you're going to join something new, and half the jobs in cybersecurity were created in the past year, right? So you have this, what used to be a barrier, your degree, your pedigree, your certification would take years, would be a blocker. Now that's gone. >> Lena: Yeah, it's the opposite. >> That's, in fact, psychology. >> I think so, but the people who I, by and large, who I interview for jobs, they have, I think security people and also I work with our compliance folks and I can't forget them, but let's talk about security just now. I've always found a particular kind of mindset with security folks. We're very curious, not very good at following rules a lot of the time, and we'd love to teach others. I mean, that's one of the big things that stems from the start of my career. People were always interested in teaching and I was interested in learning. So it was perfect. And I think also having, you know, strong women leaders at MongoDB allows other underrepresented groups to actually apply to the company 'cause they see that we're kind of talking the talk. And that's been important. I think it's really important. You know, you've got Tara and I on here today. There's obviously other senior women at MongoDB that you can talk to as well. There's a bunch of us. There's not a whole ton of us, but there's a bunch of us. And it's good. It's definitely growing. I've been there for four years now and I've seen a growth in women in senior leadership positions. And I think having that kind of track record of getting really good quality underrepresented candidates to not just interview, but come and join us, it's seen. And it's seen in the industry and people take notice and they're like, "Oh, okay, well if that person's working, you know, if Tara Hernandez is working there, I'm going to apply for that." And that in itself I think can really, you know, reap the rewards. But it's getting started. It's like how do you get your first strong female into that position or your first strong underrepresented person into that position? It's hard. I get it. If it was easy, we would've solved it already. >> It's like anything. I want to see people like me, my friends in there. Am I going to be alone? Am I going to be part of a group? It's a group psychology. Why wouldn't it be? So getting it out there is key. Are there skills that you think people should pay attention to? Ones that have come up are curiosity, learning. What are some of the best practices for folks trying to get into the tech field, or that are in the tech field and advancing through? What advice are you guys- >> I mean, yeah, definitely, what I say to my team is within my budget, we try and give everyone at least one training course a year. And there's so much free stuff out there as well. But, you know, keep learning. And even if it's not right in your wheelhouse, don't pick about it. Don't, you know, take a look at what else could be out there that could interest you and then go for it. You know, what does it take you, a few minutes each night, to read a book on something that might change your entire career? 
You know, be enthusiastic about the opportunities out there. And there's so many opportunities in security. Just so many. >> Tara, what's your advice for folks out there? Tons of stuff to taste, taste test, try things. >> Absolutely. I mean, I always say, you know, my primary qualifications for people, I'm looking for them to be smart and motivated, right. Because the industry changes so quickly. What we're doing now versus what we did even last year versus five years ago, you know, is completely different though themes are certainly the same. You know, we still have to code and we still have to compile that code or package the code and ship the code so, you know, how well can we adapt to these new things instead of creating floppy disks, which was my first job. Five and a quarters, even. The big ones. >> That's old school, OG. There it is. Well done. >> And now it's, you know, containers, you know, (indistinct) image containers. And so, you know, I've gotten a lot of really great success hiring boot campers, you know, career transitioners. Because they bring a lot experience in addition to the technical skills. I think the most important thing is to experiment and figuring out what do you like, because, you know, maybe you are really into security or maybe you're really into like deep level coding and you want to go back, you know, try to go to school to get a degree where you would actually want that level of learning. Or maybe you're a front end engineer, you want to be full stacked. Like there's so many different things, data science, right. Maybe you want to go learn R right. You know, I think it's like figure out what you like because once you find that, that in turn is going to energize you 'cause you're going to feel motivated. I think the worst thing you could do is try to force yourself to learn something that you really could not care less about. That's just the worst. You're going in handicapped. >> Yeah and there's choices now versus when we were breaking into the business. It was like, okay, you software engineer. They call it software engineering, that's all it was. You were that or you were in sales. Like, you know, some sort of systems engineer or sales and now it's,- >> I had never heard of my job when I was in school, right. I didn't even know it was a possibility. But there's so many different types of technical roles, you know, absolutely. >> It's so exciting. I wish I was young again. >> One of the- >> Me too. (Lena laughs) >> I don't. I like the age I am. So one of the things that I did to kind of harness that curiosity is we've set up a security champions programs. About 120, I guess, volunteers globally. And these are people from all different backgrounds and all genders, diversity groups, underrepresented groups, we feel are now represented within this champions program. And people basically give up about an hour or two of their time each week, with their supervisors permission, and we basically teach them different things about security. And we've now had seven full-time people move from different areas within MongoDB into my team as a result of that program. So, you know, monetarily and time, yeah, saved us both. But also we're showing people that there is a path, you know, if you start off in Tara's team, for example, doing X, you join the champions program, you're like, "You know, I'd really like to get into red teaming. That would be so cool." If it fits, then we make that happen. 
And that has been really important for me, especially to give, you know, the women in the underrepresented groups within MongoDB just that window into something they might never have seen otherwise. >> That's a great comment, fit matters. Also, getting access to where you fit is also access to either mentoring or sponsorship or some sort of, at least some navigation. Like what's out there and not being afraid to like, you know, just ask. >> Yeah, we just actually kicked off our big mentor program last week, so I'm the executive sponsor of that. I know Tara is part of it, which is fantastic. >> We'll put a plug in for it. Go ahead. >> Yeah, no, it's amazing. There's, gosh, I don't even know the numbers anymore, but there's a lot of people involved in this, and so much so that we've had to set up mentoring groups rather than one-on-one. And I think it was 45% of the mentors are actually male, which is quite incredible for a program called Mentor Her. And then what we want to do in the future is actually create a program called Mentor Them so that it's not, you know, not just on the female side, and so that we can have other groups represented and, you know, kind of break down those groups a wee bit more and have some more granularity in the offering. >> Tara, talk about mentoring and sponsorship. Open source has been there for a long time. People help each other. It's community-oriented. What's your view of how to work with mentors and sponsors if someone's moving through the ranks? 
I mean, I can't tell you how many times someone's told me "MongoDB doesn't scale. It's going to be dead next year." I mean, I was going back 10 years. It's like, just keeps getting better and better. You guys do a great job. So it's so fun to see the success of developers. Really appreciate you guys coming on the program. Final question, what are you guys excited about to end the segment? We'll give you guys the last word. Lena will start with you and Tara, you can wrap us up. What are you excited about? >> I'm excited to see what this year brings. I think with ChatGPT and its copycats, I think it'll be a very interesting year when it comes to AI and always in the lookout for the authentic deep fakes that we see coming out. So just trying to make people aware that this is a real thing. It's not just pretend. And then of course, our old friend ransomware, let's see where that's going to go. >> John: Yeah. >> And let's see where we get to and just genuine hygiene and housekeeping when it comes to security. >> Excellent. Tara. >> Ah, well for us, you know, we're always constantly trying to up our game from a security perspective in the software development life cycle. But also, you know, what can we do? You know, one interesting application of AI that maybe Google doesn't like to talk about is it is really cool as an addendum to search and you know, how we might incorporate that as far as our learning environment and developer productivity, and how can we enable our developers to be more efficient, productive in their day-to-day work. So, I don't know, there's all kinds of opportunities that we're looking at for how we might improve that process here at MongoDB and then maybe be able to share it with the world. One of the things I love about working at MongoDB is we get to use our own products, right. And so being able to have this interesting document database in order to put information and then maybe apply some sort of AI to get it out again, is something that we may well be looking at, if not this year, then certainly in the coming year. >> Awesome. Lena Smart, the chief information security officer. Tara Hernandez, vice president developer of productivity from MongoDB. Thank you so much for sharing here on International Women's Day. We're going to do this quarterly every year. We're going to do it and then we're going to do quarterly updates. Thank you so much for being part of this program. >> Thank you. >> Thanks for having us. >> Okay, this is theCube's coverage of International Women's Day. I'm John Furrier, your host. Thanks for watching. (upbeat music)
Sue Barsamian | International Women's Day
(upbeat music) >> Hi, everyone. Welcome to theCUBE's coverage of International Women's Day. I'm John Furrier, host of theCUBE. As part of International Women's Day, we're featuring some of the leading women in business technology from developer to all types of titles and to the executive level. And one topic that's really important is called Getting a Seat at the Table, board makeup, having representation at corporate boards, private and public companies. It's been a big push. And former technology operating executive and corporate board member, she's a board machine Sue Barsamian, formerly with HPE, Hewlett Packard. Sue, great to see you. CUBE alumni, distinguished CUBE alumni. Thank you for coming on. >> Yes, I'm very proud of my CUBE alumni title. >> I'm sure it opens a lot of doors for you. (Sue laughing) We're psyched to have you on. This is a really important topic, and I want to get into the whole, as women advance up, and they're sitting on the boards, they can implement policy and there's governance. Obviously public companies have very strict oversight, and not strict, but like formal. Private boards have to operate, be nimble. They don't have to share all their results. But still, boards play an important role in the success of scaled up companies. So super important, that representation there is key. >> Yes. >> I want to get into that, but first, before we get started, how did you get into tech? How did it all start for you? >> Yeah, long time ago, I was an electrical engineering major. Came out in 1981 when, you know, opportunities for engineering, if you were kind, I went to Kansas State as an undergrad, and basically in those days you went to Texas and did semiconductors. You went to Atlanta and did communication satellites. You went to Boston or you went to Silicon Valley. And for me, that wasn't too hard a choice. I ended up going west and really, I guess what, embarked on a 40 year career in Silicon Valley and absolutely loved it. Largely software, but some time on the hardware side. Started out in networking, but largely software. And then, you know, four years ago transitioned to my next chapter, which is the corporate board director. And again, focused on technology software and cybersecurity boards. >> For the folks watching, we'll cut through another segment we can probably do about your operating career, but you rose through the ranks and became a senior operating executive at the biggest companies in the world. Hewlett Packard Enterprise, Hewlett Packard Enterprise and others. Very great career, okay. And so now you're kind of like, put that on pause, and you're moving on to the next chapter, which is being a board director. What inspired you to be a board director for multiple public companies and multiple private companies? Well, how many companies are you on? But what's the inspiration? What's the inspiration? First tell me how many board ships you're on, board seats you're on, and then what inspired you to become a board director? >> Yeah, so I'm on three public, and you are limited in terms of the number of publics that you can do to four. So I'm on three public, and I'm on four private from a tech perspective. And those range from, you know, a $4 billion in revenue public company down to a 35 person private company. So I've got the whole range. >> So you're like freelancing, I mean, what is it like? It's a full-time job, obviously. It's a lot of work involved. >> Yeah, yeah, it's. >> John: Why are you doing it? 
>> Well, you know, so I retired from being an operating executive after 37 years. And, but I loved, I mean, it's tough, right? It's tough these days, particularly with all the pressures out there in the market, not to mention the pandemic, et cetera. But I loved it. I loved working. I loved having a career, and I was ready to back off on, I would say the stresses of quarterly results and the stresses of international travel. You have so much of it. But I wasn't ready to back off from being involved and engaged and continuing to learn new things. I think this is why you come to tech, and for me, why I went to the valley to begin with was really that energy and that excitement, and it's like it's constantly reinventing itself. And I felt like that wasn't over for me. And I thought because I hadn't done boards before I retired from operating roles, I thought, you know, that would fill the bill. And it's honestly, it has exceeded expectations. >> In a good way. You feel good about where you're at and. >> Yeah. >> What you went in, what was the expectation going in and what surprised you? And were there people along the way that kind of gave you some pointers or don't do this, stay away from this. Take us through your experiences. >> Yeah, honestly, there is an amazing network of technology board directors, you know, in the US and specifically in the Valley. And we are all incredibly supportive. We have groups where we get together as board directors, and we talk about topics, and we share best practices and stories, and so I underestimated that, right? I thought I was going to, I thought I was going to enter this chapter where I would be largely giving back after 37 years. You've learned a little bit, right? What I underestimated was just the power of continuing to learn and being surrounded by so many amazing people. When, you know, when you do, you know, multiple boards, your learnings are just multiplied, right? Because you see not just one model, but you see many models. You see not just one problem, but many problems. Not just one opportunity, but many opportunities. And I underestimated how great that would be for me from a learning perspective and then your ability to share from one board to the other board because all of my boards are companies who are also quite close to each other, the executives collaborate. So that has turned out to be really exciting for me. >> So you had the stressful job. You rose to the top of the ranks, quarterly shot clock earnings, and it's hard charging. It's like, it's like, you know, being an athlete, as we say tech athlete. You're a tech athlete. Now you're taking that to the next level, which is now you're juggling multiple operational kind of things, but not with super pressure. But there's still a lot of responsibility. I know there's one board, you got compensation committee, I mean there's work involved. It's not like you're clipping coupons and having pizza. >> Yeah, no, it's real work. Believe me, it's real work. But I don't know how long it took me to not, to stop waking up and looking at my phone and thinking somebody was going to be dropping their forecast, right? Just that pressure of the number, and as a board member, obviously you are there to support and help guide the company and you feel, you know, you feel the pressure and the responsibility of what that role entails, but it's not the same as the frontline pressure every quarter. It's different. And so I did the first type. I loved it, you know. I'm loving this second type. 
>> You know, the retirement, it's always a cliche these days, but it's not really like what people think it is. It's not like getting a boat, going fishing or whatever. It's doing whatever you want to do, that's what retirement is. And you've chose to stay active. Your brain's being tested, and you're working it, having fun without all the stress. But it's enough, it's like going the gym. You're not hardcore workout, but you're working out with the brain. >> Yeah, no, for sure. It's just a different, it's just a different model. But the, you know, the level of conversations, the level of decisions, all of that is quite high. Which again, I like, yeah. >> Again, you really can't talk about some of the fun questions I want to ask, like what's the valuations like? How's the market, your headwinds? Is there tailwinds? >> Yes, yes, yes. It's an amazing, it's an amazing market right now with, as you know, counter indicators everywhere, right? Something's up, something's down, you know. Consumer spending's up, therefore interest rates go up and, you know, employment's down. And so or unemployment's down. And so it's hard. Actually, I really empathize with, you know, the, and have a great deal of respect for the CEOs and leadership teams of my board companies because, you know, I kind of retired from operating role, and then everybody else had to deal with running a company during a pandemic and then running a company through the great resignation, and then running a company through a downturn. You know, those are all tough things, and I have a ton of respect for any operating executive who's navigating through this and leading a company right now. >> I'd love to get your take on the board conversations at the end if we have more time, what the mood is, but I want to ask you about one more thing real quick before we go to the next topic is you're a retired operating executive. You have multiple boards, so you've got your hands full. I noticed there's a lot of amazing leaders, other female tech athletes joining boards, but they also have full-time jobs. >> Yeah. >> And so what's your advice? Cause I know there's a lot of networking, a lot of sharing going on. There's kind of a balance between how much you can contribute on the board versus doing the day job, but there's a real need for more women on boards, so yet there's a lot going on boards. What's the current state of the union if you will, state of the market relative to people in their careers and the stresses? >> Yeah. >> Cause you left one and jumped in all in there. >> Yeah. >> Some can't do that. They can't be on five boards, but they're on a few. What's the? >> Well, and you know, and if you're an operating executive, you wouldn't be on five boards, right? You would be on one or two. And so I spend a lot of time now bringing along the next wave of women and helping them both in their career but also to get a seat at the table on a board. And I'm very vocal about telling people not to do it the way I do it. There's no reason for it to be sequential. You can, you know, I thought I was so busy and was traveling all the time, and yes, all of that was true, but, and maybe I should say, you know, you can still fit in a board. And so, and what I see now is that your learnings are so exponential with outside perspective that I believe I would've been an even better operating executive had I done it earlier. I know I would've been an even better operating executive had I done it earlier. And so my advice is don't do it the way I did it. 
You know, it's worked out fine for me, but hindsight's 2020, I would. >> If you can go back and do a mulligan or a redo, what would you do? >> Yeah, I would get on a board earlier, full stop, yeah. >> Board, singular, plural? >> Well, I really, I don't think as an operating executive you can do, you could do one, maybe two. I wouldn't go beyond that, and I think that's fine. >> Yeah, totally makes sense. Okay, I got to ask you about your career. I know technical, you came in at that time in the market, I remember when I broke into the business, very male dominated, and then now it's much better. When you went through the ranks as a technical person, I know you had some blockers and definitely some, probably some people like, well, you know. We've seen that. How did you handle that? What were some of the key pivot points in your journey? And we've had a lot of women tell their stories here on theCUBE, candidly, like, hey, I was going to tell that professor, I'm going to sit in the front row. I'm going to, I'm getting two degrees, you know, robotics and aerospace. So, but they were challenged, even with the aspiration to do tech. I'm not saying that was something that you had, but like have you had experience like that, that you overcome? What were those key points and how did you handle them and how does that help people today? >> Yeah, you know, I have to say, you know, and not discounting that obviously this has been a journey for women, and there are a lot of things to overcome both in the workforce and also just balancing life honestly. And they're all real. There's also a story of incredible support, and you know, I'm the type of person where if somebody blocked me or didn't like me, I tended to just, you know, think it was me and like work harder and get around them, and I'm sure that some of that was potentially gender related. I didn't interpret it that way at the time. And I was lucky to have amazing mentors, many, many, many of whom were men, you know, because they were in the positions of power, and they made a huge difference on my career, huge. And I also had amazing female mentors, Meg Whitman, Ann Livermore at HPE, who you know well. So I had both, but you know, when I look back on the people who made a difference, there are as many men on the list as there are women. >> Yeah, and that's a learning there. Create those coalitions, not just one or the other. >> Yeah, yeah, yeah, absolutely. >> Well, I got to ask you about the, well, you brought up the pandemic. This has come up on some interviews this year, a little bit last year on the International Women's Day, but this year it's resonating, and I would never ask in an interview. I saw an interview once where a host asked a woman, how do you balance it all? And I was just like, no one asked men that. And so it's like, but with remote work, it's come up now the word empathy around people knowing each other's personal situation. In other words, when remote work happened, everybody went home. So we all got a glimpse of the backdrop. You got, you can see what their personal life was on Facebook. We were just commenting before we came on camera about that. So remote work really kind of opened up this personal side of everybody, men and women. >> Yeah. >> So I think this brings this new empathy kind of vibe or authentic self people call it. Is remote work an opportunity or a threat for advancement of women in tech? >> It's a much debated topic. I look at it as an opportunity for many of the reasons that you just said. 
First of all, let me say that when I was an operating executive and would try to create an environment on my team that was family supportive, I would do that equally for young or, you know, early to mid-career women as I did for early to mid-career men. And the reason is I needed those men, you know, chances are they had a working spouse at home, right? I needed them to be able to share the load. It's just as important to the women that companies give, you know, the partner, male or female, the partner support and the ability to share the love, right? So to me it's not just a woman thing. It's women and men, and I always tried to create the environment where it was okay to go to your soccer game. I knew you would be online later in the evening when the kids were in bed, and that was fine. And I think the pandemic has democratized that and made that, you know, made that kind of an everyday occurrence. >> Yeah the baby walks in. They're in the zoom call. The dog comes in. The leaf blower going on the outside the window. I've seen it all on theCUBE. >> Yeah, and people don't try to pretend anymore that like, you know, the house is clean, the dog's behaved, you know, I mean it's just, it's just real, and it's authentic, and I think that's healthy. >> Yeah. >> I do, you know, I also love, I also love the office, and you know, I've got a 31 year old and a soon to be 27 year old daughter, two daughters. And you know, they love going into the office, and I think about when I was their age, how just charged up I would get from being in the office. I also see how great it is for them to have a couple of days a week at home because you can get a few things done in between Zoom calls that you don't have to end up piling onto the weekend, and, you know, so I think it's a really healthy, I think it's a really healthy mix now. Most tech companies are not mandating five days in. Most tech companies are at two to three days in. I think that's a, I think that's a really good combination. >> It's interesting how people are changing their culture to get together more as groups and even events. I mean, while I got you, I might as well ask you, what's the board conversations around, you know, the old conferences? You know, before the pandemic, every company had like a user conference. Right, now it's like, well, do we really need to have that? Maybe we do smaller, and we do digital. Have you seen how companies are handling the in-person? Because there's where the relationships are really formed face-to-face, but not everyone's going to be going. But now certain it's clearly back to face-to-face. We're seeing that with theCUBE as you know. >> Yeah, yeah. >> But the numbers aren't coming back, and the numbers aren't that high, but the stakeholders. >> Yeah. >> And the numbers are actually higher if you count digital. >> Yeah, absolutely. But you know, also on digital there's fatigue from 100% digital, right? It's a hybrid. People don't want to be 100% digital anymore, but they also don't want to go back to the days when everybody got on a plane for every meeting, every call, every sales call. You know, I'm seeing a mix on user conferences. I would say two-thirds of my companies are back, but not at the expense level that they were on user conferences. We spend a lot of time getting updates on, cause nobody has put, interestingly enough, nobody has put T&E, travel and expense back to pre-pandemic levels. Nobody, so everybody's pulled back on number of trips. 
You know, marketing events are being very scrutinized, but I think very effective. We're doing a lot of, and, you know, these were part of the old model as well, like some things, some things just recycle, but you know, there's a lot of CIO and customer round tables in regional cities. You know, those are quite effective right now because people want some face-to-face, but they don't necessarily want to get on a plane and go to Las Vegas in order to do it. I mean, some of them are, you know, there are a lot of things back in Las Vegas. >> And think about the meetings that when you were an operating executive. You got to go to the sales kickoff, you got to go to this, go to that. There were mandatory face-to-faces that you had to go to, but there was a lot of travel that you probably could have done on Zoom. >> Oh, a lot, I mean. >> And then the productivity to the family impact too. Again, think about again, we're talking about the family and people's personal lives, right? So, you know, got to meet a customer. All right. Salesperson wants you to get in front of a customer, got to fly to New York, take a red eye, come on back. Like, I mean, that's gone. >> Yeah, and oh, by the way, the customer doesn't necessarily want to be in the office that day, so, you know, they may or may not be happy about that. So again, it's and not or, right? It's a mix. And I think it's great to see people back to some face-to-face. It's great to see marketing and events back to some face-to-face. It's also great to see that it hasn't gone back to the level it was. I think that's a really healthy dynamic. >> Well, I'll tell you that from our experience while we're on the topic, we'll move back to the International Women's Day is that the productivity of digital, this program we're doing is going to be streamed. We couldn't do this face-to-face because we had to have everyone fly to an event. We're going to do hundreds of stories that we couldn't have done. We're doing it remote. Because it's better to get the content than not have it. I mean it's offline, so, but it's not about getting people to the event and watch the screen for seven hours. It's pick your interview, and then engage. >> Yeah. >> So it's self-service. So we're seeing a lot, the new user experience kind of direct to consumer, and so I think there will be an, I think there's going to be a digital first class citizen with events, so that that matches up with the kind of experience, but the offline version. Face-to-face optimized for relationships, and that's where the recruiting gets done. That's where, you know, people can build these relationships with each other. >> Yeah, and it can be asynchronous. I think that's a real value proposition. It's a great point. >> Okay, I want to get, I want to get into the technology side of the education and re-skilling and those things. I remember in the 80s, computer science was software engineering. You learned like nine languages. You took some double E courses, one or two, and all the other kind of gut classes in school. Engineering, you had the four class disciplines and some offshoots of specialization. Now it's incredible the diversity of tracks in all engineering programs and computer science and outside of those departments. >> Yeah. >> Can you speak to the importance of STEM and the diversity in the technology industry and how this brings opportunity to lower the bar to get in and how people can stay in and grow and keep leveling up? 
>> Yeah, well look, we're constantly working on how to, how to help the incoming funnel. But then, you know, at a university level, I'm on the foundation board of Kansas State where I got my engineering degree. I was also Chairman of the National Action Council for Minorities in Engineering, which was all about diversity in STEM and how do you keep that pipeline going because honestly the US needs more tech resources than we have. And if you don't tap into the diversity of our entire workforce, we won't be able to fill that need. And so we focused a lot on both the funnel, right, that starts at the middle school level, particularly for girls, getting them in, you know, the situation of hands-on comfort level with coding, with robot building, you know, whatever gives them that confidence. And then keeping that going all the way into, you know, university program, and making sure that they don't attrit out, right? And so there's a number of initiatives, whether it's mentoring and support groups and financial aid to make sure that underrepresented minorities, women and other minorities, you know, get through the funnel and stay, you know, stay in. >> Got it. Now let me ask you, you said, I have two daughters. You have a family of girls too. Is there a vibe difference between the new generation and what's the trends that you're seeing in this next early wave? I mean, not maybe, I don't know how this is in middle school, but like as people start getting into their adult lives, college and beyond what's the current point of view, posture, makeup of the talent coming in? >> Yeah, yeah. >> Certain orientations, do you see any patterns? What's your observation? >> Yeah, it's interesting. So if I look at electrical engineering, my major, it's, and if I look at Kansas State, which spends a lot of time on this, and I think does a great job, but the diversity of that as a major has not changed dramatically since I was there in the early 80s. Where it has changed very significantly is computer science. There are many, many university and college programs around the country where, you know, it's 50/50 in computer science from a gender mix perspective, which is huge progress. Huge progress. And so, and to me that's, you know, I think CS is a fantastic degree for tech, regardless of what function you actually end up doing in these companies. I mean, I was an electrical engineer. I never did core electrical engineering work. I went right into sales and marketing and general management roles. So I think, I think a bunch of, you know, diverse CS graduates is a really, really good sign. And you know, we need to continue to push on that, but progress has been made. I think the, you know, it kind of goes back to the thing we were just talking about, which is the attrition of those, let's just talk about women, right? The attrition of those women once they got past early career and into mid-career then was a concern, right? And that goes back to, you know, just the inability to, you know, get it all done. And that I am hopeful is going to be better served now. >> Well, Sue, it's great to have you on. I know you're super busy. I appreciate you taking the time and contributing to our program on corporate board membership and some of your story and observations and opinions and analysis. Always great to have you and call you a contributor for theCUBE. You can jump on on one more board, be one of our board contributors for our analysts. (Sue laughing) >> I'm at capacity. (both laughing) >> Final, final word. 
What's the big seat at the table issue that's going well and areas that need to be improved? >> So I'll speak for my boards because they have made great progress in efficiency. You know, obviously with interest rates going up and the mix between growth and profitability changing in terms of what investors are looking for. Many, many companies have had to do a hard pivot from grow at all costs to healthy balance of growth and profit. And I'm very pleased with how my companies have made that pivot. And I think that is going to make much better companies as a result. I think diversity is something that has not been solved at the corporate level, and we need to keep working it. >> Awesome. Thank you for coming on theCUBE. CUBE alumni now contributor, on multiple boards, full-time job. Love the new challenge and chapter you're on, Sue. We'll be following, and we'll check in for more updates. And thank you for being a contributor on this program this year and this episode. We're going to be doing more of these quarterly, so we're going to move beyond once a year. >> That's great. (cross talking) It's always good to see you, John. >> Thank you. >> Thanks very much. >> Okay. >> Sue: Talk to you later. >> This is theCUBE coverage of IWD, International Women's Day 2023. I'm John Furrier, your host. Thanks for watching. (upbeat music)
Joseph Nelson, Roboflow | Cube Conversation
(gentle music) >> Hello everyone. Welcome to this CUBE conversation here in Palo Alto, California. I'm John Furrier, host of theCUBE. We got a great remote guest coming in. Joseph Nelson, co-founder and CEO of RoboFlow hot startup in AI, computer vision. Really interesting topic in this wave of AI next gen hitting. Joseph, thanks for coming on this CUBE conversation. >> Thanks for having me. >> Yeah, I love the startup tsunami that's happening here in this wave. RoboFlow, you're in the middle of it. Exciting opportunities, you guys are in the cutting edge. I think computer vision's been talked about more as just as much as the large language models and these foundational models are merging. You're in the middle of it. What's it like right now as a startup and growing in this new wave hitting? >> It's kind of funny, it's, you know, I kind of describe it like sometimes you're in a garden of gnomes. It's like we feel like we've got this giant headstart with hundreds of thousands of people building with computer vision, training their own models, but that's a fraction of what it's going to be in six months, 12 months, 24 months. So, as you described it, a wave is a good way to think about it. And the wave is still building before it gets to its full size. So it's a ton of fun. >> Yeah, I think it's one of the most exciting areas in computer science. I wish I was in my twenties again, because I would be all over this. It's the intersection, there's so many disciplines, right? It's not just tech computer science, it's computer science, it's systems, it's software, it's data. There's so much aperture of things going on around your world. So, I mean, you got to be batting all the students away kind of trying to get hired in there, probably. I can only imagine you're hiring regiment. I'll ask that later, but first talk about what the company is that you're doing. How it's positioned, what's the market you're going after, and what's the origination story? How did you guys get here? How did you just say, hey, want to do this? What was the origination story? What do you do and how did you start the company? >> Yeah, yeah. I'll give you the what we do today and then I'll shift into the origin. RoboFlow builds tools for making the world programmable. Like anything that you see should be read write access if you think about it with a programmer's mind or legible. And computer vision is a technology that enables software to be added to these real world objects that we see. And so any sort of interface, any sort of object, any sort of scene, we can interact with it, we can make it more efficient, we can make it more entertaining by adding the ability for the tools that we use and the software that we write to understand those objects. And at RoboFlow, we've empowered a little over a hundred thousand developers, including those in half the Fortune 100 so far in that mission. Whether that's Walmart understanding the retail in their stores, Cardinal Health understanding the ways that they're helping their patients, or even electric vehicle manufacturers ensuring that they're making the right stuff at the right time. As you mentioned, it's early. Like I think maybe computer vision has touched one, maybe 2% of the whole economy and it'll be like everything in a very short period of time. And so we're focused on enabling that transformation. I think it's it, as far as I think about it, I've been fortunate to start companies before, start, sell these sorts of things. 
This is the last company I ever wanted to start and I think it will be, should we do it right, the world's largest in riding the wave of bringing together the disparate pieces of that technology. >> What was the motivating point of the formation? Was it, you know, you guys were hanging around? Was there some catalyst? What was the moment where it all kind of came together for you? >> You know what's funny is my co-founder, Brad and I, we were making computer vision apps for making board games more fun to play. So in 2017, Apple released ARKit, augmented reality kit for building augmented reality applications. And Brad and I are both sort of like hacker persona types. We feel like we don't really understand the technology until we build something with it and so we decided that we should make an app that if you point your phone at a Sudoku puzzle, it understands the state of the board and then it kind of magically fills in that experience with all the digits in real time, which totally ruins the game of Sudoku to be clear. But it also just creates this like aha moment of like, oh wow, like the ability for our pocket devices to understand and see the world as good or better than we can is possible. And so, you know, we actually did that as I mentioned in 2017, and the app went viral. It was, you know, top of some subreddits, top of Imgur, Reddit, the hacker community as well as Product Hunt really liked it. So it actually won Product Hunt AR app of the year, which was the same year that the Tesla Model 3 won the product of the year. So we joked that we share an award with Elon our shared (indistinct) But frankly, so that was 2017. RoboFlow wasn't incorporated as a business until 2019. And so, you know, when we made Magic Sudoku, I was running a different company at the time, Brad was running a different company at the time, and we kind of just put it out there and were excited by how many people liked it. And we assumed that other curious developers would see this inevitable future of, oh wow, you know. This is much more than just a pedestrian point your phone at a board game. This is everything can be seen and understood and rewritten in a different way. Things like, you know, maybe your fridge. Knowing what ingredients you have and suggesting recipes or auto ordering for you, or we were talking about some retail use cases of automated checkout. Like anything can be seen and observed and we presume that that would kick off a Cambrian explosion of applications. It didn't. So you fast forward to 2019, we said, well we might as well be the guys to start to tackle this sort of problem. And because of our success with board games before, we returned to making more board game solving applications. So we made one that solves Boggle, you know, the four by four word game, we made one that solves chess, you point your phone at a chess board and it understands the state of the board and then can make move recommendations. And each additional board game that we added, we realized that the tooling was really immature. The process of collecting images, knowing which images are actually going to be useful for improving model performance, training those models, deploying those models. And if we really wanted to make the world programmable, developers waiting for us to make an app for their thing of interest is a lot less efficient, less impactful than taking our tool chain and releasing that externally. And so, that's what RoboFlow became.
RoboFlow became the internal tools that we used to make these game changing applications readily available. And as you know, when you give developers new tools, they create new billion dollar industries, let alone all sorts of fun hobbyist projects along the way. >> I love that story. Curious, inventive, little radical. Let's break the rules, see how we can push the envelope on the board games. That's how companies get started. It's a great story. I got to ask you, okay, what happens next? Now, okay, you realize this new tooling, but this is like how companies get built. Like they solve their own problem that they had 'cause they realized there's one, but then there has to be a market for it. So you actually guys knew that this was coming around the corner. So okay, you got your hacker mentality, you did that thing, you got the award and now you're like, okay, wow. Were you guys conscious of the wave coming? Was it one of those things where you said, look, if we do this, we solve our own problem, this will be big for everybody. Did you have that moment? Was that in 2019 or was that more of like, it kind of was obvious to you guys? >> Absolutely. I mean Brad puts this pretty effectively where he describes how we lived through the initial internet revolution, but we were kind of too young to really recognize and comprehend what was happening at the time. And then mobile happened and we were working on different companies that were not in the mobile space. And computer vision feels like the wave that we've caught. Like, this is a technology and capability that rewrites how we interact with the world, how everyone will interact with the world. And so we feel we've been kind of lucky this time, right place, right time of every enterprise will have the ability to improve their operations with computer vision. And so we've been very cognizant of the fact that computer vision is one of those groundbreaking technologies that every company will have as a part of their products and services and offerings, and we can provide the tooling to accelerate that future. >> Yeah, and the developer angle, by the way, I love that because I think, you know, as we've been saying in theCUBE all the time, developer's the new defacto standard bodies because what they adopt is pure, you know, meritocracy. And they pick the best. If it's sell service and it's good and it's got open source community around it, its all in. And they'll vote. They'll vote with their code and that is clear. Now I got to ask you, as you look at the market, we were just having this conversation on theCUBE in Barcelona at recent Mobile World Congress, now called MWC, around 5G versus wifi. And the debate was specifically computer vision, like facial recognition. We were talking about how the Cleveland Browns were using facial recognition for people coming into the stadium they were using it for ships in international ports. So the question was 5G versus wifi. My question is what infrastructure or what are the areas that need to be in place to make computer vision work? If you have developers building apps, apps got to run on stuff. So how do you sort that out in your mind? What's your reaction to that? >> A lot of the times when we see applications that need to run in real time and on video, they'll actually run at the edge without internet. And so a lot of our users will actually take their models and run it in a fully offline environment. 
Now to act on that information, you'll often need to have internet signal at some point 'cause you'll need to know how many people were in the stadium or what shipping crates are in my port at this point in time. You'll need to relay that information somewhere else, which will require connectivity. But actually using the model and creating the insights at the edge does not require internet. I mean we have users that deploy models on underwater submarines just as much as in outer space actually. And those are not very friendly environments to internet, let alone 5G. And so what you do is you use an edge device, like an Nvidia Jetson is common, mobile devices are common. Intel has some strong edge devices, the Movidius family of chips for example. And you use that compute that runs completely offline in real time to process those signals. Now again, what you do with those signals may require connectivity and that becomes a question of the problem you're solving of how soon you need to relay that information to another place. >> So, that's an architectural issue on the infrastructure. If you're a tactical edge war fighter for instance, you might want to have highly available and maybe high availability. I mean, these are words that mean something. You got storage, but it's not at the edge in real time. But you can trickle it back and pull it down. That's management. So that's more of a business by business decision or environment, right? >> That's right, that's right. Yeah. So I mean we can talk through some specifics. So for example, the RoboFlow actually powers the broadcaster that does the tennis ball tracking at Wimbledon. That runs completely at the edge in real time in, you know, technically to track the tennis ball and point the camera, you actually don't need internet. Now they do have internet of course to do the broadcasting and relay the signal and feeds and these sorts of things. And so that's a case where you have both edge deployment of running the model and high availability act on that model. We have other instances where customers will run their models on drones and the drone will go and do a flight and it'll say, you know, this many residential homes are in this given area, or this many cargo containers are in this given shipping yard. Or maybe we saw these environmental considerations of soil erosion along this riverbank. The model in that case can run on the drone during flight without internet, but then you only need internet once the drone lands and you're going to act on that information because for example, if you're doing like a study of soil erosion, you don't need to be real time. You just need to be able to process and make use of that information once the drone finishes its flight. >> Well I can imagine a zillion use cases. I heard of a use case interview at a company that does computer vision to help people see if anyone's jumping the fence on their company. Like, they know what a body looks like climbing a fence and they can spot it. Pretty easy use case compared to probably some of the other things, but this is the horizontal use cases, it's so many use cases. So how do you guys talk to the marketplace when you say, hey, we have generative AI for computer vision. You might know language models, that's a completely different animal because vision's like the world, right? So you got a lot more to do. What's the difference? How do you explain that to customers? What can I build and what's their reaction?
>> Because we're such a developer centric company, developers are usually creative and show you the ways that they want to take advantage of new technologies. I mean, we've had people use things for identifying conveyor belt debris, doing gas leak detection, measuring the size of fish, airplane maintenance. We even had someone that like a hobby use case where they did like a specific sushi identifier. I dunno if you know this, but there's a specific type of whitefish that if you grew up in the western hemisphere and you eat it in the eastern hemisphere, you get very sick. And so there was someone that made an app that tells you if you happen to have that fish in the sushi that you're eating. But security camera analysis, transportation flows, plant disease detection, really, you know, smarter cities. We have people that are doing curb management identifying, and a lot of these use cases, the fantastic thing about building tools for developers is they're a creative bunch and they have these ideas that if you and I sat down for 15 minutes and said, let's guess every way computer vision can be used, we would need weeks to list all the example use cases. >> We'd miss everything. >> And we'd miss. And so having the community show us the ways that they're using computer vision is impactful. Now that said, there are of course commercial industries that have discovered the value and been able to be out of the gate. And that's where we have the Fortune 100 customers, like we do. Like the retail customers in the Walmart sector, healthcare providers like Medtronic, or vehicle manufacturers like Rivian who all have very difficult either supply chain, quality assurance, in stock, out of stock, anti-theft protection considerations that require successfully making sense of the real world. >> Let me ask you a question. This is maybe a little bit in the weeds, but it's more developer focused. What are some of the developer profiles that you're seeing right now in terms of low-hanging fruit applications? And can you talk about the academic impact? Because I imagine if I was in school right now, I'd be all over it. Are you seeing Master's thesis' being worked on with some of your stuff? Is the uptake in both areas of younger pre-graduates? And then inside the workforce, What are some of the devs like? Can you share just either what their makeup is, what they work on, give a little insight into the devs you're working with. >> Leading developers that want to be on state-of-the-art technology build with RoboFlow because they know they can use the best in class open source. They know that they can get the most out of their data. They know that they can deploy extremely quickly. That's true among students as you mentioned, just as much as as industries. So we welcome students and I mean, we have research grants that will regularly support for people to publish. I mean we actually have a channel inside our internal slack where every day, more student publications that cite building with RoboFlow pop up. And so, that helps inspire some of the use cases. Now what's interesting is that the use case is relatively, you know, useful or applicable for the business or the student. In other words, if a student does a thesis on how to do, we'll say like shingle damage detection from satellite imagery and they're just doing that as a master's thesis, in fact most insurance businesses would be interested in that sort of application. 
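To make the edge pattern Nelson described a moment ago concrete, that is, run inference on the device, buffer results locally, and relay them only when connectivity returns, here is a minimal sketch. It is an illustration only, not RoboFlow's tooling: the detector stub, the queue file, and the ingest URL are all invented for the example.

```python
# Minimal sketch of the "run at the edge, sync when you can" pattern.
# The detector below is a stand-in stub; on a real device it would be a model
# compiled for the Jetson/Movidius target. The endpoint URL is hypothetical.
import json
import time
from pathlib import Path

import requests

QUEUE_FILE = Path("detections.jsonl")      # local buffer, survives reboots
INGEST_URL = "https://example.com/ingest"  # hypothetical collection endpoint


def run_detector(frame_id: int) -> dict:
    """Placeholder for on-device inference; returns one detection record."""
    return {"frame": frame_id, "label": "container", "confidence": 0.91,
            "ts": time.time()}


def buffer_locally(record: dict) -> None:
    """Append results to a local file so nothing is lost while offline."""
    with QUEUE_FILE.open("a") as f:
        f.write(json.dumps(record) + "\n")


def flush_when_online() -> None:
    """Push buffered records upstream; keep them if the network is down."""
    if not QUEUE_FILE.exists():
        return
    records = [json.loads(line) for line in QUEUE_FILE.read_text().splitlines()]
    try:
        requests.post(INGEST_URL, json=records, timeout=5).raise_for_status()
        QUEUE_FILE.unlink()  # only clear the buffer after a successful upload
    except requests.RequestException:
        pass  # still offline; try again on the next cycle


if __name__ == "__main__":
    for frame_id in range(3):  # stand-in for a camera or video loop
        buffer_locally(run_detector(frame_id))
    flush_when_online()
```

The point of the design is that the only step needing a network is the final flush, which can fail harmlessly and retry on the next cycle, matching the submarine, drone, and stadium examples above.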
So, that's kind of how we see uptick and adoption both among researchers who want to be on the cutting edge and publish, both with RoboFlow and making use of open source tools in tandem with the tool that we provide, just as much as industry. And you know, I'm a big believer in the philosophy that kind of like what the hackers are doing nights and weekends, the Fortune 500 are doing in a pretty short order period of time and we're experiencing that transition. Computer vision used to be, you know, kind of like a PhD, multi-year investment endeavor. And now with some of the tooling that we're working on in open source technologies and the compute that's available, these science fiction ideas are possible in an afternoon. And so you have this idea of maybe doing asset management or the aerial observation of your shingles or things like this. You have a few hundred images and you can de-risk whether that's possible for your business today. So there's pretty broad-based adoption among both researchers that want to be on the state of the art, as much as companies that want to reduce the time to value. >> You know, Joseph, you guys and your partner have got a great front row seat, ground floor, presented creation wave here. I'm seeing a pattern emerging from all my conversations on theCUBE with founders that are successful, like yourselves, that there's two kind of real things going on. You got the enterprises grabbing the products and retrofitting into their legacy and rebuilding their business. And then you have startups coming out of the woodwork. Young, seeing greenfield or pick a specific niche or focus and making that the signature lever to move the market. >> That's right. >> So can you share your thoughts on the startup scene, other founders out there and talk about that? And then I have a couple questions for like the enterprises, the old school, the existing legacy. Little slower, but the startups are moving fast. What are some of the things you're seeing as startups are emerging in this field? >> I think you make a great point that independent of RoboFlow, very successful, especially developer focused businesses, kind of have three customer types. You have the startups and maybe like series A, series B startups that you're building a product as fast as you can to keep up with them, and they're really moving just as fast as as you are and pulling the product out at you for things that they need. The second segment that you have might be, call it SMB but not enterprise, who are able to purchase and aren't, you know, as fast of moving, but are stable and getting value and able to get to production. And then the third type is enterprise, and that's where you have typically larger contract value sizes, slower moving in terms of adoption and feedback for your product. And I think what you see is that successful companies balance having those three customer personas because you have the small startups, small fast moving upstarts that are discerning buyers who know the market and elect to build on tooling that is best in class. And so you basically kind of pass the smell test of companies who are quite discerning in their purchases, plus are moving so quick they're pulling their product out of you. Concurrently, you have a product that's enterprise ready to service the scalability, availability, and trust of enterprise buyers. And that's ultimately where a lot of companies will see tremendous commercial success. 
I mean I remember seeing the Twilio IPO, Uber being like a full 20% of their revenue, right? And so there's this very common pattern where you have the ability to find some of those upstarts that you make bets on, like the next Ubers of the world, the smaller companies that continue to get developed with the product and then the enterprise whom allows you to really fund the commercial success of the business, and validate the size of the opportunity in market that's being creative. >> It's interesting, there's so many things happening there. It's like, in a way it's a new category, but it's not a new category. It becomes a new category because of the capabilities, right? So, it's really interesting, 'cause that's what you're talking about is a category, creating. >> I think developer tools. So people often talk about B to B and B to C businesses. I think developer tools are in some ways a third way. I mean ultimately they're B to B, you're selling to other businesses and that's where your revenue's coming from. However, you look kind of like a B to C company in the ways that you measure product adoption and kind of go to market. In other words, you know, we're often tracking the leading indicators of commercial success in the form of usage, adoption, retention. Really consumer app, traditionally based metrics of how to know you're building the right stuff, and that's what product led growth companies do. And then you ultimately have commercial traction in a B to B way. And I think that that actually kind of looks like a third thing, right? Like you can do these sort of funny zany marketing examples that you might see historically from consumer businesses, but yet you ultimately make your money from the enterprise who has these de-risked high value problems you can solve for them. And I selfishly think that that's the best of both worlds because I don't have to be like Evan Spiegel, guessing the next consumer trend or maybe creating the next consumer trend and catching lightning in a bottle over and over again on the consumer side. But I still get to have fun in our marketing and make sort of fun, like we're launching the world's largest game of rock paper scissors being played with computer vision, right? Like that's sort of like a fun thing you can do, but then you can concurrently have the commercial validation and customers telling you the things that they need to be built for them next to solve commercial pain points for them. So I really do think that you're right by calling this a new category and it really is the best of both worlds. >> It's a great call out, it's a great call out. In fact, I always juggle with the VC. I'm like, it's so easy. Your job is so easy to pick the winners. What are you talking about its so easy? I go, just watch what the developers jump on. And it's not about who started, it could be someone in the dorm room to the boardroom person. You don't know because that B to C, the C, it's B to D you know? You know it's developer 'cause that's a human right? That's a consumer of the tool which influences the business that never was there before. So I think this direct business model evolution, whether it's media going direct or going direct to the developers rather than going to a gatekeeper, this is the reality. >> That's right. >> Well I got to ask you while we got some time left to describe, I want to get into this topic of multi-modality, okay? And can you describe what that means in computer vision? 
And what's the state of the growth of that portion of this piece? >> Multi modality refers to using multiple traditionally siloed problem types, meaning text, image, video, audio. So you could treat an audio problem as only processing audio signal. That is not multimodal, but you could use the audio signal at the same time as a video feed. Now you're talking about multi modality. In computer vision, multi modality is predominantly happening with images and text. And one of the biggest releases in this space is actually two years old now, was clip, contrastive language image pre-training, which took 400 million image text pairs and basically instead of previously when you do classification, you basically map every single image to a single class, right? Like here's a bunch of images of chairs, here's a bunch of images of dogs. What clip did is used, you can think about it like, the class for an image being the Instagram caption for the image. So it's not one single thing. And by training on understanding the corpora, you basically see which words, which concepts are associated with which pixels. And this opens up the aperture for the types of problems and generalizability of models. So what does this mean? This means that you can get to value more quickly from an existing trained model, or at least validate that what you want to tackle with a computer vision, you can get there more quickly. It also opens up the, I mean. Clip has been the bedrock of some of the generative image techniques that have come to bear, just as much as some of the LLMs. And increasingly we're going to see more and more of multi modality being a theme simply because at its core, you're including more context into what you're trying to understand about the world. I mean, in its most basic sense, you could ask yourself, if I have an image, can I know more about that image with just the pixels? Or if I have the image and the sound of when that image was captured or it had someone describe what they see in that image when the image was captured, which one's going to be able to get you more signal? And so multi modality helps expand the ability for us to understand signal processing. >> Awesome. And can you just real quick, define clip for the folks that don't know what that means? >> Yeah. Clip is a model architecture, it's an acronym for contrastive language image pre-training and like, you know, model architectures that have come before it captures the almost like, models are kind of like brands. So I guess it's a brand of a model where you've done these 400 million image text pairs to match up which visual concepts are associated with which text concepts. And there have been new releases of clip, just at bigger sizes of bigger encoding's, of longer strings of texture, or larger image windows. But it's been a really exciting advancement that OpenAI released in January, 2021. >> All right, well great stuff. We got a couple minutes left. Just I want to get into more of a company-specific question around culture. All startups have, you know, some sort of cultural vibe. You know, Intel has Moore's law doubles every whatever, six months. What's your culture like at RoboFlow? I mean, if you had to describe that culture, obviously love the hacking story, you and your partner with the games going number one on Product Hunt next to Elon and Tesla and then hey, we should start a company two years later. That's kind of like a curious, inventing, building, hard charging, but laid back. That's my take. 
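For readers who want to see the CLIP idea in practice, here is a minimal zero-shot classification sketch using the openly released CLIP weights through the Hugging Face Transformers library. The image path and candidate captions are placeholders, and this is a generic illustration rather than anything RoboFlow-specific.

```python
# Zero-shot classification with CLIP: the model scores how well each free-form
# caption matches the image, with no task-specific training.
# "photo.jpg" and the candidate captions are placeholders for this example.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")
captions = ["a photo of a dog", "a photo of a chair", "a photo of a chess board"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns them into
# probabilities over the candidate captions.
probs = outputs.logits_per_image.softmax(dim=1)[0]
for caption, p in zip(captions, probs.tolist()):
    print(f"{p:.2f}  {caption}")
```

Because the labels are free-form text, new classes can be swapped in without retraining, which is the flexibility that makes the multimodal approach such a step up from a fixed classification head.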
How would you describe the culture? >> I think that you're right. The culture that we have is one of shipping, making things. So every week each team shares what they did for our customers on a weekly basis. And we have such a strong emphasis on being better week over week that those sorts of things compound. So one big emphasis in our culture is getting things done, shipping, doing things for our customers. The second is we're an incredibly transparent place to work. For example, how we think about giving decisions, where we're progressing against our goals, what problems are biggest and most important for the company is all open information for those that are inside the company to know and progress against. The third thing that I'd use to describe our culture is one that thrives with autonomy. So RoboFlow has a number of individuals who have founded companies before, some of which have sold their businesses for a hundred million plus upon exit. And the way that we've been able to attract talent like that is because the problems that we're tackling are so immense, yet individuals are able to charge at it with the way that they think is best. And this is what pairs well with transparency. If you have a strong sense of what the company's goals are, how we're progressing against it, and you have this ownership mentality of what can I do to change or drive progress against that given outcome, then you create a really healthy pairing of, okay cool, here's where the company's progressing. Here's where things are going really well, here's the places that we most need to improve and work on. And if you're inside that company as someone who has a preponderance to be a self-starter and even a history of building entire functions or companies yourself, then you're going to be a place where you can really thrive. You have the inputs of the things where we need to work on to progress the company's goals. And you have the background of someone that is just necessarily a fast moving and ambitious type of individual. So I think the best way to describe it is a transparent place with autonomy and an emphasis on getting things done. >> Getting shit done as they say. Getting stuff done. Great stuff. Hey, final question. Put a plug out there for the company. What are you going to hire? What's your pipeline look like for people? What jobs are open? I'm sure you got hiring all around. Give a quick plug for the company what you're looking for. >> I appreciate you asking. Basically you're either building the product or helping customers be successful with the product. So in the building product category, we have platform engineering roles, machine learning engineering roles, and we're solving some of the hardest and most impactful problems of bringing such a groundbreaking technology to the masses. And so it's a great place to be where you can kind of be your own user as an engineer. And then if you're enabling people to be successful with the products, I mean you're working in a place where there's already such a strong community around it and you can help shape, foster, cultivate, activate, and drive commercial success in that community. So those are roles that tend themselves to being those that build the product for developer advocacy, those that are account executives that are enabling our customers to realize commercial success, and even hybrid roles like we call it field engineering, where you are a technical resource to drive success within customer accounts. 
And so all this is listed on roboflow.com/careers. And one thing that I actually kind of want to mention John that's kind of novel about the thing that's working at RoboFlow. So there's been a lot of discussion around remote companies and there's been a lot of discussion around in-person companies and do you need to be in the office? And one thing that we've kind of recognized is you can actually chart a third way. You can create a third way which we call satellite, which basically means people can work from where they most like to work and there's clusters of people, regular onsite's. And at RoboFlow everyone gets, for example, $2,500 a year that they can use to spend on visiting coworkers. And so what's sort of organically happened is team numbers have started to pull together these resources and rent out like, lavish Airbnbs for like a week and then everyone kind of like descends in and works together for a week and makes and creates things. And we call this lighthouses because you know, a lighthouse kind of brings ships into harbor and we have an emphasis on shipping. >> Yeah, quality people that are creative and doers and builders. You give 'em some cash and let the self-governing begin, you know? And like, creativity goes through the roof. It's a great story. I think that sums up the culture right there, Joseph. Thanks for sharing that and thanks for this great conversation. I really appreciate it and it's very inspiring. Thanks for coming on. >> Yeah, thanks for having me, John. >> Joseph Nelson, co-founder and CEO of RoboFlow. Hot company, great culture in the right place in a hot area, computer vision. This is going to explode in value. The edge is exploding. More use cases, more development, and developers are driving the change. Check out RoboFlow. This is theCUBE. I'm John Furrier, your host. Thanks for watching. (gentle music)
Adam Wenchel, Arthur.ai | CUBE Conversation
(bright upbeat music) >> Hello and welcome to this Cube Conversation. I'm John Furrier, host of theCUBE. We've got a great conversation featuring Arthur AI. I'm your host. I'm excited to have Adam Wenchel who's the Co-Founder and CEO. Thanks for joining us today, appreciate it. >> Yeah, thanks for having me on, John, looking forward to the conversation. >> I got to say, it's been an exciting world in AI or artificial intelligence. Just an explosion of interest kind of in the mainstream with the language models, which people don't really get, but they're seeing the benefits of some of the hype around OpenAI. Which kind of wakes everyone up to, "Oh, I get it now." And then of course the pessimism comes in, all the skeptics are out there. But this breakthrough in generative AI field is just awesome, it's really a shift, it's a wave. We've been calling it probably the biggest inflection point, bigger than the others combined, of what this can do from a surge standpoint, applications. I mean, all aspects of what we used to know as the computing industry, software industry, hardware, is completely going to get turbo. So we're totally obviously bullish on this thing. So, this is really interesting. So my first question is, I got to ask you, what's your guys' take? 'Cause you've been doing this, you're in it, and now all of a sudden you're at the beach where the big waves are. What's the explosion of interest there? What are you seeing right now? >> Yeah, I mean, it's amazing, so for starters, I've been in AI for over 20 years and just seeing this amount of excitement and the growth, and like you said, the inflection point we've hit in the last six months has just been amazing. And, you know, what we're seeing is like people are getting applications into production using LLMs. I mean, really all this excitement just started a few months ago, with ChatGPT and other breakthroughs and the amount of activity and the amount of new systems that we're seeing hitting production already so soon after that is just unlike anything we've ever seen. So it's pretty awesome. And, you know, these language models are just, they could be applied in so many different business contexts and that it's just the amount of value that's being created is again, like unprecedented compared to anything. >> Adam, you know, you've been in this for a while, so it's an interesting point you're bringing up, and this is a good point. I was talking with my friend John Markoff, former New York Times journalist and he was talking about, there's been a lot of work done on ethics. So there's been, it's not like it's new. It's like been, there's a lot of stuff that's been baking over many, many years and, you know, decades. So now everyone wakes up in the season, so I think that is a key point I want to get into some of your observations. But before we get into it, I want you to explain for the folks watching, just so we can kind of get a definition on the record. What's an LLM, what's a foundational model and what's generative AI? Can you just quickly explain the three things there? >> Yeah, absolutely. So an LLM or a large language model, it's just, as the words would imply, a large language model that's been trained on a huge amount of data typically pulled from the internet. And it's a general purpose language model that can be built on top for all sorts of different things, that includes traditional NLP tasks like document classification and sentiment understanding.
But the thing that's gotten people really excited is it's used for generative tasks. So, you know, asking it to summarize documents or asking it to answer questions. And these aren't new techniques, they've been around for a while, but what's changed is just this new class of models that's based on new architectures. They're just so much more capable that they've gone from sort of science projects to something that's actually incredibly useful in the real world. And there's a number of companies that are making them accessible to everyone so that you can build on top of them. So that's the other big thing is, this kind of access to these models that can power generative tasks has been democratized in the last few months and it's just opening up all these new possibilities. And then the third one you mentioned foundation models is sort of a broader term for the category that includes LLMs, but it's not just language models that are included. So we've actually seen this for a while in the computer vision world. So people have been building on top of computer vision models, pre-trained computer vision models for a while for image classification, object detection, that's something we've had customers doing for three or four years already. And so, you know, like you said, there are antecedents to like, everything that's happened, it's not entirely new, but it does feel like a step change. >> Yeah, I did ask ChatGPT to give me a riveting introduction to you and it gave me an interesting read. If we have time, I'll read it. It's kind of, it's fun, you get a kick out of it. "Ladies and gentlemen, today we're a privileged "to have Adam Wenchel, Founder of Arthur who's going to talk "about the exciting world of artificial intelligence." And then it goes on with some really riveting sentences. So if we have time, I'll share that, it's kind of funny. It was good. >> Okay. >> So anyway, this is what people see and this is why I think it's exciting 'cause I think people are going to start refactoring what they do. And I've been saying this on theCUBE now for about a couple months is that, you know, there's a scene in "Moneyball" where Billy Beane sits down with the Red Sox owner and the Red Sox owner says, "If people aren't rebuilding their teams on your model, "they're going to be dinosaurs." And it reminds me of what's happening right now. And I think everyone that I talk to in the business sphere is looking at this and they're connecting the dots and just saying, if we don't rebuild our business with this new wave, they're going to be out of business because there's so much efficiency, there's so much automation, not like DevOps automation, but like the generative tasks that will free up the intellect of people. Like just the simple things like do an intro or do this for me, write some code, write a countermeasure to a hack. I mean, this is kind of what people are doing. And you mentioned computer vision, again, another huge field where 5G things are coming on, it's going to accelerate. What do you say to people when they kind of are leaning towards that, I need to rethink my business? >> Yeah, it's 100% accurate and what's been amazing to watch the last few months is the speed at which, and the urgency that companies like Microsoft and Google or others are actually racing to, to do that rethinking of their business. And you know, those teams, those companies which are large and haven't always been the fastest moving companies are working around the clock. 
And the pace at which they're rolling out LLMs across their suite of products is just phenomenal to watch. And it's not just the big, the large tech companies as well, I mean, we're seeing the number of startups, like we get, every week a couple of new startups get in touch with us for help with their LLMs and you know, there's just a huge amount of venture capital flowing into it right now because everyone realizes the opportunities for transforming like legal and healthcare and content creation in all these different areas is just wide open. And so there's a massive gold rush going on right now, which is amazing. >> And the cloud scale, obviously horizontal scalability of the cloud brings us to another level. We've been seeing data infrastructure since the Hadoop days where big data was coined. Now you're seeing this kind of bear fruit, now you have vertical specialization where data shines, large language models all of a sudden set up perfectly for kind of this piece. And you know, as you mentioned, you've been doing it for a long time. Let's take a step back and I want to get into how you started the company, what drove you to start it? Because you know, as an entrepreneur you probably saw this opportunity before other people, like, "Hey, this is finally it, it's here." Can you share the origination story of what you guys came up with, how you started it, what was the motivation and take us through that origination story. >> Yeah, absolutely. So as I mentioned, I've been doing AI for many years. I started my career at DARPA, but it wasn't really until 2015, 2016, when my previous company was acquired by Capital One. Then I started working there and shortly after I joined, I was asked to start their AI team and scale it up. And for the first time I was actually doing it, had production models that we were working with, that was at scale, right? And so there was hundreds of millions of dollars of business revenue and certainly a big group of customers who were impacted by the way these models acted. And so it got me hyper-aware of these issues of when you get models into production, it, you know. So I think people who are earlier in the AI maturity look at that as a finish line, but it's really just the beginning and there's this constant drive to make them better, make sure they're not degrading, make sure you can explain what they're doing, if they're impacting people, making sure they're not biased. And so at that time, there really weren't any tools that existed to do this, there wasn't open source, there wasn't anything. And so after a few years there, I really started talking to other people in the industry and there was a really clear theme that this needed to be addressed. And so, I joined with my Co-Founder John Dickerson, who was on the faculty at the University of Maryland and he'd been doing a lot of research in these areas. And so we ended up joining up together and starting Arthur. >> Awesome. Well, let's get into what you guys do. Can you explain the value proposition? What are people using you for now? Where's the action? What do the customers look like? What do prospects look like? Obviously you mentioned production, this has been the theme. It's not like people woke up one day and said, "Hey, I'm going to put stuff into production." This has kind of been happening. There's been companies that have been doing this at scale and then yet there's a whole follower model coming on mainstream enterprise and businesses. So the early adopters are there now in production.
What do you guys do? I mean, 'cause I think about just driving the car off the lot is not it, you got to manage operations. I mean, that's a big thing. So what do you guys do? Talk about the value proposition and how you guys make money? >> Yeah, so what we do is, listen, when you go to validate ahead of deploying these models in production, it starts at that point, right? So you want to make sure that if you're going to be upgrading a model, if you're going to be replacing one that's currently in production, that you've proven that it's going to perform well, that it's going to perform ethically and that you can explain what it's doing. And then when you launch it into production, traditionally data scientists would spend 25, 30% of their time just manually checking in on their model day-to-day, babysitting as we call it, just to make sure that the data hasn't drifted, the model performance hasn't degraded, that a programmer didn't make a change in an upstream data system. You know, there's all sorts of reasons why the world changes and that can have a real adverse effect on these models. And so what we do is bring the same kind of automation that you have for other kinds of, let's say infrastructure monitoring, application monitoring, we bring that to your AI systems. And that way if there ever is an issue, it's not like weeks or months till you find it, and you find it before it has an effect on your P&L and your balance sheet, which, too often before they had tools like Arthur, was the way they were detected. >> You know, I was talking to Swami at Amazon, who I've known for a long time, for 13 years, and been on theCUBE multiple times, and you know, I watched Amazon try to pick up that thing with SageMaker about six years ago and so much has happened since then. And he and I were talking about this wave, and I kind of brought up this analogy to how when cloud started, it was, hey, I don't need a data center. 'Cause when I did my startup at that time with Amazon, one of my startups at that time, my choice was put a box in the colo, get all the configuration done before I could write one line of code. So the cloud became the benefit for that and you can stand up stuff quickly and then it grew from there. Here it's kind of the same dynamic, you don't want to have to provision a large language model or do all this heavy lifting. So now you're seeing companies coming out there saying, you can get started faster, there's like a new way to get it going. So it's kind of like the same vibe of limiting that heavy lifting. >> Absolutely. >> How do you look at that, because this seems to be a wave that's going to be coming in, and how do you guys help companies who are going to move quickly and start developing? >> Yeah, so I think in the race, this kind of gold rush mentality, race to get these models into production, we're starting to see more examples and evidence that there are a lot of risks that go along with it. Either your model says things, your system says things that are just wrong, you know, whether it's hallucination or just making things up, there's lots of examples. If you go on Twitter and the news, you can read about those, as well as sort of times when there could be toxic content coming out of things like that. And so there's a lot of risks there that you need to think about and be thoughtful about when you're deploying these systems. But you know, you need to balance that with the business imperative of getting these things into production and really transforming your business.
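The "babysitting" Adam describes, manually checking that data hasn't drifted and performance hasn't degraded, and the guardrails he turns to next are exactly the kind of checks that can be automated. Below is a generic, illustrative sketch of one such check: a two-sample test on a feature distribution that fires an alert on drift. It is not Arthur's product or API; the library calls are standard NumPy/SciPy, and the threshold and synthetic data are assumptions for the sketch.

```python
# A generic sketch of the kind of automated "babysitting" check described
# above: compare live feature values against a training-time baseline and
# raise an alert when the distribution drifts. This is an illustration only;
# it is not Arthur's API, and the 0.05 threshold is an assumed value.
import numpy as np
from scipy.stats import ks_2samp

def check_feature_drift(baseline: np.ndarray, live: np.ndarray,
                        p_value_threshold: float = 0.05) -> bool:
    """Return True if the live distribution has likely drifted."""
    statistic, p_value = ks_2samp(baseline, live)
    return p_value < p_value_threshold

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time feature values
live = rng.normal(loc=0.4, scale=1.0, size=5_000)      # shifted production values

if check_feature_drift(baseline, live):
    # In a real system this would fire an alert into an incident workflow
    # instead of printing.
    print("ALERT: feature distribution drift detected")
```

In a production setting the alert would feed whatever monitoring and on-call workflow the team already runs, rather than a print statement.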
And so that's where we help people, we say go ahead, put them in production, but just make sure you have the right guardrails in place so that you can do it in a smart way that's going to reflect well on you and your company. >> Let's frame the challenge for the companies now that you have, obviously there's the people who are doing large scale production and then you have companies maybe like as small as us who have large linguistic databases or transcripts for example, right? So what are customers doing and why are they deploying AI right now? And is it a speed game, is it a cost game? Why have some companies been able to deploy AI at such faster rates than others? And what's a best practice to onboard new customers? >> Yeah, absolutely. So I mean, we're seeing across a bunch of different verticals, there are leaders who have really kind of started to solve this puzzle about getting AI models into production quickly and being able to iterate on them quickly. And I think those are the ones that realize that imperative that you mentioned earlier about how transformational this technology is. And you know, a lot of times, even like the CEOs or the boards are very personally kind of driving this sense of urgency around it. And so, you know, that creates a lot of movement, right? And so those companies have put in place really smart infrastructure and rails so that data scientists aren't encumbered by having to like hunt down data, get access to it. They're not encumbered by having to stand up new platforms every time they want to deploy an AI system, but that stuff is already in place. There's a really nice ecosystem of products out there, including Arthur, that you can tap into. Compared to five or six years ago when I was building at a top 10 US bank, at that point you really had to build almost everything yourself and that's not the case now. And so it's really nice to have things like, you know, you mentioned AWS SageMaker and a whole host of other tools that can really accelerate things. >> What's your profile customer? Is it someone who already has a team or can people who are learning just dial into the service? What's the persona? What's the pitch, if you will, how do you align with that customer value proposition? Do people have to be built out with a team and in play or is it pre-production or can you start with people who are just getting going? >> Yeah, people do start using it pre-production for validation, but I think a lot of our customers do have a team going and they're either close to putting something into production or about to. It's everything from large enterprises that have really sort of complicated, they have dozens of models running all over doing all sorts of use cases, to tech startups that are very focused on a single problem, but that's like the lifeblood of the company and so they need to guarantee that it works well. And you know, we make it really easy to get started, especially if you're using one of the common model development platforms, you can just kind of turn key, get going and make sure that you have a nice feedback loop. So then when your models are out there, it's pointing out areas where it's performing well, areas where it's performing less well, giving you that feedback so that you can make improvements, whether it's in training data or featurization work or algorithm selection.
There's a number of, you know, depending on the symptoms, there's a number of things you can do to increase performance over time and we help guide people on that journey. >> So Adam, I have to ask, since you have such a great customer base and they're smart and they got teams and you're on the front end, I mean, early adopters is kind of an overused word, but they're killing it. They're putting stuff into production, it's not like it's a test, it's not like it's early. So as the next wave of fast followers comes, how do you see that coming online? What's your vision for that? How do you see companies that are like just waking up out of the frozen, you know, freeze of like old IT to like, okay, they got cloud, but they're not yet there. What do you see in the market? I see you're in the front end now with the top people really nailing AI and working hard. What's the- >> Yeah, I think a lot of these tools are becoming, or every year they get easier, more accessible, easier to use. And so, you know, even for that kind of like, as the market broadens, it takes less and less of a lift to put these systems in place. And the thing is, every business is unique, they have their own kind of data and so you can use these foundation models which have just been trained on generic data. They're a great starting point, a great accelerant, but then, in most cases you're either going to want to create a model or fine tune a model using data that really kind of comes from your particular customers, the people you serve, so that it really reflects that and takes that into account. And so I do think that the size of that market is expanding and it's broadening as these tools just become easier to use and also the knowledge about how to build these systems becomes more widespread. >> Talk about your customer base you have now, what's the makeup, what size are they? Give a taste a little bit of the customer base you got there, what do they look like? I'll say Capital One, we know very well while you were there, they were large scale, lot of data from fraud detection to all kinds of cool stuff. What do your customers now look like? >> Yeah, so we have a variety, but I would say one area we're really strong, we have several of the top 10 US banks, that's not surprising, that's a strength for us, but we also have Fortune 100 customers in healthcare, in manufacturing, in retail, in semiconductor and electronics. So what we find is like in any sort of these major verticals, there's typically, you know, one, two, three kind of companies that are really leading the charge and are the ones that, you know, in our opinion, those are the ones that for the next multiple decades are going to be the leaders, the ones that really kind of lead the charge on this AI transformation. And so we're very fortunate to be working with some of those. And then we have a number of startups as well who we love working with just because they're really pushing the boundaries technologically and so they provide great feedback and make sure that we're continuing to innovate and staying abreast of everything that's going on. >> You know, these early markets, even when the hyperscalers were coming online, they had to build everything themselves. That's the new, they're like the alphas out there building it. This is going to be a big wave again as that fast follower comes in.
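Adam's point above, start from a foundation model trained on generic data and then fine tune it on data from your particular customers, looks roughly like the sketch below. It is a hedged illustration using the open source Hugging Face transformers and datasets libraries; the checkpoint name, label set, and the tiny in-memory dataset are all placeholders standing in for real customer data, and nothing here describes Arthur's or any customer's actual pipeline.

```python
# A rough sketch of fine-tuning a pre-trained model on your own labeled
# examples. The model name, label set, and tiny in-memory dataset are
# illustrative placeholders, not anyone's production configuration.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

model_name = "distilbert-base-uncased"  # assumed starting checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Data from "your particular customers" would go here.
examples = Dataset.from_dict({
    "text": ["refund request escalated", "great support experience"],
    "label": [0, 1],
})

def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length", truncation=True)

tokenized = examples.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
)
trainer.train()
```

The design point worth noting is that the pre-trained checkpoint does most of the work; the fine tuning step only has to teach it the customer-specific vocabulary and labels.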
And so when you look at the scale, what advice would you give folks out there right now who want to tee it up, and what's your secret sauce that will help them get there? >> Yeah, I think that the secret to teeing it up is just dive in and start, like, I think these are, there's not really a secret. I think it's amazing how accessible these are. I mean, there's all sorts of ways to access LLMs, either via API access or downloadable in some cases. And so, you know, go ahead and get started. And then our secret sauce really is the way that we provide that performance analysis of what's going on, right? So we can tell you in a very actionable way, like, hey, here's where your model is doing good things, here's where it's doing bad things. Here's something you want to take a look at, here's some potential remedies for it. We can help guide you through that. And that way when you're putting it out there, A, you're avoiding a lot of the common pitfalls that people see and B, you're able to really kind of make it better in a much faster way with that tight feedback loop. >> It's interesting, we've been kind of riffing on this supercloud idea because it was just a different name than multicloud and you see apps like Snowflake built on top of AWS without even spending any CapEx, you just ride that cloud wave. This next AI, super AI wave is coming. I don't want to call it AIOps because I think there's a different distinction. If you, MLOps and AIOps seem a little bit old, almost a few years back, how do you view that, because everyone is like, "Is this AIOps?" And like, "No, not kind of, but not really." How would you, you know, when someone says, just shoots off the hip, "Hey Adam, aren't you doing AIOps?" Do you say, yes we are, do you say, yes, but we do it differently, because it doesn't seem like it's the same old AIOps. What's your- >> Yeah, it's a good question. AIOps has been a term that was co-opted for other things, and MLOps also, people have used it for different meanings. So I like the term just AI infrastructure, I think it kind of describes it really well and succinctly.
Several of us come from enterprise backgrounds and we're used to doing things enterprise grade at scale and so, you know, we're seeing more and more companies, I think they started out deploying AI and sort of, you know, important but not necessarily like the crown jewel area of their business, but now they're deploying AI right in the heart of things and yeah, the scale that some of our companies are operating at is pretty impressive. >> John: Well, super exciting, great to have you on and congratulations. I got a final question for you, just random. What are you most excited about right now? Because I mean, you got to be pretty pumped right now with the way the world is going and again, I think this is just the beginning. What's your personal view? How do you feel right now? >> Yeah, the thing I'm really excited about for the next couple years now, you touched on it a little bit earlier, but is a sort of convergence of AI and AI systems with sort of turning into AI native businesses. And so, as you sort of do more, get good further along this transformation curve with AI, it turns out that like the better the performance of your AI systems, the better the performance of your business. Because these models are really starting to underpin all these key areas that cumulatively drive your P&L. And so one of the things that we work a lot with our customers is to do is just understand, you know, take these really esoteric data science notions and performance and tie them to all their business KPIs so that way you really are, it's kind of like the operating system for running your AI native business. And we're starting to see more and more companies get farther along that maturity curve and starting to think that way, which is really exciting. >> I love the AI native. I haven't heard any startup yet say AI first, although we kind of use the term, but I guarantee that's going to come in all the pitch decks, we're an AI first company, it's going to be great run. Adam, congratulations on your success to you and the team. Hey, if we do a few more interviews, we'll get the linguistics down. We can have bots just interact with you directly and ask you, have an interview directly. >> That sounds good, I'm going to go hang out on the beach, right? So, sounds good. >> Thanks for coming on, really appreciate the conversation. Super exciting, really important area and you guys doing great work. Thanks for coming on. >> Adam: Yeah, thanks John. >> Again, this is Cube Conversation. I'm John Furrier here in Palo Alto, AI going next gen. This is legit, this is going to a whole nother level that's going to open up huge opportunities for startups, that's going to use opportunities for investors and the value to the users and the experience will come in, in ways I think no one will ever see. So keep an eye out for more coverage on siliconangle.com and theCUBE.net, thanks for watching. (bright upbeat music)
Peter Fetterolf, ACG Business Analytics & Charles Tsai, Dell Technologies | MWC Barcelona 2023
>> Narrator: TheCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (light airy music) >> Hi, everybody, welcome back to the Fira in Barcelona. My name is Dave Vellante. I'm here with my co-host Dave Nicholson. Lisa Martin is in the house. John Furrier is pounding the news from our Palo Alto studio. We are super excited to be talking about cloud at the edge, what that means. Charles Tsai is here. He's the Senior Director of product management at Dell Technologies and Peter Fetterolf is the Chief Technology Officer at ACG Business Analytics, a firm that goes deep into the TCO and the telco space, among other things. Gents, welcome to theCUBE. Thanks for coming on. Thank you. >> Good to be here. >> Yeah, good to be here. >> So I've been in search all week of the elusive next wave of monetization for the telcos. We know they make great money on connectivity, they're really good at that. But they're all talking about how they can't let this happen again. Meaning we can't let the over the top vendors yet again, basically steal our cookies. So we're going to not mess it up this time. We're going to win in the monetization. Charles, where are those monetization opportunities? Obviously at the edge, the telco cloud at the edge. What is that all about and where's the money? >> Well, Dave, I think from Dell's perspective, what we want to be able to enable operators with is a solution that enables them to roll out services much quicker, right? We know there's a lot of innovation around IoT, MEC and so on and so forth, but they continue to rely on traditional technology, and that way of operations is going to take them years to enable new services. So what Dell is doing is now creating the entire vertical stack from the hardware through CAST and automation that enables them not only to push out services very quickly, but to operate them using cloud principles. >> So it's when you say the entire vertical stack, it's the integrated hardware components with like, for example, Red Hat on top- >> Right. >> Or a Wind River? >> That's correct. >> Okay, and then open API, so the developers can create workloads, I presume data companies. We just had a data conversation 'cause that was part of the original stack- >> That's correct. >> So through an open ecosystem, you can actually sort of recreate that value, correct? >> That's correct. >> Okay. >> So one thing Dell is doing, is we are offering an infrastructure block where we are taking over the overhead of certifying every release coming from the Red Hat or the Wind River of the world, right? We want telcos to spend their resources on what is going to generate them revenue. Not the overhead of creating this cloud stack. >> Dave, I remember when we went through this in the enterprise and you had companies like, you know, IBM with the AS400 and the mainframe saying it's easier to manage, which it was, but it's still, you know, it was subsumed by the open systems trend. >> Yeah, yeah. And I think that's an important thing to probe on, is this idea of what is, what exactly does it mean to be cloud at the edge in the telecom space? Because it's a much used term. >> Yeah. >> When we talk about cloud and edge, in sort of generalized IT, but what specifically does it mean? >> Yeah, so when we talk about telco cloud, first of all it's kind of different from what you're thinking about public cloud today. And there's a couple differences.
One, if you look at the big hyperscaler public cloud today, they tend to be centralized in huge data centers. Okay, telco cloud, there are big data centers, but then there's also regional data centers. There are edge data centers, which are your typical like access central offices that have turned into data centers, and then now even cell sites are becoming mini data centers. So it's distributed. I mean like you could have like, even in a country like say Germany, you'd have 30,000 cell sites, each one of them being a data center. So it's a very different model. Now the other thing, I want to go back to the question of monetization, okay? So how do you do monetization? The only way to do that is to be able to offer new services, like Charles said. How do you offer new services? You have to have an open ecosystem that's going to be very, very flexible. And if we look at where telcos are coming from today, they tend to be very inflexible 'cause they're all kind of single vendor solutions. And even as we've moved to virtualization, you know, if you look at packet core for instance, a lot of them are these vertical stacks of say a Nokia or Ericsson or Huawei where you know, you can't really put any other vendors or any other solutions into that. So basically the idea is this kind of horizontal architecture, right? Where now across, not just my central data centers, but across my edge data centers, which would be traditionally my access COs, as well as my cell sites, I have an open environment. And we're kind of starting with, you know, packet core obviously, with UPFs being distributed, but now open RAN or virtual RAN, where I can have CUs and DUs and I can split CUs, they could be at the cell site, they could be in edge data centers. But then moving forward, we're going to have like MEC, which are, you know, which are new kinds of services, you know, could be, you know, remote cars, it could be gaming, it could be the Metaverse. And these are going to be a multi-vendor environment. So one of the things you need to do is you need to have, you know, this cloud layer, and that's what Charles was talking about with the infrastructure blocks, is helping the service providers do that, but they still own their infrastructure. >> Yeah, so it's still not clear to me how the service providers win that game but we can maybe come back to that because I want to dig into TCO a little bit. >> Sure. >> Because I have a lot of friends at Dell. I don't have a lot of friends at HPE. I've always been critical when they take an X86 server, put a name on it that implies edge and they throw it over the fence to the edge, that's not going to work, okay? We're now seeing, you know we were just at the Dell booth yesterday, you did the booth crawl, which was awesome. Purpose-built servers for this environment. >> Charles: That's right. >> So there's two factors here that I want to explore in TCO. One is, how those next gen servers compare to the previous gen, especially in terms of power consumption but other factors, and then how these sort of open RAN, open ecosystem stacks compare to proprietary stacks. Peter, can you help us understand those? >> Yeah, sure. And Charles can comment on this as well. But I mean there, there's a couple areas. One is just moving to the next generation. So especially on the Intel side, moving from Ice Lake to the Sapphire Rapids is a big deal, especially when it comes to the DU. And you know, with the radios, right?
There's the radio unit, the RU, and then there's the DU, the distributed unit, and the CU. The DU is really like part of the radio, but it's virtualized. When we moved from Ice Lake to Sapphire Rapids, which is third generation Intel to fourth generation Intel, we're literally almost doubling the performance in the DU. And that's really important 'cause it means like almost half the number of servers, and we're talking like 30, 40, 50,000 servers in some cases. So, you know, being able to divide that by two, that's really big, right? In terms of not only the cost but all the TCO and the OpEx. Now another area that's really important, when I was talking about moving from these vertical silos to the horizontal, the issue with the vertical silos is, you can't place any other workloads into those silos. So it's kind of inefficient, right? Whereas when we have the horizontal architecture, now you can place workloads wherever you want, which basically also means fewer servers but also more flexibility, more service agility. And then, you know, I think Charles can comment more, specifically on the XR8000, some things Dell's doing, 'cause it's really exciting relative to- >> Sure. >> What's happening in there. >> So, you know, when we start looking at putting compute at the edge, right? We recognize the first thing we have to do is understand the environment we are going into. So we spend a lot of time with telcos going to the cell site, going to the edge data center, looking at operations, how do the engineers today deal with maintenance and replacement at those locations? Then based on understanding the operation constraints at those sites, we create innovation and take a traditional server, remodel it to make sure that we minimize the disruption to the operations, right? Just because we are helping them go from appliances to open compute, we do not want to disrupt what has been a very efficient operation on the remote sites. So we created a lot of new ideas and developed them on general compute, where we believe we can save a lot of headaches and disruptions and still provide the same level of availability, resiliency, and redundancy on an open compute platform.
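Peter's arithmetic, roughly doubling DU performance per server generation and therefore cutting the required server count nearly in half across tens of thousands of sites, is easy to make concrete with a back-of-envelope calculation. The numbers below (fleet size, watts per server, power price) are assumed, round figures for illustration only; the rigorous modeling is in the ACG white papers he references.

```python
# Back-of-envelope illustration of the "divide the server count by two" point
# above. All inputs are assumed, round numbers for illustration; the detailed
# modeling lives in the ACG/Dell white papers mentioned in the conversation.
servers_before = 40_000      # assumed fleet size for a national DU footprint
perf_gain = 2.0              # ~2x DU performance per server, gen over gen
servers_after = int(servers_before / perf_gain)

watts_per_server = 600       # assumed average draw, watts
hours_per_year = 24 * 365
price_per_kwh = 0.15         # assumed $/kWh

def annual_power_cost(server_count: int) -> float:
    kwh = server_count * watts_per_server * hours_per_year / 1000
    return kwh * price_per_kwh

savings = annual_power_cost(servers_before) - annual_power_cost(servers_after)
print(f"Servers: {servers_before} -> {servers_after}")
print(f"Estimated annual power savings: ${savings:,.0f}")
```

Labor, CapEx, and the automation savings discussed next would sit on top of the power line item sketched here.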
Now apparently you hired someone at Dell specifically because they wear a size 14 shoe, (Charles laughs) so it was even more dramatic. >> That's right. >> But when you see it, and I would suggest that viewers go back and take a look at that segment, specifically on the hardware. You can see exactly what you just referenced. This idea that everything is accessible from the front. Yeah. >> So I want to dig in a couple things. So I want to push back a little bit on what you were saying about the horizontal 'cause there's the benefit, if you've got the horizontal infrastructure, you can run a lot more workloads. But I compare it to the enterprise 'cause I, that was the argument, I've made that argument with converged infrastructure versus say an Oracle vertical stack, but it turned out that actually Oracle ran Oracle better, okay? Is there an analog in telco or is this new open architecture going to be able to not only service the wide range of emerging apps but also be as resilient as the proprietary infrastructure? >> Yeah and you know, before I answer that, I also want to say that we've been writing a number of white papers. So we have actually three white papers we've just done with Dell looking at infrastructure blocks and looking at vertical versus horizontal and also looking at moving from the previous generation hardware to the next generation hardware. So all those details, you can find the white papers, and you can find them either in the Dell website or at the ACG research website >> ACGresearch.com? >> ACG research. Yeah, if you just search ACG research, you'll find- >> Yeah. >> Lots of white papers on TCO. So you know, what I want to say, relative to the vertical versus horizontal. Yeah, obviously in the vertical side, some of those things will run well, I mean it won't have issues. However, that being said, as we move to cloud native, you know, it's very high performance, okay? In terms of the stack, whether it be a Red Hat or a VMware or other cloud layers, that's really become much more mature. It now it's all CNF base, which is really containerized, very high performance. And so I don't think really performance is an issue. However, my feeling is that, if you want to offer new services and generate new revenue, you're not going to do it in vertical stacks, period. You're going to be able to do a packet core, you'll be able to do a ran over here. But now what if I want to offer a gaming service? What if I want to do metaverse? What if I want to do, you have to have an environment that's a multi-vendor environment that supports an ecosystem. Even in the RAN, when we look at the RIC, and the xApps and the rApps, these are multi-vendor environments that's going to create a lot of flexibility and you can't do that if you're restricted to, I can only have one vendor running on this hardware. >> Yeah, we're seeing these vendors work together and create RICs. That's obviously a key point, but what I'm hearing is that there may be trade offs, but the incremental value is going to overwhelm that. Second question I have, Peter is, TCO, I've been hearing a lot about 30%, you know, where's that 30% come from? Is it Op, is it from an OpEx standpoint? Is it labor, is it power? Is it, you mentioned, you know, cutting the number of servers in half. If I can unpack the granularity of that TCO, where's the benefit coming from? >> Yeah, the answer is yes. (Peter and Charles laugh) >> Okay, we'll do. >> Yeah, so- >> One side that, in terms of, where is the big bang for the bucks? 
>> So I mean, so you really need to look at the white paper to see details, but definitely power, definitely labor, definitely reducing the number of servers, you know, reducing the CapEx. The other thing is, is as you move to this really next generation horizontal telco cloud, there's the whole automation and orchestration, that is a key component as well. And it's enabled by what Dell is doing. It's enabled by the, because the thing is you're not going to have end-to-end automation if you have all this legacy stuff there or if you have these vertical stacks where you can't integrate. I mean you can automate that part and then you have separate automation here, you separate. you need to have integrated automation and orchestration across the whole thing. >> One other point I would add also, right, on the hardware perspective, right? With the customized hardware, what we allow operator to do is, take out the existing appliance and push a edge optimized server without reworking the entire infrastructure. There is a significant saving where you don't have to rethink about what is my power infrastructure, right? What is my security infrastructure? The server is designed to leverage the existing, what is already there. >> How should telco, Charles, plan for this transformation? Are there specific best practices that you would recommend in terms of the operational model? >> Great question. I think first thing is do an inventory of what you have. Understand what your constraints are and then come to Dell, we will love to consult with you, based on our experience on the best practices. We know how to minimize additional changes. We know how to help your support engineer, understand how to shift appliance based operation to a cloud-based operation. >> Is that a service you offer? Is that a pre-sales freebie? What is maybe both? >> It's both. >> Yeah. >> It's both. >> Yeah. >> Guys- >> Just really quickly. >> We're going to wrap. >> The, yeah. Dave loves the TCO discussion. I'm always thinking in terms of, well how do you measure TCO when you're comparing something where you can't do something to an environment where you're going to be able to do something new? And I know that that's always the challenge in any kind of emerging market where things are changing, any? >> Well, I mean we also look at, not only TCO, but we look at overall business case. So there's basically service at GLD and revenue and then there's faster time to revenues. Well, and actually ACG, we actually have a platform called the BAE or Business Analytics Engine that's a very sophisticated simulation cloud-based platform, where we can actually look at revenue month by month. And we look at what's the impact of accelerating revenue by three months. By four months. >> So you're looking into- >> By six months- >> So you're forward looking. You're just not consistently- >> So we're not just looking at TCO, we're looking at the overall business case benefit. >> Yeah, exactly right. There's the TCO, which is the hard dollars. >> Right. >> CFO wants to see that, he or she needs to see that. But you got to, you can convince that individual, that there's a business case around it. >> Peter: Yeah. >> And then you're going to sign up for that number. >> Peter: Yeah. >> And they're going to be held to it. That's the story the world wants. >> At the end of the day, telcos have to be offered new services 'cause look at all the money that's been spent. >> Dave: Yeah, that's right. >> On investment on 5G and everything else. 
>> 0.5 trillion over the next seven years. All right, guys, we got to go. Sorry to cut you off. >> Okay, thank you very much. >> But we're wall to wall here. All right, thanks so much for coming on. >> Dave: Fantastic. >> All right, Dave Vellante, for Dave Nicholson. Lisa Martin's in the house. John Furrier in Palo Alto Studios. Keep it right there. MWC 23 live from the Fira in Barcelona. (light airy music)
Robert Nishihara, Anyscale | CUBE Conversation
(upbeat instrumental) >> Hello and welcome to this CUBE conversation. I'm John Furrier, host of theCUBE, here in Palo Alto, California. Got a great conversation with Robert Nishihara who's the co-founder and CEO of Anyscale. Robert, great to have you on this CUBE conversation. It's great to see you. We did your first Ray Summit a couple years ago and congratulations on your venture. Great to have you on. >> Thank you. Thanks for inviting me. >> So you're first time CEO out of Berkeley in Data. You got the Databricks is coming out of there. You got a bunch of activity coming from Berkeley. It's like a, it really is kind of like where a lot of innovations going on data. Anyscale has been one of those startups that has risen out of that scene. Right? You look at the success of what the Data lakes are now. Now you've got the generative AI. This has been a really interesting innovation market. This new wave is coming. Tell us what's going on with Anyscale right now, as you guys are gearing up and getting some growth. What's happening with the company? >> Yeah, well one of the most exciting things that's been happening in computing recently, is the rise of AI and the excitement about AI, and the potential for AI to really transform every industry. Now of course, one of the of the biggest challenges to actually making that happen is that doing AI, that AI is incredibly computationally intensive, right? To actually succeed with AI to actually get value out of AI. You're typically not just running it on your laptop, you're often running it and scaling it across thousands of machines, or hundreds of machines or GPUs, and to, so organizations and companies and businesses that do AI often end up building a large infrastructure team to manage the distributed systems, the computing to actually scale these applications. And that's a, that's a, a huge software engineering lift, right? And so, one of the goals for Anyscale is really to make that easy. To get to the point where, developers and teams and companies can succeed with AI. Can build these scalable AI applications, without really you know, without a huge investment in infrastructure with a lot of, without a lot of expertise in infrastructure, where really all they need to know is how to program on their laptop, how to program in Python. And if you have that, then that's really all you need to succeed with AI. So that's what we've been focused on. We're building Ray, which is an open source project that's been starting to get adopted by tons of companies, to actually train these models, to deploy these models, to do inference with these models, you know, to ingest and pre-process their data. And our goals, you know, here with the company are really to make Ray successful. To grow the Ray community, and then to build a great product around it and simplify the development and deployment, and productionization of machine learning for, for all these businesses. >> It's a great trend. Everyone wants developer productivity seeing that, clearly right now. And plus, developers are voting literally on what standards become. As you look at how the market is open source driven, a lot of that I love the model, love the Ray project love the, love the Anyscale value proposition. How big are you guys now, and how is that value proposition of Ray and Anyscale and foundational models coming together? 
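Robert's claim that "all they need to know is how to program in Python" is easiest to see in Ray's core open source API, where an ordinary Python function becomes a task that can be scheduled across a laptop or a cluster with the same code. The sketch below is a minimal illustration of that pattern; the workload inside the function is a stand-in, and nothing here depicts the Anyscale managed product itself.

```python
# A minimal sketch of the core open source Ray pattern: ordinary Python
# functions become distributed tasks with a decorator, and the same code
# runs on a laptop or a cluster. The work inside the function is a stand-in.
import ray

ray.init()  # locally this starts a small cluster; on a real cluster it connects instead

@ray.remote
def preprocess(shard):
    # Stand-in for real work: feature extraction, scoring, training steps, etc.
    return sum(x * x for x in shard)

shards = [list(range(i, i + 1000)) for i in range(0, 10_000, 1000)]
futures = [preprocess.remote(shard) for shard in shards]  # schedules tasks in parallel
results = ray.get(futures)
print(f"processed {len(results)} shards, total = {sum(results)}")
```

Run locally, this uses the cores on one machine; pointed at a cluster, the same script fans the tasks out across machines without any infrastructure code.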
Because it seems like you guys are in a perfect storm situation where you guys could get a real tailwind and draft off the mega trend that everyone's getting excited about. The new toy is ChatGPT. So you got to look at that and say, hey, I mean, come on, you guys did all the heavy lifting. >> Absolutely. >> You know, how many people are you, and what's the proposition for you guys these days? >> You know, our company's about a hundred people, maybe a bit larger than that. Ray's been growing really quickly. It's been, you know, companies using, like OpenAI uses Ray to train their models, like ChatGPT. Companies like Uber run all their deep learning, you know, and classical machine learning on top of Ray. Companies like Shopify, Spotify, Netflix, Cruise, Lyft, Instacart, you know, ByteDance. A lot of these companies are investing heavily in Ray for their machine learning infrastructure. And I think it's gotten to the point where, if you're one of these, you know, type of businesses, and you're looking to revamp your machine learning infrastructure, if you're looking to enable new capabilities, you know, make your teams more productive, increase, speed up the experimentation cycle, you know, make it more performant, like build, you know, run applications that are more scalable, run them faster, run them in a more cost efficient way. All of these types of companies are at least evaluating Ray and Ray is an increasingly common choice there. I think if they're not using Ray, many of these companies that end up not using Ray, they often end up building their own infrastructure. So Ray has been, the growth there has been incredibly exciting. Over the, you know, we had our first in-person Ray Summit just back in August, and we're planning the next one for, for coming September. And so when you asked about the value proposition, I think there's really two main things, when people choose to go with Ray and Anyscale. One reason is about moving faster, right? It's about developer productivity, it's about speeding up the experimentation cycle, easily getting their models in production. You know, we hear many companies say that, you know, once they prototype a model, once they develop a model, it's another eight weeks, or 12 weeks to actually get that model in production. And that's a reason they talk to us. We hear companies say that, you know, they've been training their models and, and doing inference on a single machine, and they've been sort of scaling vertically, like using bigger and bigger machines. But they, you know, you can only do that for so long, and at some point you need to go beyond a single machine and that's when they start talking to us. Right? So one of the main value propositions is around moving faster. I think probably the phrase I hear the most is, companies saying that they don't want their machine learning people to have to spend all their time configuring infrastructure. All this is about productivity. >> Yeah. >> The other.
You know, can we run it faster, right? Can we run it in a more cost effective way? We hear people saying that they're not getting good GPU utilization with the existing tools they're using, or they can't scale beyond a certain point, or you know they don't have a way to efficiently use spot instances to save costs, right? Or their clusters, you know can't auto scale up and down fast enough, right? These are all the kinds of things that Ray and Anyscale, where Ray and Anyscale add value and solve these kinds of problems. >> You know, you bring up great points. Auto scaling concept, early days, it was easy getting more compute. Now it's complicated. They're built into more integrated apps in the cloud. And you mentioned those companies that you're working with, that's impressive. Those are like the big hardcore, I call them hardcore. They have a good technical teams. And as the wave starts to move from these companies that were hyper scaling up all the time, the mainstream are just developers, right? So you need an interface in, so I see the dots connecting with you guys and I want to get your reaction. Is that how you see it? That you got the alphas out there kind of kicking butt, building their own stuff, alpha developers and infrastructure. But mainstream just wants programmability. They want that heavy lifting taken care of for them. Is that kind of how you guys see it? I mean, take us through that. Because to get crossover to be democratized, the automation's got to be there. And for developer productivity to be in, it's got to be coding and programmability. >> That's right. Ultimately for AI to really be successful, and really you know, transform every industry in the way we think it has the potential to. It has to be easier to use, right? And that is, and being easier to use, there's many dimensions to that. But an important one is that as a developer to do AI, you shouldn't have to be an expert in distributed systems. You shouldn't have to be an expert in infrastructure. If you do have to be, that's going to really limit the number of people who can do this, right? And I think there are so many, all of the companies we talk to, they don't want to be in the business of building and managing infrastructure. It's not that they can't do it. But it's going to slow them down, right? They want to allocate their time and their energy toward building their product, right? To building a better product, getting their product to market faster. And if we can take the infrastructure work off of the critical path for them, that's going to speed them up, it's going to simplify their lives. And I think that is critical for really enabling all of these companies to succeed with AI. >> Talk about the customers you guys are talking to right now, and how that translates over. Because I think you hit a good thread there. Data infrastructure is critical. Managed services are coming online, open sources continuing to grow. You have these people building their own, and then if they abandon it or don't scale it properly, there's kind of consequences. 'Cause it's a system you mentioned, it's a distributed system architecture. It's not as easy as standing up a monolithic app these days. So when you guys go to the marketplace and talk to customers, put the customers in buckets. So you got the ones that are kind of leaning in, that are pretty peaked, probably working with you now, open source. And then what's the customer profile look like as you go mainstream? 
Are they looking to manage service, looking for more architectural system, architecture approach? What's the, Anyscale progression? How do you engage with your customers? What are they telling you? >> Yeah, so many of these companies, yes, they're looking for managed infrastructure 'cause they want to move faster, right? Now the kind of these profiles of these different customers, they're three main workloads that companies run on Anyscale, run with Ray. It's training related workloads, and it is serving and deployment related workloads, like actually deploying your models, and it's batch processing, batch inference related workloads. Like imagine you want to do computer vision on tons and tons of, of images or videos, or you want to do natural language processing on millions of documents or audio, or speech or things like that, right? So the, I would say the, there's a pretty large variety of use cases, but the most common you know, we see tons of people working with computer vision data, you know, computer vision problems, natural language processing problems. And it's across many different industries. We work with companies doing drug discovery, companies doing you know, gaming or e-commerce, right? Companies doing robotics or agriculture. So there's a huge variety of the types of industries that can benefit from AI, and can really get a lot of value out of AI. And, but the, but the problems are the same problems that they all want to solve. It's like how do you make your team move faster, you know succeed with AI, be more productive, speed up the experimentation, and also how do you do this in a more performant way, in a faster, cheaper, in a more cost efficient, more scalable way. >> It's almost like the cloud game is coming back to AI and these foundational models, because I was just on a podcast, we recorded our weekly podcast, and I was just riffing with Dave Vellante, my co-host on this, were like, hey, in the early days of Amazon, if you want to build an app, you just, you have to build a data center, and then you go to now you go to the cloud, cloud's easier, pay a little money, penny's on the dollar, you get your app up and running. Cloud computing is born. With foundation models in generative AI. The old model was hard, heavy lifting, expensive, build out, before you get to do anything, as you mentioned time. So I got to think that you're pretty much in a good position with this foundational model trend in generative AI because I just looked at the foundation map, foundation models, map of the ecosystem. You're starting to see layers of, you got the tooling, you got platform, you got cloud. It's filling out really quickly. So why is Anyscale important to this new trend? How do you talk to people when they ask you, you know what does ChatGPT mean for Anyscale? And how does the financial foundational model growth, fit into your plan? >> Well, foundational models are hugely important for the industry broadly. Because you're going to have these really powerful models that are trained that you know, have been trained on tremendous amounts of data. tremendous amounts of computes, and that are useful out of the box, right? That people can start to use, and query, and get value out of, without necessarily training these huge models themselves. Now Ray fits in and Anyscale fit in, in a number of places. First of all, they're useful for creating these foundation models. Companies like OpenAI, you know, use Ray for this purpose. Companies like Cohere use Ray for these purposes. 
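One way to picture the batch inference workload Robert describes, natural language processing over millions of documents or computer vision over large image sets, is a pool of Ray actors, each holding a loaded model and scoring items in parallel. In the sketch below the "model" is a deliberately trivial stub and the actor count is an arbitrary choice; it illustrates the open source Ray actor pattern, not any particular customer's pipeline.

```python
# An illustrative sketch of a batch inference workload: a pool of Ray actors,
# each holding a loaded model, scoring documents in parallel. The "model" here
# is a stub; in practice it would be a real NLP or vision model, and the actor
# count is an arbitrary choice for the sketch.
import ray

ray.init()

@ray.remote
class ScoringWorker:
    def __init__(self):
        # Stand-in for loading a real model (an assumption for the sketch).
        self.positive_words = {"great", "fast", "reliable"}

    def score(self, doc: str) -> float:
        words = doc.lower().split()
        hits = sum(word in self.positive_words for word in words)
        return hits / max(len(words), 1)

documents = ["Great and reliable service", "Slow response", "Fast, great support"] * 1000
workers = [ScoringWorker.remote() for _ in range(4)]

# Round-robin the documents across the actor pool.
futures = [workers[i % len(workers)].score.remote(doc)
           for i, doc in enumerate(documents)]
scores = ray.get(futures)
print(f"scored {len(scores)} documents, mean = {sum(scores) / len(scores):.3f}")
```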
You know, IBM. If you look at, there's of course also open source versions like GPTJ, you know, created using Ray. So a lot of these large language models, large foundation models benefit from training on top of Ray. And, but of course for every company training and creating these huge foundation models, you're going to have many more that are fine tuning these models with their own data. That are deploying and serving these models for their own applications, that are building other application and business logic around these models. And that's where Ray also really shines, because Ray you know, is, can provide common infrastructure for all of these workloads. The training, the fine tuning, the serving, the data ingest and pre-processing, right? The hyper parameter tuning, the and and so on. And so where the reason Ray and Anyscale are important here, is that, again, foundation models are large, foundation models are compute intensive, doing you know, using both creating and using these foundation models requires tremendous amounts of compute. And there there's a big infrastructure lift to make that happen. So either you are using Ray and Anyscale to do this, or you are building the infrastructure and managing the infrastructure yourself. Which you can do, but it's, it's hard. >> Good luck with that. I always say good luck with that. I mean, I think if you really need to do, build that hardened foundation, you got to go all the way. And I think this, this idea of composability is interesting. How is Ray working with OpenAI for instance? Take, take us through that. Because I think you're going to see a lot of people talking about, okay I got trained models, but I'm going to have not one, I'm going to have many. There's big debate that OpenAI is going to be the mother of all LLMs, but now, but really people are also saying that to be many more, either purpose-built or specific. The fusion and these things come together there's like a blending of data, and that seems to be a value proposition. How does Ray help these guys get their models up? Can you take, take us through what Ray's doing for say OpenAI and others, and how do you see the models interacting with each other? >> Yeah, great question. So where, where OpenAI uses Ray right now, is for the training workloads. Training both to create ChatGPT and models like that. There's both a supervised learning component, where you're pre-training this model on doing supervised pre-training with example data. There's also a reinforcement learning component, where you are fine-tuning the model and continuing to train the model, but based on human feedback, based on input from humans saying that, you know this response to this question is better than this other response to this question, right? And so Ray provides the infrastructure for scaling the training across many, many GPUs, many many machines, and really running that in an efficient you know, performance fault tolerant way, right? And so, you know, open, this is not the first version of OpenAI's infrastructure, right? They've gone through iterations where they did start with building the infrastructure themselves. They were using tools like MPI. But at some point, you know, given the complexity, given the scale of what they're trying to do, you hit a wall with MPI and that's going to happen with a lot of other companies in this space. And at that point you don't have many other options other than to use Ray or to build your own infrastructure. >> That's awesome. 
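For the serving side of the workloads mentioned above, a minimal Ray Serve sketch (assuming Ray 2.x) shows the shape of putting a fine-tuned model behind an HTTP endpoint. The summarizer stub stands in for a real model, and the route and port are Serve's defaults, not anything specific to the companies discussed here.

```python
from ray import serve
from starlette.requests import Request

@serve.deployment(num_replicas=2)  # scale replicas up or down as traffic changes
class Summarizer:
    def __init__(self):
        # load the fine-tuned model here; a stub stands in for it
        self.summarize = lambda text: text[:80] + "..."

    async def __call__(self, request: Request) -> str:
        payload = await request.json()
        return self.summarize(payload["text"])

serve.run(Summarizer.bind())
# POST {"text": "..."} to http://127.0.0.1:8000/ (Serve's default route)
```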
And then your vision on this data interaction, because the old days monolithic models were very rigid. You couldn't really interface with them. But we're kind of seeing this future of data fusion, data interaction, data blending at large scale. What's your vision? How do you, what's your vision of where this goes? Because if this goes the way people think. You can have this data chemistry kind of thing going on where people are integrating all kinds of data with each other at large scale. So you need infrastructure, intelligence, reasoning, a lot of code. Is this something that you see? What's your vision in all this? Take us through. >> AI is going to be used everywhere right? It's, we see this as a technology that's going to be ubiquitous, and is going to transform every business. I mean, imagine you make a product, maybe you were making a tool like Photoshop or, or whatever the, you know, tool is. The way that people are going to use your tool, is not by investing, you know, hundreds of hours into learning all of the different, you know specific buttons they need to press and workflows they need to go through it. They're going to talk to it, right? They're going to say, ask it to do the thing they want it to do right? And it's going to do it. And if it, if it doesn't know what it's want, what it's, what's being asked of it. It's going to ask clarifying questions, right? And then you're going to clarify, and you're going to have a conversation. And this is going to make many many many kinds of tools and technology and products easier to use, and lower the barrier to entry. And so, and this, you know, many companies fit into this category of trying to build products that, and trying to make them easier to use, this is just one kind of way it can, one kind of way that AI will will be used. But I think it's, it's something that's pretty ubiquitous. >> Yeah. It'll be efficient, it'll be efficiency up and down the stack, and will change the productivity equation completely. You just highlighted one, I don't want to fill out forms, just stand up my environment for me. And then start coding away. Okay well this is great stuff. Final word for the folks out there watching, obviously new kind of skill set for hiring. You guys got engineers, give a plug for the company, for Anyscale. What are you looking for? What are you guys working on? Give a, take the last minute to put a plug in for the company. >> Yeah well if you're interested in AI and if you think AI is really going to be transformative, and really be useful for all these different industries. We are trying to provide the infrastructure to enable that to happen, right? So I think there's the potential here, to really solve an important problem, to get to the point where developers don't need to think about infrastructure, don't need to think about distributed systems. All they think about is their application logic, and what they want their application to do. And I think if we can achieve that, you know we can be the foundation or the platform that enables all of these other companies to succeed with AI. So that's where we're going. I think something like this has to happen if AI is going to achieve its potential, we're looking for, we're hiring across the board, you know, great engineers, on the go-to-market side, product managers, you know people who want to really, you know, make this happen. >> Awesome well congratulations. I know you got some good funding behind you. You're in a good spot. I think this is happening. 
I think generative AI and foundation models is going to be the next big inflection point, as big as the pc inter-networking, internet and smartphones. This is a whole nother application framework, a whole nother set of things. So this is the ground floor. Robert, you're, you and your team are right there. Well done. >> Thank you so much. >> All right. Thanks for coming on this CUBE conversation. I'm John Furrier with theCUBE. Breaking down a conversation around AI and scaling up in this new next major inflection point. This next wave is foundational models, generative AI. And thanks to ChatGPT, the whole world's now knowing about it. So it really is changing the game and Anyscale is right there, one of the hot startups, that is in good position to ride this next wave. Thanks for watching. (upbeat instrumental)
SiliconANGLE News | Beyond the Buzz: A deep dive into the impact of AI
(upbeat music) >> Hello, everyone, welcome to theCUBE. I'm John Furrier, the host of theCUBE in Palo Alto, California. Also it's SiliconANGLE News. Got two great guests here to talk about AI, the impact of the future of the internet, the applications, the people. Amr Awadallah, the founder and CEO, Ed Alban is the CEO of Vectara, a new startup that emerged out of the original Cloudera, I would say, 'cause Amr's known, famous for the Cloudera founding, which was really the beginning of the big data movement. And now as AI goes mainstream, there's so much to talk about, so much to go on. And plus the new company is one of the, now what I call the wave, this next big wave, I call it the fifth wave in the industry. You know, you had PCs, you had the internet, you had mobile. This generative AI thing is real. And you're starting to see startups come out in droves. Amr obviously was founder of Cloudera, Big Data, and now Vectara. And Ed Albanese, you guys have a new company. Welcome to the show. >> Thank you. It's great to be here. >> So great to see you. Now the story is theCUBE started in the Cloudera office. Thanks to you, and your friendly entrepreneurship views that you have. We got to know each other over the years. But Cloudera had Hadoop, which was the beginning of what I call the big data wave, which then became what we now call data lakes, data oceans, and data infrastructure that's developed from that. It's almost interesting to look back 12 plus years, and see that what AI is doing now, right now, is opening up the eyes to the mainstream, and the application's almost mind blowing. You know, Sati Natel called it the Mosaic Moment, didn't say Netscape, he built Netscape (laughing) but called it the Mosaic Moment. You're seeing companies in startups, kind of the alpha geeks running here, because this is the new frontier, and there's real meat on the bone, in terms of like things to do. Why? Why is this happening now? What's is the confluence of the forces happening, that are making this happen? >> Yeah, I mean if you go back to the Cloudera days, with big data, and so on, that was more about data processing. Like how can we process data, so we can extract numbers from it, and do reporting, and maybe take some actions, like this is a fraud transaction, or this is not. And in the meanwhile, many of the researchers working in the neural network, and deep neural network space, were trying to focus on data understanding, like how can I understand the data, and learn from it, so I can take actual actions, based on the data directly, just like a human does. And we were only good at doing that at the level of somebody who was five years old, or seven years old, all the way until about 2013. And starting in 2013, which is only 10 years ago, a number of key innovations started taking place, and each one added on. It was no major innovation that just took place. It was a couple of really incremental ones, but they added on top of each other, in a very exponentially additive way, that led to, by the end of 2019, we now have models, deep neural network models, that can read and understand human text just like we do. Right? And they can reason about it, and argue with you, and explain it to you. And I think that's what is unlocking this whole new wave of innovation that we're seeing right now. So data understanding would be the essence of it. >> So it's not a Big Bang kind of theory, it's been evolving over time, and I think that the tipping point has been the advancements and other things. 
I mean look at cloud computing, and look how fast it just crept up on AWS. I mean AWS you back three, five years ago, I was talking to Swami yesterday, and their big news about AI, expanding the Hugging Face's relationship with AWS. And just three, five years ago, there wasn't a model training models out there. But as compute comes out, and you got more horsepower,, these large language models, these foundational models, they're flexible, they're not monolithic silos, they're interacting. There's a whole new, almost fusion of data happening. Do you see that? I mean is that part of this? >> Of course, of course. I mean this wave is building on all the previous waves. We wouldn't be at this point if we did not have hardware that can scale, in a very efficient way. We wouldn't be at this point, if we don't have data that we're collecting about everything we do, that we're able to process in this way. So this, this movement, this motion, this phase we're in, absolutely builds on the shoulders of all the previous phases. For some of the observers from the outside, when they see chatGPT for the first time, for them was like, "Oh my god, this just happened overnight." Like it didn't happen overnight. (laughing) GPT itself, like GPT3, which is what chatGPT is based on, was released a year ahead of chatGPT, and many of us were seeing the power it can provide, and what it can do. I don't know if Ed agrees with that. >> Yeah, Ed? >> I do. Although I would acknowledge that the possibilities now, because of what we've hit from a maturity standpoint, have just opened up in an incredible way, that just wasn't tenable even three years ago. And that's what makes it, it's true that it developed incrementally, in the same way that, you know, the possibilities of a mobile handheld device, you know, in 2006 were there, but when the iPhone came out, the possibilities just exploded. And that's the moment we're in. >> Well, I've had many conversations over the past couple months around this area with chatGPT. John Markoff told me the other day, that he calls it, "The five dollar toy," because it's not that big of a deal, in context to what AI's doing behind the scenes, and all the work that's done on ethics, that's happened over the years, but it has woken up the mainstream, so everyone immediately jumps to ethics. "Does it work? "It's not factual," And everyone who's inside the industry is like, "This is amazing." 'Cause you have two schools of thought there. One's like, people that think this is now the beginning of next gen, this is now we're here, this ain't your grandfather's chatbot, okay?" With NLP, it's got reasoning, it's got other things. >> I'm in that camp for sure. >> Yeah. Well I mean, everyone who knows what's going on is in that camp. And as the naysayers start to get through this, and they go, "Wow, it's not just plagiarizing homework, "it's helping me be better. "Like it could rewrite my memo, "bring the lead to the top." It's so the format of the user interface is interesting, but it's still a data-driven app. >> Absolutely. >> So where does it go from here? 'Cause I'm not even calling this the first ending. This is like pregame, in my opinion. What do you guys see this going, in terms of scratching the surface to what happens next? >> I mean, I'll start with, I just don't see how an application is going to look the same in the next three years. Who's going to want to input data manually, in a form field? 
Who is going to want, or expect, to have to put in some text in a search box, and then read through 15 different possibilities, and try to figure out which one of them actually most closely resembles the question they asked? You know, I don't see that happening. Who's going to start with an absolute blank sheet of paper, and expect no help? That is not how an application will work in the next three years, and it's going to fundamentally change how people interact and spend time with opening any element on their mobile phone, or on their computer, to get something done. >> Yes. I agree with that. Like every single application, over the next five years, will be rewritten, to fit within this model. So imagine an HR application, I don't want to name companies, but imagine an HR application, and you go into application and you clicking on buttons, because you want to take two weeks of vacation, and menus, and clicking here and there, reasons and managers, versus just telling the system, "I'm taking two weeks of vacation, going to Las Vegas," book it, done. >> Yeah. >> And the system just does it for you. If you weren't completing in your input, in your description, for what you want, then the system asks you back, "Did you mean this? "Did you mean that? "Were you trying to also do this as well?" >> Yeah. >> "What was the reason?" And that will fit it for you, and just do it for you. So I think the user interface that we have with apps, is going to change to be very similar to the user interface that we have with each other. And that's why all these apps will need to evolve. >> I know we don't have a lot of time, 'cause you guys are very busy, but I want to definitely have multiple segments with you guys, on this topic, because there's so much to talk about. There's a lot of parallels going on here. I was talking again with Swami who runs all the AI database at AWS, and I asked him, I go, "This feels a lot like the original AWS. "You don't have to provision a data center." A lot of this heavy lifting on the back end, is these large language models, with these foundational models. So the bottleneck in the past, was the energy, and cost to actually do it. Now you're seeing it being stood up faster. So there's definitely going to be a tsunami of apps. I would see that clearly. What is it? We don't know yet. But also people who are going to leverage the fact that I can get started building value. So I see a startup boom coming, and I see an application tsunami of refactoring things. >> Yes. >> So the replatforming is already kind of happening. >> Yes, >> OpenAI, chatGPT, whatever. So that's going to be a developer environment. I mean if Amazon turns this into an API, or a Microsoft, what you guys are doing. >> We're turning it into API as well. That's part of what we're doing as well, yes. >> This is why this is exciting. Amr, you've lived the big data dream, and and we used to talk, if you didn't have a big data problem, if you weren't full of data, you weren't really getting it. Now people have all the data, and they got to stand this up. >> Yeah. >> So the analogy is again, the mobile, I like the mobile movement, and using mobile as an analogy, most companies were not building for a mobile environment, right? They were just building for the web, and legacy way of doing apps. And as soon as the user expectations shifted, that my expectation now, I need to be able to do my job on this small screen, on the mobile device with a touchscreen. 
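The "just tell the system what you want" flow described above can be sketched as a thin layer that asks a model to turn free-form text into a structured action, with a clarifying question when something is missing. This is only an illustration: the model name, the JSON contract, and the openai Python SDK (v1+ interface) are assumptions, not anyone's actual HR integration.

```python
import json
from openai import OpenAI  # assumes the openai Python SDK, v1+ interface

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM = (
    "You turn employee requests into JSON actions for an HR system. "
    'Respond only with JSON like {"action": "book_vacation", '
    '"start": "YYYY-MM-DD", "days": 10, "clarify": null}. '
    "If information is missing, set clarify to a question."
)

def to_action(user_text: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": SYSTEM},
                  {"role": "user", "content": user_text}],
    )
    # a production version would validate or repair the JSON before trusting it
    return json.loads(resp.choices[0].message.content)

print(to_action("I'm taking two weeks of vacation, going to Las Vegas"))
```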
Everybody had to invest in re-architecting, and re-implementing every single app, to fit within that model, and that model of interaction. And we are seeing the exact same thing happen now. And one of the core things we're focused on at Vectara, is how to simplify that for organizations, because a lot of them are overwhelmed by large language models, and ML. >> They don't have the staff. >> Yeah, yeah, yeah. They're understaffed, they don't have the skills. >> But they got developers, they've got DevOps, right? >> Yes. >> So they have the DevSecOps going on. >> Exactly, yes. >> So our goal is to simplify it enough for them that they can start leveraging this technology effectively, within their applications. >> Ed, you're the COO of the company, obviously a startup. You guys are growing. You got great backup, and good team. You've also done a lot of business development, and technical business development in this area. If you look at the landscape right now, and I agree the apps are coming, every company I talk to, that has that jet chatGPT of, you know, epiphany, "Oh my God, look how cool this is. "Like magic." Like okay, it's code, settle down. >> Mm hmm. >> But everyone I talk to is using it in a very horizontal way. I talk to a very senior person, very tech alpha geek, very senior person in the industry, technically. they're using it for log data, they're using it for configuration of routers. And in other areas, they're using it for, every vertical has a use case. So this is horizontally scalable from a use case standpoint. When you hear horizontally scalable, first thing I chose in my mind is cloud, right? >> Mm hmm. >> So cloud, and scalability that way. And the data is very specialized. So now you have this vertical specialization, horizontally scalable, everyone will be refactoring. What do you see, and what are you seeing from customers, that you talk to, and prospects? >> Yeah, I mean put yourself in the shoes of an application developer, who is actually trying to make their application a bit more like magic. And to have that soon-to-be, honestly, expected experience. They've got to think about things like performance, and how efficiently that they can actually execute a query, or a question. They've got to think about cost. Generative isn't cheap, like the inference of it. And so you've got to be thoughtful about how and when you take advantage of it, you can't use it as a, you know, everything looks like a nail, and I've got a hammer, and I'm going to hit everything with it, because that will be wasteful. Developers also need to think about how they're going to take advantage of, but not lose their own data. So there has to be some controls around what they feed into the large language model, if anything. Like, should they fine tune a large language model with their own data? Can they keep it logically separated, but still take advantage of the powers of a large language model? And they've also got to take advantage, and be aware of the fact that when data is generated, that it is a different class of data. It might not fully be their own. >> Yeah. >> And it may not even be fully verified. And so when the logical cycle starts, of someone making a request, the relationship between that request, and the output, those things have to be stored safely, logically, and identified as such. >> Yeah. >> And taken advantage of in an ongoing fashion. 
So these are mega problems, each one of them independently, that, you know, you can think of it as middleware companies need to take advantage of, and think about, to help the next wave of application development be logical, sensible, and effective. It's not just calling some raw API on the cloud, like openAI, and then just, you know, you get your answer and you're done, because that is a very brute force approach. >> Well also I will point, first of all, I agree with your statement about the apps experience, that's going to be expected, form filling. Great point. The interesting about chatGPT. >> Sorry, it's not just form filling, it's any action you would like to take. >> Yeah. >> Instead of clicking, and dragging, and dropping, and doing it on a menu, or on a touch screen, you just say it, and it's and it happens perfectly. >> Yeah. It's a different interface. And that's why I love that UIUX experiences, that's the people falling out of their chair moment with chatGPT, right? But a lot of the things with chatGPT, if you feed it right, it works great. If you feed it wrong and it goes off the rails, it goes off the rails big. >> Yes, yes. >> So the the Bing catastrophes. >> Yeah. >> And that's an example of garbage in, garbage out, classic old school kind of comp-side phrase that we all use. >> Yep. >> Yes. >> This is about data in injection, right? It reminds me the old SQL days, if you had to, if you can sling some SQL, you were a magician, you know, to get the right answer, it's pretty much there. So you got to feed the AI. >> You do, Some people call this, the early word to describe this as prompt engineering. You know, old school, you know, search, or, you know, engagement with data would be, I'm going to, I have a question or I have a query. New school is, I have, I have to issue it a prompt, because I'm trying to get, you know, an action or a reaction, from the system. And the active engineering, there are a lot of different ways you could do it, all the way from, you know, raw, just I'm going to send you whatever I'm thinking. >> Yeah. >> And you get the unintended outcomes, to more constrained, where I'm going to just use my own data, and I'm going to constrain the initial inputs, the data I already know that's first party, and I trust, to, you know, hyper constrain, where the application is actually, it's looking for certain elements to respond to. >> It's interesting Amr, this is why I love this, because one we are in the media, we're recording this video now, we'll stream it. But we got all your linguistics, we're talking. >> Yes. >> This is data. >> Yep. >> So the data quality becomes now the new intellectual property, because, if you have that prompt source data, it makes data or content, in our case, the original content, intellectual property. >> Absolutely. >> Because that's the value. And that's where you see chatGPT fall down, is because they're trying to scroll the web, and people think it's search. It's not necessarily search, it's giving you something that you wanted. It is a lot of that, I remember in Cloudera, you said, "Ask the right questions." Remember that phrase you guys had, that slogan? >> Mm hmm. And that's prompt engineering. So that's exactly, that's the reinvention of "Ask the right question," is prompt engineering is, if you don't give these models the question in the right way, and very few people know how to frame it in the right way with the right context, then you will get garbage out. Right? That is the garbage in, garbage out. 
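The "constrained" end of that prompt-engineering spectrum can be shown with a plain-Python sketch: first-party passages are injected into the prompt and the model is told to answer only from them. The wording of the instruction and the citation format are illustrative choices, not any vendor's API.

```python
def grounded_prompt(question: str, passages: list[str]) -> str:
    # constrain the model: answer only from the supplied first-party passages
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using ONLY the passages below. "
        "Cite passage numbers. If the answer is not in the passages, "
        "say you don't know.\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

print(grounded_prompt(
    "Is the rental quiet at night?",
    ["The wind whispered through the trees peacefully all evening.",
     "Street-facing rooms get some traffic noise before 9pm."],
))
```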
But if you specify the question correctly, and you provide with it the metadata that constrain what that question is going to be acted upon or answered upon, then you'll get much better answers. And that's exactly what we solved Vectara. >> Okay. So before we get into the last couple minutes we have left, I want to make sure we get a plug in for the opportunity, and the profile of Vectara, your new company. Can you guys both share with me what you think the current situation is? So for the folks who are now having those moments of, "Ah, AI's bullshit," or, "It's not real, it's a lot of stuff," from, "Oh my god, this is magic," to, "Okay, this is the future." >> Yes. >> What would you say to that person, if you're at a cocktail party, or in the elevator say, "Calm down, this is the first inning." How do you explain the dynamics going on right now, to someone who's either in the industry, but not in the ropes? How would you explain like, what this wave's about? How would you describe it, and how would you prepare them for how to change their life around this? >> Yeah, so I'll go first and then I'll let Ed go. Efficiency, efficiency is the description. So we figured that a way to be a lot more efficient, a way where you can write a lot more emails, create way more content, create way more presentations. Developers can develop 10 times faster than they normally would. And that is very similar to what happened during the Industrial Revolution. I always like to look at examples from the past, to read what will happen now, and what will happen in the future. So during the Industrial Revolution, it was about efficiency with our hands, right? So I had to make a piece of cloth, like this piece of cloth for this shirt I'm wearing. Our ancestors, they had to spend month taking the cotton, making it into threads, taking the threads, making them into pieces of cloth, and then cutting it. And now a machine makes it just like that, right? And the ancestors now turned from the people that do the thing, to manage the machines that do the thing. And I think the same thing is going to happen now, is our efficiency will be multiplied extremely, as human beings, and we'll be able to do a lot more. And many of us will be able to do things they couldn't do before. So another great example I always like to use is the example of Google Maps, and GPS. Very few of us knew how to drive a car from one location to another, and read a map, and get there correctly. But once that efficiency of an AI, by the way, behind these things is very, very complex AI, that figures out how to do that for us. All of us now became amazing navigators that can go from any point to any point. So that's kind of how I look at the future. >> And that's a great real example of impact. Ed, your take on how you would talk to a friend, or colleague, or anyone who asks like, "How do I make sense of the current situation? "Is it real? "What's in it for me, and what do I do?" I mean every company's rethinking their business right now, around this. What would you say to them? >> You know, I usually like to show, rather than describe. And so, you know, the other day I just got access, I've been using an application for a long time, called Notion, and it's super popular. There's like 30 or 40 million users. And the new version of Notion came out, which has AI embedded within it. And it's AI that allows you primarily to create. 
So if you could break down the world of AI into find and create, for a minute, just kind of logically separate those two things, find is certainly going to be massively impacted in our experiences as consumers on, you know, Google and Bing, and I can't believe I just said the word Bing in the same sentence as Google, but that's what's happening now (all laughing), because it's a good example of change. >> Yes. >> But also inside the business. But on the crate side, you know, Notion is a wiki product, where you try to, you know, note down things that you are thinking about, or you want to share and memorialize. But sometimes you do need help to get it down fast. And just in the first day of using this new product, like my experience has really fundamentally changed. And I think that anybody who would, you know, anybody say for example, that is using an existing app, I would show them, open up the app. Now imagine the possibility of getting a starting point right off the bat, in five seconds of, instead of having to whole cloth draft this thing, imagine getting a starting point then you can modify and edit, or just dispose of and retry again. And that's the potential for me. I can't imagine a scenario where, in a few years from now, I'm going to be satisfied if I don't have a little bit of help, in the same way that I don't manually spell check every email that I send. I automatically spell check it. I love when I'm getting type ahead support inside of Google, or anything. Doesn't mean I always take it, or when texting. >> That's efficiency too. I mean the cloud was about developers getting stuff up quick. >> Exactly. >> All that heavy lifting is there for you, so you don't have to do it. >> Right? >> And you get to the value faster. >> Exactly. I mean, if history taught us one thing, it's, you have to always embrace efficiency, and if you don't fast enough, you will fall behind. Again, looking at the industrial revolution, the companies that embraced the industrial revolution, they became the leaders in the world, and the ones who did not, they all like. >> Well the AI thing that we got to watch out for, is watching how it goes off the rails. If it doesn't have the right prompt engineering, or data architecture, infrastructure. >> Yes. >> It's a big part. So this comes back down to your startup, real quick, I know we got a couple minutes left. Talk about the company, the motivation, and we'll do a deeper dive on on the company. But what's the motivation? What are you targeting for the market, business model? The tech, let's go. >> Actually, I would like Ed to go first. Go ahead. >> Sure, I mean, we're a developer-first, API-first platform. So the product is oriented around allowing developers who may not be superstars, in being able to either leverage, or choose, or select their own large language models for appropriate use cases. But they that want to be able to instantly add the power of large language models into their application set. We started with search, because we think it's going to be one of the first places that people try to take advantage of large language models, to help find information within an application context. And we've built our own large language models, focused on making it very efficient, and elegant, to find information more quickly. So what a developer can do is, within minutes, go up, register for an account, and get access to a set of APIs, that allow them to send data, to be converted into a format that's easy to understand for large language models, vectors. 
And then secondarily, they can issue queries, ask questions. And they can ask them very, the questions that can be asked, are very natural language questions. So we're talking about long form sentences, you know, drill down types of questions, and they can get answers that either come back in depending upon the form factor of the user interface, in list form, or summarized form, where summarized equals the opportunity to kind of see a condensed, singular answer. >> All right. I have a. >> Oh okay, go ahead, you go. >> I was just going to say, I'm going to be a customer for you, because I want, my dream was to have a hologram of theCUBE host, me and Dave, and have questions be generated in the metaverse. So you know. (all laughing) >> There'll be no longer any guests here. They'll all be talking to you guys. >> Give a couple bullets, I'll spit out 10 good questions. Publish a story. This brings the automation, I'm sorry to interrupt you. >> No, no. No, no, I was just going to follow on on the same. So another way to look at exactly what Ed described is, we want to offer you chatGPT for your own data, right? So imagine taking all of the recordings of all of the interviews you have done, and having all of the content of that being ingested by a system, where you can now have a conversation with your own data and say, "Oh, last time when I met Amr, "which video games did we talk about? "Which movie or book did we use as an analogy "for how we should be embracing data science, "and big data, which is moneyball," I know you use moneyball all the time. And you start having that conversation. So, now the data doesn't become a passive asset that you just have in your organization. No. It's an active participant that's sitting with you, on the table, helping you make decisions. >> One of my favorite things to do with customers, is to go to their site or application, and show them me using it. So for example, one of the customers I talked to was one of the biggest property management companies in the world, that lets people go and rent homes, and houses, and things like that. And you know, I went and I showed them me searching through reviews, looking for information, and trying different words, and trying to find out like, you know, is this place quiet? Is it comfortable? And then I put all the same data into our platform, and I showed them the world of difference you can have when you start asking that question wholeheartedly, and getting real information that doesn't have anything to do with the words you asked, but is really focused on the meaning. You know, when I asked like, "Is it quiet?" You know, answers would come back like, "The wind whispered through the trees peacefully," and you know, it's like nothing to do with quiet in the literal word sense, but in the meaning sense, everything to do with it. And that that was magical even for them, to see that. >> Well you guys are the front end of this big wave. Congratulations on the startup, Amr. I know you guys got great pedigree in big data, and you've got a great team, and congratulations. Vectara is the name of the company, check 'em out. Again, the startup boom is coming. This will be one of the major waves, generative AI is here. I think we'll look back, and it will be pointed out as a major inflection point in the industry. >> Absolutely. >> There's not a lot of hype behind that. People are are seeing it, experts are. So it's going to be fun, thanks for watching. >> Thanks John. (soft music)
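To illustrate the send-data-then-ask-questions workflow Ed outlines, without claiming Vectara's actual API, here is a sketch using the open-source sentence-transformers library as a stand-in: documents become vectors, a natural-language question becomes a vector, and the closest match comes back by meaning rather than by keyword. A real service would add chunking, an ANN index, and summarization on top.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # open-source stand-in

docs = [
    "The wind whispered through the trees peacefully at night.",
    "Parking is available in the garage across the street.",
    "The kitchen comes with a dishwasher and an espresso machine.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(docs, normalize_embeddings=True)   # "send data, get vectors"

query = "Is this place quiet?"
q_vec = model.encode([query], normalize_embeddings=True)[0]

scores = doc_vecs @ q_vec                                   # cosine similarity
best = int(np.argmax(scores))
print(docs[best], float(scores[best]))
```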
Welcome to Supercloud2
(bright upbeat melody) >> Hello everyone, welcome back to Supercloud2. I'm John Furrier, my co-host Dave Vellante, here at theCUBE in Palo Alto, California, for our live stage performance all day for Supercloud2. Unpacking this next generation movement in cloud computing. Dave, Supercloud1 was in August. We had great response and acceleration of that momentum. We had some haters too. We had some folks out there throwing shade on this. But at the same time, a lot of leaders came out of the woodwork, a lot of practitioners. And this Supercloud2 event I think will expose and illustrate some of the examples of what's happening in the industry and more importantly, kind of where it's going. >> Well it's great to be back in our studios in Palo Alto, John. Seems like just yesterday was August 9th, where the community was really refining the definition of Super Cloud. We were identifying the essential characteristics, with some of the leading technologists in Silicon Valley. We were digging into the deployment models. Whereas this Supercloud, Supercloud2 is really taking a practitioner view. We're going to hear from Walmart today. They've built a Supercloud. They called it the Walmart Cloud native platform. We're going to hear from other data practitioners, like Saks. We're going to hear from Western Union. They've got 200 locations around the world, how they're dealing with data sovereignty. And of course we've got some local technologists and practitioners coming in, analysts, consultants, theCUBE community. I'm really excited to be here. >> And we've got some great keynotes from executives at VMware. We're going to expose some of the things that they're working on around cross cloud services, which leads into multicloud. I think the practitioner angle highlights my favorite part of this program, 'cause you're starting to see the builders, a term coined by Andy Jassy, early days of AWS. That builder movement has been continuing to go. And you're seeing the enterprise, global enterprises adopt this builder mentality with Cloud Native. This is going to power the next generation global economy. And I think the role of the cloud computing vendors like AWS, Azure, Google, Alibaba are going to be the source engine of innovation. And what gets built on top of and with the clouds will be a big significant market value for all businesses and their business models. So I think the market wants the supercloud, the business models are pointing to Supercloud. The technology needs supercloud. And society, from an economic standpoint and from a use case standpoint, needs supercloud. You're seeing it today. Everyone's talking about chat GPT. This is an example of what will come out of this next generation and it's just getting started. So to me, you're either on the supercloud side of the camp or you're on the old school, hugging onto the old school mentality of wait a minute, that's cloud computing. So I think if you're not on the super cloud wave, you're going to be driftwood. And that's a term coined by Pat Gelsinger. And this is really the reality. Are you on the super cloud side? Or are you on the old huggin' the old model? And that's going to be a determinant. And you're going to see who's going to be the players on that, Dave. This is going to be a real big year. >> Everybody's heard the phrase follow the money. Well, my philosophy is follow the data. And that's a big part of what Supercloud2 is, because the data is where the money is across the clouds. 
And people want more simplicity, or greater simplicity across the clouds. So it's really, there's two forces here. You've got the ecosystem that's saying, hey the hyperscalers, they've done a great job but there's problems that they're not solving. So we're going to lean in and solve those problems. At the same time, you have the practitioners saying we have multicloud, we have to deal with this, help us. It's got to be simpler. Because we want to share data across clouds. We want to build data products, we want to monetize and drive revenue and cut costs. >> This is the key thing. The builder movement is hitting a wall, and that wall will be broken down because the business models of the companies themselves are demanding that the value from the data with security has to be embedded. So I think you're going to see a big year this next year or so where the builders will accelerate through this next generation, supercloud wave, will be a builder's wave for business. And I think that's going to be the nuance here. And all the people that are on the side of Supercloud are all pro-business, pro-technology. The ones that aren't are like, wait a minute I used to do things differently. They're stuck. And so I think this is going to be a question of are we stuck? Are builders accelerating? Will the business models develop around it? That's digital transformation. At the end of the day, the market's speaking, Dave. The market wants more. Chat GPT, you're seeing AI starting to flourish, powered by data. It's unstoppable, supercloud's unstoppable. >> One of our headliners today is Zhamak Dehghani, the creator of Data Mesh. We've got some news around her. She's going to be live in studio. Super excited about that. Kit Colbert in Supercloud, the first Supercloud in last August, laid out an initial architecture for Supercloud. He's going to advance that today, tell us what's changed, and really dig into and really talk about the meat on the bone, if you will. And we've got some other technologists that are coming in saying, Hey, is it a platform? Is it an architecture? What's the right model here? So we're going to debate that a little bit today. >> And before we close, I'll just say look at the guests, look at the talk tracks. You're seeing a diversity of startups doing cloud networking, you're seeing big practitioners building their own thing, being builders for business value and business model advantages. And you got companies like VMware, who have been on the wave of virtualization. So the, everyone who's involved in super cloud, they're seeing it, they're on the front lines. They're seeing the trend. They are riding that wave. And they have, they're bringing data to the table. So to me, you look at who's involved and you judge it that way. To me, that's the way I look at this. And because we're making it open, Supercloud is going to continue to be debated. But more importantly, the results are going to come in. The market supports it, the business needs it, tech's there, and will it happen? So I think the builders movement, Dave, is going to be big to watch. And then ultimately how that business transformation kicks in, and I think those are the two variables that I would watch on Supercloud. >> Our mission has always been around free content, giving back to the community. So I really want to thank our sponsors today. We've had a great partnership with VMware, who's not only contributed some financial support, but also great content. 
Alkira, ChaosSearch, prosimo, all phenomenal, allowing us to achieve our mission of serving our audiences and really trying to give more than we take from. >> Free content, that's our mission. Dave, great to kick it off. Kickin' off Supercloud2 all day, we've got some great programs here. We've got VMware coming up next. We have Victoria Viering, who's been on before. He's got a great vision for cross cloud service. We're getting also a keynote with Kit Colbert, who's going to lay out the fragmentation and the benefits that that solves, from solvent fragmentation and silos, breaking down the silos and bringing multicloud future to the table via Super Cloud. So stay with us. We'll be right back after this short break. (bright upbeat music) (music fades)
AI Meets the Supercloud | Supercloud2
(upbeat music) >> Okay, welcome back everyone at Supercloud 2 event, live here in Palo Alto, theCUBE Studios live stage performance, virtually syndicating it all over the world. I'm John Furrier with Dave Vellante here as Cube alumni, and special influencer guest, Howie Xu, VP of Machine Learning and Zscaler, also part-time as a CUBE analyst 'cause he is that good. Comes on all the time. You're basically a CUBE analyst as well. Thanks for coming on. >> Thanks for inviting me. >> John: Technically, you're not really a CUBE analyst, but you're kind of like a CUBE analyst. >> Happy New Year to everyone. >> Dave: Great to see you. >> Great to see you, Dave and John. >> John: We've been talking about ChatGPT online. You wrote a great post about it being more like Amazon, not like Google. >> Howie: More than just Google Search. >> More than Google Search. Oh, it's going to compete with Google Search, which it kind of does a little bit, but more its infrastructure. So a clever point, good segue into this conversation, because this is kind of the beginning of these kinds of next gen things we're going to see. Things where it's like an obvious next gen, it's getting real. Kind of like seeing the browser for the first time, Mosaic browser. Whoa, this internet thing's real. I think this is that moment and Supercloud like enablement is coming. So this has been a big part of the Supercloud kind of theme. >> Yeah, you talk about Supercloud, you talk about, you know, AI, ChatGPT. I really think the ChatGPT is really another Netscape moment, the browser moment. Because if you think about internet technology, right? It was brewing for 20 years before early 90s. Not until you had a, you know, browser, people realize, "Wow, this is how wonderful this technology could do." Right? You know, all the wonderful things. Then you have Yahoo and Amazon. I think we have brewing, you know, the AI technology for, you know, quite some time. Even then, you know, neural networks, deep learning. But not until ChatGPT came along, people realize, "Wow, you know, the user interface, user experience could be that great," right? So I really think, you know, if you look at the last 30 years, there is a browser moment, there is iPhone moment. I think ChatGPT moment is as big as those. >> Dave: What do you see as the intersection of things like ChatGPT and the Supercloud? Of course, the media's going to focus, journalists are going to focus on all the negatives and the privacy. Okay. You know we're going to get by that, right? Always do. Where do you see the Supercloud and sort of the distributed data fitting in with ChatGPT? Does it use that as a data source? What's the link? >> Howie: I think there are number of use cases. One of the use cases, we talked about why we even have Supercloud because of the complexity, because of the, you know, heterogeneous nature of different clouds. In order for me as a developer, in order for me to create applications, I have so many things to worry about, right? It's a complexity. But with ChatGPT, with the AI, I don't have to worry about it, right? Those kind of details will be taken care of by, you know, the underlying layer. So we have been talking about on this show, you know, over the last, what, year or so about the Supercloud, hey, defining that, you know, API layer spanning across, you know, multiple clouds. I think that will be happening. However, for a lot of the things, that will be more hidden, right? A lot of that will be automated by the bots. 
You know, we were just talking about it right before the show. One of the profound statement I heard from Adrian Cockcroft about 10 years ago was, "Hey Howie, you know, at Netflix, right? You know, IT is just one API call away." That's a profound statement I heard about a decade ago. I think next decade, right? You know, the IT is just one English language away, right? So when it's one English language away, it's no longer as important, API this, API that. You still need API just like hardware, right? You still need all of those things. That's going to be more hidden. The high level thing will be more, you know, English language or the language, right? Any language for that matter. >> Dave: And so through language, you'll tap services that live across the Supercloud, is what you're saying? >> Howie: You just tell what you want, what you desire, right? You know, the bots will help you to figure out where the complexity is, right? You know, like you said, a lot of criticism about, "Hey, ChatGPT doesn't do this, doesn't do that." But if you think about how to break things down, right? For instance, right, you know, ChatGPT doesn't have Microsoft stock price today, obviously, right? However, you can ask ChatGPT to write a program for you, retrieve the Microsoft stock price, (laughs) and then just run it, right? >> Dave: Yeah. >> So the thing to think about- >> John: It's only going to get better. It's only going to get better. >> The thing people kind of unfairly criticize ChatGPT is it doesn't do this. But can you not break down humans' task into smaller things and get complex things to be done by the ChatGPT? I think we are there already, you know- >> John: That to me is the real game changer. That's the assembly of atomic elements at the top of the stack, whether the interface is voice or some programmatic gesture based thing, you know, wave your hand or- >> Howie: One of the analogy I used in my blog was, you know, each person, each professional now is a quarterback. And we suddenly have, you know, a lot more linebacks or you know, any backs to work for you, right? For free even, right? You know, and then that's sort of, you should think about it. You are the quarterback of your day-to-day job, right? Your job is not to do everything manually yourself. >> Dave: You call the play- >> Yes. >> Dave: And they execute. Do your job. >> Yes, exactly. >> Yeah, all the players are there. All the elves are in the North Pole making the toys, Dave, as we say. But this is the thing, I want to get your point. This change is going to require a new kind of infrastructure software relationship, a new kind of operating runtime, a new kind of assembler, a new kind of loader link things. This very operating systems kind of concepts. >> Data intensive, right? How to process the data, how to, you know, process so gigantic data in parallel, right? That's actually a tough job, right? So if you think about ChatGPT, why OpenAI is ahead of the game, right? You know, Google may not want to acknowledge it, right? It's not necessarily they do, you know, not have enough data scientist, but the software engineering pieces, you know, behind it, right? To train the model, to actually do all those things in parallel, to do all those things in a cost effective way. So I think, you know, a lot of those still- >> Let me ask you a question. Let me ask you a question because we've had this conversation privately, but I want to do it while we're on stage here. 
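The stock-price example above is the generate-then-run pattern: ask the model to write a small program, review it, then execute it. A hedged sketch follows; the openai SDK (v1+ interface) is assumed, and the yfinance snippet in the comment is only what such generated code often looks like, not a guaranteed output.

```python
from openai import OpenAI  # assumes the openai Python SDK, v1+ interface

client = OpenAI()
prompt = ("Write a short Python script that prints Microsoft's latest "
          "closing stock price using the yfinance library. Return only code.")

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
generated = resp.choices[0].message.content
print(generated)
# After reviewing (or sandboxing) it, you could run it with exec(generated).
# The generated script typically looks something like:
#   import yfinance as yf
#   print(yf.Ticker("MSFT").history(period="1d")["Close"].iloc[-1])
```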
Where are all the alpha geeks and developers and creators and entrepreneurs going to gravitate to? You know, in every wave, you see it in crypto, all the alphas went into crypto. Now I think with ChatGPT, you're going to start to see, like, "Wow, it's that moment." A lot of people are going to, you know, scrum and do startups. CTOs will invent stuff. There's a lot of invention, a lot of computer science and customer requirements to figure out. That's new. Where are the alpha entrepreneurs going to go to? What do you think they're going to gravitate to? If you could point to the next layer to enable this super environment, super app environment, Supercloud. 'Cause there's a lot to do to enable what you just said. >> Howie: Right. You know, if you think about using internet as the analogy, right? You know, in the early 90s, internet came along, browser came along. You had two kind of companies, right? One is Amazon, the other one is walmart.com. And then there were company, like maybe GE or whatnot, right? Really didn't take advantage of internet that much. I think, you know, for entrepreneurs, suddenly created the Yahoo, Amazon of the ChatGPT native era. That's what we should be all excited about. But for most of the Fortune 500 companies, your job is to surviving sort of the big revolution. So you at least need to do your walmart.com sooner than later, right? (laughs) So not be like GE, right? You know, hand waving, hey, I do a lot of the internet, but you know, when you look back last 20, 30 years, what did they do much with leveraging the- >> So you think they're going to jump in, they're going to build service companies or SaaS tech companies or Supercloud companies? >> Howie: Okay, so there are two type of opportunities from that perspective. One is, you know, the OpenAI ish kind of the companies, I think the OpenAI, the game is still open, right? You know, it's really Close AI today. (laughs) >> John: There's room for competition, you mean? >> There's room for competition, right. You know, you can still spend you know, 50, $100 million to build something interesting. You know, there are company like Cohere and so on and so on. There are a bunch of companies, I think there is that. And then there are companies who's going to leverage those sort of the new AI primitives. I think, you know, we have been talking about AI forever, but finally, finally, it's no longer just good, but also super useful. I think, you know, the time is now. >> John: And if you have the cloud behind you, what do you make the Amazon do differently? 'Cause Amazon Web Services is only going to grow with this. It's not going to get smaller. There's more horsepower to handle, there's more needs. >> Howie: Well, Microsoft already showed what's the future, right? You know, you know, yes, there is a kind of the container, you know, the serverless that will continue to grow. But the future is really not about- >> John: Microsoft's shown the future? >> Well, showing that, you know, working with OpenAI, right? >> Oh okay. >> They already said that, you know, we are going to have ChatGPT service. >> $10 billion, I think they're putting it. >> $10 billion putting, and also open up the Open API services, right? You know, I actually made a prediction that Microsoft future hinges on OpenAI. I think, you know- >> John: They believe that $10 billion bet. >> Dave: Yeah. $10 billion bet. So I want to ask you a question. It's somewhat academic, but it's relevant. 
For a number of years, it looked like having first mover advantage wasn't an advantage. PCs, spreadsheets, the browser, right? Social media, Friendster, right? Mobile. Apple wasn't first to mobile. But that's somewhat changed. The cloud, AWS was first. You could debate whether or not, but AWS okay, they have first mover advantage. Crypto, Bitcoin, first mover advantage. Do you think OpenAI will have first mover advantage? >> It certainly has its advantage today. I think it's year two. I mean, I think the game is still out there, right? You know, we're still in the first inning, early inning of the game. So I don't think that the game is over for the rest of the players, whether the big players or the OpenAI kind of the, sort of competitors. So one of the VCs actually asked me the other day, right? "Hey, how much money do I need to spend, invest, to get, you know, another shot to the OpenAI sort of the level?" You know, I did a- (laughs) >> Line up. >> That's classic VC. "How much does it cost me to replicate?" >> I'm pretty sure he asked the question to a bunch of guys, right? >> Good luck with that. (laughs) >> So we kind of did some napkin- >> What'd you come up with? (laughs) >> $100 million is the order of magnitude that I came up with, right? You know, not a billion, not 10 million, right? So 100 million. >> John: Hundreds of millions. >> Yeah, yeah, yeah. 100 million order of magnitude is what I came up with. You know, we can get into details, you know, in other sort of the time, but- >> Dave: That's actually not that much if you think about it. >> Howie: Exactly. So when he heard me articulating why is that, you know, he's thinking, right? You know, he actually, you know, asked me, "Hey, you know, there's this company. Do you happen to know this company? Can I reach out?" You know, those things. So I truly believe it's not a billion or 10 billion issue, it's more like 100. >> John: And also, your other point about referencing the internet revolution as a good comparable. The other thing there is online user population was a big driver of the growth of that. So what's the equivalent here for online user population for AI? Is it more apps, more users? I mean, we're still early on, it's first inning. >> Yeah. We're kind of the, you know- >> What's the key metric for success of this sector? Do you have a read on that? >> I think the, you know, the number of users is a good metrics, but I think it's going to be a lot of people are going to use AI services without even knowing they're using it, right? You know, I think a lot of the applications are being already built on top of OpenAI, and then they are kind of, you know, help people to do marketing, legal documents, you know, so they're already inherently OpenAI kind of the users already. So I think yeah. >> Well, Howie, we've got to wrap, but I really appreciate you coming on. I want to give you a last minute to wrap up here. In your experience, and you've seen many waves of innovation. You've even had your hands in a lot of the big waves past three inflection points. And obviously, machine learning you're doing now, you're deep end. Why is this Supercloud movement, this wave of Supercloud and the discussion of this next inflection point, why is it so important? For the folks watching, why should they be paying attention to this particular moment in time? Could you share your super clip on Supercloud? >> Howie: Right. So this is simple from my point of view. So why do you even have cloud to begin with, right? 
IT is too complex, too complex to operate or too expensive. So there's a newer model. There is a better model, right? Let someone else operate it, there is elasticity out of it, right? That's great. Until you have multiple vendors, right? Many vendors even, you know, we're talking about kind of how to make multiple vendors look the same, but frankly speaking, even one vendor has, you know, a thousand services. Now it's kind of getting to what Kit was talking about, cloud chaos, right? It's the evolution. You know, the history repeats itself, right? You know, you have, you know, next great things and then too many great things, and then people need to sort of abstract this out. So it's almost that you must do this. But I think how to abstract this out is something that at this time, AI is going to help a lot, right? You know, like I mentioned, right? A lot of the abstraction, you don't have to think about API anymore. I bet 10 years from now, you know, IT is one language away, not API away. So think about that world, right? So Supercloud, in my opinion, sure, you kind of abstract things out. You have, you know, consistent layers. But who's going to do that? Is that like we all agreed upon the model, agreed upon those APIs? Not necessarily. There are certain, you know, truths in that, but there are other truths, let bots take care of, right? Whether, you know, I want some X to happen, whether it's going to be done by Azure, by AWS, by GCP, bots will figure out at a given time with certain context, with your security requirement, posture requirement. It'll think that out. >> John: That's awesome. And you know, Dave, you and I have been talking about this. We think scale is the new ratification. If you have first mover advantage, you'll see the benefit, but scale is a huge thing. OpenAI, AWS. >> Howie: Yeah. Every day, we are using OpenAI. Today, we are labeling data for them. So you know, that's a little bit of the- (laughs) >> John: Yeah. >> First mover advantage that other people don't have, right? So it's kind of scary. So I'm very sure that Google is a little bit- (laughs) >> When we do our super AI event, you're definitely going to be keynoting. (laughs) >> Howie: I think, you know, we're talking about Supercloud, you know, before long, we are going to talk about super intelligent cloud. (laughs) >> I'm super excited, Howie, about this. Thanks for coming on. Great to see you, Howie Xu. Always a great analyst for us contributing to the community. VP of Machine Learning at Zscaler, industry legend and friend of theCUBE. Thanks for coming on and sharing really, really great advice and insight into what this next wave means. This Supercloud is the next wave. "If you're not on it, you're driftwood," says Pat Gelsinger. So you're going to see a lot more discussion. We'll be back more here live in Palo Alto after this short break. >> Thank you. (upbeat music)
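A concrete way to picture the task-decomposition point Howie makes above (the model cannot look up today's Microsoft stock price, but it can write a program that does, and you can run that program) is a short sketch like the one below. The ask_llm helper is a hypothetical stand-in for whatever chat-completion client is used, and the generated snippet assumes the third-party yfinance package and network access; none of this comes from the interview itself.

```python
# Sketch of the "let the model write the program, then run it" pattern
# discussed above. ask_llm() is a hypothetical placeholder for a real
# chat-completion client; the generated code assumes the third-party
# yfinance package and network access.

def ask_llm(prompt: str) -> str:
    """Placeholder: in practice this would call a chat-completion API
    and return whatever code the model writes back."""
    return (
        "import yfinance as yf\n"
        "price = yf.Ticker('MSFT').history(period='1d')['Close'].iloc[-1]\n"
        "print(f'MSFT last close: {price:.2f}')\n"
    )

def run_generated_code(code: str) -> None:
    # Never exec untrusted code outside a sandbox; this is only a sketch.
    scope: dict = {}
    exec(code, scope)

if __name__ == "__main__":
    snippet = ask_llm("Write Python that prints Microsoft's latest closing price.")
    run_generated_code(snippet)
```

The point is the orchestration pattern rather than the stock quote: a person (or a bot) states the goal, the model supplies the code, and a thin runner executes it.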
Amir Khan & Atif Khan, Alkira | Supercloud2
(lively music) >> Hello, everyone. Welcome back to the Supercloud presentation here. I'm theCUBE, I'm John Furrier, your host. What a great segment here. We're going to unpack the networking aspect of the cloud, how that translates into what Supercloud architecture and platform deployment scenarios look like. And demystify multi-cloud, hybridcloud. We've got two great experts. Amir Khan, the Co-Founder and CEO of Alkira, Atif Khan, Co-Founder and CTO of Alkira. These guys been around since 2018 with the startup, but before that story, history in the tech industry. I mean, routing early days, multiple waves, multiple cycles. >> Welcome three decades. >> Welcome to Supercloud. >> Thanks. >> Thanks for coming on. >> Thank you so much for having us. >> So, let's get your take on Supercloud because it's been one of those conversations that really galvanized the industry because it kind of highlights almost this next wave, this next side of the street that everyone's going to be on that's going to be successful. The laggards on the legacy seem to be stuck on the old model. SaaS is growing up, it's ISVs, it's ecosystems, hyperscale, full hybrid. And then multi-cloud around the corners cause all this confusion, everyone's hand waving. You know, this is a solution, that solution, where are we? What do you guys see as this supercloud dynamic? >> So where we start from is always focusing on the customer problem. And in 2018 when we identified the problem, we saw that there were multiple clouds with many diverse ways of doing things from the network perspective, and customers were struggling with that. So we delved deeper into that and looked at each one of the cloud architectures completely independent. And there was no common solution and customers were struggling with that from the perspective. They wanted to be in multiple clouds, either through mergers and acquisitions or running an application which may be more cost effective to run in something or maybe optimized for certain reasons to run in a different cloud. But from the networking perspective, everything needed to come together. So that's, we are starting to define it as a supercloud now, but basically, it's a common infrastructure across all clouds. And then integration of high lift services like, you know, security or IPAM services or many other types of services like inter-partner routing and stuff like that. So, Amir, you agree then that multi-cloud is simply a default result of having whatever outcomes, either M&A, some productivity software, maybe Azure. >> Yes. >> Amazon has this and then I've got on-premise application, so it's kinds mishmash. >> So, I would qualify it with hybrid multi-cloud because everything is going to be interconnected. >> John: Got it. >> Whether it's on-premise, remote users or clouds. >> But have CTO perspective, obviously, you got developers, multiple stacks, got AWS, Azure and GCP, other. Not everyone wants to kind of like go all in, but yet they don't want to hedge too much because it's a resource issue. And I got to learn this stack, I got to learn that stack. So then now, you have this default multi-cloud, hybrid multi-cloud, then it's like, okay, what do I do? How do you spread that around? Is it dangerous? What's the the approach technically? What's some of the challenges there? >> Yeah, certainly. John, first, thanks for having us here. 
So, before I get to that, I'll just add a little bit to what Amir was saying, like how we started, what we were seeing and how it, you know, correlates with the supercloud. So, as you know, before this company, Alkira, we were doing, we did the SD-WAN company, which was Viptela. So there, we started seeing when people started deploying SD-WAN at like a larger scale. We started like, you know, customers coming to us and saying they needed connectivity into the cloud from the SD-WAN. They wanted to extend the SD-WAN fabric to the cloud. So we came up with an architecture, which was like later we started calling them Cloud onRamps, where we built, you know, a transit VPC and put like the virtual instances of SD-WAN appliances extended from there to the cloud. But before we knew, like it started becoming very complicated for the customers because it wasn't just connectivity, it also required, you know, other use cases. You had to instantiate or bring in security appliances in there. You had to secure all of that stuff. There were requirements for, you know, different regions. So you had to bring up the same thing in different regions. Then multiple clouds, what did you do? You had to replicate the same thing in multiple clouds. And now if there was was requirement between clouds, how were you going to do it? You had to route traffic from somewhere, and come up with all those routing controls and stuff. So, it was very complicated. >> Like spaghetti code, but on network. >> The games begin, in fact, one of our customers called it spaghetti mess. And so, that's where like we thought about where was the industry going and which direction the industry was going into? And we came up with the Alkira where what we are doing is building a common infrastructure across multiple clouds, across in, you know, on-prem locations, be it data centers or physical sites, branches sites, et cetera, with integrated security and network networking services inside. And, you know, nowadays, networking is not only about connectivity, you have to secure everything. So, security has to be built in. Redundancy, high availability, disaster recovery. So all of that needs to be built in. So that's like, you know, kind of a definition of like what we thought at that time, what is turning into supercloud now. >> Yeah. It's interesting too, you mentioned, you know, VPCs is not, configuration of loans a hassle. Nevermind the manual mistakes could be made, but as you decide to do something you got to, "Oh, we got to get these other things." A lot of the hyper scales and a lot of the alpha cloud players now, and cloud native folks, they're kind of in that mode of, "Wow, look at what we've built." Now, they're got to maintain, how do I refresh it? Like, how do I keep the talent? So they got this similar chaotic environment where it's like, okay, now they're already already through, so I think they're going to be okay. But then some people want to bypass it completely. So there's a lot of customers that we see out there that fit the makeup of, I'm cloud first, I've lifted and shifted, I move some stuff to the cloud. But I want to bypass all that learnings from all the people that are gone through the past three years. Can I just skip that and go to a multi-cloud or coherent infrastructure? What do you think about that? What's your view? >> So yeah, so if you look at these enterprises, you know, many of them just to find like the talent, which for one cloud as far as the IT staff is concerned, it's hard enough. 
And now, when you have multiple clouds, it's hard to find people the talent which is, you know, which has expertise across different clouds. So that's where we come into the picture. So our vision was always to simplify all of this stuff. And simplification, it cannot be just simplification because you cannot just automate the workflows of the cloud providers underneath. So you have to, you know, provide your full data plane on top of it, fed full control plane, management plane, policy and management on top of it. And coming back to like your question, so these nowadays, those people who are working on networking, you know, before it used to be like CLI. You used to learn about Cisco CLI or Juniper CLI, and you used to work on it. Nowadays, it's very different. So automation, programmability, all of that stuff is the key. So now, you know, Ops guys, the DevOps guys, so these are the people who are in high demand. >> So what do you think about the folks out there that are saying, okay, you got a lot of fragmentation. I got the stacks, I got a lot of stove pipes, if you will, out there on the stack. I got to learn this from Azure. Can you guys have with your product abstract the way that's so developers don't need to know the ins and outs of stack's, almost like a gateway, if you will, the old days. But like I'm a developer or team develop, why should I have to learn the management layer of Azure? >> That's exactly what we started, you know, out with to solve. So it's, what we have built is a platform and the platform sits inside the cloud. And customers are able to build their own network or a virtual network on top using that platform. So the platform has its own data plane, own control plane and management plane with a policy layer on top of it. So now, it's the platform which is sitting in different clouds, but from a customer's point of view, it's one way of doing networking. One way of instantiating or bringing in services or security services in the middle. Whether those are our security services or whether those are like services from our partners, like Palo Alto or Checkpoint or Cisco. >> So you guys brought the SD-WAN mojo and refactored it for the cloud it sounds like. >> No. >> No? (chuckles) >> We cannot said. >> All right, explain. >> It's way more than that. >> I mean, SD-WAN was wan. I mean, you're talking about wide area networks, talking about connected, so explain the difference. >> SD-WAN was primarily done for one major reason. MPLS was expensive, very strong SLAs, but very low speed. Internet, on the other hand, you sat at home and you could access your applications much faster. No SLA, very low cost, right? So we wanted to marry the two together so you could have a purely private infrastructure and a public infrastructure and secure both of them by creating a common secure fabric across all those environments. And then seamlessly tying it into your internal branch and data center and cloud network. So, it merely brought you to the edge of the cloud. It didn't do anything inside the cloud. Now, the major problem resides inside the clouds where you have to optimize the clouds themselves. Take a step back. How were the clouds built? Basically, the cloud providers went to the Ciscos and Junipers and the rest of the world, built the network in the data centers or across wide area infrastructure, and brought it all together and tried to create a virtualized layer on top of that. But there were many limitations of this underlying infrastructure that they had built. 
So the number of routes per region, how inter-region connectivity worked, or how many routes you could carry to the VPCs or VNets. All of those were becoming issues: no common policy across, you know, these environments, no segmentation across these environments, right? So the networking constructs that the enterprise customers were used to, the enterprise-class, carrier-class capabilities, they did not exist in the cloud. So what did the customer do? They ended up stitching it together all manually. And that's why Atif was alluding to earlier that it became a spaghetti mess for the customers. And then what happens is, as a result, day two operations, you know, troubleshooting, everything becomes a nightmare. So what do you do? You have to build an infrastructure inside the cloud. Cloud has enough raw capabilities to build the solutions inside there. The Netflixes of the world and many different companies have been born in the cloud and evolved from there. So why could we not take the raw capabilities of the clouds and build a network cloud or a supercloud on top of these clouds to optimize the whole infrastructure and seamlessly connect it into the on-premise and remote user locations, right? So that's your, you know, hybrid multi-cloud solution. >> Well, great call out on the SD-WAN in common versus cloud. 'Cause I think this is important because you're building a network layer in the cloud that spans out so the customers don't have to get into the, there's a gap in the system that I'm used to, my operating environment, of having lockdown security and network. >> So yeah. So what you do is you use the raw capabilities like bandwidth or virtual machines, or you know, containers, or, you know, different types of serverless capabilities. And you bring it all together in a way to solve the networking problems, thereby creating a supercloud, which is an abstraction layer which hides all the complexity of the underlying clouds from the customer, right? And it provides a common infrastructure across all environments to that customer, right? That's the beauty of it. And it does it in a way that it looks like, if they have the networking knowledge, they can apply it to this new environment and carry it forward. One way of doing security across all clouds and hybrid environments. One way of doing routing. One way of doing large-scale network address translation. One way of doing IPAM services. So people are tired of doing individual things in individual clouds and on-premise locations, right? So now they're getting something common. >> You guys brought that, you brought all that to bear and made it flexible for the customer to essentially self-serve their network cloud. >> Yes, yeah. >> Is that the wave? >> And nowadays, from a business perspective, agility is the key, right? You have to move at the pace of the business. If you don't, you are losing. >> So, would it be safe to say that you guys have a network supercloud? >> Absolutely, yeah. >> We, pretty much, yeah. Absolutely. >> What does that mean to your customer? What's in it for them? What's the benefit to the customer? I got a network supercloud, it connects, provides SLA, all the capabilities I need. What do they get? What's the end point for them? What's the end? >> Atif, maybe you can talk some examples. >> The IT infrastructure is all like distributed now, right? So you have applications running in data centers. You have applications running in one cloud. Other cloud, public clouds, enterprises are depending on so many SaaS applications.
So now, these are, you can call these endpoints. So a supercloud or a network cloud, from our perspective, it's a cloud in the middle or a network in the middle, which provides connectivity from any endpoint to any endpoint. So, you are able to connect to the supercloud or network cloud in one way no matter where you are. So now, whichever cloud you are in, whichever cloud you need to connect to. And also, it's not just connecting to the cloud. So you need to do a lot of stuff, a lot of networking inside the cloud also. So now, as Amir was saying, every cloud has its own from a networking, you know, the concept perspective or the construct, they are different. There are limitations in there also. So this supercloud, which is sitting on top, basically, your platform is sitting into the cloud, but the supercloud is built on top of using your platform. So that abstracts all those complexities, all those limitations. So now your limitations are whatever the limitations of that platform are. So now your platform, that platform is in our control. So we can keep building it, we can keep scaling it horizontally. Because one of the things is that, you know, in this cloud era, one of the things is autoscaling these services. So why can't the network now autoscale also, just like your other services. >> Network autoscaling is a genius idea, and I think that's a killer. I want to ask the the follow on question because I think, first of all, I love what you guys are doing. So, I think it's a great example of this new innovation. It's not obvious until you see it, right? Geographical is huge. So, you know, single instance, global instances, multiple instances, you're seeing global. How do you guys look at that global equation? Because as companies expand their clouds into geos, and then ultimately, you know, it's obviously continent, region and locales. You're going to have geographic issues. So, this is an extension of your network cloud? >> Amir: It is the extension of the network cloud because if you look at this hyperscalers, they're sitting pretty much everywhere in the globe. So, wherever their regions are, the beauty of building a supercloud is that you can by definition, be available in those regions. It literally takes a day or two of testing for our stack to run in those regions, to make sure there are no nuances that we run into, you know, for that region. The moment we bring it up in that region, all customers can onboard into that solution. So literally, what used to take months or years to build a global infrastructure, now, you can configure it in 10 minutes basically, and bring it up in less than one hour. Since when did we see any solution- >> And by the way, >> that can come up with. >> when the edge comes out too, you're going to start to see more clouds get bolted on. >> Exactly. And you can expand to the edge of the network. That's why we call cloud the new edge, right? >> John: Yeah, it is. Now, I think you guys got a good solutions, network clouds, superclouds, good. So the question on the premise side, so I get the cloud play. It's very cool. You can expand out. It's a nice layer. I'm sure you manage the SLAs between latency and all kinds of things. Knowing when not to do things. Physics or physics. Okay. Now, you've got the on-premise. What's the on-premise equation look like? >> So on-premise, the kind of customers, we are working with large enterprises, mid-size enterprises. So they have on-prem networks, they have deployed, in many cases, they have deployed SD-WAN. 
In many cases, they have MPLS. They have data centers also. And a lot of these companies are, you know, moving the applications from the data center into the cloud. But we still have large enterprise- >> But for you guys, you can sit there too with non server or is it a box or what is it? >> It's a software stack, right? So, we are a software company. >> Okay, so no box. >> No box. >> Okay, got it. >> No box. >> It's even better. So, we can connect any, as I mentioned, any endpoint, whether it's data centers. So, what happens is usually these enterprises from the data centers- >> John: It's a cloud endpoint for you. >> Cloud endpoint for us. And they need highspeed connectivity into the cloud. And our network cloud is sitting inside the or supercloud is sitting inside the cloud. So we need highspeed connectivity from the data centers. This is like multi-gig type of connectivity. So we enable that connectivity as a service. And as Amir was saying, you are able to bring it up in minutes, pretty much. >> John: Well, you guys have a great handle on supercloud. I really appreciate you guys coming on. I have to ask you guys, since you have so much experience in the industry, multiple inflection points you've guys lived through and we're all old, and we can remember those glory days. What's the big deal going on right now? Because you can connect the dots and you can imagine, okay, like a Lambda function spinning up some connectivity. I need instant access to a new route, throw some, I need to send compute to an edge point for process data. A lot of these kind of ad hoc services are going to start flying around, which used to be manually configured as you guys remember. >> Amir: And that's been the problem, right? The shadow IT, that was the biggest problem in the enterprise environment. So that's what we are trying to get the customers away from. Cloud teams came in, individuals or small groups of people spun up instances in the cloud. It was completely disconnected from the on-premise environment or the existing IT environment that the customer had. So, how do you bring it together? And that's what we are trying to solve for, right? At a large scale, in a carrier cloud center (indistinct). >> What do you call that? Shift right or shift left? Shift left is in the cloud native world security. >> Amir: Yes. >> Networking and security, the two hottest areas. What are you shifting? Up or down? I mean, the network's moving up the stack. I mean, you're seeing the run times at Kubernetes later' >> Amir: Right, right. It's true we're end-to-end virtualization. So you have plumbing, which is the physical infrastructure. Then on top of that, now for the first time, you have true end-to-end virtualization, which the cloud-like constructs are providing to us. We tried to virtualize the routers, we try to virtualize instances at the server level. Now, we are bringing it all together in a truly end-to-end virtualized manner to connect any endpoint anywhere across the globe. Whether it's on-premise, home, multiple clouds, or SaaS type environments. >> Yeah. If you talk about the technical benefits beyond virtualizations, you kind of see in virtualization be abstracted away. So you got end-to-end virtualization, but you don't need to know virtualization to take advantage of it. >> Exactly. Exactly. >> What are some of the tech involved where, what's the trend around on top of virtual? What's the easy button for that? 
>> So there are many, many use cases from the customers and they're, you know, some of those use cases, they used to deliver out of their data centers before. So now, because you, know, it takes a long time to spend something up in the data center and stuff. So the trend is and what enterprises are looking for is agility. And to achieve that agility, they are moving those services or those use cases into the cloud. So another technical benefit of like something like a supercloud and what we are doing is we allow customers to, you know, move their services from existing data centers into the cloud as well. And I'll give you some examples. You know, these enterprises have, you know, tons of partners. They provide connectivity to their partners, to select resources. It used to happen inside the data center. You would bring in connectivity into the data center and apply like tons of ACLs and whatnot to make sure that you are able to only connect. And now those use cases are, they need to be enabled inside the cloud. And the customer's customers are also, it's not just coming from the on-prem, they're coming from the cloud as well. So, if they're coming from the cloud as well as from on-prem, so you need like an infrastructure like supercloud, which is sitting inside the cloud and is able to handle all these use cases. So all of these use cases have to be, so that requires like moving those services from the data center into the cloud or into the supercloud. So, they're, oh, as we started building this service over the last four years, we have come across so many use cases. And to deliver those use cases, you have to have a platform. So you have to have your own platform because otherwise you are depending on somebody else's, you know, capabilities. And every time their capabilities change, you have to change. >> John: I'm glad you brought up the platform 'cause I want to get your both reaction to this. So Bob Muglia just said on theCUBE here at Supercloud, that supercloud is a platform that provides programmatically consistent services hosted on heterogeneous cloud providers. So the question is, is supercloud a platform or an architecture in your view? >> That's an interesting view on things, you know? I mean, if you think of it, you have to design or architect a solution before we turn it into a platform. >> John: It's a trick question actually. >> So it's a, you know, so we look at it as that you have to have an architectural approach end to end, right? And then you build a solution based on that approach. So, I don't think that they are mutually exclusive. I think they go hand in hand. It's an architecture that you turn into a solution and provide that agility and high availability and disaster recovery capability that it built into that. >> It's interesting that these definitions might be actually redefined with this new configuration. >> Amir: Yes. >> Because architecture and platform used to mean something, like, aight here's a platform, you buy this platform. >> And then you architecture solution. >> Architect it via vendor. >> Right, right, right. >> Okay. And they have to deal with that architecture in the place of multiple superclouds. If you have too many stove pipes, then what's the purpose of supercloud? >> Right, right, right. And because, you know, historically, you built a router and you sold it to the customer. And the poor customer was supposed to install it all, you know, and interconnect all those things. 
And if you have 40, 50,000 router network, which we saw in our lifetime, 'cause there used to be many more branches when we were growing up in the networking industry, right? You had to create hierarchy and all kinds of things to figure out how to solve that problem. We are no longer living in that world anymore. You cannot deploy individual virtual instances. And that's what approach a lot of people are taking, which is a pure overly network. You cannot take that approach anymore. You have to evolve the architecture and then build the solution based on that architecture so that it becomes a platform which is readily available, highly scalable, and available. And at the same time, it's very, very easy to deploy. It's a SaaS type solution, right? >> So you're saying, do the architecture to get the solution for the platform that the customer has. >> Amir: Yes. >> They're not buying a platform, they end up with a platform- >> With the platform. >> as a result of Supercloud path. All right. So that's what's, so you mentioned, that's a great point. I want to double click on what you just said. 'Cause I like that what you said. What's the deployment strategy in your mind for supercloud? I'm an architect. I'm at an enterprise in the Midwest. I'm an insurance company, got some cloud action going on. I'm mostly on-premise. I've got the mandate to transform the company. We have apps. We'll be fully transformed in five years. What's my strategy? What do I do? >> Amir: The resources. >> What's the deployment strategy? Single global instance, code in every region, on every cloud? >> It needs to be a solution which is available as a SaaS service, right? So from the customer's perspective, they are onboarding into the supercloud. And then the supercloud is allowing them to do whatever they used to do, you know, historically and in the new world, right? That needs to come together. And that's what we have built is that, we have brought everything together in a way that what used to take months or years, and now taking an hour or two hours, and then people test it for a week or so and deploy it in production. >> I want to bring up something we were talking about before we were on camera about the TCP/IP, the OSI model. That was a concept that destroyed the proprietary narcissist. Work operating systems of the mini computers, which brought in an era of tech prosperity for generations. TCP/IP was kind of the magical moment that allowed for that kind of super networking connection. Inter networking is what's called as a category. It feels like something's going on here with supercloud. The way you describe it, it feels like there's this unification idea. Like the reality is we've got multiple stuff sitting around by default, you either clean it up or get rid of it, right? Or it's almost a, it's either a nuance, a new nuisance or chaos. >> Yeah. And we live in the new world now. We don't have the luxury of time. So we need to move as fast as possible to solve the business problems. And that's what we are running into. If we don't have automated solutions which scale, which solve our problems, then it's going to be a problem. And that's why SaaS is so important in today's world. Why should we have to deploy the network piecemeal? Why can't we have a solution? We solve our problem as we move forward and we accomplish what we need to accomplish and move forward. >> And we don't really need standards here, dude. It's not that we need a standards body if you have unification. 
>> So because things move so fast, there's no time to create a standards body. And that's why you see companies like ours popping up, which are trying to create a common infrastructure across all clouds. Otherwise, if we went the standardization path, it may take long. Eventually, we should be going in that direction. But we don't have the luxury of time. That's what I was trying to get to. >> Well, what's interesting is that to your point about standards and ratification, what ratifies a de facto anything? In the old days there were some technical bodies involved, but here, I think developers drive everything. So if you look at the developers and how they're voting with their code, they're instantly, organically defining everything as a collective intelligence. >> And just like you're putting out the paper and making it available, everybody's contributing to that. That's why you need to have APIs and Terraform-type constructs, which are available so that the customers can continue to improve upon that. And that's NetDevOps, right? So that's what you need to have. >> What was once sacrilege, just sayin', in business school, back in the days when I got my business degree after my CS degree, was, you know, no one wants to have a better mousetrap, a bad business model with a better mousetrap. In this case, the better mousetrap, the better solution, actually could be that thing. >> It is that thing. >> I mean, that can trigger, tip over the industry. >> And that's where we are seeing our customers. You know, I mean, we have some publicly referenceable customers like Coke or Warner Music Group or, you know, multiple others, and Chart Industries. The way we are solving the problem, they have some of the largest environments in the industry from the cloud perspective. And their whole network infrastructure is running on the Alkira infrastructure. And they're able to adopt new clouds within days rather than waiting for months to architect and then deploy and then figure out how to manage it and operate it. It's available as a service. >> John: And we've heard from your customer, Warner, they were just on the program. >> Amir: Yes. Okay, okay. >> So they're building a supercloud. So superclouds aren't just for tech companies. >> Amir: No. >> You guys build a supercloud for networking. >> Amir: It is. >> But people are building their own superclouds on top of all this new stuff. Talk about that dynamic. >> Healthcare providers, financials, high-tech companies, even startups. One of our startup customers, Tekion, right? They have these dealerships that they provide sales and support services to across the globe. And for them to be able to onboard those dealerships, it is 80% less time to production. That is real money, right? So, maybe Atif can give you a lot more examples of customers who are deploying.
So, they don't have a choice but to go with something similar where you can, you know, build your network on demand and bring up your network as quickly as possible to meet all those use cases. So, I'll give you an example. >> John: So the demand's high for what you guys do. >> Demand is very high because the cloud teams have- >> John: Yeah. They're going fast. >> They're going fast and there's no stopping. And then network teams, they have to keep up with them. And you cannot keep deploying, you know, networks the way you used to deploy back in the day. And as far as the use cases are concerned, there are so many use cases which our customers are using our platform for. One of the use cases, I'll give you an example of these financial customers. Some of the financial customers, they have their customers who they provide data, like stock exchanges, that provide like market data information to their customers out of data centers part. But now, their customers are moving into the cloud as well. So they need to come in from the cloud. So when they're coming in from the cloud, you cannot be giving them data from your data center because that takes time, and your hair pinning everything back. >> Moving data is like moving, moving money, someone said. >> Exactly. >> Exactly. And the other thing is like you have to optimize your traffic flows in the cloud as well because every time you leave the cloud, you get charged a lot. So, you don't want to leave the cloud unless you have to leave the cloud, your traffic. So, you have to come up or use a service which allows you to optimize all those traffic flows as well, you know? >> My final question to you guys, first of all, thanks for coming on Supercloud Program. Really appreciate it. Congratulations on your success. And you guys have a great positioning and I'm a big fan. And I have to ask, you guys are agile, nimble startup, smart on the cutting edge. Supercloud concept seems to resonate with people who are kind of on the front range of this major wave. While all the incumbents like Cisco, Microsoft, even AWS, they're like, I think they're looking at it, like what is that? I think it's coming up really fast, this trend. Because I know people talk about multi-cloud, I get that. But like, this whole supercloud is not just SaaS, it's more going on there. What do you think is going on between the folks who get it, supercloud, get the concept, and some are who are scratching their heads, whether it's the Ciscos or someone, like I don't get it. Why is supercloud important for the folks that aren't really seeing it? >> So first of all, I mean, the customers, what we saw about six months, 12 months ago, were a little slower to adopt the supercloud kind of concept. And there were leading edge customers who were coming and adopting it. Now, all of a sudden, over the last six to nine months, we've seen a flurry of customers coming in and they are from all disciplines or all very diverse set of customers. And they're starting to see the value of that because of the practical implications of what they're doing. You know, these shadow IT type environments are no longer working and there's a lot of pressure from the management to move faster. And then that's where they're coming in. And perhaps, Atif, if you can give a few examples of. >> Yeah. And I'll also just add to your point earlier about the network needing to be there 'cause the cloud teams are like, let's go faster. And the network's always been slow because, but now, it's been almost turbocharged. 
>> Atif: Yeah. Yeah, exactly. And as I said, like there was no choice here. You had to move in this industry. And the other thing I would add a little bit is now if you look at all these enterprises, most of their traffic is from, even from which is coming from the on-prem, it's going to the cloud SaaS applications or public clouds. And it's more than 50% of traffic, which is leaving your, you know, what you used to call, your network or the private network. So now it's like, you know, before it used to just connect sites to data centers and sites together. Now, it's a cloud as well as the SaaS application. So it's either internet bound or the public cloud bound. So now you have to build a network quickly, which caters to all these use cases. And that's where like something- >> And you guys, your solution to me is you eliminate all that work for the customer. Now, they can treat the cloud like a bag of Legos. And do their thing. Well, I oversimplify. Well, you know I'm talking about. >> Atif: Right, exactly. >> And to answer your question earlier about what about the big companies coming in and, you know, now they slow to adopt? And, you know, what normally happens is when Cisco came up, right? There used to be 16 different protocols suites. And then we finally settled on TCP/IP and DECnet or AppleTalk or X&S or, you know, you name it, right? Those companies did not adapt to the networking the way it was supposed to be done. And guess what happened, right? So if the companies in the networking space do not adopt this new concept or new way of doing things, I think some of them will become extinct over time. >> Well, I think the force and function too is the cloud teams as well. So you got two evolutions. You got architectural relevance. That's real as impact. >> It's very important. >> Cost, speed. >> And I look at it as a very similar disruption to what Cisco's the world, very early days did to, you know, bring the networking out, right? And it became the internet. But now we are going through the cloud. It's the cloud era, right? How does the cloud evolve over the next 10, 15, 20 years? Everything's is going to be offered as a service, right? So slowly data centers go away, the network becomes a plumbing thing. Very, you know, simple to deploy. And everything on top of that is virtualized in the cloud-like manners. >> And that makes the networks hardened and more secure. >> More secure. >> It's a great way to be secure. You remember the glory days, we'll go back 15 years. The Cisco conversation was, we got to move up to stack. All the manager would fight each other. Now, what does that actually mean? Stay where we are. Stay in your lane. This is kind of like the network's version of moving up the stack because not so much up the stack, but the cloud is everywhere. It's almost horizontally scaled. >> It's extending into the on-premise. It is already moving towards the edge, right? So, you will see a lot- >> So, programmability is a big program. So you guys are hitting programmability, compatibility, getting people into an environment they're comfortable operating. So the Ops people love it. >> Exactly. >> Spans the clouds to a level of SLA management. It might not be perfectly spanning applications, but you can actually know latencies between clouds, measure that. And then so you're basically managing your network now as the overall infrastructure. >> Right. And it needs to be a very intelligent infrastructure going forward, right? 
Because customers do not want to wait to be able to troubleshoot. They don't want to wait to deploy something, right? So, there needs to be a level of automation. >> Okay. So the question for you guys both, that we'll end on, is what is the enablement? Because you guys are a disruptive enabler, right? You create this fabric. You're going to enable companies to do stuff. What are some of the things that you see and your customers might be seeing as things that they're going to do as a result of having this enablement? So what are some of those things? >> Amir: Atif, perhaps you can talk through some of the customer experience on that. >> It's agility. And we are allowing these customers to move very, very quickly and build these networks which meet all these requirements inside the cloud. Because as Amir was saying, in the cloud era, networking is changing. And if you look at, you know, going back to your comment about the existing networking vendors, some of them still think that, you know, just connecting to the cloud using some concepts like Cloud OnRamp is cloud networking, but it's changing now. >> John: 'Cause there's apps that are depending upon it. >> Exactly. And it's all distributed. Like IT infrastructure, as I said earlier, is all distributed. And at the end of the day, you have to make sure that wherever your user is, wherever your app is, you are able to connect them securely. >> Historically, it used to be about building a router bigger and bigger and bigger and bigger, you know, and then interconnecting those routers. Now, it's all about horizontal scale. You don't need to build big, you need to scale it, right? And that's what cloud brings to the customer. >> It's a cultural change for Cisco and Juniper because they have to understand that they still could be in the game and still win. >> Exactly. >> The question I have for you, what are your customers telling you? What's some of the anecdotal, like, 'cause you guys have a good solution, is it, "Oh my god, you guys saved my butt?" Or what are some of the commentary that you hear from the customers in terms of praise and glory for your solution? >> Oh, some even say, when we do our demo and stuff, they say it's too hard to believe. >> Believe. >> Like, too hard. It's hard, you know, it's- >> I don't believe you. They're skeptics. >> I don't believe you, that, because now you're able to bring up a global network within minutes. With networking services, like let's say you have APAC, you know, on-prem users, cloud also there, cloud here, users here, you can bring up a global network with full routed connectivity between all these endpoints with security services. You can bring up like a firewall from a third party or our services in the middle. This is a matter of minutes now. And this is all high-speed connectivity with SLAs. Imagine, like, before, connecting, you know, Singapore to U.S. East or Hong Kong to Frankfurt, you know, if you were putting your infrastructure in colos like Equinix, you would have to go, you know, figure out like, how am I going to- >> Lease a line in, connect to it? Yeah. A lot of hassles. >> If you had to put like firewalls in the middle, segmentation, you had to, you know, isolate different entities. >> That's called heavy lifting. >> So what you're seeing is, you know, it's like a customer comes in, there's a disbelief, can you really do that? And then they try it out, they go, "Wow, this works." Right? It's deployed in a small environment.
And then all of a sudden they start taking off, right? And literally we have seen customers go from few thousand dollars a month or year type deployments to multi-million dollars a year type deployments in very, very short amount of time, in a few months. >> And you guys are pay as you go? >> Pay as you go. >> Pay as go usage cloud-based compatibility. >> Exactly. And it's amazing once they get to deploy the solution. >> What's the variable on the cost? >> On the cost? >> Is it traffic or is it. >> It's multiple different things. It's packaged into the overall solution. And as a matter of fact, we end up saving a lot of money to the customers. And not only in one way, in multiple different ways. And we do a complete TOI analysis for the customers. So it's bandwidth, it's number of connections, it's the amount of compute power that we are using. >> John: Similar things that they're used to. >> Just like the cloud constructs. Yeah. >> All right. Networking supercloud. Great. Congratulations. >> Thank you so much. >> Thanks for coming on Supercloud. >> Atif: Thank you. >> And looking forward to seeing more of the demand. Translate, instant networking. I'm sure it's going to be huge with the edge exploding. >> Oh yeah, yeah, yeah, yeah. >> Congratulations. >> Thank you so much. >> Thank you so much. >> Okay. So this is Supercloud 2 event here in Palo Alto. I'm John Furrier. The network Supercloud is here. Checkout Alkira. I'm John Furry, the host. Thanks for watching. (lively music)
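To make the "global network in minutes" workflow described above a bit more tangible, here is a minimal sketch of what driving a network-as-a-service platform through an API could look like. Every URL, field name, and credential below is a hypothetical illustration of the pattern, not Alkira's actual interface.

```python
# Hypothetical sketch of provisioning a "network cloud" through a REST API.
# Every URL, field, and credential here is illustrative only, not a real API.

import requests

BASE = "https://api.example-naas.com/v1"        # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}    # hypothetical credential

def create(path: str, payload: dict) -> dict:
    resp = requests.post(f"{BASE}/{path}", json=payload, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()

# 1. Bring up a virtual exchange point (the "supercloud" node) in a region.
cxp = create("exchange-points", {"region": "us-east", "capacity_gbps": 10})

# 2. Attach a cloud connector and an on-prem connector to it.
create("connectors", {"cxp": cxp["id"], "type": "aws-vpc", "vpc_id": "vpc-123"})
create("connectors", {"cxp": cxp["id"], "type": "ipsec", "site": "dc-chicago"})

# 3. Insert a firewall service in the path and apply a segment policy.
create("services", {"cxp": cxp["id"], "kind": "firewall", "vendor": "example-fw"})
create("policies", {"segment": "prod", "allow": ["dc-chicago -> aws-vpc"]})
```

The design point is that connectivity, inserted services, and policy are all declared through one control plane, which is what lets the platform lifecycle-manage and autoscale them like any other cloud service.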
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Microsoft | ORGANIZATION | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
Amir | PERSON | 0.99+ |
Bob Muglia | PERSON | 0.99+ |
Amir Khan | PERSON | 0.99+ |
Atif Khan | PERSON | 0.99+ |
John Furry | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
2018 | DATE | 0.99+ |
Coke | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Warner Music Group | ORGANIZATION | 0.99+ |
Atif | PERSON | 0.99+ |
Ciscos | ORGANIZATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
an hour | QUANTITY | 0.99+ |
Alkira | ORGANIZATION | 0.99+ |
Frankfurt | LOCATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Juniper | ORGANIZATION | 0.99+ |
Singapore | LOCATION | 0.99+ |
a day | QUANTITY | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
U.S. East | LOCATION | 0.99+ |
Palo Alto | ORGANIZATION | 0.99+ |
16 different protocols | QUANTITY | 0.99+ |
Junipers | ORGANIZATION | 0.99+ |
Checkpoint | ORGANIZATION | 0.99+ |
Hong Kong | LOCATION | 0.99+ |
10 minutes | QUANTITY | 0.99+ |
less than one hour | QUANTITY | 0.99+ |
Viptela | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
five years | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
first time | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
more than 50% | QUANTITY | 0.99+ |
one way | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
Supercloud | ORGANIZATION | 0.98+ |
Supercloud 2 | EVENT | 0.98+ |
Lambda | TITLE | 0.98+ |
One way | QUANTITY | 0.98+ |
CLI | TITLE | 0.98+ |
supercloud | ORGANIZATION | 0.98+ |
12 months ago | DATE | 0.98+ |
Legos | ORGANIZATION | 0.98+ |
APAC | ORGANIZATION | 0.98+ |
one | QUANTITY | 0.98+ |
Discussion about Walmart's Approach | Supercloud2
(upbeat electronic music) >> Okay, welcome back to Supercloud 2, live here in Palo Alto. I'm John Furrier, with Dave Vellante. Again, all day wall-to-wall coverage, just had a great interview with Walmart, we've got a Next interview coming up, you're going to hear from Bob Muglia and Tristan Handy, two experts, both experienced entrepreneurs, executives in technology. We're here to break down what just happened with Walmart, and what's coming up with George Gilbert, former colleague, Wikibon analyst, Gartner Analyst, and now independent investor and expert. George, great to see you, I know you're following this space. Like you read about it, remember the first days when Dataverse came out, we were talking about them coming out of Berkeley? >> Dave: Snowflake. >> John: Snowflake. >> Dave: Snowflake In the early days. >> We, collectively, have been chronicling the data movement since 2010, you were part of our team, now you've got your nose to the grindstone, you're seeing the next wave. What's this all about? Walmart building their own super cloud, we got Bob Muglia talking about how these next wave of apps are coming. What are the super apps? What's the super cloud to you? >> Well, this key's off Dave's really interesting questions to Walmart, which was like, how are they building their supercloud? 'Cause it makes a concrete example. But what was most interesting about his description of the Walmart WCMP, I forgot what it stood for. >> Dave: Walmart Cloud Native Platform. >> Walmart, okay. He was describing where the logic could run in these stateless containers, and maybe eventually serverless functions. But that's just it, and that's the paradigm of microservices, where the logic is in this stateless thing, where you can shoot it, or it fails, and you can spin up another one, and you've lost nothing. >> That was their triplet model. >> Yeah, in fact, and that was what they were trying to move to, where these things move fluidly between data centers. >> But there's a but, right? Which is they're all stateless apps in the cloud. >> George: Yeah. >> And all their stateful apps are on-prem and VMs. >> Or the stateful part of the apps are in VMs. >> Okay. >> And so if they really want to lift their super cloud layer off of this different provider's infrastructure, they're going to need a much more advanced software platform that manages data. And that goes to the -- >> Muglia and Handy, that you and I did, that's coming up next. So the big takeaway there, George, was, I'll set it up and you can chime in, a new breed of data apps is emerging, and this highly decentralized infrastructure. And Tristan Handy of DBT Labs has a sort of a solution to begin the journey today, Muglia is working on something that's way out there, describe what you learned from it. >> Okay. So to talk about what the new data apps are, and then the platform to run them, I go back to the using what will probably be seen as one of the first data app examples, was Uber, where you're describing entities in the real world, riders, drivers, routes, city, like a city plan, these are all defined by data. And the data is described in a structure called a knowledge graph, for lack of a, no one's come up with a better term. But that means the tough, the stuff that Jack built, which was all stateless and sits above cloud vendors' infrastructure, it needs an entirely different type of software that's much, much harder to build. 
And the way Bob described it is, you're going to need an entirely new data management infrastructure to handle this. But where, you know, we had this really colorful interview where it was like Rock 'Em Sock 'Em, but they weren't really that much in opposition to each other, because Tristan is going to define this layer, starting with like business intelligence metrics, where you're defining things like bookings, billings, and revenue, in business terms, not in SQL terms -- >> Well, business terms, if I can interrupt, he said the one thing we haven't figured out how to APIify is KPIs that sit inside of a data warehouse, and that's essentially what he's doing. >> George: That's what he's doing, yes. >> Right. And so then you can now expose those APIs, those KPIs, that sit inside of a data warehouse, or a data lake, a data store, whatever, through APIs. >> George: And the difference -- >> So what does that do for you? >> Okay, so all of a sudden, instead of working at technical data terms, where you're dealing with tables and columns and rows, you're dealing instead with business entities, using the Uber example of drivers, riders, routes, you know, ETA prices. But you can define, DBT will be able to define those progressively in richer terms, today they're just doing things like bookings, billings, and revenue. But Bob's point was, today, the data warehouse that actually runs that stuff, whereas DBT defines it, the data warehouse that runs it, you can't do it with relational technology >> Dave: Relational totality, cashing architecture. >> SQL, you can't -- >> SQL caching architectures in memory, you can't do it, you've got to rethink down to the way the data lake is laid out on the disk or cache. Which by the way, Thomas Hazel, who's speaking later, he's the chief scientist and founder at Chaos Search, he says, "I've actually done this," basically leave it in an S3 bucket, and I'm going to query it, you know, with no caching. >> All right, so what I hear you saying then, tell me if I got this right, there are some some things that are inadequate in today's world, that's not compatible with the Supercloud wave. >> Yeah. >> Specifically how you're using storage, and data, and stateful. >> Yes. >> And then the software that makes it run, is that what you're saying? >> George: Yeah. >> There's one other thing you mentioned to me, it's like, when you're using a CRM system, a human is inputting data. >> George: Nothing happens till the human does something. >> Right, nothing happens until that data entry occurs. What you're talking about is a world that self forms, polling data from the transaction system, or the ERP system, and then builds a plan without human intervention. >> Yeah. Something in the real world happens, where the user says, "I want a ride." And then the software goes out and says, "Okay, we got to match a driver to the rider, we got to calculate how long it takes to get there, how long to deliver 'em." That's not driven by a form, other than the first person hitting a button and saying, "I want a ride." All the other stuff happens autonomously, driven by data and analytics. >> But my question was different, Dave, so I want to get specific, because this is where the startups are going to come in, this is the disruption. Snowflake is a data warehouse that's in the cloud, they call it a data cloud, they refactored it, they did it differently, the success, we all know it looks like. 
These areas where it's inadequate for the future are areas that'll probably be either disrupted, or refactored. What is that? >> That's what Muglia's contention is, that the DBT can start adding that layer where you define these business entities, they're like mini digital twins, you can define them, but the data warehouse isn't strong enough to actually manage and run them. And Muglia is behind a company that is rethinking the database, really in a fundamental way that hasn't been done in 40 or 50 years. It's the first, in his contention, the first real rethink of database technology in a fundamental way since the rise of the relational database 50 years ago. >> And I think you admit it's a real Hail Mary, I mean it's quite a long shot right? >> George: Yes. >> Huge potential. >> But they're pretty far along. >> Well, we've been talking on theCUBE for 12 years, and what, 10 years going to AWS Reinvent, Dave, that no one database will rule the world, Amazon kind of showed that with them. What's different, is it databases are changing, or you can have multiple databases, or? >> It's a good question. And the reason we've had multiple different types of databases, each one specialized for a different type of workload, but actually what Muglia is behind is a new engine that would essentially, you'll never get rid of the data warehouse, or the equivalent engine in like a Databricks datalake house, but it's a new engine that manages the thing that describes all the data and holds it together, and that's the new application platform. >> George, we have one minute left, I want to get real quick thought, you're an investor, and we know your history, and the folks watching, George's got a deep pedigree in investment data, and we can testify against that. If you're going to invest in a company right now, if you're a customer, I got to make a bet, what does success look like for me, what do I want walking through my door, and what do I want to send out? What companies do I want to look at? What's the kind of of vendor do I want to evaluate? Which ones do I want to send home? >> Well, the first thing a customer really has to do when they're thinking about next gen applications, all the people have told you guys, "we got to get our data in order," getting that data in order means building an integrated view of all your data landscape, which is data coming out of all your applications. It starts with the data model, so, today, you basically extract data from all your operational systems, put it in this one giant, central place, like a warehouse or lake house, but eventually you want this, whether you call it a fabric or a mesh, it's all the data that describes how everything hangs together as in one big knowledge graph. There's different ways to implement that. And that's the most critical thing, 'cause that describes your Uber landscape, your Uber platform. >> That's going to power the digital transformation, which will power the business transformation, which powers the business model, which allows the builders to build -- >> Yes. >> Coders to code. That's Supercloud application. >> Yeah. >> George, great stuff. Next interview you're going to see right here is Bob Muglia and Tristan Handy, they're going to unpack this new wave. Great segment, really worth unpacking and reading between the lines with George, and Dave Vellante, and those two great guests. And then we'll come back here for the studio for more of the live coverage of Supercloud 2. Thanks for watching. (upbeat electronic music)
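To make George's point above a little more concrete -- entities like drivers, riders, and trips defined by data rather than by forms, with metrics such as bookings expressed in business terms and exposed through an API instead of raw SQL -- here is a minimal sketch in Python. Everything in it is illustrative: the class names, the in-memory graph, and the `bookings` metric are assumptions made up for this example, not code from Walmart, DBT Labs, or any platform mentioned in the conversation.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical "business entities" -- the kind of data-defined objects
# (riders, drivers, trips) used in the Uber example above.
@dataclass
class Driver:
    driver_id: str
    city: str

@dataclass
class Trip:
    trip_id: str
    driver_id: str
    rider_id: str
    fare_usd: float
    completed: bool

# A tiny in-memory stand-in for a knowledge graph: entities plus the
# relationships between them, instead of raw tables, columns, and rows.
class EntityGraph:
    def __init__(self) -> None:
        self.drivers: Dict[str, Driver] = {}
        self.trips: List[Trip] = []

    def add_driver(self, driver: Driver) -> None:
        self.drivers[driver.driver_id] = driver

    def add_trip(self, trip: Trip) -> None:
        self.trips.append(trip)

    # A metric defined in business terms ("bookings"), not in SQL terms.
    # In a real stack this definition would live in a semantic/metrics
    # layer and compile down to warehouse queries.
    def bookings(self, city: str) -> float:
        return sum(
            t.fare_usd
            for t in self.trips
            if t.completed and self.drivers[t.driver_id].city == city
        )

if __name__ == "__main__":
    graph = EntityGraph()
    graph.add_driver(Driver("d1", "Palo Alto"))
    graph.add_trip(Trip("t1", "d1", "r1", 23.50, True))
    graph.add_trip(Trip("t2", "d1", "r2", 12.00, False))
    # An application (or API endpoint) asks for the KPI by its business
    # name, never touching the underlying storage layout directly.
    print(graph.bookings("Palo Alto"))  # 23.5
```

The only point of the sketch is the shape of the abstraction: the caller asks for a KPI by its business name, and the definition of that KPI lives in one governed place rather than being re-derived from tables and columns by every consumer.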
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
George | PERSON | 0.99+ |
Bob Muglia | PERSON | 0.99+ |
Tristan Handy | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Bob | PERSON | 0.99+ |
Thomas Hazel | PERSON | 0.99+ |
George Gilbert | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Walmart | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Chaos Search | ORGANIZATION | 0.99+ |
Jack | PERSON | 0.99+ |
Tristan | PERSON | 0.99+ |
12 years | QUANTITY | 0.99+ |
Berkeley | LOCATION | 0.99+ |
Uber | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
DBT Labs | ORGANIZATION | 0.99+ |
10 years | QUANTITY | 0.99+ |
two experts | QUANTITY | 0.99+ |
Supercloud 2 | TITLE | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
Muglia | ORGANIZATION | 0.99+ |
one minute | QUANTITY | 0.99+ |
40 | QUANTITY | 0.99+ |
two great guests | QUANTITY | 0.98+ |
Wikibon | ORGANIZATION | 0.98+ |
50 years | QUANTITY | 0.98+ |
John | PERSON | 0.98+ |
Rock 'Em Sock 'Em | TITLE | 0.98+ |
today | DATE | 0.98+ |
first person | QUANTITY | 0.98+ |
Databricks | ORGANIZATION | 0.98+ |
S3 | COMMERCIAL_ITEM | 0.97+ |
50 years ago | DATE | 0.97+ |
2010 | DATE | 0.97+ |
Mary | PERSON | 0.96+ |
first days | QUANTITY | 0.96+ |
SQL | TITLE | 0.96+ |
one | QUANTITY | 0.95+ |
Supercloud wave | EVENT | 0.95+ |
each one | QUANTITY | 0.93+ |
DBT | ORGANIZATION | 0.91+ |
Supercloud | TITLE | 0.91+ |
Supercloud2 | TITLE | 0.91+ |
Supercloud 2 | ORGANIZATION | 0.89+ |
Snowflake | TITLE | 0.86+ |
Dataverse | ORGANIZATION | 0.83+ |
triplet | QUANTITY | 0.78+ |
Brian Stevens, Neural Magic | Cube Conversation
>> John: Hello and welcome to this cube conversation here in Palo Alto, California. I'm John Furrier, host of theCUBE. We got a great conversation on making machine learning easier and more affordable in an era where everybody wants more machine learning and AI. We're featuring Neural Magic with the CEO is also Cube alumni, Brian Steve. CEO, Great to see you Brian. Thanks for coming on this cube conversation. Talk about machine learning. >> Brian: Hey John, happy to be here again. >> John: What a buzz that's going on right now? Machine learning, one of the hottest topics, AI front and center, kind of going mainstream. We're seeing the success of the, of the kind of NextGen capabilities in the enterprise and in apps. It's a really exciting time. So perfect timing. Great, great to have this conversation. Let's start with taking a minute to explain what you guys are doing over there at Neural Magic. I know there's some history there, neural networks, MIT. But the, the convergence of what's going on, this big wave hitting, it's an exciting time for you guys. Take a minute to explain the company and your mission. >> Brian: Sure, sure, sure. So, as you said, the company's Neural Magic and spun out at MIT four plus years ago, along with some people and, and some intellectual property. And you summarize it better than I can cause you said, we're just trying to make, you know, AI that much easier. And so, but like another level of specificity around it is. You know, in the world you have a lot of like data scientists really focusing on making AI work for whatever their use case is. And then the next phase of that, then they're looking at optimizing the models that they built. And then it's not good enough just to work on models. You got to put 'em into production. So, what we do is we make it easier to optimize the models that have been developed and trained and then trying to make it super simple when it comes time to deploying those in production and managing them. >> Brian: You know, we've seen this movie before with the cloud. You start to see abstractions come out. Data science we saw like was like the, the secret art of being like a data scientist now democratization of data. You're kind of seeing a similar wave with machine learning models, foundational models, some call it developers are getting involved. Model complexity's still there, but, but it's getting easier. There's almost like the democratization happening. You got complexity, you got deployment, it's challenges, cost, you got developers involved. So it's like how do you grow it? How do you get more horsepower? And then how do you make developers productive, right? So like, this seems to be the thread. So, so where, where do you see this going? Because there's going to be a massive demand for, I want to do more with my machine learning. But what's the data source? What's the formatting? This kind of a stack develop, what, what are you guys doing to address this? Can you take us through and demystify this, this wave that's hitting, that everyone's seeing? >> Brian: Yeah. Now like you said, like, you know, the democratization of all of it. And that brings me all the way back to like the roots of open source, right? When you think about like, like back in the day you had to build your own tech stack yourself. A lot of people probably probably don't remember that. And then you went, you're building, you're always starting on a body of code or a module that was out there with open source. 
And I think that's what I equate to where AI has gotten to with what you were talking about the foundational models that didn't really exist years ago. So you really were like putting the layers of your models together in the formulas and it was a lot of heavy lifting. And so there was so much time spent on development. With far too few success cases, you know, to get into production to solve like a business stereo technical need. But as these, what's happening is as these models are becoming foundational. It's meaning people don't have to start from scratch. They're actually able to, you know, the avant-garde now is start with existing model that almost does what you want, but then applying your data set to it. So it's, you know, it's really the industry moving forward. And then we, you know, and, and the best thing about it is open source plays a new dimension, but this time, you know, in the, in the realm of AI. And so to us though, like, you know, I've been like, I spent a career focusing on, I think on like the, not just the technical side, but the consumption of the technology and how it's still way too hard for somebody to actually like, operationalize technology that all those vendors throw at them. So I've always been like empathetic the user around like, you know what their job is once you give them great technology. And so it's still too difficult even with the foundational models because what happens is there's really this impedance mismatch between the development of the model and then where, where the model has to live and run and be deployed and the life cycle of the model, if you will. And so what we've done in our research is we've developed techniques to introduce what's known as sparsity into a machine learning model. It's already been developed and trained. And what that sparsity does is that unlocks by making that model so much smaller. So in many cases we can make a model 90 to 95% smaller, even smaller than that in research. So, and, and so by doing that, we do that in a way that preserves all the accuracy out of the foundational model as you talked about. So now all of a sudden you get this much smaller model just as accurate. And then the even more exciting part about it is we developed a software-based engine called Deep Source. And what that, what the Inference Runtime does is takes that now sparsified model and it runs it, but because you sparsified it, it only needs a fraction of the compute that it, that it would've needed otherwise. So what we've done is make these models much faster, much smaller, and then by pairing that with an inference runtime, you now can actually deploy that model anywhere you want on commodity hardware, right? So X 86 in the cloud, X 86 in the data center arm at the edge, it's like this massive unlock that happens because you get the, the state-of-the-art models, but you get 'em, you know, on the IT assets and the commodity infrastructure. That is where all the applications are running today. >> John: I want to get into the inference piece and the deep sparse you mentioned, but I first have to ask, you mentioned open source, Dave and I with some fellow cube alumnis. We're having a chat about, you know, the iPhone and Android moment where you got proprietary versus open source. You got a similar thing happening with some of these machine learning modules where there's a lot of proprietary things happening and there's open source movement is growing. So is there a balance there? Are they all trying to do the same thing? 
Is it more like a chip, you know, silicons involved, all kinds of things going on that are really fascinating from a science. What's your, what's your reaction to that? >> Brian: I think it's like anything that, you know, the way we talk about AI you think had been around for decades, but the reality is it's been some of the deep learning models. When we first, when we first started taking models that the brain team was working on at Google and billing APIs around them on Google Cloud where the first cloud to even have AI services was 2015, 2016. So when you think about it, it's really been what, 6 years since like this thing is even getting lift off. So I think with that, everybody's throwing everything at it. You know, there's tons of funded hardware thrown at specialty for training or inference new companies. There's legacy companies that are getting into like AI now and whether it's a, you know, a CPU company that's now building specialized ASEX for training. There's new tech stacks proprietary software and there's a ton of asset service. So it really is, you know, what's gone from nascent 8 years ago is the wild, wild west out there. So there's a, there's a little bit of everything right now and I think that makes sense because at the early part of any industry it really becomes really specialized. And that's the, you know, showing my age of like, you know, the early pilot of the two thousands, you know, red Hat people weren't running X 86 in enterprise back then and they thought it was a toy and they certainly weren't running open source, but you really, and it made sense that they weren't because it didn't deliver what they needed to at that time. So they needed specialty stacks, they needed expensive, they needed expensive hardware that did what an Oracle database needed to do. They needed proprietary software. But what happens is that commoditizes through both hardware and through open source and the same thing's really just starting with with AI. >> John: Yeah. And I think that's a great point before we to call that out because in any industry timing's everything, right? I mean I remember back in the 80s, late 80s and 90s, AI, you know, stuff was going on and it just wasn't, there wasn't enough horsepower, there wasn't enough tech. >> Brian: Yep. >> John: You mentioned some of the processing. So AI is this industry that has all these experts who have been itch scratching that itch for decades. And now with cloud and custom silicon. The tech fundamental at the lower end of the stack, if you will, on the performance side is significantly more performant. It's there you got more capabilities. >> Brian: Yeah. >> John: Now you're kicking into more software, faster software. So it just seems like we're at a tipping point where finally it's here, like that AI moment or machine learning and now data is, is involved. So this is where organizations I see really jumping in with the CEO mandate. Hey team, make ML work for us. Go figure it out. It's got to be an advantage for us. >> Brian: Yeah. >> John: So now they go, okay boss, we will. So what, what do they do? What's the steps does an enterprise take to get machine learning into their organizations? Cause you know, it's coming down from the boards, you know, how does this work for rob? >> Brian: Yeah. Like the, you know, the, what we're seeing is it's like anything, like it's, whether that was source adoption or whether that was cloud adoption, it always starts usually with one person. 
And increasingly it is the CEO, which realizes they're getting further behind the competition because they're not leaning in, you know, faster. But typically it really comes down to like a really strong practitioner that's inside the organization, right? And, that realizes that the number one goal isn't doing more and just training more models and and necessarily being proprietary about it. It's really around understanding the art of the possible. Something that's grounded in the art of the possible, what, what deep learning can do today and what business outcomes you can deliver, you know, if you can employ. And then there's well proven paths through that. It's just that because of where it's been, it's not that industrialized today. It's very much, you know, you see ML project by ML project is very snowflakey, right? And that was kind of the early days of open source as well. And so, we're just starting to get to the point where it's getting easier, it's getting more industrialized, there's less steps, there's less burdensome on developers, there's less burdensome on, on the deployment side. And we're trying to bring that, that whole last mile by saying, you know what? Deploying deep learning and AI models should be as easy as the as to deploy your application, right? You shouldn't have to take an extra step to deploy an AI model. It shouldn't have to require a new hardware, it shouldn't require a new process, a new DevOps model. It should be as simple as what you're already doing. >> John: What is the best practice for companies to effectively bring an acceptable level of machine learning and performance into their organizations? >> Brian: Yeah, I think like the, the number one start is like what you hinted at before is they, they have to know the use case. They have to, in most cases, you're going to find across every industry you know, that that problem's been tackled by some company, right? And then you have to have the best practice around fine-tuning the models already exist. So fine tuning that existing model. That foundational model on your unique dataset. You, you know, if you are in medical instruments, it's not good enough to identify that it's a medical instrument in the picture. You got to know what type of medical instrument. So there's always a fine tuning step. And so we've created open source tools that make it easy for you to do two things at once. You can fine tune that existing foundational model, whether that's in the language space or whether that's in the vision space. You can fine tune that on your dataset. And at the same time you get an optimized model that comes out the other end. So you get kind of both things. So you, you no longer have to worry about you're, we're freeing you from worrying about the complexity of that transfer learning, if you will. And we're freeing you from worrying about, well where am I going to deploy the model? Where does it need to be? Does it need to be on a device, an edge, a data center, a cloud edge? What kind of hardware is it? Is there enough hardware there? We're liberating you from all of that. Because what you want, what you can count on is there'll always be commodity capability, commodity CPUs where you want to deploy in abundance cause that's where your application is. And so all of a sudden we're just freeing you of that, of that whole step. >> John: Okay. Let's get into deep sparse because you mentioned that earlier. 
What inspired the creation of deep sparse and how does it differ from any other solutions in the market that are out there? >> Brian: Sure. So, so where unique is it? It starts by, by two things. One is what the industry's pretty good at from the optimization side is they're good at like this thing called quantization, which turns like, you know, big numbers into small numbers, lower precision. So a 32 bit representation of a, of AI weight into a bit. And they're good at like cutting out layers, which also takes away accuracy. What we've figured out is to take those, the industry techniques for those that are best practice, but we combined it with unstructured varsity. So by reducing that model by 90 to 95% in size, that's great because it's made it smaller. But we've taken that when it's the deep sparse engine, when you deploy it that looks at that model and says, because it's so much smaller, I no longer have to run the part of the model that's been essentially sparsified. So what that's done is, it's meant that you no longer need a supercomputer to run models because there's not nearly as much math and processing as there was before the model was optimized. So now what happens is, every CPU platform out there has, has an enormous amount of compute because we've sparsified the rest of it away. So you can pick a, you can pick your, your laptop and you have enough compute to run state-of-the-art models. The second thing that, and you need a software engine to do that cause it ignores the parts of the models. It doesn't need to run, which is what like specialized hardware can't do. The second part is it's then turned into a memory efficiency problem. So it's really around just getting memory, getting the models loaded into the cash of the computer and keeping it there. Never having to go back out to memory. So, so our techniques are both, we reduce the model size and then we only run the part of the model that matters and then we keep it all in cash. And so what that does is it gets us to like these, these low, low latency faster and we're able to increase, you know, the CPU processing by an order magnitude. >> John: Yeah. That low latency is key. And you got developers, you know, co coding super fast. We'll get to the developer angle in a second. I want to just follow up on this, this motivation behind the, the deep sparse because you know, as we were talking earlier before we came on camera about the old days, I mean, not too long ago, virtualization and VMware abstracted away the os from, from the hardware rights and the server virtualization changed the game. >> Brian: Yeah. >> John: And that basically invented cloud computing as we know it today. So, so we see that abstraction. >> Brian: Yeah. >> John: There seems to be a motivation behind abstracting the way the machine learning models away from the hardware. And that seems to be bringing advantages to the AI growth. Can you elaborate on, is that true? And it's, what's your comment? >> Brian: It's true. I think it's true for us. I don't think the industry's there yet, honestly. Cause I think the industry still is of that mindset that if I took, if it took these expensive GPUs to train my model, then I want to run my model on those same expensive GPUs. Because there's often like not a separation between the people that are developing AI and the people that have to manage and deploy at where you need it. So the reality is, is that that's everything that we're after. Like, do we decrease the cost? Yes. Do we make the models smaller? Yes. 
Do we make them faster? A yes. But I think the most amazing power is that we've turned AI into a docker based microservice. And so like who in the industry wants to deploy their apps the old way on a os without virtualization, without docker, without Kubernetes, without microservices, without service mesh without serverless. You want all those tools for your apps by converting AI models. So they can be run inside a docker container with no apologies around latency and performance cause it's faster. You get the best of that whole world that you just talked about, which is, you know, what we're calling, you know, software delivered AI. So now the AI lives in the same world. Organizations that have gone through that digital cloud transformation with their app infrastructure. AI fits into that world. >> John: And this is where the abstraction concepts matter. When you have these inflection points, the convergence of compute data, machine learning that powers AI, it really becomes a developer opportunity. Because now applications and businesses, when they actually go through the digital transformation, their businesses are completely transformed. There is no IT. Developers are the application. They are the company, right? So AI will be part of whatever business or app will be out there. So there is a application developer angle here. Brian, can you explain >> Brian: Oh completely. >> John: how they're going to use this? Because you mentioned docker container microservice, I mean this really is an insane flipping of the script for developers. >> Brian: Yeah. >> John: So what's that look like? >> Brian: Well speak, it's because like AI's kind of, I mean, again, like it's come so fast. So you figure there's my app team and here's my AI team, right? And they're in different places and the AI team is dragging in specialized infrastructure in support of that as well. And that's not how app developers think. Like they've ran on fungible infrastructure that subtracted and virtualized forever, right? And so what we've done is we've, in addition to fitting into that world that they, that they like, we've also made it simple for them for they don't have to be a machine learning engineer to be able to experiment with these foundational models and transfer learning 'em. We've done that. So they can do that in a couple of commands and it has a simple API that they can either link to their application directly as a library to make difference calls or they can stand it up as a standalone, you know, scale up, scale out inference server. They get two choices. But it really fits into that, you know, you know that world that the modern developer, whether they're just using Python or C or otherwise, we made it just simple. So as opposed to like Go learn something else, they kind of don't have to. So in a way though, it's made it. It's almost made it hard because people expect when we talk to 'em for the first time to be the old way. Like, how do you look like a piece of hardware? Are you compatible with my existing hardware that runs ML? Like, no, we're, we're not. Because you don't need that stack anymore. All you need is a library called to make your prediction and that's it. That's it. >> John: Well, I mean, we were joking on Twitter the other day with someone saying, is AI a pet or a cattle? Right? Because they love their, their AI bots right now. So, so I'd say pet there. But you look at a lot of, there's going to be a lot of AI. 
So on a more serious note, you mentioned in microservices, will deep sparse have an API for developers? And how does that look like? What do I do? >> Brian: Yeah. >> John: tell me what my, as a developer, what's the roadmap look like? What's the >> Brian: Yeah, it, it really looks, it really can go in both modes. It can go in a standalone server mode where it handles, you know, rest API and it can scale out with ES as the workload comes up and scale back and like try to make hardware do that. Hardware may scale back, but it's just sitting there dormant, you know, so with this, it scales the same way your application needs to. And then for a developer, they basically just, they just, the PIP install de sparse, you know, has one commanded to do an install, and then they do two calls, really. The first call is a library call that the app makes to create the model. And models really already trained, but they, it's called a model create call. And the second command they do is they make a call to do a prediction. And it's as simple as that. So it's, it's AI's as simple as using any other library that the developers are already using, which I, which sounds hard to fathom because it is just so simplified. >> John: Software delivered AI. Okay, that's a cool thing. I believe in it personally. I think that's the way to go. I think there's going to be plenty of hardware options if you look at the advances of cloud players that got more silicon coming out. Yeah. More GPU. I mean, there's more instance, I mean, everything's out there right now. So the question is how does that evolve in your mind? Because that's seems to be key. You have open source projects emerging. What, what path does this take? Is there a parallel mental model that you see, Brian, that is similar? You mentioned open source earlier. Is it more like a VMware virtualization thing or is it more of a cloud thing? Is there Yeah. Is it going to evolve in a, in a trajectory that looks similar to what we might've seen in the past? >> Brian: Yeah, we're, you know, when I, when when I got involved with the company, what I, when I thought about it and I was reasoning about it, like, do you, you know, you want to, like, we all do when you want to join something full-time. I thought about it and said, where will the industry eventually get to? Right? To fully realize the value of, of deep learning and what's plausible as it evolves. And to me, like I, I know it's the old adage of, you know, you know, software, its hardware, cloudy software. But it truly was like, you know, we can solve these problems in software. Like there's nothing special that's happening at the hardware layer and the processing AI. The reality is that it's just early in the industry. So the view that that we had was like, this is eventually the best place where the industry will be, is the liberation of being able to run AI anywhere. Like you're really not democratizing, you democratize the model. But if you can't run the model anywhere you want because these models are getting bigger and bigger with these large language models, then you're kind of not democratizing. And if you got to go and like by a cluster to run this thing on. So the democratization comes by if all of a sudden that model can be consumed anywhere on demand without planning, without provisioning, wherever infrastructure is. And so I think that's with or without Neural Magic, that's where the industry will go and will get to. I think we're the leaders, leaders in getting it there. 
It's right because we're more advanced on these techniques. >> John: Yeah. And your background too. You've seen OpenStack, pre-cloud, you saw open source grow and still exponentially growing. And so you have the same similar dynamic with machine learning models growing. And they're also segmenting into almost a, an ML stack or foundational model as we talk about. So you're starting to see the formation of tooling inference. So a lot of components coming. It's almost a stack, it's almost a, it literally is like an operating system problem space, you know? How do you run things, how do you link things? How do you bring things together? Is that what's going on here? Is this like a data modeling operating environment kind of red hat type thing going on? Like. >> Brian: Yeah. Yeah. Like I think there is, you know, I thought about that too. And I think there is the role of like distribution, because the industrialization not happening fast enough of this. Like, can I go back to like every customers, every, every user does it in their own kind of way. Like it's not, everyone's a little bit of a snowflake. And I think that's okay. There's definitely plenty of companies that want to come in and say, well, this is the way it's going to be and we industrialize it as long as you do it our way. The reality is technology doesn't get industrialized by one company just saying, do it our way. And so that's why like we've taken the approach through open source by saying like, Hey, you haven't really industrialized it if you said. We made it simple, but you always got to run AI here. Yeah, right. You only like really industrialize it if you break it down into components that are simple to use and they work integrated in the stack the way you want them to. And so to me, that first principles was getting thing into microservices and dockers that could be run on VMware, OpenShare on the cloud in the edge. And so that's the, that's the real part that we're happening with. The other part, like I do agree, like I think it's going to quickly move into less about the model. Less about the training of the model and the transfer learning, you know, the data set of the model. We're taking away the complexity of optimization. Giving liberating deployment to be anywhere. And I think the last mile, John is going to be around the ML ops around that. Because it's easy to think of like soft now that it's just a software problem, we've turned it into a software problem. So it's easy to think of software as like kind of a point release, but that's not the reality, right? It's a life cycle. And it's, and so I think ML very much brings in the what is the lifecycle of that deployment? And, you know, you get into more interesting conversations, to be honest than like, once you've deployed in a docking container is around like model drift and accuracy and the dataset changes and the user changes is how do you become from an ML perspective of where of that sending signal back retraining. And, and that's where I think a lot of the, in more of the innovation's going to start to move there. >> John: Yeah. And software also, the software problem, the software opportunity as well is developer focused. And if you look at the cloud native landscape now, similar stacks developing a lot of components. A lot of things to, to stitch together a lot of things that are automating under the hood. A lot of developer productivity conversations. I think this is going to go down that same road. 
I want to get your thoughts because developers will set the pace. And this is something that's clear in this next wave developer productivity. They're the defacto standards bodies. They will decide what microservices check, API check. Now, skill gap is going to be a problem because it's relatively new. So model sprawl, model sizes, proprietary versus open. There has to be a way to kind of crunch that down into a, like a DevOps, like just make it, get the developer out of the, the muck. So what's your view? Are we early days like that? Or what's the young kid in college studying CS or whatever degree who comes into this with, with both feet? What are they doing? >> Brian: I'll probably say like the, the non-popular answer to that. A little bit is it's happening so fast that it's going to get kind of boring fast. Meaning like, yeah, you could go to school and go to MIT, right? Sorry. Like, and you could get a hold through end like becoming a model architect, like inventing the next model, right? And the layers and combining 'em and et cetera, et cetera. And then what operators and, and building a model that's bigger than the last one and trains faster, right? And there will be those people, right? That actually, like they're building the engines the same way. You know, I grew up as an infrastructure software developer. There's not a lot of companies that hire those anymore because they're all sitting inside of three big clouds. Yeah. Right? So you better be a good app developer, but I think what you're going to see is before you had to be everything, you had to be the, if you were going to use infrastructure, you had to know how to build infrastructure. And I think the same thing's true around is quickly exiting ML is to be able to use ML in your company, you better be like, great at every aspect of ML, including every intricacy inside of the model and every operation's doing, that's quickly changing. Like, you're going to start with a starting point. You know, in the future you're not going to be like cracking open these GPT models, you're going to just be pulling them off the shelf, fine tuning 'em and go. You don't have to invent it. You don't have to understand it. And I think that's going to be a pivot point, you know, in the industry between, you know, what's the future? What's, what's the future of a, a data scientist? ML engineer researcher look like? >> John: I think that's, the outcome's going to be determined. I mean, you mentioned, you know, doing it yourself what an SRE is for a Google with the servers scale's huge. So yeah, it might have to, at the beginning get boring, you get obsolete quickly, but that means it's progressing. So, The scale becomes huge. And that's where I think it's going to be interesting when we see that scale. >> Brian: Yep. Yeah, I think that's right. I think that's right. And we always, and, and what I've always said, and much the, again, the distribute into my ML team is that I want every developer to be as adept at being able take advantage of ML as non ML engineer, right? It's got to be that simple. And I think, I think it's getting there. I really do. >> John: Well, Brian, great, great to have you on theCUBE here on this cube conversation. As part of the startup showcase that's coming up. You're going to be featured. Or your company would featured on the upcoming ABRA startup showcase on making machine learning easier and more affordable as more machine learning models come in. You guys got deep sparse and some great technology. 
We're going to dig into that next time. I'll give you the final word right now. What do you see for the company? What are you guys looking for? Give a plug for the company right now. >> Brian: Oh, give a plug that I haven't already doubled in as the plug. >> John: You're hiring engineers, I assume from MIT and other places. >> Brian: Yep. I think like the, the biggest thing is like, like we're on the developer side. We're here to make this easy. The majority of inference today is, is on CPUs already, believe it or not, as much as kind of, we like to talk about hardware and specialized hardware. The majority is already on CPUs. We're basically bringing 95% cost savings to CPUs through this acceleration. So, but we're trying to do it in a way that makes it community first. So I think the, the shout out would be come find the Neural Magic community and engage with us and you'll find, you know, a thousand other like-minded people in Slack that are willing to help you as well as our engineers. And, and let's, let's go take on some successful AI deployments. >> John: Exciting times. This is, I think one of the pivotal moments, NextGen data, machine learning, and now starting to see AI not be that chat bot, just, you know, customer support or some basic natural language processing thing. You're starting to see real innovation. Brian Stevens, CEO of Neural Magic, bringing the magic here. Thanks for the time. Great conversation. >> Brian: Thanks John. >> John: Thanks for joining me. >> Brian: Cheers. Thank you. >> John: Okay. I'm John Furrier, host of theCUBE here in Palo Alto, California for this cube conversation with Brian Stevens. Thanks for watching.
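Condensing the developer workflow Brian walks through above -- `pip install deepsparse`, one call to create the model, one call to get a prediction -- a rough sketch might look like the following. The interview doesn't spell out the actual API surface, so the import path, task name, and model path below should be read as assumptions for illustration rather than documented DeepSparse usage.

```python
# Sketch of the two-call pattern described above. Assumes the engine has
# been installed with `pip install deepsparse`; the task name and model
# path are placeholders, not real artifacts.
from deepsparse import Pipeline

# Call 1: "model create" -- load an already-trained, sparsified model
# into the CPU inference engine. No GPU or specialized hardware needed.
classifier = Pipeline.create(
    task="text-classification",        # assumed task identifier
    model_path="./sparse-model.onnx",  # hypothetical local model file
)

# Call 2: prediction -- an ordinary library call from the application.
result = classifier(["Deploying this model was just another pip install."])
print(result)
```

Brian also describes a standalone server mode for the same engine, sitting behind a REST endpoint and scaling out and back like any other stateless microservice; the library-call pattern above is simply the embedded alternative.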
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John | PERSON | 0.99+ |
Brian | PERSON | 0.99+ |
Brian Stevens | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
95% | QUANTITY | 0.99+ |
2015 | DATE | 0.99+ |
John Furrier | PERSON | 0.99+ |
90 | QUANTITY | 0.99+ |
2016 | DATE | 0.99+ |
32 bit | QUANTITY | 0.99+ |
Neural Magic | ORGANIZATION | 0.99+ |
Brian Steve | PERSON | 0.99+ |
Neural Magic | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
two calls | QUANTITY | 0.99+ |
both things | QUANTITY | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
second thing | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
Python | TITLE | 0.99+ |
MIT | ORGANIZATION | 0.99+ |
first call | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
second part | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
both feet | QUANTITY | 0.98+ |
Oracle | ORGANIZATION | 0.98+ |
both modes | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
80s | DATE | 0.98+ |
first | QUANTITY | 0.98+ |
second command | QUANTITY | 0.98+ |
Opher Kahane, Sonoma Ventures | CloudNativeSecurityCon 23
(uplifting music) >> Hello, welcome back to theCUBE's coverage of CloudNativeSecurityCon, the inaugural event, in Seattle. I'm John Furrier, host of theCUBE, here in the Palo Alto Studios. We're calling it theCUBE Center. It's kind of like our Sports Center for tech. It's kind of remote coverage. We've been doing this now for a few years. We're going to amp it up this year as more events are remote, and happening all around the world. So, we're going to continue the coverage with this segment focusing on the data stack, entrepreneurial opportunities around all things security, and as, obviously, data's involved. And our next guest is a friend of theCUBE, and CUBE alumni from 2013, entrepreneur himself, turned, now, venture capitalist angel investor, with his own firm, Opher Kahane, Managing Director, Sonoma Ventures. Formerly the founder of Origami, sold to Intuit a few years back. Focusing now on having a lot of fun, angel investing on boards, focusing on data-driven applications, and stacks around that, and all the stuff going on in, really, in the wheelhouse for what's going on around security data. Opher, great to see you. Thanks for coming on. >> My pleasure. Great to be back. It's been a while. >> So you're kind of on Easy Street now. You did the entrepreneurial venture, you've worked hard. We were on together in 2013 when theCUBE just started. XCEL Partners had an event in Stanford, XCEL, and they had all the features there. We interviewed Satya Nadella, who was just a manager at Microsoft at that time, he was there. He's now the CEO of Microsoft. >> Yeah, he was. >> A lot's changed in nine years. But congratulations on your venture you sold, and you got an exit there, and now you're doing a lot of investments. I'd love to get your take, because this is really the biggest change I've seen in the past 12 years, around an inflection point around a lot of converging forces. Data, which, big data, 10 years ago, was a big part of your career, but now it's accelerated, with cloud scale. You're seeing people building scale on top of other clouds, and becoming their own cloud. You're seeing data being a big part of it. Cybersecurity kind of has not really changed much, but it's the most important thing everyone's talking about. So, developers are involved, data's involved, a lot of entrepreneurial opportunities. So I'd love to get your take on how you see the current situation, as it relates to what's gone on in the past five years or so. What's the big story? >> So, a lot of big stories, but I think a lot of it has to do with a promise of making value from data, whether it's for cybersecurity, for Fintech, for DevOps, for RevTech startups and companies. There's a lot of challenges in actually driving and monetizing the value from data with velocity. Historically, the challenge has been more around, "How do I store data at massive scale?" And then you had the big data infrastructure company, like Cloudera, and MapR, and others, deal with it from a scale perspective, from a storage perspective. Then you had a whole layer of companies that evolved to deal with, "How do I index massive scales of data, for quick querying, and federated access, et cetera?" But now that a lot of those underlying problems, if you will, have been solved, to a certain extent, although they're always being stretched, given the scale of data, and its utility is becoming more and more massive, in particular with AI use cases being very prominent right now, the next level is how to actually make value from the data. 
How do I manage the full lifecycle of data in complex environments, with complex organizations, complex use cases? And having seen this from the inside, with Origami Logic, as we dealt with a lot of large corporations, and post-acquisition by Intuit, and a lot of the startups I'm involved with, it's clear that we're now onto that next step. And you have fundamental new paradigms, such as data mesh, that attempt to address that complexity, and responsibly scaling access, and democratizing access in the value monetization from data, across large organizations. You have a slew of startups that are evolving to help the entire lifecycle of data, from the data engineering side of it, to the data analytics side of it, to the AI use cases side of it. And it feels like the early days, to a certain extent, of the revolution that we've seen in transition from traditional databases, to data warehouses, to cloud-based data processing, and big data. It feels like we're at the genesis of that next wave. And it's super, super exciting, for me at least, as someone who's sitting more in the coach seat, rather than being on the pitch, and building startups, helping folks as they go through those motions. >> So that's awesome. I want to get into some of these data infrastructure dynamics you mentioned, but before that, talk to the audience around what you're working on now. You've been a successful entrepreneur, you're focused on angel investing, so, super-early seed stage. What kind of deals are you looking at? What's interesting to you? What is Sonoma Ventures looking for, and what are some of the entrepreneurial dynamics that you're seeing right now, from a startup standpoint? >> Cool, so, at a macro level, this is a little bit of background of my history, because it shapes very heavily what it is that I'm looking at. So, I've been very fortunate with entrepreneurial career. I founded three startups. All three of them are successful. Final two were sold, the first one merged and went public. And my third career has been about data, moving data, passing data, processing data, generating insights from it. And, at this phase, I wanted to really evolve from just going and building startup number four, from going through the same motions again. A 10 year adventure, I'm a little bit too old for that, I guess. But the next best thing is to sit from a point whereby I can be more elevated in where I'm dealing with, and broaden the variety of startups I'm focused on, rather than just do your own thing, and just go very, very deep into it. Now, what specifically am I focused on at Sonoma Ventures? So, basically, looking at what I refer to as a data-driven application stack. Anything from the low-level data infrastructure and cloud infrastructure, that helps any persona in the data universe maximize value for data, from their particular point of view, for their particular role, whether it's data analysts, data scientists, data engineers, cloud engineers, DevOps folks, et cetera. All the way up to the application layer, in applications that are very data-heavy. And what are very typical data-heavy applications? FinTech, cyber, Web3, revenue technologies, and product and DevOps. So these are the areas we're focused on. I have almost 23 or 24 startups in the portfolio that span all these different areas. And this is in terms of the aperture. Now, typically, focus on pre-seed, seed. Sometimes a little bit later stage, but this is the primary focus. 
And it's really about partnering with entrepreneurs, and helping them make, if you will, original mistakes, avoid the mistakes I made. >> Yeah. >> And take it to the next level, whatever the milestone they're driving with. So I'm very, very hands-on with many of those startups. Now, what is it that's happening right now, initially, and why is it so exciting? So, on one hand, you have this scaling of data and its complexity, yet lagging value creation from it, across those different personas we've touched on. So that's one fundamental opportunity which is secular. The other one, which is more a cyclic situation, is the fact that we're going through a down cycle in tech, as is very evident in the public markets, and everything we're hearing about funding going slower and lower, terms shifting more into the hands of typical VCs versus entrepreneur-friendly market, and so on and so forth. And a very significant amount of layoffs. Now, when you combine these two trends together, you're observing a very interesting thing, that a lot of folks, really bright folks, who have sold a startup to a company, or have been in the guts of the large startup, or a large corporation, have, hands-on, experienced all those challenges we've spoken about earlier, in terms of maximizing value from data, irrespective of their role, in a specific angle, or vantage point they have on those challenges. So, for many of them, it's an opportunity to, "Now, let me now start a startup. I've been laid off, maybe, or my company's stock isn't doing as well as it used to, as a large corporation. Now I have an opportunity to actually go and take my entrepreneurial passion, and apply it to a product and experience as part of this larger company." >> Yeah. >> And you see a slew of folks who are emerging with these great ideas. So it's a very, very exciting period of time to innovate. >> It's interesting, a lot of people look at, I mean, I look at Snowflake as an example of a company that refactored data warehouses. They just basically took data warehouse, and put it on the cloud, and called it a data cloud. That, to me, was compelling. They didn't pay any CapEx. They rode Amazon's wave there. So, a similar thing going on with data. You mentioned this, and I see it as an enabling opportunity. So whether it's cybersecurity, FinTech, whatever vertical, you have an enablement. Now, you mentioned data infrastructure. It's a super exciting area, as there's so many stacks emerging. We got an analytics stack, there's real-time stacks, there's data lakes, AI stack, foundational models. So, you're seeing an explosion of stacks, different tools probably will emerge. So, how do you look at that, as a seasoned entrepreneur, now investor? Is that a good thing? Is that just more of the market? 'Cause it just seems like more and more kind of decomposed stacks targeted at use cases seems to be a trend. >> Yeah. >> And how do you vet that, is it? >> So it's a great observation, and if you take a step back and look at the evolution of technology over the last 30 years, maybe longer, you always see these cycles of expansion, fragmentation, contraction, expansion, contraction. Go decentralize, go centralize, go decentralize, go centralize, as manifested in different types of technology paradigms. From client server, to storage, to microservices, to et cetera, et cetera.
So I think we're going through another big bang, to a certain extent, whereby we end up with more specialized data stacks for specific use cases, as you need performance, the data models, the tooling to best adapt to the particular task at hand, and the particular personas at hand. As the needs of the data analysts are quite different from the needs of an ML engineer, it's quite different from the needs of the data engineer. And what happens is, when you end up with these siloed stacks, you end up with new fragmentation, and new gaps that need to be filled with a new layer of innovation. And I suspect that, in part, that's what we're seeing right now, in terms of the next wave of data innovation. Whether it's in a service of FinTech use cases, or cyber use cases, or other, is a set of tools that end up having to try and stitch together those elements and bridge between them. So I see that as a fantastic gap to innovate around. I see, also, a fundamental need in creating a common data language, and common data management processes and governance across those different personas, because ultimately, the same underlying data these folks need, albeit in different mediums, different access models, different velocities, et cetera, the subject matter, if you will, the underlying raw data, and some of the taxonomies right on top of it, do need to be consistent. So, once again, a great opportunity to innovate, whether it's about semantic layers, whether it's about data mesh, whether it's about CI/CD tools for data engineers, and so on and so forth. >> I got to ask you, first of all, I see you have a friend you brought into the interview. You have a dog in the background who made a little cameo appearance. And that's awesome. Sitting right next to you, making sure everything's going well. On the AI thing, 'cause I think that's the hot trend here. >> Yeah. >> You're starting to see, that ChatGPT's got everyone excited, because it's kind of that first time you see kind of next-gen functionality, large-language models, where you can bring data in, and it integrates well. So, to me, I think, connecting the dots, this kind of speaks to the beginning of what will be a trend of really blending of data stacks together, or blending of models. And so, as more data modeling emerges, you start to have this AI stack kind of situation, where you have things out there that you can compose. It's almost very developer-friendly, conceptually. This is kind of new, but kind of the same concept's been working on with Google and others. How do you see this emerging, as an investor? What are some of the things that you're excited about, around the ChatGPT kind of things that's happening? 'Cause it brings it mainstream. Again, a million downloads, fastest applications get a million downloads, even among all the successes. So it's obviously hit a nerve. People are talking about it. What's your take on that? >> Yeah, so, I think that's a great point, and clearly, it feels like an iPhone moment, right, to the industry, in this case, AI, and lots of applications. And I think there's, at a high level, probably three different layers of innovation. One is on top of those platforms. What use cases can one bring to the table that would drive on top of a ChatGPT-like service? Whereby, the startup, the company, can bring some unique datasets to infuse and add value on top of it, by custom-focusing it and purpose-building it for a particular use case or particular vertical.
Whether it's applying it to customer service, in a particular vertical, applying it to, I don't know, marketing content creation, and so on and so forth. That's one category. And I do know that, as one of my startups is in Y Combinator, this season, winter '23, they're saying that a very large chunk of the YC companies in this cycle are about GPT use cases. So we'll see a flurry of that. The next layer, the one below that, is those who actually provide those platforms, whether it's ChatGPT, whatever will emerge from the partnership with Microsoft, and any competitive players that emerge from other startups, or from the big cloud providers, whether it's Facebook, if they ever get into this, and Google, which clearly will, as they need to, to survive around search. The third layer is the enabling layer. As you're going to have more and more of those different large-language models and use case running on top of it, the underlying layers, all the way down to cloud infrastructure, the data infrastructure, and the entire set of tools and systems, that take raw data, and massage it into useful, labeled, contextualized features and data to feed the models, the AI models, whether it's during training, or during inference stages, in production. Personally, my focus is more on the infrastructure than on the application use cases. And I believe that there's going to be a massive amount of innovation opportunity around that, to reach cost-effective, quality, fair models that are deployed easily and maintained easily, or at least with as little pain as possible, at scale. So there are startups that are dealing with it, in various areas. Some are about focusing on labeling automation, some about fairness, about, speaking about cyber, protecting models from threats through data and other issues with it, and so on and so forth. And I believe that this will be, too, a big driver for massive innovation, the infrastructure layer. >> Awesome, and I love how you mentioned the iPhone moment. I call it the browser moment, 'cause it felt that way for me, personally. >> Yep. >> But I think, from a business model standpoint, there is that iPhone shift. It's not the BlackBerry. It's a whole 'nother thing. And I like that. But I do have to ask you, because this is interesting. You mentioned iPhone. iPhone's mostly proprietary. So, in these machine learning foundational models, >> Yeah. >> you're starting to see proprietary hardware, bolt-on, acceleration, bundled together, for faster uptake. And now you got open source emerging, as two things. It's almost iPhone-Android situation happening. >> Yeah. >> So what's your view on that? Because there's pros and cons for either one. You're seeing a lot of these machine learning laws are very proprietary, but they work, and do you care, right? >> Yeah. >> And then you got open source, which is like, "Okay, let's get some upsource code, and let people verify it, and then build with that." Is it a balance? >> Yes, I think- >> Is it mutually exclusive? What's your view? >> I think it's going to be, markets will drive the proportion of both, and I think, for a certain use case, you'll end up with more proprietary offerings. With certain use cases, I guess the fundamental infrastructure for ChatGPT-like, let's say, large-language models and all the use cases running on top of it, that's likely going to be more platform-oriented and open source, and will allow innovation. Think of it as the equivalent of iPhone apps or Android apps running on top of those platforms, as in AI apps. 
So we'll have a lot of that. Now, when you start going a little bit more into the guts, the lower layers, then it's clear that, for performance reasons, in particular, for certain use cases, we'll end up with more proprietary offerings, whether it's advanced silicon, such as some of the silicon that emerged from entrepreneurs who have left Google, around TensorFlow, and all the silicon that powers that. You'll see a lot of innovation in that area as well. It hopefully intends to improve the cost efficiency of running large AI-oriented workloads, both in inference and in learning stages. >> I got to ask you, because this has come up a lot around Azure and Microsoft. Microsoft, pretty good move getting into the ChatGPT >> Yep. >> and the open AI, because I was talking to someone who's a hardcore Amazon developer, and they said, they swore they would never use Azure, right? One of those types. And they're spinning up Azure servers to get access to the API. So, the developers are flocking, as you mentioned. The YC class is all doing large data things, because you can now program with data, which is amazing, which is amazing. So, what's your take on, I know you got to be kind of neutral 'cause you're an investor, but you got, Amazon has to respond, Google, essentially, did all the work, so they have to have a solution. So, I'm expecting Google to have something very compelling, but Microsoft, right now, is going to just, might run the table on developers, this new wave of data developers. What's your take on the cloud responses to this? What's Amazon, what do you think AWS is going to do? What should Google be doing? What's your take? >> So, each of them is coming from a slightly different angle, of course. I'll say, Google, I think, has massive assets in the AI space, and their underlying cloud platform, I think, has been designed to support such complicated workloads, but they have yet to go as far as opening it up the same way ChatGPT is now in that Microsoft partnership, and Azure. Good question regarding Amazon. AWS has had a significant investment in AI-related infrastructure. Seeing it through my startups, through other lens as well. How will they respond to that higher layer, above and beyond the low level, if you will, AI-enabling apparatuses? How do they elevate to at least one or two layers above, and get to the same ChatGPT layer, good question. Is there an acquisition that will make sense for them to accelerate it, maybe. Is there an in-house development that they can reapply from a different domain towards that, possibly. But I do suspect we'll end up with acquisitions as the arms race around the next level of cloud wars emerges, and it's going to be no longer just about the basic tooling for basic cloud-based applications, and the infrastructure, and the cost management, but rather, faster time to deliver AI in data-heavy applications. Once again, each one of those cloud suppliers, their vendor is coming with different assets, and different pros and cons. All of them will need to just elevate the level of the fight, if you will, in this case, to the AI layer. >> It's going to be very interesting, the different stacks on the data infrastructure, like I mentioned, analytics, data lake, AI, all happening. It's going to be interesting to see how this turns into this AI cloud, like data clouds, data operating systems. So, super fascinating area. Opher, thank you for coming on and sharing your expertise with us. Great to see you, and congratulations on the work. 
I'll give you the final word here. Give a plug for what you're looking for, for startup seeds, pre-seeds. What's the kind of profile that gets your attention, from a seed, pre-seed candidate or entrepreneur? >> Cool, first of all, it's my pleasure. Enjoy our chats, as always. Hopefully the next one's not going to be in nine years. As to what I'm looking for, ideally, smart data entrepreneurs, who have come from a particular domain problem, or problem domain, that they understand, they felt it in their own 10 fingers, or millions of neurons in their brains, and they figured out a way to solve it. Whether it's a data infrastructure play, a cloud infrastructure play, or a very, very smart application that takes advantage of data at scale. These are the things I'm looking for. >> One final, final question I have to ask you, because you're a seasoned entrepreneur, and now coach. What's different about the current entrepreneurial environment right now, vis-a-vis, the past decade? What's new? Is it different, highly accelerated? What advice do you give entrepreneurs out there who are putting together their plan? Obviously, a global resource pool now of engineering. It might not be yesterday's formula for success to putting a venture together to get to that product-market fit. What's new and different, and what's your advice to the folks out there about what's different about the current environment for being an entrepreneur? >> Fantastic, so I think it's a great question. So I think there's a few axes of difference, compared to, let's say, five years ago, 10 years ago, 15 years ago. First and foremost, given the amount of infrastructure out there, the amount of open-source technologies, amount of developer toolkits and frameworks, trying to develop an application, at least at the application layer, is much faster than ever. So, it's faster and cheaper, for the most part, unless you're building very fundamental, core, deep tech, where you still have a big technology challenge to deal with. And absent that, the challenge shifts more to how do you manage your resources, to product-market fit, how are you integrating the GTM lens, the go-to-market lens, as early as possible in the product-market fit cycle, such that you reach from pre-seed to seed, from seed to A, from A to B, with an optimal amount of velocity, and a minimal amount of resources. One big difference, specifically as of, let's say, beginning of this year, late last year, is that money is no longer free for entrepreneurs, which means that you need to operate and build a startup in an environment with a lot more constraints. And in my mind, some of the best startups that have ever been built, and some of the big market-changing, generational-changing, if you will, technology startups, in their respective industry verticals, have actually emerged from these times. And these tend to be the smartest, best startups that emerge because they operate with a lot less money. Money is not as available for them, which means that they need to make tough decisions, and make trade-offs every day. What you don't need to do, you can kick the can down the road. When you have plenty of money, and it cushions for a lot of mistakes, you don't have that cushion. And hopefully we'll end up with companies with more agility, more, if you will, resilience, and better cultures in making those tough decisions that startups need to make every day.
Which is why I'm super, super excited to see the next batch of amazing unicorns, true unicorns, not just valuation, market rising with the water type unicorns that emerged from this particular era, which we're in the beginning of. And very much enjoy working with entrepreneurs during this difficult time, the times we're in. >> The next 24 months will be the next wave, like you said, best time to do a company. Remember, Airbnb's pitch was, "We'll rent cots in apartments, and sell cereal." Boy, a lot of people passed on that deal, in that last down market, that turned out to be a game-changer. So the crazy ideas might not be that bad. So it's all about the entrepreneurs, and >> 100%. >> this is a big wave, and it's certainly happening. Opher, thank you for sharing. Obviously, data is going to change all the markets. Refactoring, security, FinTech, user experience, applications are going to be changed by data, data operating system. Thanks for coming on, and thanks for sharing. Appreciate it. >> My pleasure. Have a good one. >> Okay, more coverage for the CloudNativeSecurityCon inaugural event. Data will be the key for cybersecurity. theCUBE's coverage continues after this break. (uplifting music)
Jon Turow, Madrona Venture Group | CloudNativeSecurityCon 23
(upbeat music) >> Hello and welcome back to theCUBE. We're here in Palo Alto, California. I'm your host, John Furrier with a special guest here in the studio. As part of our Cloud Native SecurityCon Coverage we had an opportunity to bring in Jon Turow who is the partner at Madrona Venture Partners formerly with AWS and to talk about machine learning, foundational models, and how the future of AI is going to be impacted by some of the innovation around what's going on in the industry. ChatGPT has taken the world by storm. A million downloads, fastest to the million downloads there. Before some were saying it's just a gimmick. Others saying it's a game changer. Jon's here to break it down, and great to have you on. Thanks for coming in. >> Thanks John. Glad to be here. >> Thanks for coming on. So first of all, I'm glad you're here. First of all, because two things. One, you were formerly with AWS, got a lot of experience running projects at AWS. Now a partner at Madrona, a great firm doing great deals, and they had this future at modern application kind of thesis. Now you are putting out some content recently around foundational models. You're deep into computer vision. You were the IoT general manager at AWS among other things, Greengrass. So you know a lot about data. You know a lot about some of this automation, some of the edge stuff. You've been in the middle of all these kind of areas that now seem to be the next wave coming. So I wanted to ask you what your thoughts are of how the machine learning and this new automation wave is coming in, this AI tools are coming out. Is it a platform? Is it going to be smarter? What feeds AI? What's your take on this whole foundational big movement into AI? What's your general reaction to all this? >> So, thanks, Jon, again for having me here. Really excited to talk about these things. AI has been coming for a long time. It's been kind of the next big thing. Always just over the horizon for quite some time. And we've seen really compelling applications in generations before and until now. Amazon and AWS have introduced a lot of them. My firm, Madrona Venture Group has invested in some of those early players as well. But what we're seeing now is something categorically different. That's really exciting and feels like a durable change. And I can try and explain what that is. We have these really large models that are useful in a general way. They can be applied to a lot of different tasks beyond the specific task that the designers envisioned. That makes them more flexible, that makes them more useful for building applications than what we've seen before. And so that, we can talk about the depths of it, but in a nutshell, that's why I think people are really excited. >> And I think one of the things that you wrote about that jumped out at me is that this seems to be this moment where there's been a multiple decades of nerds and computer scientists and programmers and data thinkers around waiting for AI to blossom. And it's like they're scratching that itch. Every year is going to be, and it's like the bottleneck's always been compute power. And we've seen other areas, genome sequencing, all kinds of high computation things where required high forms computing. But now there's no real bottleneck to compute. You got cloud. And so you're starting to see the emergence of a massive acceleration of where AI's been and where it needs to be going. Now, it's almost like it's got a reboot. 
It's almost a renaissance in the AI community with a whole nother macro environmental things happening. Cloud, younger generation, applications proliferate from mobile to cloud native. It's the perfect storm for this kind of moment to switch over. Am I overreading that? Is that right? >> You're right. And it's been cooking for a cycle or two. And let me try and explain why that is. We have cloud and AWS launch in whatever it was, 2006, and offered more compute to more people than really was possible before. Initially that was about taking existing applications and running them more easily in a bigger scale. But in that period of time what's also become possible is new kinds of computation that really weren't practical or even possible without that vast amount of compute. And so one result that came of that is something called the transformer AI model architecture. And Google came out with that, published a paper in 2017. And what that says is, with a transformer model you can actually train an arbitrarily large amount of data into a model, and see what happens. That's what Google demonstrated in 2017. The what happens is the really exciting part because when you do that, what you start to see, when models exceed a certain size that we had never really seen before all of a sudden they get what we call emerging capabilities of complex reasoning and reasoning outside a domain and reasoning with data. The kinds of things that people describe as spooky when they play with something like ChatGPT. That's the underlying term. We don't as an industry quite know why it happens or how it happens, but we can measure that it does. So cloud enables new kinds of math and science. New kinds of math and science allow new kinds of experimentation. And that experimentation has led to this new generation of models. >> So one of the debates we had on theCUBE at our Supercloud event last month was, what's the barriers to entry for say OpenAI, for instance? Obviously, I weighed in aggressively and said, "The barriers for getting into cloud are high because all the CapEx." And Howie Xu formerly VMware, now at ZScaler, he's an AI machine learning guy. He was like, "Well, you can spend $100 million and replicate it." I saw a quote that set up for 180,000 I can get this other package. What's the barriers to entry? Is ChatGPT or OpenAI, does it have sustainability? Is it easy to get into? What is the market like for AI? I mean, because a lot of entrepreneurs are jumping in. I mean, I just read a story today. San Francisco's got more inbound migration because of the AI action happening, Seattle's booming, Boston with MIT's been working on neural networks for generations. That's what we've found the answer. Get off the neural network, Boston jump on the AI bus. So there's total excitement for this. People are enthusiastic around this area. >> You can think of an iPhone versus Android tension that's happening today. In the iPhone world, there are proprietary models from OpenAI who you might consider as the leader. There's Cohere, there's AI21, there's Anthropic, Google's going to have their own, and a few others. These are proprietary models that developers can build on top of, get started really quickly. They're measured to have the highest accuracy and the highest performance today. That's the proprietary side. On the other side, there is an open source part of the world. These are a proliferation of model architectures that developers and practitioners can take off the shelf and train themselves. 
Typically found in Hugging Face. What people seem to think is that the accuracy and performance of the open source models is something like 18 to 20 months behind the accuracy and performance of the proprietary models. But on the other hand, there's infinite flexibility for teams that are capable enough. So you're going to see teams choose sides based on whether they want speed or flexibility. >> That's interesting. And that brings up a point I was talking to a startup and the debate was, do you abstract away from the hardware and be software-defined or software-led on the AI side and let the hardware side just extremely accelerate on its own, 'cause it's flywheel? So again, back to proprietary, that's with hardware kind of bundled in, bolted on. Is it accelerator or is it bolted on or is it part of it? So to me, I think that the big struggle in understanding this is that which one will end up being right. I mean, is it a Betamax versus VHS kind of thing going on? Or iPhone, Android, I mean iPhone makes a lot of sense, but if you're Apple, but is there an Apple moment in the machine learning? >> In proprietary models, there does seem to be a jump ball. That there's going to be a virtuous flywheel that emerges that, for example, all this excitement about ChatGPT. What's really exciting about it is it's really easy to use. The technology isn't so different from what we've seen before even from OpenAI. You mentioned a million users in a short period of time, all providing training data for OpenAI that makes their underlying models, their next generation even better. So it's not unreasonable to guess that there's going to be power laws that emerge on the proprietary side. What I think history has shown is that iPhone, Android, Windows, Linux, there seems to be gravity towards this yin and yang. And my guess, and what other people seem to think is going to be the case is that we're going to continue to see these two poles of AI.
How do you read the tea leaves? >> Yeah. There are a few different ways that companies can differentiate themselves. Teams with galactic capabilities to take an open source model and then change the architecture and retrain and go down to the silicon. They can do things that might not have been possible for other teams to do. There's a company that that we're proud to be investors in called RunwayML that provides video accelerated, sorry, AI accelerated video editing capabilities. They were used in everything, everywhere all at once and some others. In order to build RunwayML, they needed a vision of what the future was going to look like and they needed to make deep contributions to the science that was going to enable all that. But not every team has those capabilities, maybe nor should they. So as far as how other teams are going to differentiate there's a couple of things that they can do. One is called prompt engineering where they shape on behalf of their own users exactly how the prompt to get fed to the underlying model. It's not clear whether that's going to be a durable problem or whether like Google, we consumers are going to start to get more intuitive about this. That's one. The second is what's called information retrieval. How can I get information about the world outside, information from a database or a data store or whatever service into these models so they can reason about them. And the third is, this is going to sound funny, but attribution. Just like you would do in a news report or an academic paper. If you can state where your facts are coming from, the downstream consumer or the human being who has to use that information actually is going to be able to make better sense of it and rely better on it. So that's prompt engineering, that's retrieval, and that's attribution. >> So that brings me to my next point I want to dig in on is the foundational model stack that you published. And I'll start by saying that with ChatGPT, if you take out the naysayers who are like throwing cold water on it about being a gimmick or whatever, and then you got the other side, I would call the alpha nerds who are like they can see, "Wow, this is amazing." This is truly NextGen. This isn't yesterday's chatbot nonsense. They're like, they're all over it. It's that everybody's using it right now in every vertical. I heard someone using it for security logs. I heard a data center, hardware vendor using it for pushing out appsec review updates. I mean, I've heard corner cases. We're using it for theCUBE to put our metadata in. So there's a horizontal use case of value. So to me that tells me it's a market there. So when you have horizontal scalability in the use case you're going to have a stack. So you publish this stack and it has an application at the top, applications like Jasper out there. You're seeing ChatGPT. But you go after the bottom, you got silicon, cloud, foundational model operations, the foundational models themselves, tooling, sources, actions. Where'd you get this from? How'd you put this together? Did you just work backwards from the startups or was there a thesis behind this? Could you share your thoughts behind this foundational model stack? >> Sure. Well, I'm a recovering product manager and my job that I think about as a product manager is who is my customer and what problem he wants to solve. 
And so to put myself in the mindset of an application developer and a founder who is actually my customer as a partner at Madrona, I think about what technology and resources does she need to be really powerful, to be able to take a brilliant idea, and actually bring that to life. And if you spend time with that community, which I do and I've met with hundreds of founders now who are trying to do exactly this, you can see that the stack is emerging. In fact, we first drew it in, not in January 2023, but October 2022. And if you look at the difference between the October '22 and January '23 stacks you're going to see that holes in the stack that we identified in October around tooling and around foundation model ops and the rest are organically starting to get filled because of how much demand from the developers at the top of the stack. >> If you look at the young generation coming out and even some of the analysts, I was just reading an analyst report on who's following the whole data stacks area, Databricks, Snowflake, there's variety of analytics, realtime AI, data's hot. There's a lot of engineers coming out that were either data scientists or I would call data platform engineering folks are becoming very key resources in this area. What's the skillset emerging and what's the mindset of that entrepreneur that sees the opportunity? How does these startups come together? Is there a pattern in the formation? Is there a pattern in the competency or proficiency around the talent behind these ventures? >> Yes. I would say there's two groups. The first is a very distinct pattern, John. For the past 10 years or a little more we've seen a pattern of democratization of ML where more and more people had access to this powerful science and technology. And since about 2017, with the rise of the transformer architecture in these foundation models, that pattern has reversed. All of a sudden what has become broader access is now shrinking to a pretty small group of scientists who can actually train and manipulate the architectures of these models themselves. So that's one. And what that means is the teams who can do that have huge ability to make the future happen in ways that other people don't have access to yet. That's one. The second is there is a broader population of people who by definition has even more collective imagination 'cause there's even more people who sees what should be possible and can use things like the proprietary models, like the OpenAI models that are available off the shelf and try to create something that maybe nobody has seen before. And when they do that, Jasper AI is a great example of that. Jasper AI is a company that creates marketing copy automatically with generative models such as GPT-3. They do that and it's really useful and it's almost fun for a marketer to use that. But there are going to be questions of how they can defend that against someone else who has access to the same technology. It's a different population of founders who has to find other sources of differentiation without being able to go all the way down to the the silicon and the science. >> Yeah, and it's going to be also opportunity recognition is one thing. Building a viable venture product market fit. You got competition. And so when things get crowded you got to have some differentiation. I think that's going to be the key. And that's where I was trying to figure out and I think data with scale I think are big ones. Where's the vulnerability in the stack in terms of gaps? 
Where's the white space? I shouldn't say vulnerability. I should say where's the opportunity, where's the white space in the stack that you see opportunities for entrepreneurs to attack? >> I would say there's two. At the application level, there is almost infinite opportunity, John, because almost every kind of application is about to be reimagined or disrupted with a new generation that takes advantage of this really powerful new technology. And so if there is a kind of application in almost any vertical, it's hard to rule something out. Almost any vertical that a founder wishes she had created the original app in, well, now it's her time. So that's one. The second is, if you look at the tooling layer that we discussed, tooling is a really powerful way that you can provide more flexibility to app developers to get more differentiation for themselves. And the tooling layer is still forming. This is the interface between the models themselves and the applications. Tools that help bring in data, as you mentioned, connect to external actions, bring context across multiple calls, chain together multiple models. These kinds of things, there's huge opportunity there. >> Well, Jon, I really appreciate you coming in. I had a couple more questions, but I will take a minute to read some of your bios for the audience and we'll get into, I won't embarrass you, but I want to set the context. You said you were recovering product manager, 10 plus years at AWS. Obviously, recovering from AWS, which is a whole nother dimension of recovering. In all seriousness, I talked to Andy Jassy around that time and Dr. Matt Wood and it was about that time when AI was just getting on the radar when they started. So you guys started seeing the wave coming in early on. So I remember at that time as Amazon was starting to grow significantly and even just stock price and overall growth. From a tech perspective, it was pretty clear what was coming, so you were there when this tsunami hit. >> Jon: That's right. >> And you had a front row seat building tech, you were led the product teams for Computer Vision AI, Textract, AI intelligence for document processing, recognition for image and video analysis. You wrote the business product plan for AWS IoT and Greengrass, which we've covered a lot in theCUBE, which extends out to the whole edge thing. So you know a lot about AI/ML, edge computing, IOT, messaging, which I call the law of small numbers that scale become big. This is a big new thing. So as a former AWS leader who's been there and at Madrona, what's your investment thesis as you start to peruse the landscape and talk to entrepreneurs as you got the stack? What's the big picture? What are you looking for? What's the thesis? How do you see this next five years emerging? >> Five years is a really long time given some of this science is only six months out. I'll start with some, no pun intended, some foundational things. And we can talk about some implications of the technology. The basics are the same as they've always been. We want, what I like to call customers with their hair on fire. So they have problems, so urgent they'll buy half a product. The joke is if your hair is on fire you might want a bucket of cold water, but you'll take a tennis racket and you'll beat yourself over the head to put the fire out. You want those customers 'cause they'll meet you more than halfway. And when you find them, you can obsess about them and you can get better every day. So we want customers with their hair on fire. 
We want founders who have empathy for those customers, understand what is going to be required to serve them really well, and have what I like to call founder-market fit to be able to build the products that those customers are going to need. >> And because that's a good strategy from an emerging, not yet fully baked out requirements definition. >> Jon: That's right. >> Enough where directionally they're leaning in, more than in, they're part of the product development process. >> That's right. And when you're doing early stage development, which is where I personally spend a lot of my time at the seed and A and a little bit beyond that stage often that's going to be what you have to go on because the future is going to be so complex that you can't see the curves beyond it. But if you have customers with their hair on fire and talented founders who have the capability to serve those customers, that's got me interested. >> So if I'm an entrepreneur, I walk in and say, "I have customers that have their hair on fire." What kind of checks do you write? What's the kind of the average you're seeing for seed and series? Probably seed, seed rounds and series As. >> It can depend. I have seen seed rounds of double digit million dollars. I have seen seed rounds much smaller than that. It really depends on what is going to be the right thing for these founders to prove out the hypothesis that they're testing that says, "Look, we have this customer with her hair on fire. We think we can build at least a tennis racket that she can use to start beating herself over the head and put the fire out. And then we're going to have something really interesting that we can scale up from there and we can make the future happen. >> So it sounds like your advice to founders is go out and find some customers, show them a product, don't obsess over full completion, get some sort of vibe on fit and go from there. >> Yeah, and I think by the time founders come to me they may not have a product, they may not have a deck, but if they have a customer with her hair on fire, then I'm really interested. >> Well, I always love the professional services angle on these markets. You go in and you get some business and you understand it. Walk away if you don't like it, but you see the hair on fire, then you go in product mode. >> That's right. >> All Right, Jon, thank you for coming on theCUBE. Really appreciate you stopping by the studio and good luck on your investments. Great to see you. >> You too. >> Thanks for coming on. >> Thank you, Jon. >> CUBE coverage here at Palo Alto. I'm John Furrier, your host. More coverage with CUBE Conversations after this break. (upbeat music)
Yves Sandfort, Comdivision Group | CloudNativeSecurityCon 23
(rousing music) >> Hello everyone. Welcome back to "theCUBE's" day one coverage of Cloud Native Security Con 23. This is going to be an exciting panel. I've got three great guests. I'm Lisa Martin, you know our esteemed analysts, John Furrier, and Dave Vellante well. And we're excited to welcome to "theCUBE" for the first time, Yves Sandfort, the CEO of Comdivision Group, who's coming to us from Germany. As you know, Cloud Native Security Con is a global event. Everyone welcome Yves, great to have you in particular. Welcome to "theCUBE." >> Great to be here. >> Thank you for inviting me. >> Yves, tell us a little bit, before we dig into really wanting to understand your perspectives on the event and get Dave and John's feedback as well, tell us a little bit about you. >> So yeah, talking about me, or talking about Comdivision real quick. We are in the business for over 27 years already. We started as a SaaS company, then became more like an architecture and, and Cloud Native company over the last few years. But what's interesting is, and I think that's, that's, that's really interesting when we look at our industry. It hasn't really, the requirements haven't really changed over the years. It's still security. We still have to figure out how we deal with security. We still have to figure out how we deal with compliance and everything else. And I think therefore, it's more and more important that we take these items more seriously. Also, based on the fact that when we look at it, how development and other things happen nowadays, it's, it's, everybody says it's like open source. It's great because everybody can look into the code. We, I think the last few years have shown us enough example that that's not necessarily solving all the issues, but it's also code and development has changed rapidly when we look at the Cloud Native approach, where it's far more about gluing the pieces together, versus the development pieces. When I was actually doing software development 25 years ago, and had to basically build my code because I didn't have that much internet access for it. So it has evolved, but even back then we had to deal with security and everything. >> Right. The focus on security is, is incredibly important, and the focus keeps growing as you mentioned. This is, guys, and I want to get your perspectives on this. We're going to start with John. This is the first time Cloud Native Security Con is its own event being extracted from, and amplified from KubeCon. John, I want to understand from your perspective, break down the event, what you see, what you've heard, and Cloud Native Security in general. What does this mean to companies? What does it mean to customers? Is this a reality? >> Well, I think that's the topic we want to discuss, and I think Yves background, you see the VMware certification, I love that. Because what VMware did with virtualization, was abstract that from server virtualization, kind of really changed the game on things, and you start to see Cloud Native kind of go that next level of how companies will be operating their business, not just digital transformation, as digital transformation goes to completion, it's total business transformation where IT is everywhere. And so you're starting to see the trends where, "Okay, that's happening." Now you're starting to see, that's Cloud Native Con, or KubeCon, AWS re:Invent, or whatever show, or whatever way you want to look at it. 
But in, in the past decade, past five years, security has always been front and center as almost a separate thing, and, in and of itself, but the same thing. So you're starting to see the breakout of security conversations around how to make things work. So a lot of operational conversations around what used to be DevOps makes infrastructure as code, and that was great, that fueled that. Then DevSecOps came. So the Cloud Native next level, is more application development at scale, developers driving the standards with developer first thinking, shifting left, I get all that. But down in the lower ends of the stack, you got real operational issues. DNS we've heard in the keynote, we heard about the kernel, the Linux kernel. Things that need to be managed and taken care of at a security level. These are like, seem like in the weeds, but you're starting to see that happen. And the other thing that I think's real about Cloud Native Security Con that's going to be interesting to watch, is Amazon has pretty much canceled all their re:Invent like shows except for two; re:Invent, which is their annual conference, and re:Inforce, which is dedicated to security. So Cloud Native, Linux, the Linux Foundation is now breaking out Cloud Native Con and KubeCon, and now Cloud Native Security Con. They can't call it KubeCon because it's not Kubernetes, but it's like security focused. I think this is the beginning of starting to see this new developer driving, developers driving the standards, and it has its implications, what used to be called IT ops, and that's like the VMwares of the world. You saw all the stuff that was not at developer focus, but more ops, becoming much more in the application. So I think, I think it's real. The question is where does it go? How fast does it develop? So to me, I think it's a real trend, and it's worthy of a breakout, but it's not yet clear of where the landing zone is for people to start doing it, how they get started, what are the best practices. Machine learning's going to be a big part of this. So to me it's totally cool, but I'm not yet seeing the beachhead. So that's kind of my take. >> Dave, our inventor and host of Breaking Analysis, what's your take? >> So when you, I think when you zoom out, there's some, there's a big macro change that's been going on. I think when you look back, let's say 10, 12 years ago, the, the need for speed far trumped the, the, the security aspect, the governance, the data privacy. It was like, "Yeah, the risks, they're not that great compared to our opportunity." That has completely changed because the risks are now so much higher. And so what's happening, I think there's a, there's a major effort amongst CIOs and CISOs to try to make security not a blocker because it used to be, it still is. "Okay, I got this great initiative." Eh, give it to the SecOps pros, and let them take it for a while before we can go to market. And so a huge challenge now is to simplify, automate, AI comes in, the whole supply chain security, so the, so the companies can not be facing so much friction. And that is non-trivial. I don't think we're anywhere close there, but I think the goal is by, within the next several years, we're going to be in a position, that security, we heard today, is, wasn't designed in to the initial internet protocols. It was bolted on.
And so increasingly, the fundamental architecture of the internet, the Cloud, et cetera, is, is seeing designed in security, and, and that is an imperative, or else business is going to come to a grinding halt. >> Right. It's no longer, the bolt no longer works. Yves, what's your perspective on Cloud Native Security, where it stands today? What's in it for customers, whether we're talking about banks, or hospitals, or retailers, what do you think? >> I think when we, when we look at security in the, in the modern world, is we need to as, as Dave mentioned, we need to rethink how we apply it. Very often, security in the past has been always bolted on in the end. If we continue to do that, it'll become more and more difficult, because as companies evolve, and as companies want to bring products and software to market in a much faster and faster way, it's getting more and more difficult if we bolt on the security process at the end. It's like, developers build something and then someone checks security. That's not going to work any longer. Especially if we also consider now the changes in the industry. We had Stack Overflow over the last 10 years. If I would've had Stack Overflow 15, 20, what, 25 years ago when I was a developer, it would've changed a hell lot. Looking at it now, and looking at it what we had in the last few weeks, it's like where nearly all of my team members say is like finally I don't need any script kiddies anymore because I can go to (indistinct) who writes the code for me. Which is on one end great, because it enables us to solve certain problems in a much higher pace. But the challenge with that is, if the people who just copy and paste that code, don't understand the implications of that code, we have a much higher risk continuously. And what people thought was, is challenging with Stack Overflow. Imagine that something in one of these AI engines, is actually going ballistic, and it creates holes in nearly every one of these applications. And trust me, there will be enough developers who are going to use these tools to develop code, the same as students in university are going to take this to write their essays and everything else. And so it's really important that every developer team basically has a security person within their team, and not a security at the end. So we build something, we check it, go through QA, and then it goes to security. Security needs to be at the forefront. And I think that's where we see Cloud Native Security Con, where we see AWS. I saw it during re:Invent already where they said is like, we have re:Inforce next year. I think this becomes more and more of a topic, and I think companies, as much as it is become a norm that you have a firewall and everything else, it needs to become a norm that when you are doing software development, every development team needs to have a security person on it that needs to be trained. >> I love that chat comment Dave, 'cause you and I were talking about this. And I think that is going to be the issue. Do we need security chat for the chat bot? And there's like a, like a recursive model there. The biases are built in. I think, and I think our interview with the Palo Alto Networks co-founder, Dave, when he talked about zero trust as a structured way to start things, but he was referencing that with Cloud, there's a chance to rethink or do a do-over in security. So, I think this is kind of to me, where this is all going.
And I think you asked Pat Gelsinger, what year, 2013, 2014, is security a do-over? I think we're in that do-over time. >> He said yes. >> He said yes. (laughing) He was right. But yeah, eight years later... But this is, how do you, zero trust gives you some structure, but how do you organize and redo security? Because to me, I think that's what's happening here. >> And John, you heard Zuk at Palo Alto Networks said, "Yeah, the words security and architecture, they don't go together historically." And so it is a total, total retake. >> Well is that because there's too many tools out there and- >> Yeah. For sure. >> Yeah, well, first of all, a lot of hardware. And then yeah, a lot of tools. You even see IIoT and Industry 4.0, you see IoT security coming up as another stovepipe, and that's not the right approach. And, and so- >> Well let me, let me ask you a question Dave, and Yves, if you don't mind. 'Cause I was just riffing on this yesterday. In the ML space, you're seeing the ML models, you're seeing proprietary models versus open source. Is security going to go down this path of proprietary security methods versus open source? Because that's interesting, because the CNCF is run by the Linux Foundation. So you can almost see a model where there's more proprietary security methods than open source. Or is that a non-issue? >> Let me, if I jump in here first, I think the last, especially the last five or 10 years have clearly shown the whole picture, and I invested early on, in the late '90s, in several open source startups in the Bay Area. So, I'm well behind the whole open source idea, and, and mid (indistinct) and others back then several times. But the point is, I think what we have seen is open source is not, in general, more secure or less secure, because code is too complex nowadays. You have millions of lines of code, and it's not that either one way or the other is going to solve it. The way I think we are going to look at it is more what's the route to market, because only because something is open source doesn't necessarily mean it's going to be available for everyone. And the same for proprietary source from that perspective, even though everybody mixes licensing and payments and all that all the time, but it doesn't necessarily have anything to do with it. But I think as we are going through it, and when we also look at the industry, the security industry over the last 10-plus years has been primarily hardware focused. And a lot of these vendors have done a good business out of selling hardware boxes, putting software on top of it. Whereas in reality, those were still x86 standard boxes in the end. So it was not that we had specific security ASICs or anything like that in there anymore. And so overall, the question is, the market is going to change. And as we are looking into Cloud Native, think about someone like an AWS, do you really envision them having a hardware box from every supplier in their data center, and that in every availability zone in every region? Same for Microsoft, same for Google, etc.? So we need to have new ways on how we can apply security. And that applies both on the backend services, but also on the front end side. >> And if I could chime in, I think the answer is no and yes. And what I mean by that is if you take antivirus and known malware, pretty much anybody today can solve that problem; it's the unknown malware.
So I think the yes part of the answer is yes, it's going to be proprietary, but in the sense that we're going to use open source tooling, and then apply that in a proprietary way with specific algorithms and unique architectures that are going to solve problems. For example, XDR with unknown malware. So, and that's the hard part. As somebody said, I think this morning at the keynote, it's all the stuff that the SecOps team couldn't find. That's the really hard part. >> (laughs) Well the question will be, is the new IP the ability to feed ChatGPT some magically spelled insertion query string that does the job? That's unique, that might be the new IP, that's the question to ask. >> Well, that's what the hackers are going to do. And they're on offense. (John laughs) And the offense knows what play is coming. So, they're going to start. >> So guys, let's take this conversation up a level. I want to get your perspectives on what's in this for me as a customer? We know security is a board level conversation. We talk about this all the time. We also know, based on, I think David, the conversations that you and I had with Palo Alto Networks at Ignite in December, there's a lack of alignment between the executives and the board from a security perspective. When we talk about Cloud Native Security, we all talked about the value in that, what's in it for customers? I want to get your perspectives on should this be a board level conversation, and if so, how do you advise organizations, whether it is a hospital, or a bank, or an organization that is really affected by things like ransomware? How should they be thinking about this from an organizational perspective? >> Well, I'll start first, because we had this conversation during our Supercloud event last month, and this comes up a lot. And this is the CEO, board level. Yes, it is a board level conversation for security, as is application development, in terms of transforming their business to be competitive, not to be on the wrong side of history with this wave coming. So I think that's more of a management issue. But the issue is, they tell their people, "Go do it." And they're like, 'cause they get sold on the idea of, "Hey, won't you transform your business, and everything's going to be data driven, and machine learning's going to power your apps, get new customers, be profitable." "Oh, sign me up for that." When you have to implement this, it's really hard. And I think the core issue is, where are companies in their life cycle of the ability to execute and architect this thing properly? As Dave said, Nir Zuk said, "You can't have architecture and security, you need platforms." So, I think the re-platforming and the re-factoring of business is a big factor, and that's got to get down into the organizational shifts and the people to do it. So are there skills? Do I do a managed service? How do I architect it? Are there more services? Are there developers doing applications that are going to be more agile? So, this is not an easy thing. And to move a business from IT operations that is proven, to be positioned for this enablement, is just really difficult. And it's expensive. And if you screw it up, you could be on the wrong side of things. So, to me, that's the big issue: you sell the dream and then you've got to implement it. And that's really difficult. >> Yves, give us your perspective on, based on John's comments, how do organizations shift so dramatically?
There's a cultural element there as well, but there's also organizations that have competitors in the rear view mirror, and there's no time to waste. What are your thoughts on that? >> I think that's exactly the point. It's like, as an organization, you need to take the decision between the time, the risk, and all the other elements we have in this game. Because you can try to achieve 100% security, but that's exactly the same as trying to protect gold or anything else 100%. It's most likely not going to be sensible from a risk perspective anyway. And that's the same from a corporate perspective. When you look at building new internet services, or IoT services, or any kind of new shopping experience or whatever else, you need to balance out the risks and the advantages out of it. And you also need to accept that you potentially make mistakes along the way, but then it's more important than ever that you are able to quickly fix any mistakes, and to adjust to anything that's happening in the market. Because as we are building all these new Cloud Native applications, and building up all these skill sets, one of the big scenarios is we are far more dependent on individual building blocks. These building blocks come out of open source communities, which have a much different way of working. When we look back in software development, back then we had application servers from Oracle, WebLogic, whatsoever; they had release cycles of every three to six months. Now we have to deal with open source, where sometimes release cycles are on a four week schedule, with security patches in between. So you need to be much faster in adopting that, checking that, implementing that, getting things to work. So there is a security stretch from that perspective. There is a speed stretch, the other thing companies have to deal with, and on the other side it's always a measurement between the risk and the security you can afford. Because the reality is, you will not be 100% protected no matter what you do. So, you need to balance out what you as an organization can actually build on. But I think, coming back also to the point, it's at the board level nowadays. It's like nearly every discussion we have with companies nowadays as they move into the Cloud, especially also here in Europe where for the last five years, it was always, "It's data privacy." Data privacy is no longer, I mean, yes, for certain people, it's still the point, but for many more people it's like, "How protected is my data?" "What do we do in case of a ransomware attack?" "What do we do in case of a denial of service?" All of these things become more vulnerable, where in the past you were discussing these things with a banking site, or, or like a stock exchange. They were, it's like, "What the hell is going to happen if we have a denial of service?" Now all of the sudden, this affects nearly everyone in their storefronts and everything else, because everything is depending on it. >> Yeah, I think you're right on. You think about how cultural change occurs, it's bottom up, top down, or middle out. And what's happened with security is the people in the security team cared about it, and everybody said, "Oh, it's their problem." And then it just did an end run to the board, kind of mid, early last decade. And then the board sort of pushed that down. And the line of business is realizing, "Holy cow. My business, my EBIT can be dramatically affected by this, so I care."
Now it's this whole house, cultural team sport. I know it's sort of a, a cliche, but it, it's true. Everybody actually is beginning to care about security because the risks are now so high, and it's going to affect not only the bottom line of the company, the bottom line of the business, their job, it's, it's, it's virtually everywhere. It's a huge cultural shift that we're seeing. >> And that's a big challenge for organizations in any industry. And Yves, you talked about ransomware service. Every industry across the globe is vulnerable to this. But how can, maybe John, we'll start with you. How can Cloud Native Security help organizations if they're able to embrace it, operationally, culturally, dial down some of the vulnerabilities that just seem to keep growing? >> Well, I mean that's the big question. The breaches are, are critical. The governances also could be a way that anchors down growth. So I think the balance between the governance compliance piece of it is key, but making the developers faster and more productive is the key to me. And I think having the security paradigm where they're not blockers, as Dave said, is critical. So I love the whole shift left, but now that we have more data focused initiatives around how that, you can use data to understand the security issues, I think data and security are together, and I think there's a going to be a data operating system model emerging, where data and security will be almost one thing. And that will be set up by the security teams, and the data teams together. And that will feed guardrails into the developer environment. So the developer should feel no pain at all in doing this. So I think the best practice will end up being what we're seeing with supply chain, security, with making sure code's verified. And you're going to see the container, security side completely address has been, and KubeCon, we just, I asked Scott Johnson, the CEO of Docker, and I asked him directly, "Are you guys all tight on container security?" He said, yes, but other people are suggesting that's not true. There's a lot of issues with the container security. So, there's all kinds of areas where there's holes. So Cloud Native is cool on one hand, and very relevant, but if it's not shored up, it's going to be a problem. But I, so I think that's where the action will be, at the developer pipeline, in the containers, and the data. So, that will be very relevant, and if companies nail that, they'll be faster, they'll have better apps, and that'll be the differentiator. And again, if they don't on this next wave, they're going to be driftwood. >> Dave, how do they prevent becoming driftwood? >> Well, I think Cloud has had a huge impact. And a Cloud's by no means a panacea, but let's face it, it's dramatically improved a lot of companies security posture. Now there's still that shared responsibility. Even though an S3 bucket is encrypted, it's still your responsibility to make sure that it doesn't get decrypted by somebody who has access to it. So there are things like that, but to Yve's earlier point, that can be, that's done through software now, it's done through best practices. Those best practices can be shared. So the way you, you don't become driftwood, is you start to, you step back, rethink that security architecture as we were talking about earlier, take advantage of the Cloud, take advantage of Cloud Native, and all the, the rapid pace of innovation that's occurring there, and you don't use, it's called before, The audit is the last line of defense. 
That's no longer a check box item. "Oh yeah, we're in compliance." It's, this is a business imperative, and because we're going to reduce our expected loss and reduce our business risk. That's part of the business case today. >> Yeah. >> It's a huge, critically important part of the business case. Yves, question for you. If you're in an elevator with a CEO, a CFO, and a CISO, and they're talking about security and Cloud Native Security, what's your value proposition to them on a, on a say a 32nd elevator ride? >> Difficult story. I think at the moment, the most important part is, we need to get people to work together, and we need to train people to work more much better together. I think that's the overall most important part for all of these solutions, because in the end, security is always a person issue. If, we can have the best tools in the industry, as long as we don't get all of these teams to work together, then we have a problem. If the security team is always seen as the end of the solution to fix everything, that's not going to work because they always are the bad guys in the game. And so we need to bring the teams together. And once we have the teams work together, I think we have a far better track on, on maintaining security. >> John and Dave, I want to get your perspectives on what Yves just said. In all the experience that the two of you have as industry analysts here on "theCUBE," Wikibon, Siliconangle Media. How do you advise organizations to get those teams together? As Eve said, that alignment is critical, but John, we'll start with you, then Dave go to you. What's your advice for organizations that need to align those teams and really don't have a lot of time to wait to do it? >> (chuckling) That's a great question. I think, I think that's everyone pays hundreds of thousands of millions of dollars to get that advice from these consultants, organizations out there doing the transformations. But I think it comes down to personnel and commitment. I think if there's a C-level commitment to the effort, you'll see the institutional structure change. So you can see really getting behind it with their, with their wallet and their, and their support of either getting more personnel to support and assist, or manage services, or giving the power to the teams to execute and doing it in a way that, that's, that's well known and best practices. Start small, build out the pilots, build the platform, and then start getting it right. And I think that's the key. Not the magic wand, the old model of rolling out stuff in, in six month cycles. It's really, get the proof points, double down and change the culture, but also execute and have real metrics. And changing the architecture, like having more penetration tests as a service. Doing pen tests is like a joke now. So that doesn't make any sense. You got to have that built in almost every day, and every minute. So, these kinds of new techniques have to be implemented and have to be tried. So that's why these communities are growing. That's why I like what open source has been doing, and I like the open source as the place to have these conversations, because that's where the action will be for new stuff. And I think people will implement open source like they did before, but with different ways, better testing, better supply chain on the software side, verifying code. So, I see open source actually getting a tailwind from this, not a headwind. 
So, I'm bullish on the open source piece here on, on all levels, machine learning- >> Lisa, my answer is intramural sports. And it's 'cause I think it's cultural. And what I mean by that, is you take your your best and brightest security, and this is what frankly, a lot of CISOs do, an examples is Lena Smart, MongoDB. Take your best and brightest security pros, make them captains of the intramural teams, and pair them up with pods of individuals across the organization, which is most people who don't know anything about security, and put them together, so that they can, they, so that the folks that understand security can, can realize how little people know, what, what, what, how, what the worst practices that are out there in the reverse, how they can cross pollinate. And they do that on a regular basis, I know at Mongo and other companies. And that kind of cultural assimilation is a starting point for how you get security awareness up to your question around making it a team sport. >> Absolutely critical. Yves, I want to kind of wrap things with you. We've got a couple of minutes left. When you're really looking at the Cloud Native community, the growth of it, we talked about earlier in the program, Cloud Native Security Con being now extracted and elevated out of KubeCon, what are your thoughts on the groundswell that this community is generating around Cloud Native Security, the benefits that organizations will achieve from it? >> I think overall, when we have these securities conferences, or these security arms a bit spread out and separated out of the main conference, it helps to a certain degree, because especially in the security space, when you look at at other like black hat or white hat conferences and things like that in the past, although they were not focused on Cloud Native, a lot of these security folks didn't feel well taken care of in any of the other conferences because they were always these, it's like they are always blocking us, they're always making us problems, and all these kinds of things. Now that we really take the Cloud Native piece and the security piece together, or like AWS does it with re:Inforce, I think we will see more and more that people understand is that security is a permanent topic we need to cover, but we need to bring different people together, because security also has compliance and a lot of other components in there. So we will see at these conferences moving forward, also a different audience. It's not going to be only the Cloud Native developers. And if I see some of these security audiences, I can't really imagine them to really be at KubeCon because there is too much other things going on. And you couldn't really see much of that at re:Invent because re:Invent by itself has become a complete monster of a conference. It covers too many topics. And so having this very, very important security piece separated, also gives the opportunity, I think, that we can bring in the security people, but also have the type of board level discussions potentially, between the leaders of the industry, to also discuss on how we can evolve, how we can make things better, and how, how we can actually, yeah, evolve our industry for it. Because let's face it, that threat is not going to go away. It's, it's a business. 
And one of the last security conferences I was on, on the ransomware part, it was one of the topics someone said is like, "Look, currently on average, it takes a hacker group roughly around they said 15 to 20 K to break into a company, and they on average make 100K. It's a business, let's face it. And it's a business we don't like. And ethically, it's no discussion that this is not good, but that's something which is happening. People are making money with it. And as long as that's going to go on, and we have enough countries where these people can hide, it's going to stay and survive. And so, with that being said, it's important for us to really build an industry around this. But I also think it's good that we have separate conferences. In the past we had more the RSA conference, which tried to cover all of these areas. But that is not really fitting Cloud Native and everything else. So I think it's good that we have these new opportunities, the Cloud Native one, but also what AWS brings up for someone. >> Yves, you just nailed it. It just comes down to simple math. It's a fraction. Revenue over cost. And if you could increase the hacker's cost, increase the denominator, their ROI will go down. And that is the game. >> Great point, Dave. What I'm hearing guys, and we can talk about technology for days and days. I know all of you. But there's, there's a big component that, that the elevation of Cloud Native Security, on its own as standalone is critical, as is the people component. You guys all talked about that. We talked about the cultural change necessary for that. Hopefully what we're seeing with Cloud Native Security Con 23, this first event is going to give us more insight over the next couple of days, and the next months or so, as to how this elevation, and how the people can come together to really help organizations from a math perspective as, as Dave talked about, really dial down the risks there, understand more of the vulnerabilities so that ransomware as a service is not as lucrative as it is today. Guys, so much appreciate your time, really breaking down Cloud Native Security, the value in it from different perspectives, and what your thoughts are on where it's going. Thanks so much for your time. >> All right. Thanks. >> Thanks, Lisa. >> Thank you. >> Thanks, Yves. >> All right. For my guests, I'm Lisa Martin. You're watching theCUBE's day one coverage of Cloud Native Security Con 23. Thanks for watching. (rousing music)
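To make the attacker economics Yves and Dave describe concrete, here is a minimal back-of-the-envelope sketch in Python. The dollar figures are just the rough numbers quoted in the conversation, not measured data, and the point is simply Dave's fraction: raise the attacker's cost (the denominator) and the return collapses.

```python
# Back-of-the-envelope attacker economics, using the rough figures quoted above.
# All numbers are illustrative assumptions, not measured data.

def attacker_roi(cost_of_breach: float, average_payout: float) -> float:
    """Return the attacker's return on investment as payout / cost."""
    return average_payout / cost_of_breach

# Yves' rough numbers: ~$15-20K to break into a company, ~$100K average payout.
baseline = attacker_roi(cost_of_breach=20_000, average_payout=100_000)
print(f"Baseline ROI: {baseline:.1f}x")  # ~5x, which is why it persists as a business

# Dave's point: increase the denominator (the attacker's cost) and ROI drops.
for cost in (20_000, 50_000, 100_000, 250_000):
    print(f"cost=${cost:>7,} -> ROI {attacker_roi(cost, 100_000):.2f}x")
```

The sketch is deliberately trivial; the real work is everything discussed above that actually drives the attacker's cost up.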
Closing Remarks | Supercloud2
>> Welcome back everyone to the closing remarks here before we kick off our ecosystem portion of the program. We're live in Palo Alto for theCUBE's special presentation of Supercloud 2. It's the second edition, the first one was in August. I'm John Furrier with Dave Vellante. Here to wrap up with our special guest analyst George Gilbert, investor and industry legend, former colleague of ours, analyst at Wikibon. George, great to see you. Dave, you know, wrapping up this day, what a phenomenal program. We had contributions from industry vendors, industry experts, practitioners and customers building and redefining their company's business model. Rolling out technology for Supercloud and multicloud and ultimately changing how they do data. And data was the theme today. So very, very great program. Before we jump into our favorite parts let's give a shout out to the folks who make this possible. Free content is our mission. We'll always stay true to that mission. We want to thank VMware, Alkira, ChaosSearch, Prosimo for being sponsors of this great program. We will have Supercloud 3 coming up in a month or so, or two months. We'll see. Or sooner, we don't know. But it'll be more about security, but a lot more momentum. Okay, so that's... >> And don't forget too that this program is not going to end now. We've got a whole ecosystem speaks track so stay tuned for that. >> John: Yeah, we got another 20 interviews. Feels like it. >> Well, you're going to hear from Saks, Veronika Durgin. You're going to hear from Western Union, Harveer Singh. You're going to hear from Ionis Pharmaceuticals, Nick Taylor. Brian Gracely chimes in on Supercloud. So he's the man behind the Cloudcast. >> Yeah, and you know, the practitioners again, pay attention also to the cloud networking interviews. Lot of change going on there that's going to be disruptive and actually change the landscape as well. Again, as Supercloud progresses to be the next big thing. If you're not on this next wave, you'll be driftwood, as Pat Gelsinger says. >> Yep. >> To kick off the closing segments, George, Dave, this is a wave that's been identified. Again, people debate the word all you want, Supercloud. It is a gateway to multicloud; eventually it is the standard for new applications, new ways to do data. There's new computer science being generated and customer requirements being addressed. So it's the confluence of, you know, tectonic plates shifting in the industry, new computer science seeing things like AI and machine learning and data at the center of it and new infrastructure all kind of coming together. So, to me, that's my takeaway so far. That is the big story and it's going to change society and ultimately the business models of these companies. >> Well, we've had 10, you know, you think about it, we came out of the financial crisis. We've had 10, 12 years, despite Covid, of tech success, right? And just now CIOs are starting to hit the brakes. And so my point is you've had all this innovation building up for a decade and you've got this massive ecosystem that is running on the cloud and the ecosystem is saying, hey, we can have even more value by tapping best of breed across clouds. And you've got customers saying, hey, we need help. We want to do more and we want to point our business and our intellectual property, our software tooling at our customers and monetize our data. So you have all these forces coming together and it's sort of entering a new era.
George, I want to go to you for a second because you were a big contributor to this event. Your interview with Bob Muglia and Dave was, I thought, a watershed moment for me to hear how data apps and databases are being rethought, because we've been seeing a diversity of databases, with Amazon Web Services, you know, promoting that no one database rules the world. Now it's not one database kind of architecture that's fueling these new apps. What's your takeaway from this event? >> So if you keep your eye on this North Star where instead of building apps that are based on code you're building apps that are defined by data coming off of things that are linked to the real world like people, places, things and activities. Then the idea is, and the example we use is, you know, Uber, but it could be, you know, amazon.com is defined by stuff coming off data in the Amazon ecosystem or marketplace. And then the question is, and everyone was talking at different angles on this, which was, where does the data live? How much do you hide from the developer? You know, and when can you offer that? You know, and you started with Walmart which was describing apps, traditional apps that are just code. And frankly it's easier to make that cross cloud and, you know, essentially location independent. As soon as you have data you need data management technology that a customer does not have the sophistication to build. And then the argument was like, so how much can you hide from the developer who's building data apps? Tristan's version was you take the modern data stack and you start adding these APIs that define business concepts like bookings, billings and revenue, you know, or in the Uber example like drivers and riders, you know, and ETAs and prices. But those things still execute on the data warehouse or data lakehouse. Then Bob Muglia was saying you're not really hiding enough from the developer because you still have to say how to do all that. And his vision is not only do you hide where the data is but you hide how to get at all that code by just saying what you want. You define how a car and how a driver and how a rider works. And then those things automatically figure it out underneath the covers. >> So huge challenges, right? There's governance, there's security, they could be big blockers to, you know, the Supercloud, but the industry's going to be attacking that problem. >> Well, what's your take? What's your favorite segment? Zhamak Dehghani came on, she's starting a new company, exclusive news. That was a big notable moment for theCUBE. She launched her company. She pioneered the data mesh concept. And I think what George is saying and what data mesh points to is something that we've been saying for a long time. That data is now going to flip the script on how apps behave. And the Uber example I think is illustrative 'cause people can relate to Uber. But imagine that for every business, whether it's a manufacturing business or retail or oil and gas or FinTech, they can look at their business like a game, almost gamify it with data, riders, cars, you know, moving data around, the value of data. This is something that Adam Selipsky teased out at AWS, Dave. So what's your takeaway from this Supercloud? Where are we in your mind? >> Well, the big thing is data products and decentralizing your data architecture, but putting data in the hands of domain experts who can actually monetize the data. And I think that's, to me that's really exciting.
Because look, data products financial industry has always been doing building data products. Mortgage backed securities is a data product. But why should the financial industry have all the fun? I mean virtually every organization can tap its ecosystem build data products, take its internal IP and processes and software and point it to the world and actually begin to make money out of it. >> Okay, so let's go around the horn. I'll start, I'll get you guys some time to think. Next question, what did you learn today? I learned that I think it's an infrastructure game and talking to Kit Colbert at VMware, I think it's all about infrastructure refactoring and I think the data's going to be an ingredient that's going to be operating system like. I think you're going to see the infrastructure influencing operations that will enable Superclouds to be real. And developers won't even know what a Supercloud is because they'll be using it. It's the operations focus is going to be very critical. Just like DevOps movements started Cloud native I think you're going to see a data native movement and I think infrastructure is critical as people go to the next level. That's my big takeaway today. And I'll say the data conversation is at the center. I think security, data are going to be always active horizontally scalable concepts, but every company's going to reset their infrastructure, how it looks and if it's not set up for data and or things that there need to be agile on, it's going to be a non-starter. So I think that's the cloud NextGen, distributed computing. >> I mean, what came into focus for me was I think the hyperscaler is going to continue to do their thing, you know, and be very, very successful and they're each coming at it from different approaches. We talk about this all the time in theCUBE. Amazon the best infrastructure, you know, Google's got its you know, data and AI thing and it's playing catch up and Microsoft's got this massive estate. Okay, cool. Check. The next wave of innovation which is coming from data, I've always said follow the data. That's where the where the money's going to be is going to come from other places. People want to be able to, organizations want to be able to share data across clouds across their organization, outside of their ecosystem and make money with that data sharing. They don't want to FTP it anymore. I got it. You take it. They want to work with live data in real time and I think the edge, we didn't talk much about the edge today is going to even take that to a new level real time inferencing at the edge, AI and and being able to do new things with data that we haven't even seen. But playing around with ChatGPT, it's blowing our mind. And I think you're right, it's like when we first saw the browser, holy crap, this is going to change the world. >> Yeah. And the ChatGPT by the way is going to create a wave of machine learning and data refactoring for sure. But also Howie Liu had an interesting comment, he was asked by a VC how much to replicate that and he said it's in the hundreds of millions, not billions. Now if you asked that same question how much does it cost to replicate AWS? The CapEx alone is unstoppable, they're already done. So, you know, the hyperscalers are going to continue to boom. I think they're going to drive the infrastructure. I think Amazon's going to be really strong at silicon and physics and squeeze every ounce atom out of every physical thing and then get latency as your bottleneck and the rest is all going to be... 
>> That never blew me away, a hundred million to create kind of an open AI, you know, competitor. Look at companies like Lacework. >> John: Some people have that much cash on the balance sheet. >> These are security companies that have raised a billion dollars, right? To compete. You know, so... >> If you're not shifting left what do you do with data, shift up? >> But, you know. >> What did you learn, George? >> I'm listening to you and I think you're helping me crystallize something which is the software infrastructure to enable the data apps is wide open. The way Zhamak described it is like if you want a data product like a sales and operation plan, that is built on other data products, like a sales plan which has a forecast in it, it has a production plan, it has a procurement plan and then a sales and operation plan is actually a composition of all those and they call each other. Now in her current platform, you need to expose to the developer a certain amount of mechanics on how to move all that data, when to move it. Like what happens if something fails. Now Muglia is saying I can hide that completely. So all you have to say is what you want and the underlying machinery takes care of everything. The problem is Muglia stuff is still a few years off. And Tristan is saying, I can give you much of that today but it's got to run in the data warehouse. So this trade offs all different ways. But again, I agree with you that the Cloud platform vendors or the ecosystem participants who can run across Cloud platforms and private infrastructure will be the next platform. And then the cloud platform is sort of where you run the big honking centralized stuff where someone else manages the operations. >> Sounds like middleware to me, Dave >> And key is, I'll just end with this. The key is being able to get to the data, whether it's in a data warehouse or a data lake or a S3 bucket or an object store, Oracle database, whatever. It's got to be inclusive that is critical to execute on the vision that you just talked about 'cause that data's in different systems and you're not going to put it all into some new system. >> So creating middleware in the cloud that sounds what it sounds like to me. >> It's like, you discovered PaaS >> It's a super PaaS. >> But it's platform services 'cause PaaS connotes like a tightly integrated platform. >> Well this is the real thing that's going on. We're going to see how this evolves. George, great to have you on, Dave. Thanks for the summary. I enjoyed this segment a lot today. This ends our stage performance live here in Palo Alto. As you know, we're live stage performance and syndicate out virtually. Our afternoon program's going to kick in now you're going to hear some great interviews. We got ChaosSearch. Defining the network Supercloud from prosimo. Future of Cloud Network, alkira. We got Saks, a retail company here, Veronika Durgin. We got Dave with Western Union. So a lot of customers, a pharmaceutical company Warner Brothers, Discovery, media company. And then you know, what is really needed for Supercloud, good panels. So stay with us for the afternoon program. That's part two of Supercloud 2. This is a wrap up for our stage live performance. I'm John Furrier with Dave Vellante and George Gilbert here wrapping up. Thanks for watching and enjoy the program. (bright music)
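George's point about data products that are declared in business terms and then executed on the warehouse can be made concrete with a small, hypothetical sketch. This is not Tristan's or Bob Muglia's actual tooling, just an illustration of the pattern: metrics like bookings, billings, and revenue are defined once as data, and a thin layer compiles them to SQL. Every table, column, and metric name below is invented for the example.

```python
# Hypothetical sketch of a tiny "semantic layer": business metrics are declared
# as data, and tooling compiles them to SQL that runs on the warehouse.
# Table, column, and metric names are made up for illustration only.

from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    expression: str    # aggregation over a warehouse column
    source_table: str

METRICS = {
    "bookings": Metric("bookings", "SUM(amount)", "fct_orders"),
    "billings": Metric("billings", "SUM(invoiced_amount)", "fct_invoices"),
    "revenue":  Metric("revenue",  "SUM(recognized_amount)", "fct_revenue"),
}

def compile_metric(name: str, group_by: str = "order_month") -> str:
    """Turn a declared metric into warehouse SQL; the caller says *what* they
    want and never writes the aggregation by hand, which is the point."""
    m = METRICS[name]
    return (
        f"SELECT {group_by}, {m.expression} AS {m.name}\n"
        f"FROM {m.source_table}\n"
        f"GROUP BY {group_by}\n"
        f"ORDER BY {group_by}"
    )

print(compile_metric("bookings"))
```

In the vision discussed above, even this compilation step disappears behind the platform; the sketch only shows why hiding the "how" behind declared business concepts is attractive.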
Analyst Predictions 2023: The Future of Data Management
(upbeat music) >> Hello, this is Dave Vellante with theCUBE, and one of the most gratifying aspects of my role as a host of "theCUBE TV" is I get to cover a wide range of topics. And quite often, we're able to bring to our program a level of expertise that allows us to more deeply explore and unpack some of the topics that we cover throughout the year. And one of our favorite topics, of course, is data. Now, in 2021, after being in isolation for the better part of two years, a group of industry analysts met up at AWS re:Invent and started a collaboration to look at the trends in data and predict what some likely outcomes will be for the coming year. And it resulted in a very popular session that we had last year focused on the future of data management. And I'm very excited and pleased to tell you that the 2023 edition of that predictions episode is back, and with me are five outstanding market analysts: Sanjeev Mohan of SanjMo, Tony Baer of dbInsight, Carl Olofson from IDC, Dave Menninger from Ventana Research, and Doug Henschen, VP and Principal Analyst at Constellation Research. Now, what is it that we're calling you, guys? A data pack like the Rat Pack? No, no, no, no, that's not it. It's the data crowd, the data crowd, and the crowd includes some of the best minds in the data analyst community. They'll discuss how data management is evolving and what listeners should prepare for in 2023. Guys, welcome back. Great to see you. >> Good to be here. >> Thank you. >> Thanks, Dave. (Tony and Dave faintly speak) >> All right, before we get into 2023 predictions, we thought it'd be good to do a look back at how we did in 2022 and give a transparent assessment of those predictions. So, let's get right into it. We're going to bring these up here, the predictions from 2022. They're color-coded red, yellow, and green to signify the degree of accuracy. And I'm pleased to report there's no red. Well, maybe some of you will want to debate that grading system. But as always, we want to be open, so you can decide for yourselves. So, we're going to ask each analyst to review their 2022 prediction and explain their rating and what evidence they have that led them to their conclusion. So, Sanjeev, please kick it off. Your prediction was data governance becomes key. I know that's going to knock you guys over, but elaborate, because you had more detail when you double-click on that. >> Yeah, absolutely. Thank you so much, Dave, for having us on the show today. And we self-graded ourselves. I could have very easily made my prediction from last year green, but I mentioned why I left it as yellow. I totally, fully believe that data governance was in a renaissance in 2022. And why do I say that? You have to look no further than AWS launching its own data catalog called DataZone. Before that, mid-year, we saw Unity Catalog from Databricks go GA. So, overall, I saw there was tremendous movement. When you see these big players launching a new data catalog, you know that they want to be in this space. And this space is highly critical to everything that I feel we will talk about in today's call. Also, if you look at established players, I spoke at Collibra's conference and at data.world's, and worked closely with Alation, Informatica, a bunch of other companies; they all added tremendous new capabilities. So, it did become key. The reason I left it as yellow is because I had made a prediction that Collibra would go IPO, and it did not. And I don't think anyone is going IPO right now.
The market is really, really down, the funding in VC IPO market. But other than that, data governance had a banner year in 2022. >> Yeah. Well, thank you for that. And of course, you saw data clean rooms being announced at AWS re:Invent, so more evidence. And I like how the fact that you included in your predictions some things that were binary, so you dinged yourself there. So, good job. Okay, Tony Baer, you're up next. Data mesh hits reality check. As you see here, you've given yourself a bright green thumbs up. (Tony laughing) Okay. Let's hear why you feel that was the case. What do you mean by reality check? >> Okay. Thanks, Dave, for having us back again. This is something I just wrote and just tried to get away from, and this just a topic just won't go away. I did speak with a number of folks, early adopters and non-adopters during the year. And I did find that basically that it pretty much validated what I was expecting, which was that there was a lot more, this has now become a front burner issue. And if I had any doubt in my mind, the evidence I would point to is what was originally intended to be a throwaway post on LinkedIn, which I just quickly scribbled down the night before leaving for re:Invent. I was packing at the time, and for some reason, I was doing Google search on data mesh. And I happened to have tripped across this ridiculous article, I will not say where, because it doesn't deserve any publicity, about the eight (Dave laughing) best data mesh software companies of 2022. (Tony laughing) One of my predictions was that you'd see data mesh washing. And I just quickly just hopped on that maybe three sentences and wrote it at about a couple minutes saying this is hogwash, essentially. (laughs) And that just reun... And then, I left for re:Invent. And the next night, when I got into my Vegas hotel room, I clicked on my computer. I saw a 15,000 hits on that post, which was the most hits of any single post I put all year. And the responses were wildly pro and con. So, it pretty much validates my expectation in that data mesh really did hit a lot more scrutiny over this past year. >> Yeah, thank you for that. I remember that article. I remember rolling my eyes when I saw it, and then I recently, (Tony laughing) I talked to Walmart and they actually invoked Martin Fowler and they said that they're working through their data mesh. So, it takes a really lot of thought, and it really, as we've talked about, is really as much an organizational construct. You're not buying data mesh >> Bingo. >> to your point. Okay. Thank you, Tony. Carl Olofson, here we go. You've graded yourself a yellow in the prediction of graph databases. Take off. Please elaborate. >> Yeah, sure. So, I realized in looking at the prediction that it seemed to imply that graph databases could be a major factor in the data world in 2022, which obviously didn't become the case. It was an error on my part in that I should have said it in the right context. It's really a three to five-year time period that graph databases will really become significant, because they still need accepted methodologies that can be applied in a business context as well as proper tools in order for people to be able to use them seriously. But I stand by the idea that it is taking off, because for one thing, Neo4j, which is the leading independent graph database provider, had a very good year. 
And also, we're seeing interesting developments in terms of things like AWS with Neptune and with Oracle providing graph support in Oracle database this past year. Those things are, as I said, growing gradually. There are other companies like TigerGraph and so forth, that deserve watching as well. But as far as becoming mainstream, it's going to be a few years before we get all the elements together to make that happen. Like any new technology, you have to create an environment in which ordinary people without a whole ton of technical training can actually apply the technology to solve business problems. >> Yeah, thank you for that. These specialized databases, graph databases, time series databases, you see them embedded into mainstream data platforms, but there's a place for these specialized databases, I would suspect we're going to see new types of databases emerge with all this cloud sprawl that we have and maybe to the edge. >> Well, part of it is that it's not as specialized as you might think it. You can apply graphs to great many workloads and use cases. It's just that people have yet to fully explore and discover what those are. >> Yeah. >> And so, it's going to be a process. (laughs) >> All right, Dave Menninger, streaming data permeates the landscape. You gave yourself a yellow. Why? >> Well, I couldn't think of a appropriate combination of yellow and green. Maybe I should have used chartreuse, (Dave laughing) but I was probably a little hard on myself making it yellow. This is another type of specialized data processing like Carl was talking about graph databases is a stream processing, and nearly every data platform offers streaming capabilities now. Often, it's based on Kafka. If you look at Confluent, their revenues have grown at more than 50%, continue to grow at more than 50% a year. They're expected to do more than half a billion dollars in revenue this year. But the thing that hasn't happened yet, and to be honest, they didn't necessarily expect it to happen in one year, is that streaming hasn't become the default way in which we deal with data. It's still a sidecar to data at rest. And I do expect that we'll continue to see streaming become more and more mainstream. I do expect perhaps in the five-year timeframe that we will first deal with data as streaming and then at rest, but the worlds are starting to merge. And we even see some vendors bringing products to market, such as K2View, Hazelcast, and RisingWave Labs. So, in addition to all those core data platform vendors adding these capabilities, there are new vendors approaching this market as well. >> I like the tough grading system, and it's not trivial. And when you talk to practitioners doing this stuff, there's still some complications in the data pipeline. And so, but I think, you're right, it probably was a yellow plus. Doug Henschen, data lakehouses will emerge as dominant. When you talk to people about lakehouses, practitioners, they all use that term. They certainly use the term data lake, but now, they're using lakehouse more and more. What's your thoughts on here? Why the green? What's your evidence there? >> Well, I think, I was accurate. I spoke about it specifically as something that vendors would be pursuing. And we saw yet more lakehouse advocacy in 2022. Google introduced its BigLake service alongside BigQuery. Salesforce introduced Genie, which is really a lakehouse architecture. 
And it was a safe prediction to say vendors are going to be pursuing this, in that AWS, Cloudera, Databricks, Microsoft, Oracle, SAP, Salesforce now, IBM, all advocate this idea of a single platform for all of your data. Now, the trend was also supported going into 2023, in that we saw a big embrace of Apache Iceberg in 2022. That's a structured table format. It's used with these lakehouse platforms. It's open, so it ensures portability, and it also ensures performance. And that's a structured table that helps with the warehouse-side performance. But among those announcements, Snowflake, Google, Cloudera, SAP, Salesforce, IBM, all embraced Iceberg. But keep in mind, again, I'm talking about this as something that vendors are pursuing as their approach. So, they're advocating it to end users. It's very cutting edge. I'd say the top, leading edge, 5% of companies have really embraced the lakehouse. I think, we're now seeing the fast followers, the next 20 to 25% of firms embracing this idea and embracing a lakehouse architecture. I recall Christian Kleinerman at the big Snowflake event last summer, making the announcement about Iceberg, and he asked for a show of hands: for any of you in the audience at the keynote, have you heard of Iceberg? And just a smattering of hands went up. So, the vendors are ahead of the curve. They're pushing this trend, and we're now seeing a little bit more mainstream uptake. >> Good. Doug, I was there. It was you, me, and I think, two other hands were up. That was just humorous. (Doug laughing) All right, well, so I liked the fact that we had some yellow and some green. When you think about these things, there's the prediction itself. Did it come true or not? There are the sub-predictions that you guys make, and of course, the degree of difficulty. So, thank you for that open assessment. All right, let's get into the 2023 predictions. Let's bring up the predictions. Sanjeev, you're going first. You've got a prediction around unified metadata. What's the prediction, please? >> So, my prediction is that the metadata space is currently a mess. It needs to get unified. There are too many use cases of metadata, which are being addressed by disparate systems. For example, data quality has become really big in the last couple of years, data observability, the whole catalog space; actually, people don't like to use the word data catalog anymore, because data catalog sounds like it's a catalog, a museum, if you will, of metadata that you go and admire. So, what I'm saying is that in 2023, we will see that metadata will become the driving force behind things like data ops, things like orchestration of tasks using metadata, not rules. Not saying that if this fails, then do this, if this succeeds, go do that. But it's like getting to the metadata level, and then making a decision as to what to orchestrate, what to automate, how to do data quality checks, data observability. So, this space is starting to gel, and I see there'll be more maturation in the metadata space. Even security, privacy, some of these topics, which are handled separately. And I'm just talking about data security and data privacy. I'm not talking about infrastructure security. These also need to merge into a unified metadata management piece with some knowledge graph, semantic layer on top, so you can do analytics on it. So, it's no longer something that sits on the side, limited in its scope. It is actually the very engine, the very glue that is going to connect data producers and consumers. >> Great.
Thank you for that. Doug, Doug Henschen, any thoughts on what Sanjeev just said? Do you agree? Do you disagree? >> Well, I agree with many aspects of what he says. I think, there's a huge opportunity for consolidation and streamlining of these aspects of governance. Last year, Sanjeev, you said something like, we'll see more people using catalogs than BI. And I have to disagree. I don't think this is a category that's headed for mainstream adoption. It's a behind-the-scenes activity for the wonky few, or better yet, companies want machine learning and automation to take care of these messy details. We've seen these waves of management technologies, some of the latest data observability, customer data platform, but they failed to sweep away all the earlier investments in data quality and master data management. So, yes, I hope the latest tech offers glimmers that there's going to be a better, cleaner way of addressing these things. But to my mind, the business leaders, including the CIO, only want to spend as much time and effort and money and resources on these sorts of things to avoid getting breached, ending up in headlines, getting fired or going to jail. So, vendors, bring on the ML and AI smarts and the automation of these sorts of activities. >> So, if I may say something, the reason why we have this dichotomy between data catalogs and the BI vendors is because data catalogs are very soon not going to be standalone products, in my opinion. They're going to get embedded. So, when you use a BI tool, you'll actually use the catalog to find out what is it that you want to do, whether you are looking for data or you're looking for an existing dashboard. So, the catalog becomes embedded into the BI tool. >> Hey, Dave Menninger, sometimes you have some data in your back pocket. Do you have any stats (chuckles) on this topic? >> No, I'm glad you asked, because I'm going to... Now, data catalogs are something that's interesting. Sanjeev made a statement that data catalogs are falling out of favor. I don't care what you call them. They're valuable to organizations. Our research shows that organizations that have adequate data catalog technologies are three times more likely to express satisfaction with their analytics for just the reasons that Sanjeev was talking about. You can find what you want, you know you're getting the right information, you know whether or not it's trusted. So, those are good things. So, we expect to see the capabilities, whether it's embedded or separate. We expect to see those capabilities continue to permeate the market. >> And a lot of those catalogs are driven now by machine learning and things. So, they're learning from those patterns of usage by people when people use the data. (airy laughs) >> All right. Okay. Thank you, guys. All right. Let's move on to the next one. Tony Baer, let's bring up the predictions. You got something in here about the modern data stack. We need to rethink it. Is the modern data stack getting long in the tooth? Is it not so modern anymore? >> I think, in a way, it's got almost too modern. It's gotten too, I don't know if it's being long in the tooth, but it is getting long. The modern data stack, it's traditionally been defined as basically you have the data platform, which would be the operational database and the data warehouse. 
And in between, you have all the tools that are necessary to essentially get that data from the operational realm or the streaming realm for that matter into basically the data warehouse, or as we might be seeing more and more, the data lakehouse. And I think, what's important here is that, or I think, we have seen a lot of progress, and this would be in the cloud, is with the SaaS services. And especially you see that in the modern data stack, which is like all these players, not just the MongoDBs or the Oracles or the Amazons have their database platforms. You see they have the Informatica's, and all the other players there in Fivetrans have their own SaaS services. And within those SaaS services, you get a certain degree of simplicity, which is it takes all the housekeeping off the shoulders of the customers. That's a good thing. The problem is that what we're getting to unfortunately is what I would call lots of islands of simplicity, which means that it leads it (Dave laughing) to the customer to have to integrate or put all that stuff together. It's a complex tool chain. And so, what we really need to think about here, we have too many pieces. And going back to the discussion of catalogs, it's like we have so many catalogs out there, which one do we use? 'Cause chances are of most organizations do not rely on a single catalog at this point. What I'm calling on all the data providers or all the SaaS service providers, is to literally get it together and essentially make this modern data stack less of a stack, make it more of a blending of an end-to-end solution. And that can come in a number of different ways. Part of it is that we're data platform providers have been adding services that are adjacent. And there's some very good examples of this. We've seen progress over the past year or so. For instance, MongoDB integrating search. It's a very common, I guess, sort of tool that basically, that the applications that are developed on MongoDB use, so MongoDB then built it into the database rather than requiring an extra elastic search or open search stack. Amazon just... AWS just did the zero-ETL, which is a first step towards simplifying the process from going from Aurora to Redshift. You've seen same thing with Google, BigQuery integrating basically streaming pipelines. And you're seeing also a lot of movement in database machine learning. So, there's some good moves in this direction. I expect to see more than this year. Part of it's from basically the SaaS platform is adding some functionality. But I also see more importantly, because you're never going to get... This is like asking your data team and your developers, herding cats to standardizing the same tool. In most organizations, that is not going to happen. So, take a look at the most popular combinations of tools and start to come up with some pre-built integrations and pre-built orchestrations, and offer some promotional pricing, maybe not quite two for, but in other words, get two products for the price of two services or for the price of one and a half. I see a lot of potential for this. And it's to me, if the class was to simplify things, this is the next logical step and I expect to see more of this here. >> Yeah, and you see in Oracle, MySQL heat wave, yet another example of eliminating that ETL. Carl Olofson, today, if you think about the data stack and the application stack, they're largely separate. Do you have any thoughts on how that's going to play out? Does that play into this prediction? What do you think? 
>> Well, I think, that the... I really like Tony's phrase, islands of simplification. It really says (Tony chuckles) what's going on here, which is that all these different vendors you ask about, about how these stacks work. All these different vendors have their own stack vision. And you can... One application group is going to use one, and another application group is going to use another. And some people will say, let's go to, like you go to a Informatica conference and they say, we should be the center of your universe, but you can't connect everything in your universe to Informatica, so you need to use other things. So, the challenge is how do we make those things work together? As Tony has said, and I totally agree, we're never going to get to the point where people standardize on one organizing system. So, the alternative is to have metadata that can be shared amongst those systems and protocols that allow those systems to coordinate their operations. This is standard stuff. It's not easy. But the motive for the vendors is that they can become more active critical players in the enterprise. And of course, the motive for the customer is that things will run better and more completely. So, I've been looking at this in terms of two kinds of metadata. One is the meaning metadata, which says what data can be put together. The other is the operational metadata, which says basically where did it come from? Who created it? What's its current state? What's the security level? Et cetera, et cetera, et cetera. The good news is the operational stuff can actually be done automatically, whereas the meaning stuff requires some human intervention. And as we've already heard from, was it Doug, I think, people are disinclined to put a lot of definition into meaning metadata. So, that may be the harder one, but coordination is key. This problem has been with us forever, but with the addition of new data sources, with streaming data with data in different formats, the whole thing has, it's been like what a customer of mine used to say, "I understand your product can make my system run faster, but right now I just feel I'm putting my problems on roller skates. (chuckles) I don't need that to accelerate what's already not working." >> Excellent. Okay, Carl, let's stay with you. I remember in the early days of the big data movement, Hadoop movement, NoSQL was the big thing. And I remember Amr Awadallah said to us in theCUBE that SQL is the killer app for big data. So, your prediction here, if we bring that up is SQL is back. Please elaborate. >> Yeah. So, of course, some people would say, well, it never left. Actually, that's probably closer to true, but in the perception of the marketplace, there's been all this noise about alternative ways of storing, retrieving data, whether it's in key value stores or document databases and so forth. We're getting a lot of messaging that for a while had persuaded people that, oh, we're not going to do analytics in SQL anymore. We're going to use Spark for everything, except that only a handful of people know how to use Spark. Oh, well, that's a problem. Well, how about, and for ordinary conventional business analytics, Spark is like an over-engineered solution to the problem. SQL works just great. What's happened in the past couple years, and what's going to continue to happen is that SQL is insinuating itself into everything we're seeing. We're seeing all the major data lake providers offering SQL support, whether it's Databricks or... 
And of course, Snowflake is loving this, because that is what they do, and their success is certainly points to the success of SQL, even MongoDB. And we were all, I think, at the MongoDB conference where on one day, we hear SQL is dead. They're not teaching SQL in schools anymore, and this kind of thing. And then, a couple days later at the same conference, they announced we're adding a new analytic capability-based on SQL. But didn't you just say SQL is dead? So, the reality is that SQL is better understood than most other methods of certainly of retrieving and finding data in a data collection, no matter whether it happens to be relational or non-relational. And even in systems that are very non-relational, such as graph and document databases, their query languages are being built or extended to resemble SQL, because SQL is something people understand. >> Now, you remember when we were in high school and you had had to take the... Your debating in the class and you were forced to take one side and defend it. So, I was was at a Vertica conference one time up on stage with Curt Monash, and I had to take the NoSQL, the world is changing paradigm shift. And so just to be controversial, I said to him, Curt Monash, I said, who really needs acid compliance anyway? Tony Baer. And so, (chuckles) of course, his head exploded, but what are your thoughts (guests laughing) on all this? >> Well, my first thought is congratulations, Dave, for surviving being up on stage with Curt Monash. >> Amen. (group laughing) >> I definitely would concur with Carl. We actually are definitely seeing a SQL renaissance and if there's any proof of the pudding here, I see lakehouse is being icing on the cake. As Doug had predicted last year, now, (clears throat) for the record, I think, Doug was about a year ahead of time in his predictions that this year is really the year that I see (clears throat) the lakehouse ecosystems really firming up. You saw the first shots last year. But anyway, on this, data lakes will not go away. I've actually, I'm on the home stretch of doing a market, a landscape on the lakehouse. And lakehouse will not replace data lakes in terms of that. There is the need for those, data scientists who do know Python, who knows Spark, to go in there and basically do their thing without all the restrictions or the constraints of a pre-built, pre-designed table structure. I get that. Same thing for developing models. But on the other hand, there is huge need. Basically, (clears throat) maybe MongoDB was saying that we're not teaching SQL anymore. Well, maybe we have an oversupply of SQL developers. Well, I'm being facetious there, but there is a huge skills based in SQL. Analytics have been built on SQL. They came with lakehouse and why this really helps to fuel a SQL revival is that the core need in the data lake, what brought on the lakehouse was not so much SQL, it was a need for acid. And what was the best way to do it? It was through a relational table structure. So, the whole idea of acid in the lakehouse was not to turn it into a transaction database, but to make the data trusted, secure, and more granularly governed, where you could govern down to column and row level, which you really could not do in a data lake or a file system. So, while lakehouse can be queried in a manner, you can go in there with Python or whatever, it's built on a relational table structure. And so, for that end, for those types of data lakes, it becomes the end state. 
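(An illustrative aside on the table structure Tony is describing: an open table format such as Apache Iceberg is one way a data lake gets that relational, ACID-capable shape. The PySpark sketch below assumes a Spark session already configured with the Iceberg runtime and a catalog named `demo`, plus an existing staging view called `updates`; the catalog, schema, and table names are assumptions for illustration, not something prescribed by the panel.)

```python
# Sketch only: assumes PySpark with the Apache Iceberg runtime on the classpath,
# the Iceberg SQL extensions enabled, and a catalog configured under the name "demo".
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("iceberg-sketch").getOrCreate()

# A governed table in the lake: explicit schema, partitioning, open format.
spark.sql("""
    CREATE TABLE IF NOT EXISTS demo.sales.orders (
        order_id   BIGINT,
        region     STRING,
        amount     DOUBLE,
        order_date DATE
    )
    USING iceberg
    PARTITIONED BY (region)
""")

# Row-level change with transactional guarantees -- the 'ACID in the lakehouse' point.
# 'updates' is an assumed staging view holding incoming changes.
spark.sql("""
    MERGE INTO demo.sales.orders t
    USING updates u
    ON t.order_id = u.order_id
    WHEN MATCHED THEN UPDATE SET t.amount = u.amount
    WHEN NOT MATCHED THEN INSERT *
""")
```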
You cannot bypass that table structure as I learned the hard way during my research. So, the bottom line I'd say here is that lakehouse is proof that we're starting to see the revenge of the SQL nerds. (Dave chuckles) >> Excellent. Okay, let's bring up back up the predictions. Dave Menninger, this one's really thought-provoking and interesting. We're hearing things like data as code, new data applications, machines actually generating plans with no human involvement. And your prediction is the definition of data is expanding. What do you mean by that? >> So, I think, for too long, we've thought about data as the, I would say facts that we collect the readings off of devices and things like that, but data on its own is really insufficient. Organizations need to manipulate that data and examine derivatives of the data to really understand what's happening in their organization, why has it happened, and to project what might happen in the future. And my comment is that these data derivatives need to be supported and managed just like the data needs to be managed. We can't treat this as entirely separate. Think about all the governance discussions we've had. Think about the metadata discussions we've had. If you separate these things, now you've got more moving parts. We're talking about simplicity and simplifying the stack. So, if these things are treated separately, it creates much more complexity. I also think it creates a little bit of a myopic view on the part of the IT organizations that are acquiring these technologies. They need to think more broadly. So, for instance, metrics. Metric stores are becoming much more common part of the tooling that's part of a data platform. Similarly, feature stores are gaining traction. So, those are designed to promote the reuse and consistency across the AI and ML initiatives. The elements that are used in developing an AI or ML model. And let me go back to metrics and just clarify what I mean by that. So, any type of formula involving the data points. I'm distinguishing metrics from features that are used in AI and ML models. And the data platforms themselves are increasingly managing the models as an element of data. So, just like figuring out how to calculate a metric. Well, if you're going to have the features associated with an AI and ML model, you probably need to be managing the model that's associated with those features. The other element where I see expansion is around external data. Organizations for decades have been focused on the data that they generate within their own organization. We see more and more of these platforms acquiring and publishing data to external third-party sources, whether they're within some sort of a partner ecosystem or whether it's a commercial distribution of that information. And our research shows that when organizations use external data, they derive even more benefits from the various analyses that they're conducting. And the last great frontier in my opinion on this expanding world of data is the world of driver-based planning. Very few of the major data platform providers provide these capabilities today. These are the types of things you would do in a spreadsheet. And we all know the issues associated with spreadsheets. They're hard to govern, they're error-prone. 
And so, if we can take that type of analysis, collecting the occupancy of a rental property, the projected rise in rental rates, the fluctuations perhaps in occupancy, the interest rates associated with financing that property, we can project forward. And that's a very common thing to do. What the income might look like from that property income, the expenses, we can plan and purchase things appropriately. So, I think, we need this broader purview and I'm beginning to see some of those things happen. And the evidence today I would say, is more focused around the metric stores and the feature stores starting to see vendors offer those capabilities. And we're starting to see the ML ops elements of managing the AI and ML models find their way closer to the data platforms as well. >> Very interesting. When I hear metrics, I think of KPIs, I think of data apps, orchestrate people and places and things to optimize around a set of KPIs. It sounds like a metadata challenge more... Somebody once predicted they'll have more metadata than data. Carl, what are your thoughts on this prediction? >> Yeah, I think that what Dave is describing as data derivatives is in a way, another word for what I was calling operational metadata, which not about the data itself, but how it's used, where it came from, what the rules are governing it, and that kind of thing. If you have a rich enough set of those things, then not only can you do a model of how well your vacation property rental may do in terms of income, but also how well your application that's measuring that is doing for you. In other words, how many times have I used it, how much data have I used and what is the relationship between the data that I've used and the benefits that I've derived from using it? Well, we don't have ways of doing that. What's interesting to me is that folks in the content world are way ahead of us here, because they have always tracked their content using these kinds of attributes. Where did it come from? When was it created, when was it modified? Who modified it? And so on and so forth. We need to do more of that with the structure data that we have, so that we can track what it's used. And also, it tells us how well we're doing with it. Is it really benefiting us? Are we being efficient? Are there improvements in processes that we need to consider? Because maybe data gets created and then it isn't used or it gets used, but it gets altered in some way that actually misleads people. (laughs) So, we need the mechanisms to be able to do that. So, I would say that that's... And I'd say that it's true that we need that stuff. I think, that starting to expand is probably the right way to put it. It's going to be expanding for some time. I think, we're still a distance from having all that stuff really working together. >> Maybe we should say it's gestating. (Dave and Carl laughing) >> Sorry, if I may- >> Sanjeev, yeah, I was going to say this... Sanjeev, please comment. This sounds to me like it supports Zhamak Dehghani's principles, but please. >> Absolutely. So, whether we call it data mesh or not, I'm not getting into that conversation, (Dave chuckles) but data (audio breaking) (Tony laughing) everything that I'm hearing what Dave is saying, Carl, this is the year when data products will start to take off. I'm not saying they'll become mainstream. They may take a couple of years to become so, but this is data products, all this thing about vacation rentals and how is it doing, that data is coming from different sources. 
I'm packaging it into our data product. And to Carl's point, there's a whole operational metadata associated with it. The idea is for organizations to see things like developer productivity, how many releases am I doing of this? What data products are most popular? I'm actually in right now in the process of formulating this concept that just like we had data catalogs, we are very soon going to be requiring data products catalog. So, I can discover these data products. I'm not just creating data products left, right, and center. I need to know, do they already exist? What is the usage? If no one is using a data product, maybe I want to retire and save cost. But this is a data product. Now, there's a associated thing that is also getting debated quite a bit called data contracts. And a data contract to me is literally just formalization of all these aspects of a product. How do you use it? What is the SLA on it, what is the quality that I am prescribing? So, data product, in my opinion, shifts the conversation to the consumers or to the business people. Up to this point when, Dave, you're talking about data and all of data discovery curation is a very data producer-centric. So, I think, we'll see a shift more into the consumer space. >> Yeah. Dave, can I just jump in there just very quickly there, which is that what Sanjeev has been saying there, this is really central to what Zhamak has been talking about. It's basically about making, one, data products are about the lifecycle management of data. Metadata is just elemental to that. And essentially, one of the things that she calls for is making data products discoverable. That's exactly what Sanjeev was talking about. >> By the way, did everyone just no notice how Sanjeev just snuck in another prediction there? So, we've got- >> Yeah. (group laughing) >> But you- >> Can we also say that he snuck in, I think, the term that we'll remember today, which is metadata museums. >> Yeah, but- >> Yeah. >> And also comment to, Tony, to your last year's prediction, you're really talking about it's not something that you're going to buy from a vendor. >> No. >> It's very specific >> Mm-hmm. >> to an organization, their own data product. So, touche on that one. Okay, last prediction. Let's bring them up. Doug Henschen, BI analytics is headed to embedding. What does that mean? >> Well, we all know that conventional BI dashboarding reporting is really commoditized from a vendor perspective. It never enjoyed truly mainstream adoption. Always that 25% of employees are really using these things. I'm seeing rising interest in embedding concise analytics at the point of decision or better still, using analytics as triggers for automation and workflows, and not even necessitating human interaction with visualizations, for example, if we have confidence in the analytics. So, leading companies are pushing for next generation applications, part of this low-code, no-code movement we've seen. And they want to build that decision support right into the app. So, the analytic is right there. Leading enterprise apps vendors, Salesforce, SAP, Microsoft, Oracle, they're all building smart apps with the analytics predictions, even recommendations built into these applications. And I think, the progressive BI analytics vendors are supporting this idea of driving insight to action, not necessarily necessitating humans interacting with it if there's confidence. So, we want prediction, we want embedding, we want automation. 
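(A small, hypothetical sketch of the pattern Doug is describing, analytics as a trigger for automation: act without a human when the analytic is confident, route to a person when it is not. The model call, threshold, and actions below are stand-ins, not any vendor's API.)

```python
# Confidence-gated automation at the point of decision. Every name and number
# here is an illustrative assumption.

RISK_THRESHOLD = 0.85  # assumed confidence cutoff for acting automatically

def score_churn_risk(customer: dict) -> float:
    """Stand-in for a real model call; returns a probability between 0 and 1."""
    return 0.9 if customer.get("support_tickets", 0) > 5 else 0.2

def handle_customer(customer: dict) -> str:
    risk = score_churn_risk(customer)
    if risk >= RISK_THRESHOLD:
        # High confidence: trigger the workflow with no human in the loop.
        return f"auto-enroll {customer['id']} in retention campaign (risk={risk:.2f})"
    # Low confidence: surface it to a person instead of acting.
    return f"flag {customer['id']} for account manager review (risk={risk:.2f})"

print(handle_customer({"id": "C-1001", "support_tickets": 7}))
```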
This low-code, no-code development movement is very important to bringing the analytics to where people are doing their work. We got to move beyond the, what I call swivel chair integration, between where people do their work and going off to separate reports and dashboards, and having to interpret and analyze before you can go back and do take action. >> And Dave Menninger, today, if you want, analytics or you want to absorb what's happening in the business, you typically got to go ask an expert, and then wait. So, what are your thoughts on Doug's prediction? >> I'm in total agreement with Doug. I'm going to say that collectively... So, how did we get here? I'm going to say collectively as an industry, we made a mistake. We made BI and analytics separate from the operational systems. Now, okay, it wasn't really a mistake. We were limited by the technology available at the time. Decades ago, we had to separate these two systems, so that the analytics didn't impact the operations. You don't want the operations preventing you from being able to do a transaction. But we've gone beyond that now. We can bring these two systems and worlds together and organizations recognize that need to change. As Doug said, the majority of the workforce and the majority of organizations doesn't have access to analytics. That's wrong. (chuckles) We've got to change that. And one of the ways that's going to change is with embedded analytics. 2/3 of organizations recognize that embedded analytics are important and it even ranks higher in importance than AI and ML in those organizations. So, it's interesting. This is a really important topic to the organizations that are consuming these technologies. The good news is it works. Organizations that have embraced embedded analytics are more comfortable with self-service than those that have not, as opposed to turning somebody loose, in the wild with the data. They're given a guided path to the data. And the research shows that 65% of organizations that have adopted embedded analytics are comfortable with self-service compared with just 40% of organizations that are turning people loose in an ad hoc way with the data. So, totally behind Doug's predictions. >> Can I just break in with something here, a comment on what Dave said about what Doug said, which (laughs) is that I totally agree with what you said about embedded analytics. And at IDC, we made a prediction in our future intelligence, future of intelligence service three years ago that this was going to happen. And the thing that we're waiting for is for developers to build... You have to write the applications to work that way. It just doesn't happen automagically. Developers have to write applications that reference analytic data and apply it while they're running. And that could involve simple things like complex queries against the live data, which is through something that I've been calling analytic transaction processing. Or it could be through something more sophisticated that involves AI operations as Doug has been suggesting, where the result is enacted pretty much automatically unless the scores are too low and you need to have a human being look at it. So, I think that that is definitely something we've been watching for. I'm not sure how soon it will come, because it seems to take a long time for people to change their thinking. But I think, as Dave was saying, once they do and they apply these principles in their application development, the rewards are great. 
>> Yeah, this is very much, I would say, very consistent with what we were talking about, I was talking about before, about basically rethinking the modern data stack and going into more of an end-to-end solution solution. I think, that what we're talking about clearly here is operational analytics. There'll still be a need for your data scientists to go offline just in their data lakes to do all that very exploratory and that deep modeling. But clearly, it just makes sense to bring operational analytics into where people work into their workspace and further flatten that modern data stack. >> But with all this metadata and all this intelligence, we're talking about injecting AI into applications, it does seem like we're entering a new era of not only data, but new era of apps. Today, most applications are about filling forms out or codifying processes and require a human input. And it seems like there's enough data now and enough intelligence in the system that the system can actually pull data from, whether it's the transaction system, e-commerce, the supply chain, ERP, and actually do something with that data without human involvement, present it to humans. Do you guys see this as a new frontier? >> I think, that's certainly- >> Very much so, but it's going to take a while, as Carl said. You have to design it, you have to get the prediction into the system, you have to get the analytics at the point of decision has to be relevant to that decision point. >> And I also recall basically a lot of the ERP vendors back like 10 years ago, we're promising that. And the fact that we're still looking at the promises shows just how difficult, how much of a challenge it is to get to what Doug's saying. >> One element that could be applied in this case is (indistinct) architecture. If applications are developed that are event-driven rather than following the script or sequence that some programmer or designer had preconceived, then you'll have much more flexible applications. You can inject decisions at various points using this technology much more easily. It's a completely different way of writing applications. And it actually involves a lot more data, which is why we should all like it. (laughs) But in the end (Tony laughing) it's more stable, it's easier to manage, easier to maintain, and it's actually more efficient, which is the result of an MIT study from about 10 years ago, and still, we are not seeing this come to fruition in most business applications. >> And do you think it's going to require a new type of data platform database? Today, data's all far-flung. We see that's all over the clouds and at the edge. Today, you cache- >> We need a super cloud. >> You cache that data, you're throwing into memory. I mentioned, MySQL heat wave. There are other examples where it's a brute force approach, but maybe we need new ways of laying data out on disk and new database architectures, and just when we thought we had it all figured out. >> Well, without referring to disk, which to my mind, is almost like talking about cave painting. I think, that (Dave laughing) all the things that have been mentioned by all of us today are elements of what I'm talking about. In other words, the whole improvement of the data mesh, the improvement of metadata across the board and improvement of the ability to track data and judge its freshness the way we judge the freshness of a melon or something like that, to determine whether we can still use it. Is it still good? That kind of thing. 
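(As an aside, Carl's melon test is, mechanically, operational metadata plus a policy: when was the data produced, and is that recent enough for this use? The toy below makes that concrete; the SLA windows and field names are assumptions for illustration only.)

```python
# Toy freshness check driven by operational metadata. The SLA numbers are assumptions.
from datetime import datetime, timedelta, timezone

FRESHNESS_SLA = {
    "realtime_dashboard": timedelta(minutes=5),
    "daily_report": timedelta(hours=24),
}

def is_still_good(last_loaded_at: datetime, use_case: str) -> bool:
    age = datetime.now(timezone.utc) - last_loaded_at
    return age <= FRESHNESS_SLA[use_case]

loaded = datetime.now(timezone.utc) - timedelta(hours=3)
print(is_still_good(loaded, "realtime_dashboard"))  # False: too stale for real time
print(is_still_good(loaded, "daily_report"))        # True: fine for a daily report
```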
Bringing together data from multiple sources dynamically and in real time requires all the things we've been talking about. All the predictions that we've talked about today add up to elements that can make this happen. >> Well, guys, it's always tremendous to get these wonderful minds together and get your insights, and I love how it shapes the outcome here of the predictions, and let's see how we did. We're going to leave it there. I want to thank Sanjeev, Tony, Carl, David, and Doug. Really appreciate the collaboration and thought that you guys put into these sessions. Really, thank you. >> Thank you. >> Thanks, Dave. >> Thank you for having us. >> Thanks. >> Thank you. >> All right, this is Dave Vellante for theCUBE, signing off for now. Follow these guys on social media. Look for coverage on siliconangle.com, theCUBE.net. Thank you for watching. (upbeat music)
Ignite22 Analysis | Palo Alto Networks Ignite22
>>The Cube presents Ignite 22, brought to you by Palo Alto Networks. >>Welcome back everyone. We're so glad that you're still with us. It's the Cube Live at the MGM Grand. This is our second day of coverage of Palo Alto Networks Ignite. This is takeaways from Ignite 22. Lisa Martin here with two really smart guys, Dave Valante. Dave, we're joined by one of our cube alumni, a friend, a friend of the, we say friend of the Cube. >>Yeah, otc. A friend of the Cube >>Karala joined us. Guys, it's great to have you here. It's been an exciting show. A lot of cybersecurity is one of my favorite topics to talk about. But I'd love to get some of the big takeaways from both of you. Dave, we'll start with you. >>A breathing room from two weeks ago. Yeah, that was, that was really pleasant. You know, I mean, I know was, yes, you sat in the analyst program, interested in what your takeaways were from there. But, you know, coming into this, we wrote a piece, Palo Alto's Gold Standard, what they need to do to, to keep that, that status. And we hear it a lot about consolidation. That's their big theme now, which is timely, right? Cause people wanna save money, they wanna do more with less. But I'm really interested in hearing zeus's thoughts on how that's playing in the market. How customers, how easy is it to just say, oh, hey, I'm gonna consolidate. I wanna get into that a little bit with you, how well the strategy's working. We're gonna get into some of the m and a activity and really bring your perspectives to the table. Well, >>It's, it's not easy. I mean, people have been calling for the consolidation of security for decades, and it's, it's, they're the first company that's actually made it happen. Right? And, and I think this is what we're seeing here is the culmination of this long term strategy, this company trying to build more of a platform. And they, you know, they, they came out as a firewall vendor. And I think it's safe to say they're more than firewall today. That's only about two thirds of their revenue now. So down from 80% a few years ago. And when I think of what Palo Alto has become, they're really a data company. Now, if you look at, you know, unit 42 in Cortex, the, the, the Cortex Data Lake, they've done an excellent job of taking telemetry from their products and from the acquisitions they have, right? And bringing that together into one big data lake. >>And then they're able to use that to, to do faster threat notification, forensics, things like that. And so I think the old model of security of create signatures for known threats, it's safe to say it never really worked and it wasn't ever gonna work. You had too many day zero exploits and things. The only way to fight security today is with a AI and ML based analytics. And they have, they're the gold standard. I think the one thing about your post that I would add the gold standard from a data standpoint, and that's given them this competitive advantage to go out and become a platform for a security. Which, like I said, the people have tried to do that for years. And the first one that's actually done it, well, >>We've heard this from some of the startups, like Lacework will say, oh, we treat security as a data problem. Of course there's a startup, Palo Alto's got, you know, whatever, 10, 15 years of, of, of history. But one of the things I wanted to explore with you coming into this was the notion of can you be best of breed and develop a suite? 
And we, we've been hearing a consistent answer to that question, which is, and, and do you need to, and the answer is, well, best of breed in security requires that full spectrum, that full view. So here's my question to you. So, okay, let's take Esty win relatively new for these guys, right? Yeah. Okay. And >>And one of the few products are not top two, top three in, right? Exactly. >>Yeah. So that's why I want to take that. Yeah. Because in bakeoffs, they're gonna lose on a head-to-head best of breed. And so the customer's gonna say, Hey, you know, I love your, your consolidation play, your esty win's. Just, okay, how about a little discount on that? And you know, these guys are premium priced. Yes. So, you know, are they in essentially through their pricing strategies, sort of creating that stuff, fighting that, is that friction for them where they've got, you know, the customer says, all right, well forget it, we're gonna go stove pipe with the SD WAN will consolidate some of the stuff. Are you seeing that? >>Yeah, I, I, I still think the sales model is that way. And I think that's something they need to work on changing. If they get into a situation where they have to get down into a feature battle of my SD WAN versus your SD wan, my firewall versus your firewall, frankly they've already lost, you know, because their value prop is the suite and, and is the platform. And I was talking to the CISO here that told me, he realizes now that you don't need best of breed everywhere to have best in class threat protection. In fact, best of breed everywhere leads to suboptimal threat protection. Cuz you have all these data data sets that are in silos, right? And so from a data scientist standpoint, right, there's the good data leads to good insights. Well, partial data leads to fragmented insights and that's, that's what the best, best of breed approach gives you. And so I was talking with Palo about this, can they have this vision of being best of breed and platform? I don't really think you can maintain best of breed everywhere across this portfolio this big, but you don't need to. >>That was my second point of my >>Question. That's the point. >>Yeah. And so, cuz cuz because you know, we've talked about this, that that sweets always win in the long run, >>Sweets >>Win. Yeah. But here's the thing, I, I wonder to your your point about, you know, the customer, you know, understanding that that that, that this resonates with them. I, my guess is a lot of customers, you know, at that mid-level and the fat middle are like still sort of wed, you know, hugging that, that tool. So there's, there's work to be done here, but I think they, they, they got it right Because if they devolve, to your point, if they devolve down to that speeds and feeds, eh, what's the point of that? Where's their valuable? >>You do not wanna get into a knife fight. And I, and I, and I think for them the, a big challenge now is convincing customers that the suite, the suite approach does work. And they have to be able to do that in actual customer examples. And so, you know, I I interviewed a bunch of customers here and the ones that have bought into XDR and xor and even are looking at their sim have told me that the, the, so think of soc operations, the old way heavily manually oriented, right? You have multiple panes of glass and you know, and then you've got, so there's a lot of people work before you bring the tools in, right? 
If done correctly with AI and ml, the machines would do all the heavy lifting and then you'd bring people in at the end to clean up the little bits that were missed, right? >>And so you, you moved to, from something that was very people heavy to something that's machine heavy and machines can work a lot faster than people. And the, and so the ones that I've talked that have, that have done that have said, look, our engineers have moved on to a lot different things. They're doing penetration testing, they're, you know, helping us with, with strategy and they're not fighting that, that daily fight of looking through log files. And the only proof point you need, Dave, is look at every big breach that we've had over the last five years. There's some SIM vendor up there that says, we caught it. Yeah. >>Yeah. We we had the data. >>Yeah. But, but, but the security team missed it. Well they missed it because you're, nobody can look at that much data manually. And so the, I I think their approach of relying heavily on machines to fight the fight is actually the right way. >>Is that a differentiator for them versus, we were talking before we went live that you and I first hit our very first segment back in 2017 at Fort Net. Is that, where do the two stand in your >>Yeah, it's funny cuz if you talk to the two vendors, they don't really see each other in a lot of accounts because Fort Net's more small market mid-market. It's the same strategy to some degree where Fort Net relies heavily on in-house development and Palo Alto relies heavily on acquisition. Yeah. And so I think from a consistently feature set, you know, Fort Net has an advantage there because it, it's all run off their, their their silicon. Where, where Palo's able to innovate very quickly. The, it it requires a lot of work right? To, to bring the front end and back ends together. But they're serving different markets. So >>Do you see that as a differentiator? The integration strategy that Palo Alto has as a differentiator? We talk to so many companies who have an a strong m and a strategy and, and execution arm. But the challenge is always integrating the technology so that the customer to, you know, ultimately it's the customer. >>I actually think they're, they're underrated as a, an acquirer. In fact, Dave wrote a post to a prior on Silicon Angle prior to Accelerate and he, he on, you put it on Twitter and you asked people to rank 'em as an acquirer and they were in the middle of the pack, >>Right? It was, it was. So it was Oracle, VMware, emc, ibm, Cisco, ServiceNow, and Palo Alto. Yeah. Or Oracle got very high marks. It was like 8.5 out of, you know, 10. Yeah. VMware I think was 6.5. Nice. Era was high emc, big range. IBM five to seven. Cisco was three to eight. Yeah. Yeah, right. ServiceNow was a seven. And then, yeah, Palo Alto was like a five. And I, which I think it was unfair. >>Well, and I think it depends on how you look at it. And I, so I think a lot of the acquisitions Palo Altos made, they've done a good job of integrating their backend data and they've almost ignored the front end. And so when you buy some of the products, it's a little clunky today. You know, if you work with Prisma Cloud, it could be a little bit cleaner. And even with, you know, the SD wan that took 'em a long time to bring CloudGenix in and stuff. But I think the approach is right. I don't, I don't necessarily believe you should integrate the front end until you've integrated the back end. >>That's >>The hard part, right? 
Because UL ultimately what you're gonna get, you're gonna get two panes of glass and one pane of glass and it might look pretty all mush together, but ultimately you're not solving the bigger problem, right. Of, of being able to create that big data like the, the fight security. And so I think, you know, the approach they've taken is the right one. I think from a user standpoint, maybe it doesn't show up as neatly because you don't see the frontend integration, but the way they're doing it is the right way to do it. And I'm glad they're doing it that way versus caving to the pressures of what, you know, the industry might want >>Showed up in the performance of the company. I mean, this company was basically gonna double revenues to 7 billion from 2020 to >>2023. Three. Think about that at that, that >>Make a, that's unbelievable, right? I mean, and then and they wanna double again. Yeah. You know, so, well >>What did, what did Nikesh was quoted as saying they wanna be the first cyber company that's a hundred billion dollars. He didn't give a timeline market cap. >>Right. >>Market cap, right. Do what I wanna get both of your opinions on what you saw and heard and felt this week. What do you think the likelihood is? And and do you have any projections on how, you know, how many years it's gonna take for them to get there? >>Well, >>Well I think so if they're gonna get that big, right? And, and we were talking about this pre-show, any company that's becoming a big company does it through ecosystem >>Bingo. >>Right? And that when you look around the show floor, it's not that impressive. And if that, if there's an area they need to focus on, it's building that ecosystem. And it's not with other security vendors, it's with application vendors and it's with the cloud companies and stuff. And they've got some relationships there, but they need to do more. I actually challenge 'em on that. One of the analyst sessions. They said, look, we've got 800 cortex partners. Well where are they? Right? Why isn't there a cortex stand here with a bunch of the small companies here? So I do think that that is an area they need to focus on. If they are gonna get to that, that market caps number, they will do so do so through ecosystem. Because every company that's achieved that has done it through ecosystem. >>A hundred percent agree. And you know, if you look at CrowdStrike's ecosystem, it's pretty similar. Yeah. You know, it doesn't really, you know, make much, much, not much different from this, but I went back and just looked at some, you know, peak valuations during the pandemic and shortly thereafter CrowdStrike was 70 billion. You know, that's what their roughly their peak Palo Alto was 56, fortune was 59 for the actually diverged. Right. And now Palo Alto has taken the, the top mantle, you know, today it's market cap's 52. So it's held 93% of its peak value. Everybody else is tanking. Even Okta was 45 billion. It's been crushed as you well know. But, so Palo Alto wasn't always, you know, the number one in terms of market cap. But I guess my point is, look, if CrowdStrike could got to 70 billion during Yeah. During the frenzy, I think it's gonna take, to answer your question, I think it's gonna be five years. Okay. Before they get back there. I think this market's gonna be tough for a while from a valuation standpoint. I think generally tech is gonna kind of go up and down and sideways for a good year and a half, maybe even two years could be even longer. 
And then I think there's gonna be some next wave of productivity innovation that that hits. And then you're gonna, you're almost always gonna exceed the previous highs. It's gonna take a while. Yeah, >>Yeah, yeah. But I think their ability to disrupt the SIM market actually is something I, I believe they're gonna do. I've been calling for the death of the sim for a long time and I know some people at Palo Alto are very cautious about saying that cuz the Splunks and the, you know, they're, they're their partners. But I, I think the, you know, it's what I said before, the, the tools are catching them, but they're, it's not in a way that's useful for the IT pro and, but I, I don't think the SIM vendors have that ecosystem of insight across network cloud endpoint. Right. Which is what you need in order to make a sim useful. >>CISO at an ETR roundtable said, if, if it weren't for my regulators, I would chuck my sim. >>Yes. >>But that's the only reason that, that this person was keeping it. So, >>Yeah. And I think the, the fact that most of those companies have moved to a perpetual MO or a a recurring revenue model actually helps unseat them. Typically when you pour a bunch of money into something, you remember the old computer associate days, nobody ever took it out cuz the sunk dollars you spent to do it. But now that you're paying an annual recurring fee, it's actually makes it easier to take out. So >>Yeah, it's it's an ebb and flow, right? Yeah. Because the maintenance costs were, you know, relatively low. Maybe it was 20% of the total. And then, you know, once every five years you had to do a refresh and you were still locked into the sort of maintenance and, and so yeah, I think you're right. The switching costs with sas, you know, in theory anyway, should be less >>Yeah. As long as you can migrate the data over. And I think they've got a pretty good handle on that. So, >>Yeah. So guys, I wanna get your perspective as a whole bunch of announcements here. We've only been here for a couple days, not a big conference as, as you can see from behind us. What Zs in your opinion was Palo Alto's main message and and what do you think about it main message at this event? And then same question for you. >>Yeah, I, I think their message largely wrapped around disruption, right? And, and they, in The's keynote already talked about that, right? And where they disrupted the firewall market by creating a NextGen firewall. In fact, if you look at all the new services they added to their firewall, you, you could almost say it's a NextGen NextGen firewall. But, but I do think the, the work they've done in the area of cloud and cortex actually I think is, is pretty impressive. And I think that's the, the SOC is ripe for disruption because it's for, for the most part, most socks still, you know, run off legacy playbooks. They run off legacy, you know, forensic models and things and they don't work. It's why we have so many breaches today. The, the dirty little secret that nobody ever wants to talk about is the bad guys are using machine learning, right? And so if you're using a signature based model, all they're do is tweak their model a little bit and it becomes, it bypasses them. So I, I think the only way to fight the the bad guys today is with you gotta fight fire with fire. And I think that's, that's the path they've, they've headed >>Down and the bad guys are hiding in plain sight, you know? >>Yeah, yeah. Well it's, it's not hard to do now with a lot of those legacy tools. 
So >>I think, I think for me, you know, the stat that we threw out earlier, I think yesterday at our keynote analysis was, you know, the ETR data shows that are, that are that last survey around 35% of the respondents said we are actively consolidating, sorry, 44%, sorry, 35 says we're actively consolidating vendors, redundant vendors today. That number's up to 44%. Yeah. It's by far the number one cost optimization technique. That's what these guys are pitching. And I think it's gonna resonate with people and, and I think to your point, they're integrating at the backend, their beeps are technical, right? I mean, they can deal with that complexity. Yeah. And so they don't need eye candy. Eventually they, they, they want to have that cuz it'll allow 'em to have deeper market penetration and make people more productive. But you know, that consolidation message came through loud and clear. >>Yeah. The big change in this industry too is all the new startups are all cloud native, right? They're all built on Amazon or Google or whatever. Yeah. And when your cloud native and you buy a cloud native integration is fast. It's not like having to integrate this big monolithic software stack anymore. Right. So I I think their pace of integration will only accelerate from here because everything's now cloud native. >>If a customer comes to you or when a customer comes to you and says, Zs help us with this cyber transformation we have, our board isn't necessarily with our executives in terms of execution of a security strategy. How do you advise them where Palo Alto is concerned? >>Yeah. You know, a lot, a lot of this is just fighting legacy mindset. And I've, I was talking with some CISOs here from state and local governments and things and they're, you know, they can't get more budget. They're fighting the tide. But what they did find is through the use of automation technology, they're able to bring their people costs way down. Right. And then be able to use that budget to invest in a lot of new projects. And so with that, you, you have to start with your biggest pain points, apply automation where you can, and then be able to use that budget to reinvest back in your security strategy. And it's good for the IT pros too, the security pros, my advice to, to it pros is if you're doing things today that aren't resume building, stop doing them. Right? Find a way to automate the money your job. And so if you're patching systems and you're looking through log files, there's no reason machines can't do that. And you go do something a lot more interesting. >>So true. It's like storage guys 10 years ago, provisioning loans. Yes. It's like, stop doing that. Yeah. You're gonna be outta a job. And so who, last question I have is, is who do you see as the big competitors, the horses on the track question, right? So obviously Cisco kind of service has led for a while and you know, big portfolio company, CrowdStrike coming at it from end point. You know who, who, who do you see as the real players going for that? You know, right now the market's three to 4%. The leader has three, three 4% of the market. You know who they're all going for? 10, 15, maybe 20% of the market. Who, who are the likely candidates? Yeah, >>I don't know if CrowdStrike really has the breadth of portfolio to compete long term though. I I think they've had a nice run, but I, we might start to see the follow 'em. I think Microsoft is gonna be for middle. They've laid down the gauntlet, right? They are a security vendor, right? 
We, we were at Reinvent and a AWS is the platform for security vendors. Yes. Middle, somewhere in the middle. But Microsoft make no mistake, they're in security. They've got some good products. I think a lot of 'em are kind of good enough and they, they tie it to the licensing and I'm not sure that works in security, but they've certainly got the ear of a lot of it pros. >>It might work in smb. >>Yeah. Yeah. It, it might. And, and I do like Zscaler. I, I know these guys poo poo the proxy model, but they've, they've done about as much with proxies as you can. And I, I think it's, it's a battle of, I love the, the, the near, you know, proxies are dead and Jay's model, you know, Jay over at c skater throw 'em back at 'em. So I, it's good to see that kind of fight going on between the two. >>Oh, it's great. Well, and, and again, ZScaler's coming at it from their cloud security angle. CrowdStrike's coming at it from endpoint. I, I do think CrowdStrike has an opportunity to build out the portfolio through m and a and maybe ecosystem. And then obviously, you know, Palo Alto's getting it done. How about Cisco? >>Yeah. Cisco's interesting. And I, I think if Cisco can make the network matter in security and it should, right? We're talking about how a lot of you need a lot of forensics to fight security today. Well, they're gonna see things long before anybody else because they have all that network data. If they can tie network security, I, I mean they could really have that business take off. But we've been saying that about Cisco for 20 years. >>But big install based though. Yeah. It's hard for a company, any company to just say, okay, hey Cisco customer sweep the floor and come with us. That's, that's >>A tough thing. They have a lot of good peace parts, right? And like duo's a good product and umbrella's a good product. They've, they've not done a good job. >>They're the opposite of these guys. >>They've not done a good job of the backend integration that, that's where Cisco needs to, to focus. And I do think g G two Patel there fixed the WebEx group and I think he's now, in fact when you talk to him, he's doing very little on WebEx that that group's running itself and he's more focused in security. So I, I think we could see a resurgence there. But you know, they have a, from a revenue perspective, it's a little misleading cuz they have this big legacy base that's in decline while they're moving to cloud and stuff. So, but they, but they, there's a lot of work there're trying to, to tie to network. >>Right. Lots of fuel for conversation. We're gonna have to carry this on, on Silicon angle.com guys. Yes. And Wikibon, lets do see us. Thank you so much for joining Dave and me giving us your insights as to this event. Where are you gonna be next? Are you gonna be on vacation? >>There's nothing more fun than mean on the cube, so, right. What's outside of that though? Yeah, you know, Christmas coming up, I gotta go see family and do the obligatory, although for me that's a lot of travel, so I guess >>More planes. Yeah. >>Hopefully not in Vegas. >>Not in Vegas. >>Awesome. Nothing against Vegas. Yeah, no, >>We love it. We >>Love it. Although I will say my year started off with ces. Yeah. And it's finishing up with Palo Alto here. The bookends. Yeah, exactly. In Vegas bookends. >>Well thanks so much for joining us. Thank you Dave. Always a pleasure to host a show with you and hear your insights. 
Reading your breaking analysis always kicks off my prep for a show, and it's always great to see your predictions come true. So thank you for being my co-host. All right. For Dave Valante and Zeus Kerravala, I'm Lisa Martin. You've been watching The Cube, the leader in live, emerging and enterprise tech coverage. Thanks for watching.
SUMMARY :
Lisa Martin and Dave Valante wrap up day two of Palo Alto Networks Ignite 22 with analyst Zeus Kerravala. They discuss vendor consolidation as the number one cost-optimization play in the ETR data, why cloud-native products make integration faster, and how automation lets security teams shift budget and people to higher-value work. On the competitive field, Kerravala questions CrowdStrike's breadth of portfolio, puts Microsoft squarely in security, credits Zscaler for getting the most out of the proxy model, and argues Cisco must tie its network data to security and finish its back-end integration. They close with thanks and holiday plans before signing off.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa Martin | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Dave Valante | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
20% | QUANTITY | 0.99+ |
Fort Net | ORGANIZATION | 0.99+ |
2017 | DATE | 0.99+ |
93% | QUANTITY | 0.99+ |
Palo | ORGANIZATION | 0.99+ |
20 years | QUANTITY | 0.99+ |
Carla | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Vegas | LOCATION | 0.99+ |
three | QUANTITY | 0.99+ |
7 billion | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
70 billion | QUANTITY | 0.99+ |
2020 | DATE | 0.99+ |
80% | QUANTITY | 0.99+ |
44% | QUANTITY | 0.99+ |
Palo Alto Networks | ORGANIZATION | 0.99+ |
45 billion | QUANTITY | 0.99+ |
52 | QUANTITY | 0.99+ |
second point | QUANTITY | 0.99+ |
10 | QUANTITY | 0.99+ |
59 | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
five years | QUANTITY | 0.99+ |
two vendors | QUANTITY | 0.99+ |
Palo Alto | ORGANIZATION | 0.99+ |
Karala | PERSON | 0.99+ |
CrowdStrike | ORGANIZATION | 0.99+ |
ibm | ORGANIZATION | 0.99+ |
15 | QUANTITY | 0.99+ |
Jay | PERSON | 0.99+ |
8.5 | QUANTITY | 0.99+ |
Palo Altos | ORGANIZATION | 0.99+ |
Dave Valante Enz | PERSON | 0.99+ |
two panes | QUANTITY | 0.99+ |
two years | QUANTITY | 0.99+ |
Three | QUANTITY | 0.99+ |
56 | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
Christmas | EVENT | 0.99+ |
ServiceNow | ORGANIZATION | 0.99+ |
second day | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
2023 | DATE | 0.99+ |
35 | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
Reinvent | ORGANIZATION | 0.98+ |
The Cube | TITLE | 0.98+ |
One | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
WebEx | ORGANIZATION | 0.98+ |
first segment | QUANTITY | 0.98+ |
Palo Alto | LOCATION | 0.98+ |
emc | ORGANIZATION | 0.98+ |
two weeks ago | DATE | 0.98+ |
4% | QUANTITY | 0.98+ |
Takeaways from Ignite22 | Palo Alto Networks Ignite22
>>The Cube presents Ignite 22, brought to you by Palo Alto Networks. >>Welcome back everyone. We're so glad that you're still with us. It's the Cube Live at the MGM Grand. This is our second day of coverage of Palo Alto Networks Ignite. This is takeaways from Ignite 22. Lisa Martin here with two really smart guys, Dave Valante. Dave, we're joined by one of our cube alumni, a friend, a friend of the, we say friend of the Cube. >>Yeah, F otc. A friend of the Cube >>Karala joins us. Guys, it's great to have you here. It's been an exciting show. A lot of cybersecurity is one of my favorite topics to talk about. But I'd love to get some of the big takeaways from both of you. Dave, we'll start with >>You. A breathing room from two weeks ago. Yeah, that was, that was really pleasant. You know, I mean, I know was, yes, you sat in the analyst program, interested in what your takeaways were from there. But, you know, coming into this, we wrote a piece, Palo Alto's Gold Standard, what they need to do to, to keep that, that status. And we hear it a lot about consolidation. That's their big theme now, which is timely, right? Cause people wanna save money, they wanna do more with less. But I'm really interested in hearing zeus's thoughts on how that's playing in the market. How customers, how easy is it to just say, oh, hey, I'm gonna consolidate. I wanna get into that a little bit with you, how well the strategy's working. We're gonna get into some of the m and a activity and really bring your perspectives to the table. Well, >>It's, it's not easy. I mean, people have been calling for the consolidation of security for decades, and it's, it's, they're the first company that's actually made it happen. Right? And, and I think this is what we're seeing here is the culmination of this long-term strategy, this company trying to build more of a platform. And they, you know, they, they came out as a firewall vendor. And I think it's safe to say they're more than firewall today. That's only about two thirds of their revenue now. So down from 80% a few years ago. And when I think of what Palo Alto has become, they're really a data company. Now, if you look at, you know, unit 42 in Cortex, the, the, the Cortex Data Lake, they've done an excellent job of taking telemetry from their products and from the acquisitions they have, right? And bringing that together into one big data lake. >>And then they're able to use that to, to do faster threat notification, forensics, things like that. And so I think the old model of security of create signatures for known threats, it's safe to say it never really worked and it wasn't ever gonna work. You had too many days, zero exploits and things. The only way to fight security today is with a AI and ML based analytics. And they have, they're the gold standard. I think the one thing about your post that I would add, they're the gold standard from a data standpoint. And that's given them this competitive advantage to go out and become a platform for security. Which, like I said, the people have tried to do that for years. And the first one that's actually done it, well, >>We've heard this from some of the startups, like Lacework will say, oh, we treat security as a data problem. Of course there's a startup, Palo Alto's got, you know, whatever, 10, 15 years of, of, of history. But one of the things I wanted to explore with you coming into this was the notion of can you be best of breed and develop a suite? 
And we, we've been hearing a consistent answer to that question, which is, and, and do you need to, and the answer is, well, best of breed in security requires that full spectrum, that full view. So here's my question to you. So, okay, let's take Estee win relatively new for these guys, right? Yeah. Okay. And >>And one of the few products are not top two, top three in, right? >>Exactly. Yeah. So that's why I want to take that. Yeah. Because in bakeoffs, they're gonna lose on a head-to-head best of breed. And so the customer's gonna say, Hey, you know, I love your, your consolidation play, your esty win's. Just, okay, how about a little discount on that? And you know, these guys are premium priced. Yes. So, you know, are they in essentially through their pricing strategies, sort of creating that stuff, fighting that, is that friction for them where they've got, you know, the customer says, all right, well forget it, we're gonna go stove pipe with the SD WAN will consolidate some of the stuff. Are you seeing that? >>Yeah, I, I, I still think the sales model is that way. And I think that's something they need to work on changing. If they get into a situation where they have to get down into a feature battle of my SD WAN versus your SD wan, my firewall versus your firewall, frankly they've already lost, you know, because their value prop is the suite and, and is the platform. And I was talking with the CISO here that told me, he realizes now that you don't need best of breed everywhere to have best in class threat protection. In fact, best of breed everywhere leads to suboptimal threat protection. Cuz you have all these data data sets that are in silos, right? And so from a data scientist standpoint, right, there's the good data leads to good insights. Well, partial data leads to fragmented insights and that's, that's what the best, best of breed approach gives you. And so I was talking with Palo about this, can they have this vision of being best of breed and platform? I don't really think you can maintain best of breed everywhere across this portfolio this big, but you don't need to. >>That was my second point of my question. That's the point I'm saying. Yeah. And so, cuz cuz because you know, we've talked about this, that that sweets always win in the long run, >>Sweets win. >>Yeah. But here's the thing, I, I wonder to your your point about, you know, the customer, you know, understanding that that that, that this resonates with them. I, my guess is a lot of customers, you know, at that mid-level and the fat middle are like still sort of wed, you know, hugging that, that tool. So there's, there's work to be done here, but I think they, they, they got it right Because if they devolve, to your point, if they devolve down to that speeds and feeds, eh, what's the point of that? Where's their >>Valuable? You do not wanna get into a knife fight. And I, and I, and I think for them the, a big challenge now is convincing customers that the suite, the suite approach does work. And they have to be able to do that in actual customer examples. And so, you know, I I interviewed a bunch of customers here and the ones that have bought into XDR and xor and even are looking at their sim have told me that the, the, so think of soc operations, the old way heavily manually oriented, right? You have multiple panes of glass and you know, and then you've got, so there's a lot of people work before you bring the tools in, right? 
If done correctly with AI and ml, the machines would do all the heavy lifting and then you'd bring people in at the end to clean up the little bits that were missed, right? >>And so you, you moved to, from something that was very people heavy to something that's machine heavy and machines can work a lot faster than people. And the, and so the ones that I've talked that have, that have done that have said, look, our engineers have moved on to a lot different things. They're doing penetration testing, they're, you know, helping us with, with strategy and they're not fighting that, that daily fight of looking through log files. And the only proof point you need, Dave, is look at every big breach that we've had over the last five years. There's some SIM vendor up there that says, we caught it. Yeah. >>Yeah. We we had the data. >>Yeah. But, but, but the security team missed it. Well they missed it because you're, nobody can look at that much data manually. And so the, I I think their approach of relying heavily on machines to fight the fight is actually the right way. >>Is that a differentiator for them versus, we were talking before we went live that you and I first hit our very first segment back in 2017 at Fort Net. Is that, where do the two stand in your >>Yeah, it's funny cuz if you talk to the two vendors, they don't really see each other in a lot of accounts because Fort Net's more small market mid-market. It's the same strategy to some degree where Fort Net relies heavily on in-house development in Palo Alto relies heavily on acquisition. Yeah. And so I think from a consistently feature set, you know, Fort Net has an advantage there because it, it's all run off their, their their silicon. Where, where Palo's able to innovate very quickly. The, it it requires a lot of work right? To, to bring the front end and back ends together. But they're serving different markets. So >>Do you see that as a differentiator? The integration strategy that Palo Alto has as a differentiator? We talk to so many companies who have an a strong m and a strategy and, and execution arm. But the challenge is always integrating the technology so that the customer to, you know, ultimately it's the customer. >>I actually think they're, they're underrated as a, an acquirer. In fact, Dave wrote a post to a prior on Silicon Angle prior to Accelerate and he, he on, you put it on Twitter and you asked people to rank 'em as an acquirer and they were in the middle of the pack, >>Right? It was, it was. So it was Oracle, VMware, emc, ibm, Cisco, ServiceNow, and Palo Alto. Yeah. Or Oracle got very high marks. It was like 8.5 out of, you know, 10. Yeah. VMware I think was 6.5. Naira was high emc, big range. IBM five to seven. Cisco was three to eight. Yeah. Yeah, right. ServiceNow was a seven. And then, yeah, Palo Alto was like a five. And I, which I think it was unfair. Well, >>And I think it depends on how you look at it. And I, so I think a lot of the acquisitions Palo Alto's made, they've done a good job of integrating the backend data and they've almost ignored the front end. And so when you buy some of the products, it's a little clunky today. You know, if you work with Prisma Cloud, it could be a little bit cleaner. And even with, you know, the SD wan that took 'em a long time to bring CloudGenix in and stuff. But I think the approach is right. I don't, I don't necessarily believe you should integrate the front end until you've integrated the back end. >>That's >>The hard part, right? 
Because UL ultimately what you're gonna get, you're gonna get two panes of glass and one pane of glass and it might look pretty and all mush together, but ultimately you're not solving the bigger problem, right. Of, of being able to create that big data lake to, to fight security. And so I think, you know, the approach they've taken is the right one. I think from a user standpoint, maybe it doesn't show up as neatly because you don't see the frontend integration, but the way they're doing it is the right way to do it. And I'm glad they're doing it that way versus caving to the pressures of what, you know, the industry might want or >>Showed up in the performance of the company. I mean, this company was basically gonna double revenues to 7 billion from 2020 to >>2023. Think about that at that. That makes, >>I mean that's unbelievable, right? I mean, and then and they wanna double again. Yeah. You know, so, well >>What did, what did Nikesh was quoted as saying they wanna be the first cyber company that's a hundred billion dollars. He didn't give a timeline market >>Cap. Right. >>Market cap, right. Do what I wanna get both of your opinions on what you saw and heard and felt this week. What do you think the likelihood is? And and do you have any projections on how, you know, how many years it's gonna take for them to get there? >>Well, >>Well I think so if they're gonna get that big, right? And, and we were talking about this pre-show, any company that's becoming a big company does it through ecosystem >>Bingo >>Go, right? And that when you look around the show floor, it's not that impressive. No. And if that, if there's an area they need to focus on, it's building that ecosystem. And it's not with other security vendors, it's with application vendors and it's with the cloud companies and stuff. And they've got some relationships there, but they need to do more. I actually challenge 'em on that. One of the analyst sessions. They said, look, we've got 800 cortex partners. Well where are they? Right? Why isn't there a cortex stand here with a bunch of the small companies here? So I do think that that is an area they need to focus on. If they are gonna get to that, that market caps number, they will do so do so through ecosystem. Because every company that's achieved that has done it through ecosystem. >>A hundred percent agree. And you know, if you look at CrowdStrike's ecosystem, it's, I mean, pretty similar. Yeah. You know, it doesn't really, you know, make much, much, not much different from this, but I went back and just looked at some, you know, peak valuations during the pandemic and shortly thereafter CrowdStrike was 70 billion. You know, that's what their roughly their peak Palo Alto was 56, fortune was 59 for the actually diverged. Right. And now Palo Alto has taken the, the top mantle, you know, today it's market cap's 52. So it's held 93% of its peak value. Everybody else is tanking. Even Okta was 45 billion. It's been crushed as you well know. But, so Palo Alto wasn't always, you know, the number one in terms of market cap. But I guess my point is, look, if CrowdStrike could got to 70 billion during Yeah. During the frenzy, I think it's gonna take, to answer your question, I think it's gonna be five years. Okay. Before they get back there. I think this market's gonna be tough for a while from a valuation standpoint. I think generally tech is gonna kind of go up and down and sideways for a good year and a half, maybe even two years could be even longer. 
And then I think there's gonna be some next wave of productivity innovation that that hits. And then you're gonna, you're almost always gonna exceed the previous highs. It's gonna take a while. Yeah. >>Yeah, yeah. But I think their ability to disrupt the SIM market actually is something that I, I believe they're gonna do. I've been calling for the death of the sim for a long time and I know some people of Palo Alto are very cautious about saying that cuz the Splunks and the, you know, they're, they're their partners. But I, I think the, you know, it's what I said before, the, the tools are catching them, but they're, it's not in a way that's useful for the IT pro and, but I, I don't think the SIM vendors have that ecosystem of insight across network cloud endpoint. Right. Which is what you need in order to make a sim useful. >>CISO at an ETR round table said, if, if it weren't for my regulators, I would chuck my sim. >>Yes. >>But that's the only reason that, that this person was keeping it. No. >>Yeah. And I think the, the fact that most of those companies have moved to a perpetual MO or a a recurring revenue model actually helps unseat them. Typically when you pour a bunch of money into something, you remember the old computer associate says nobody ever took it out cuz the sunk dollars you spent to do it. But now that you're paying an annual recurring fee, it's actually makes it easier to take out. So >>Yeah, it's just an ebb and flow, right? Yeah. Because the maintenance costs were, you know, relatively low. Maybe it was 20% of the total. And then, you know, once every five years you had to do a refresh and you were still locked into the sort of maintenance and, and so yeah, I think you're right. The switching costs with sas, you know, in theory anyway, should be less >>Yeah. As long as you can migrate the data over. And I think they've got a pretty good handle on that. So, >>Yeah. So guys, I wanna get your perspective as a whole bunch of announcements here. We've only been here for a couple days, not a big conference as, as you can see from behind us. What Zs in your opinion was Palo Alto's main message and and what do you think about it main message at this event? And then same question for you. >>Yeah, I, I think their message largely wrapped around disruption, right? And, and they, and The's keynote already talked about that, right? And where they disrupted the firewall market by creating a NextGen firewall. In fact, if you look at all the new services they added to their firewall, you, you could almost say it's a NextGen NextGen firewall. But, but I do think the, the work they've done in the area of cloud and cortex actually I think is, is pretty impressive. And I think that's the, the SOC is ripe for disruption because it's for, for the most part, most socks still, you know, run off legacy playbooks. They run off legacy, you know, forensic models and things and they don't work. It's why we have so many breaches today. The, the dirty little secret that nobody ever wants to talk about is the bad guys are using machine learning, right? And so if you're using a signature based model, all they gotta do is tweak their model a little bit and it becomes, it bypasses them. So I, I think the only way to fight the the bad guys today is with you're gonna fight fire with fire. And I think that's, that's the path they've, they've headed >>Down. Yeah. The bad guys are hiding in plain sight, you know? Yeah, >>Yeah. Well it's, it's not hard to do now with a lot of those legacy tools. 
So >>I think, I think for me, you know, the stat that we threw out earlier, I think yesterday at our keynote analysis was, you know, the ETR data shows that are, that are that last survey around 35% of the respondents said we are actively consolidating, sorry, 44%, sorry, 35 says who are actively consolidating vendors, redundant vendors today that number's up to 44%. Yeah. It's by far the number one cost optimization technique. That's what these guys are pitching. And I think it's gonna resonate with people and, and I think to your point, they're integrating at the backend, their beeps are technical, right? I mean, they can deal with that complexity. Yeah. And so they don't need eye candy. Eventually they, they, they want to have that cuz it'll allow 'em to have deeper market penetration and make people more productive. But you know, that consolidation message came through loud and clear. >>Yeah. The big change in this industry too is all the new startups are all cloud native, right? They're all built on Amazon or Google or whatever. Yeah. And when your cloud native and you buy a cloud native integration is fast. It's not like having to integrate this big monolithic software stack anymore. Right. So I, I think their pace of integration will only accelerate from here because everything's now cloud native. >>If a customer comes to you or when a customer comes to you and says, Zs help us with this cyber transformation we have, our board isn't necessarily aligned with our executives in terms of execution of a security strategy. How do you advise them where Palo Alto is concerned? >>Yeah. You know, a lot, a lot of this is just fighting legacy mindset. And I've, I was talking with some CISOs here from state and local governments and things and they're, you know, they can't get more budget. They're fighting the tide. But what they did find is through the use of automation technology, they're able to bring their people costs way down. Right. And then be able to use that budget to invest in a lot of new projects. And so with that, you, you have to start with your biggest pain points, apply automation where you can, and then be able to use that budget to reinvest back in your security strategy. And it's good for the IT pros too, the security pros, my advice to the IT pros is, is if you're doing things today that aren't resume building, stop doing them. Right. Find a way to automate the money your job. And so if you're patching systems and you're looking through log files, there's no reason machines can't do that. And you go do something a lot more interesting. >>So true. It's like storage guys 10 years ago, provisioning loans. Yes. It's like, stop doing that. Yeah. You're gonna be outta a job. So who, last question I have is, is who do you see as the big competitors, the horses on the track question, right? So obviously Cisco kind of service has led for a while and you know, big portfolio company, CrowdStrike coming at it from end point. You know who, who, who do you see as the real players going for that? You know, right now the market's three to 4%. The leader has three, three 4% of the market. You know who they're all going for? 10, 15, maybe 20% of the market. Who, who are the likely candidates? Yeah, >>I don't know if CrowdStrike really has the breadth of portfolio to compete long term though. I I think they've had a nice run, but I, we might start to see the follow 'em. I think Microsoft is gonna be for middle. They've laid down the gauntlet, right? They are a security vendor, right? 
We, we were at Reinvent and a AWS is the platform for security vendors. Yes. Middle, somewhere in the middle. But Microsoft make no mistake, they're in security. They've got some good products. I think a lot of 'em are kind of good enough and they, they tie it to the licensing and I'm not sure that works in security, but they've certainly got the ear of a lot of it pros. >>It might work in smb. >>Yeah, yeah. It, it might. And, and I do like Zscaler. I, I know these guys poo poo the proxy model, but they've, they've done about as much with prox as you can. And I, I think it's, it's a battle of, I love the, the, the near, you know, proxies are dead and Jay's model, you know, Jay over at csca, throw 'em back at 'em. So I, it's good to see that kind of fight going on between the >>Two. Oh, it's great. Well, and, and again, ZScaler's coming at it from their cloud security angle. CrowdStrike's coming at it from endpoint. I, I do think CrowdStrike has an opportunity to build out the portfolio through m and a and maybe ecosystem. And then obviously, you know, Palo Alto's getting it done. How about Cisco? >>Yeah, Cisco's interesting. And I I think if Cisco can make the network matter in security and it should, right? We're talking about how a lot of you need a lot of forensics to fight security today. Well, they're gonna see things long before anybody else because they have all that network data. If they can tie network security, I, I mean they could really have that business take off. But we've been saying that about Cisco for 20 years. >>But big install based though. Yeah. It's hard for a company, any company to say, okay, hey Cisco customer sweep the floor and come with us. That's, that's >>A tough thing. They have a lot of good peace parts, right? And like duo's a good product and umbrella's a good product. They've, they've not done a good job. >>They're the opposite of these guys. >>They've not done a good job of the backend integration and that, that's where Cisco needs to, to focus. And I do think g G two Patel there fixed the WebEx group and I think he's now, in fact when you talk to him, he's doing very little on WebEx that that group's running itself and he's more focused in security. So I, I think we could see a resurgence there. But you know, they have a, from a revenue perspective, it's a little misleading cuz they have this big legacy base that's in decline while they're moving to cloud and stuff. So, but they, but they, there's a lot of Rick there trying to, to tie to network. >>Lots of fuel for conversation. We're gonna have to carry this on, on Silicon angle.com guys. Yes. And Wi KeePon. Lets do see us. Thank you so much for joining Dave and me giving us your insights as to this event. Where are gonna be next? Are you gonna be on >>Vacation? There's nothing more fun than mean on the cube. So what's outside of that though? Yeah, you know, Christmas coming up, I gotta go see family and be the obligatory, although for me that's a lot of travel, so I guess >>More planes. Yeah. >>Hopefully not in Vegas. >>Not in Vegas. >>Awesome. Nothing against Vegas. Yeah, no, >>We love it. We love >>It. Although I will say my year started off with ces. Yeah. And it's finishing up with Palo Alto here. The bookends. Yeah, exactly. In Vegas bookends. >>Well thanks so much for joining us. Thank you Dave. Always a pleasure to host a show with you and hear your insights. Reading your breaking analysis always kicks off my prep for show. And it, it's always great to see, but predictions come true. 
So thank you for being my co-host. All right. For Dave Valante and Zeus Kerravala, I'm Lisa Martin. You've been watching The Cube, the leader in live, emerging and enterprise tech coverage. Thanks for watching.
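The advice above about letting machines read the log files is easy to make concrete. Below is a minimal sketch, assuming a syslog-style SSH auth log and an arbitrary alert threshold; the format, field names, and cutoff are illustrative assumptions, not anything from Palo Alto Networks or any vendor's tooling. The script does the rote counting of failed logins and only surfaces the sources a human analyst actually needs to look at.

```python
import re
from collections import Counter

# A minimal sketch of automated log triage: count failed SSH logins per
# source IP and surface only the outliers for human review.
# The log format and the threshold are assumptions for illustration only.

FAILED_LOGIN = re.compile(
    r"Failed password for (?:invalid user )?(\S+) from (\d{1,3}(?:\.\d{1,3}){3})"
)
THRESHOLD = 10  # flag any source with more than 10 failures in the sample


def triage(log_lines):
    """Return {source_ip: failure_count} for sources above THRESHOLD."""
    failures = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            _user, source_ip = match.groups()
            failures[source_ip] += 1
    # The machine does the counting; a person only sees what crosses the line.
    return {ip: n for ip, n in failures.items() if n > THRESHOLD}


if __name__ == "__main__":
    sample = [
        "Dec 13 09:01:02 host sshd[311]: Failed password for invalid user admin from 203.0.113.7 port 41022 ssh2",
        "Dec 13 09:02:11 host sshd[402]: Accepted password for alice from 198.51.100.23 port 53011 ssh2",
    ]
    sample += [
        f"Dec 13 09:03:{i:02d} host sshd[500]: Failed password for root from 203.0.113.7 port 4{i:03d} ssh2"
        for i in range(12)
    ]
    for ip, count in triage(sample).items():
        print(f"review needed: {ip} had {count} failed logins")
```

Real SOC automation layers enrichment, cross-telemetry correlation, and ML scoring on top of this, but the division of labor is the same: the machine absorbs the volume, the person handles the judgment calls.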
SUMMARY :
In this takeaways segment from Palo Alto Networks Ignite 22, Lisa Martin and Dave Valante are joined by analyst Zeus Kerravala. Kerravala argues that Palo Alto Networks is the first vendor to make security consolidation real, built on the data advantage of Cortex and its data lake, and that AI- and ML-driven analytics, not signatures, are the only way to fight modern attacks. The three debate best of breed versus the platform approach, the disruption of the SOC and the SIM, the company's valuation and its hundred-billion-dollar ambition, the need for a stronger ecosystem, and the competitive field spanning Microsoft, CrowdStrike, Zscaler, Fortinet and Cisco, before wrapping with holiday plans and sign-offs.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Dave Valante | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
20% | QUANTITY | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Fort Net | ORGANIZATION | 0.99+ |
2017 | DATE | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
20 years | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
Vegas | LOCATION | 0.99+ |
Carla | PERSON | 0.99+ |
70 billion | QUANTITY | 0.99+ |
80% | QUANTITY | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
10 | QUANTITY | 0.99+ |
93% | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
five years | QUANTITY | 0.99+ |
2020 | DATE | 0.99+ |
Palo Alto Networks | ORGANIZATION | 0.99+ |
Jay | PERSON | 0.99+ |
45 billion | QUANTITY | 0.99+ |
7 billion | QUANTITY | 0.99+ |
Dave Valante Enz | PERSON | 0.99+ |
yesterday | DATE | 0.99+ |
Karala | PERSON | 0.99+ |
Palo | ORGANIZATION | 0.99+ |
44% | QUANTITY | 0.99+ |
ibm | ORGANIZATION | 0.99+ |
two vendors | QUANTITY | 0.99+ |
35 | QUANTITY | 0.99+ |
Palo Alto Networks | ORGANIZATION | 0.99+ |
Palo Alto | ORGANIZATION | 0.99+ |
two panes | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
Christmas | EVENT | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
8.5 | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
two years | QUANTITY | 0.99+ |
CrowdStrike | ORGANIZATION | 0.99+ |
56 | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
15 | QUANTITY | 0.99+ |
second day | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
Reinvent | ORGANIZATION | 0.99+ |
Lacework | ORGANIZATION | 0.99+ |
ServiceNow | ORGANIZATION | 0.99+ |
second point | QUANTITY | 0.99+ |
59 | QUANTITY | 0.99+ |
emc | ORGANIZATION | 0.99+ |
4% | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
two | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
Ignite22 | ORGANIZATION | 0.98+ |
two weeks ago | DATE | 0.98+ |
Naira | ORGANIZATION | 0.98+ |
The Cube | TITLE | 0.98+ |
2023 | DATE | 0.98+ |
Rick | PERSON | 0.98+ |
Karl Soderlund, Palo Alto Networks | Palo Alto Networks Ignite22
the cube presents ignite 22. brought to you by Palo Alto Networks hey guys and girls welcome back to Las Vegas it's thecube we are live at Palo Alto networks ignite 22. this is day one of two days of cube coverage Lisa Martin here with Dave vellante Dave we've had great conversations today talking with Executives the partner ecosystem is evolving it's growing at Palo Alto networks going to be digging into that next well we heard a lot of talk about you know Palo Alto you know the goal 100 billion dollar you know market cap company and to me a way and I think a critical way in which you get there is partner with the ecosystem because you can't do it alone the power of many versus the resources of one agree completely agree we've got Carl Sutherland with us SVP of North America ecosystem sales at Palo Alto networks welcome to the cube thanks so much for having me it's great being here so here we are the first full day of the conference actually started yesterday with the partner Summit give the audience a flavor of the partner Summit who was there what was talked about what's the current voice of the partner these days yeah great questions so we had a 150 Partners from around the globe representing all of our different routes to Market and for us our partner Community is expanding we work with system integrators we work with gsis we work with service providers Distributors traditional value-added resellers so it was a whole host of partners that were there it was a c-level audience and we really talked about the direction of where we're going as a company how they can continue to invest with us and have greater success long term and so from a voice of the partner standpoint what they're here to do is share with us where they want to engage more how we can enable them to be successful you talked about the Power of One Versus a community we're really looking at a segment of the marketplace right now for us to scale and hit our aspirational goals we can't do it with Palo Alto Network employees we have an employee base of 12 000 people if you take our ecosystem it's over a hundred thousand employees so if we can get them aligned and selling and motivated it's going to be a good day for all of us what so what are they telling you where do they want to spend their time where do they want to add value where are they winning yeah that's a great question so there's a transformation that's going on right now in the partner Community what's happening is a lot of Partners going that are transitioning from what would be traditional transactional Partners or resale Partners to being services-led and the Market's driving them there and what I mean by that is that customers are in a desperate dire State needing assistance figuring out and solving these very complex security problems so if there is a subset of Partners out there that have the skill set and capabilities that can come in from a consultative standpoint help them to develop the structure through deployment a full-blown management and do life cycle management that's a tremendous value I mean the numbers you hear thrown around in the industry right now is up to seven million uh security I.T jobs right now that are out there the open head count is tremendous people can't hire people fast enough all of us in the industry are going through and trying to find early in career or college graduates so we can train quickly or cross-train from other segments to get them into cyber security so if our part of the community can continue to get skilled and 
expand it's only going to help and the cloud is obviously where does the cloud fit in Carl because you know a lot of the partners when the clouds really start on the Steep part of the s-curve are like we have an opportunity here and by the way if we don't transition our business we could get commoditized yes so that you know that but you were talking about the transactional we can help people move to the cloud and a big part of that has got to be we can secure them in the cloud because it's a more in a lot of ways you know Cloud security is great but in a lot of ways it adds complexity what are you hearing from the party yeah so we are fortunate at Palo Alto networks when you look across the three loud largest cloud service provider from a Google AWS and Microsoft Azure we're either their number one isv or absolutely their number one security ISP so we've got a great uh relationships with them now our partners are coming along and saying how do we transact how do we add value a lot of times that value to your question is wrapping services around it to make sure it's a successful deployment because exactly what you stated the complexity is an all-time high so how do we make sure that we can solve a complex problem in a short term while increasing their security posture and that's really the goal and so where there there's sometimes complexity and mystery there's opportunity and partners can be profitable in doing that I wrote a piece once chaos is cash I have a security you know the criminals and vendors as well yes yes where there is is challenge and complexity there is great opportunity yeah talk about some of the partner program Evolution and some of the things that were announced with respect to the next wave program just yesterday yeah so at next wave um the program's been around for 12 years we constantly are looking to make enhancements and how we make those enhancements are by going out and speaking with these partners and listening to what they need so I have the honor to get to represent what their needs are and how we bring it to market for them so a couple interesting announcements that we made yesterday first of all we announced a new structural format for the program which is really going to allow our different route to markets to have a program that's fit for them because in the past when we were just traditionally a firewall company when the ecosystem just meant resale it was an easy model to have it's complex right now sometimes it's resale sometimes it's influence sometimes its services only we really need to be flexible and credible so we announced a Services only path so if you are a consulting company if you are a insurance company and you want to bring opportunities and leads to Palo Alto Network and you want to provide the services if you're not interested in the transaction you don't want to get involved in that we now have a pathway for you to support you to enable you and Kennedy to give you recognition within Palo Alto networks from an alignment standpoint so we're super excited about that uh as I know you guys speak quite a bit about the managed Services industry so it's a red hot area within Palo Alto networks one of the needs out there was that all not all managed Service Partners are created equally and so some have fantastic capabilities some have gaps we were calling it a P2P part of the partner program within managed services so our two managed Services Partners can actually work together to solve the problem that the end user has and give them a better 
outcome and fill each other's gaps so candidly it's been going on for a while the partnering but we've never really recognized it so we really built a program around it and now are sponsoring and supporting it versus people doing it on a sidebar so those guys were here in force yesterday yes sir right and and so obviously a lot of energy I'm sure do you see a day where they're here in force on the show floor yeah and and how do you see that evolving so they are here enforcement just right here you see a few of them I'm looking at AWS who's our you know we are their largest isv I'm looking at CDW we had them on the floor is our if not largest second largest partner globally right now and continuing to grow at a rate well they will probably be our first billion dollar partner to think about the size and scale of that relationship and where we've come from um their name CDW don't they never really thought of CDW right as a as a security firm wow what a transformation but please carry on and think about that let's talk about CDW saying think about reach that CDW has it's a 23 billion dollar organization and in a way an inside out sales model meaning there's a tremendous reach they have from their inside sales team and the relationships that they have traditionally historically they were procurement relationships in a way and I said this to the CDW team they were the easy button in the past now what they're doing is they made Seven Acquisitions over the last two years all of them Services oriented so now they're coming in as a consultative Viewpoint and solving a lot of complex problems and I see Google Cloud right here another great partner for us that we continue to invest in we have a great amount of integration and Technology integration with them and so and those are the three that I'm seeing just looking over my left shoulder right if I turn around I'll probably name five more so the majority of this room are the partners that fall within our ecosystem today fantastic so okay so what's your vision for where you want to take this ecosystem because as I said at the top I mean ecosystems are sort of the Hallmark of a I guess you're not a cloud company see I think you of you as a cloud company and so okay good so and I know you don't own your own public cloud and you know your history is you had your own data centers but yeah but you're the security Cloud yeah and so a security Cloud any Cloud needs a great ecosystem so what's your vision for the ecosystem let's go you know five plus years out sure you we start with the end in mind and what I mean by that is we always start with the end user what's the end user's needs the end user today needs flexibility with how they consume the technology they need help in how they support and deploy the technology they need guidance in how they plan out for their future and what their growth is so what we're doing is building a very diverse set of Partners in our ecosystem that all have special skills that they bring to the table so when nikesh sits up here and talks about being a 10 billion or a 20 billion or a 50 billion dollar company we absolutely cannot do it without our ecosystem and without having a very diverse ecosystem that all has different skills that can help us scale because again Palo Alto does not want to be a services company right let's work with the people who are the best at that when we think about the deloittees and accentures and the value they have within the end user base and our joint customer base what a fantastic time to to 
partner together and solve those boardroom challenges and that's where I really see the vision is that at the boardroom we're building out a plan that's three to five years that's going to continue to increase their security posture because we're not thinking if we're not forward thinking like that will be left behind because the Bad actors are thinking about how they find the different areas to penetrate they're getting so sophisticated the badocracy adversaries they are well funded they're motivated Grant the ransomware attack numbers in terms of the Velocity the complexity yes no longer are we going to get if it's when yeah uh big challenge for organizations Acro across I mean really across an organization regardless of Industry are you guys having any conversations with boards in the partner organization to help align the board with the executive level and really not just have security as a board level initiative but actually being able to execute a strategy yeah and you you nailed it it's not an initiative the initiative to me means there's a beginning and an end right a strategy means there's going to be a comprehensive approach how you continue to improve and we are very fortunate that a lot of our largest Partners around the globe have that position within the boards where they are the trusted advisor so what we're doing now is enabling them and giving them the skills so they can have a more comprehensive conversation around our platform approach around the challenges you know BJ I knew who was with you earlier today likes to say that the average customer he goes and sees has 50 to 70 disparate Technologies within their environment how do you manage that how do you maintain it how do you do renewals oh and by the way most likely the people who actually initially procured that aren't with you anymore they're in a different company so the need for a platform approach is there more so than ever but the decision for the platform quite often has to come from the most senior levels within the organization because again I'm going to go back to your what was your chaos line that you said chaos is Cash chaos is Cash well also chaos is job security so if you're at at the lower level within an organization that chaos and that magic gives you a little job security but that's a short term long term you really need to think about how you're protecting the environment holistically so it is a boardroom decision down that we need to have and you know that chaos the the motivation for that piece that I wrote was from the criminals standpoint right and then I was like okay but there's great opportunities for the technology industry but but I think that you know where we're headed I wonder if I get your thoughts on thoughts on this Carlos we always talk about the Board Room I think we're going now Beyond it here I am you know I'm hypersensitive about my security I got password managers two-factor authentication I don't want SMS based two-factor authentication I want my own authenticator and that's still not enough yeah I got air gaps yeah you know for my crypto you know and I'm super paranoid my point is I think the the individuals are getting much more Savvy about security why because we've all been hacked you know it's like when you lost your data in the because you weren't backed up you know that never happens anymore it's in the cloud or you know some people have multiple backups so it's it's becoming a cultural Trend beyond the board and it's because of the board lord said hey this is really 
important and so I think it's not only top down I think you're going to see bottom up and middle out and the exciting part for Palo Alto networks is and maybe for you as well is there any more exciting environment to talk about that's rapidly changing and constantly changing you could come back next week and our conversation is going to change as far as what we're doing we constantly need to be thinking three steps ahead of where we're going to move and be flexible and dynamic enough to change and that's what's going to keep us ahead of the economy yeah there's no segment as Dynamic I mean data is dynamic but not as fast changing as cyber I mean because of the adversary as you mentioned I mean so smart so now now they have open adversary ecosystems I mean the adversaries are building ecosystems right absolutely insane I've got peers that are bad guys yeah right right chaos is Cash what's your favorite partner story that you think really demonstrates the value of the ecosystem that Palo Alto networks has built yeah so without sharing names I'll talk about a large U.S national partner that was very uh that was founded on a networking business and partnered with a very large networking company and built that business and was successful doing that they wanted to Pivot into the security space and very early on they made a commitment to Paulo and Ulta networks to say we're going to learn we're going to invest we're going to align with your sales force and we're going to work together and right now they are our largest partner globally and they grew 70 year over year wow so think about that this is not on a small base we're talking about a half a billion dollars in Revenue growing at 70 year over year because to your point earlier it wasn't an initiative it was a strategy and they're executing on the strategy so I tell a lot of we call War Stories like that to other partners that are looking to invest from different markets it could be a large service provider that's you know trying to transform themselves into a security player and talk about the potential of what it could be in for their Marketplace and by the way I say publicly quite often Palo Alto networks will be your most profitable relationship that you have because of the total addressable Market that we're going after because of the solutions that we bring to Market and because of the opportunity within the end users right now and we're excited I want to come back to the mssp in that in its context so we've seen the rise of the mssp and particularly you know we were talking earlier I think it was with Wendy that uh no it was with CDW like 50 of the organizations in North America don't even have a sock yeah right so they need a service provider to come out so you said we you don't want to be in the services business right you're a product company right and that's from a financial standpoint that's phenomenal you're roughly 50 billion dollar market cap company let's let's call it six billion in Revenue so that's a nice Revenue multiple 8X you know and and and the Market's down so you're a 10x Revenue multiple company typically services companies are a 1x or a 2X are you seeing a change there where technology is giving these service providers operating leverage where they're able to scale whether it's because of the cloud because of the Partnerships the Eco would you call it before the the peer-to-peer ecosystem yes like the Gap fillers yes are you do you see the economics of services changing yeah from a baseline economic standpoint not 
looking at the valuations but let's look at it from a an opportunity to be profitable with Palo Alto networks we know if you are just doing the transaction you have a certain range of margin that you're going to make in the opportunity we know if you wrap services around it you're going to get 3x to 4X that margin we know that if it's managed services and there's life cycle management you're talking 5x to 8X that initial transaction and by the way it's recurring revenue for them so when you think about it if you just do a transaction you're only recurring revenue is a renewal that's predictable but it's not extremely profitable now we're saying the operating leverage you get is if you wrap that services and you're going to have an increased opportunity for a greater margin and it's sticky it's hard to replace a partner who's adding value to your team and A lot of times you walk in the end user you can't tell who the partner is and who the end user is because they are one team that's value yes and that's going to drive ebit yep for your partners and that's going to drive valuation you know you know I want to come back to valuation not that I'm not you can do that okay but because I was I predicted I do my prediction post every year and I predicted last year that we're going to see you know a Spate of MSS mssps I predicted you're going to see someone go public nobody's going public these days but I still think it's a great business yeah that's an untapped opportunity it's not an 8X or it's not a software marginal economics or but it's really sticky super high value yeah and I think it has you know long-term potential yeah to your point if you want to talk valuations for a second let's look at what's happened to the marketplace over the last 12 to 18 months the large majority of the non-public partners that we work with have taken on Capital from private Equity the private Equity that has come in has challenged them to go through a transformation that transformation is you we need you to be Services LED and that service is value because they believe there's going to is going to be a great greater evaluation from that end and they'll be able to scale and grow and stay ahead of the market doing that so when we have conversations when I have conversations yes I'm talking about the technology and the direction of the company but I'm also in there as a consultant saying where's the direction of your company and how do we have this great platform and how do we build it into your business and you wrap services around it and those are the conversations that CEOs want to have when I'm sitting down with our partner CEOs I bet they don't want to talk about our product being better than someone else's product they want to talk about the direction and health of their business yeah it's their business that's a business discussion business decision and they're thinking about okay what's my five-year strategic plan because they got to make bets yeah they're going to bet on a platform that they can add value to that creates that flywheel effect and they get a bet on your ecosystem as well correct oh correct absolutely good to be the leader it's good to be a leader and you know I'm sure as you've heard a few times we believe that economic headwinds are going to favor the market leaders and economic headwinds are going to favor the platform approach so we're going in more aggressive with our partner Community than ever before and there's just so much energy and excitement I feel like I keep on using that term 
over and over again, but that's really what we walk away with.
>> Last question for you, we have about 30 seconds left. A lot of momentum in the partner ecosystem, as you've described eloquently. What's next?
>> What's next?
>> What's next!
>> Yeah, so when I rolled out the strategy for what's next, what it is is a foundational platform that is going to allow flexibility for the partners, for them to decide where they want to invest. It can be in new areas, it can be, I want to align closer with the cloud service providers. It could be, I want to build a managed services business, can you help us do this? It could be, I want to drive greater penetration into geographical areas we haven't been in before. So again, we're almost acting as a consultant, looking at where they're going from a direction standpoint and building a program and a platform where we can grow and work with them. It's exciting, it's fun, it's great. Highly collaborative.
>> Highly collaborative.
>> Highly collaborative.
>> Thank you for joining us on the program, talking about the partner program, the ecosystem, Better Together, what you guys are doing and ultimately how it benefits the end user customer. We really appreciate your insights.
>> Excellent. Thank you. Thank you so much, appreciate it.
>> All right, our pleasure. For our guest and Dave Vellante, I'm Lisa Martin. You're watching the Cube, the leader in live enterprise and emerging tech coverage.
(Music)
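For readers who want to see the partner economics from this conversation laid out, here is a minimal sketch of the arithmetic. The 3X to 4X services multiplier, the 5X to 8X managed-services multiplier, and the roughly 50 billion dollar market cap against roughly six billion in revenue come from the interview; the deal size and the 10% baseline resale margin are hypothetical placeholders added for illustration, not Palo Alto Networks figures.

```python
# Illustrative sketch of the partner economics described in the conversation.
# The 3x-4x and 5x-8x multipliers and the ~$50B cap / ~$6B revenue figures
# come from the interview; the $1M deal size and 10% baseline resale margin
# below are hypothetical, for illustration only.

def revenue_multiple(market_cap: float, revenue: float) -> float:
    """Market-cap-to-revenue multiple, e.g. ~$50B / ~$6B is roughly 8x."""
    return market_cap / revenue

def partner_margin(deal_size: float, baseline_margin: float, services_multiplier: float) -> float:
    """Partner margin on one deal: the baseline resale margin scaled by the
    services multiplier (1x transaction-only, ~3-4x with attached services,
    ~5-8x with managed services and lifecycle management)."""
    return deal_size * baseline_margin * services_multiplier

if __name__ == "__main__":
    print(f"Revenue multiple: {revenue_multiple(50e9, 6e9):.1f}x")  # ~8.3x

    deal = 1_000_000   # hypothetical $1M transaction
    baseline = 0.10    # hypothetical 10% resale margin
    for label, mult in [("transaction only", 1.0),
                        ("services attached", 3.5),
                        ("managed services", 6.5)]:
        print(f"{label:>18}: ${partner_margin(deal, baseline, mult):,.0f} margin")
```

Under those placeholder numbers, a transaction-only deal yields $100,000 of margin, wrapping services lifts it to roughly $350,000, and a managed-services engagement to roughly $650,000, which is the operating-leverage point made in the conversation; the recurring nature of managed services is not modeled here.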