Closing Panel | Generative AI: Riding the Wave | AWS Startup Showcase S3 E1


 

(mellow music) >> Hello everyone, welcome to theCUBE's coverage of AWS Startup Showcase. This is the closing panel session on AI machine learning, the top startups building generative AI on AWS. It's a great panel. This is going to be the experts talking about riding the wave in generative AI. We got Ankur Mehrotra, who's the director and general manager of AI and machine learning at AWS, and Clem Delangue, co-founder and CEO of Hugging Face, and Ori Goshen, who's the co-founder and CEO of AI21 Labs. Ori from Tel Aviv dialing in, and the rest coming in here on theCUBE. Appreciate you coming on for this closing session for the Startup Showcase. >> Thanks for having us. >> Thank you for having us. >> Thank you. >> I'm super excited to have you all on. Hugging Face was recently in the news with the AWS relationship, so congratulations. Open source, open science, really driving the machine learning. And we got the AI21 Labs access to the LLMs, generating huge scale live applications, commercial applications, coming to the market, all powered by AWS. So everyone, congratulations on all your success, and thank you for headlining this panel. Let's get right into it. AWS is powering this wave here. We're seeing a lot of push here from applications. Ankur, set the table for us on the AI machine learning. It's not new, it's been goin' on for a while. The past three years have seen significant advancements, but there's been a lot of work done in AI machine learning. Now it's released to the public. Everybody's super excited and now says, "Oh, the future's here!" It's kind of been going on for a while and baking. Now it's kind of coming out. What's your view here? Let's get it started. >> Yes, thank you. So, yeah, as you may be aware, Amazon has been investing in machine learning research and development for quite some time now. And we've used machine learning to innovate and improve user experiences across different Amazon products, whether it's Alexa or Amazon.com. But we've also brought in our expertise to extend what we are doing in the space and add more generative AI technology to our AWS products and services, starting with CodeWhisperer, which is an AWS service that we announced a few months ago, which is, you can think of it as a coding companion as a service, which uses generative AI models underneath. And so this is a service that customers who have no machine learning expertise can just use. And we also are talking to customers, and we see a lot of excitement about generative AI, and customers who want to build these models themselves, who have the talent and the expertise and resources. For them, AWS has a number of different options and capabilities they can leverage, such as our custom silicon, such as Trainium and Inferentia, as well as distributed machine learning capabilities that we offer as part of SageMaker, which is an end-to-end machine learning development service. At the same time, many of our customers tell us that they're interested in not training and building these generative AI models from scratch, given they can be expensive and can require specialized talent and skills to build. And so for those customers, we are also making it super easy to bring in existing generative AI models into their machine learning development environment within SageMaker for them to use. So we recently announced our partnership with Hugging Face, where we are making it super easy for customers to bring in those models into their SageMaker development environment for fine tuning and deployment.
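To make the workflow Ankur describes concrete, here is a minimal sketch of bringing a Hugging Face model into SageMaker as a hosted endpoint, using the SageMaker Python SDK. The model ID, container versions, and instance type are illustrative assumptions, not details from the conversation:

```python
# Minimal sketch: hosting a Hugging Face Hub model on a SageMaker endpoint.
# Assumes an AWS account with a SageMaker execution role; the model ID,
# framework versions, and instance type below are illustrative.
import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()  # IAM role SageMaker assumes to serve the model

model = HuggingFaceModel(
    role=role,
    transformers_version="4.26",  # supported version pairs change over time;
    pytorch_version="1.13",       # check the SDK docs for current combinations
    py_version="py39",
    env={
        "HF_MODEL_ID": "google/flan-t5-large",  # any Hub model ID could go here
        "HF_TASK": "text2text-generation",
    },
)

# Stand up a real-time endpoint and send it a prompt.
predictor = model.deploy(initial_instance_count=1, instance_type="ml.g5.xlarge")
print(predictor.predict({"inputs": "Summarize: startups are building generative AI on AWS."}))

predictor.delete_endpoint()  # endpoints bill while running; clean up when done
```

Fine tuning follows the same pattern with a `HuggingFace` estimator and a training script, after which the tuned artifacts deploy the same way.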
And then we are also partnering with other proprietary model providers such as AI21 and others, where we are making these generative AI models available within SageMaker for our customers to use. So our approach here is to really provide customers options and choices and help them accelerate their generative AI journey. >> Ankur, thank you for setting the table there. Clem and Ori, I want to get your take, because riding the wave is the theme of this session, and to me being in California, I imagine the big surf, the big waves, the big talent out there. This is like alpha geeks, alpha coders, developers are really leaning into this. You're seeing massive uptake from the smartest people. Whether they're young or around, they're coming in with their kind of surfboards, (chuckles) if you will. These early adopters, they've been on this for a while; now the waves are hitting. This is a big wave, everyone sees it. What are some of those early adopter devs doing? What are some of the use cases you're seeing right out of the gate? And what does this mean for the folks that are going to come in and get on this wave? Can you guys share your perspective on this? Because you're seeing the best talent now leaning into this. >> Yeah, absolutely. I mean, from Hugging Face's vantage point, it's not even a wave, it's a tidal wave, or maybe even the tide itself. Because actually what we are seeing is that AI and machine learning is not something that you add to your products. It's very much a new paradigm to do all technology. It's this idea that we had in the past 15, 20 years, one way to build software and to build technology, which was writing a million lines of code, very rule-based, and then you get your product. Now what we are seeing is that every single product, every single feature, every single company is starting to adopt AI to build the next generation of technology. And that works both to make the existing use cases better, if you think of search, if you think of social network, if you think of SaaS, but also it's creating completely new capabilities that weren't possible with the previous paradigm. Now AI can generate text, it can generate images, it can describe your image, it can do so many new things that weren't possible before. >> It's going to really make the developers really productive, right? I mean, you're seeing the developer uptake strong, right? >> Yes, we have over 15,000 companies using Hugging Face now, and it keeps accelerating. I really think that maybe in like three, five years, there's not going to be any company not using AI. It's going to be really kind of the default to build all technology. >> Ori, weigh in on this. APIs, the cloud. Now I'm a developer, I want to have live applications, I want the commercial applications on this. What's your take? Weigh in here. >> Yeah, first, I absolutely agree. I mean, we're in the midst of a technology shift here. I think not a lot of people realize how big this is going to be. Just the number of possibilities is endless, and I think hard to imagine. And I don't think it's just the use cases. I think we can think of it as two separate categories. We'll see companies and products enhancing their offerings with these new AI capabilities, but we'll also see new companies that are AI first, that kind of reimagine certain experiences. They build something that wasn't possible before. And that's why I think it's actually extremely exciting times.
And maybe more philosophically, I think now these large language models and large transformer-based models are helping us as people to express our thoughts and kind of making the bridge from our thinking to a creative digital asset at a speed we've never imagined before. I can write something down and get a piece of text, or an image, or code. So I'll start by saying it's hard to imagine all the possibilities right now, but it's certainly big. And if I had to bet, I would say it's probably at least as big as the mobile revolution we've seen in the last 20 years. >> Yeah, this is the biggest. I mean, it's been compared to the Enlightenment Age. I saw the Wall Street Journal had a recent story on this. We've been saying that this is probably going to be bigger than all inflection points combined in the tech industry, given what transformation is coming. I guess I want to ask you guys, on the early adopters, we've been hearing on these interviews and throughout the industry that there's already a set of big companies, a set of companies out there that have a lot of data and they're already there, they're kind of tinkering. Kind of reminds me of the old hyperscaler days where they were building their own scale, and they're eatin' glass, spittin' nails out, you know, they're hardcore. Then you got everybody else kind of saying board level, "Hey team, how do I leverage this?" How do you see those two things coming together? You got the fast followers coming in behind the early adopters. What's it like for the second wave coming in? What are those conversations for those developers like? >> I mean, I think for me, the important switch for companies is to change their mindset from being kind of like a traditional software company to being an AI or machine learning company. And that means investing, hiring machine learning engineers, machine learning scientists, infrastructure team members who are working on how to put these models in production, team members who are able to optimize models, specialized models, customized models for the company's specific use cases. So it's really changing this mindset of how you build technology and optimize your company building around that. Things are moving so fast that I think now it's kind of like too late for low-hanging fruit or small, small adjustments. I think it's important to realize that if you want to be good at that, and if you really want to surf this wave, you need massive investments. If there are like some surfers listening with this analogy of the wave, right, when there are waves, it's not enough just to stand and make a little bit of adjustments. You need to position yourself aggressively, paddle like crazy, and that's how you get into the waves. So that's what companies, in my opinion, need to do right now. >> Ori, what's your take on the generative models out there? We hear a lot about foundation models. What's your experience running end-to-end applications for large foundation models? Any insights you can share with the app developers out there who are looking to get in? >> Yeah, I think first of all, it starts to create an economy of scale, where it probably doesn't make sense for every company to create their own foundation models. You can basically start by using an existing foundation model, either open source or a proprietary one, and start deploying it for your needs. And then comes the second round when you are starting the optimization process.
You bootstrap, whether it's a demo, or a small feature, or introducing new capability within your product, and then start collecting data. That data, and particularly the human feedback data, helps you to constantly improve the model, so you create this data flywheel. And I think we're now entering an era where customers have a lot of different choices of how they want to start their generative AI endeavor. And it's a good thing that there's a variety of choices. And the really amazing thing here is that every industry, any company you speak with, it could be something very traditional like industrial or financial, medical, really any company. I think people now start to imagine what are the possibilities, and seriously think what's their strategy for adopting this generative AI technology. And I think in that sense, the foundation model actually enabled this to become scalable. So the barrier to entry became lower; now the adoption could actually accelerate. >> There's a lot of integration aspects here in this new wave that's a little bit different. Before it was like very monolithic, hardcore, very brittle. A lot more integration, you see a lot more data coming together. I have to ask you guys, as developers come in and grow, I mean, when I went to college and you were a software engineer, I mean, I got a degree in computer science, and software engineering, that's all you did was code, (chuckles) you coded. Now, isn't it like everyone's a machine learning engineer at this point? Because that will be ultimately the science. So, (chuckles) you got open source, you got open software, you got the communities. Swami called you guys the GitHub of machine learning, Hugging Face is the GitHub of machine learning, mainly because that's where people are going to code. So this is essentially, machine learning is computer science. What's your reaction to that? >> Yes, my co-founder Julien at Hugging Face has been saying this for quite a while now, for over three years: that actually software engineering as we know it today is a subset of machine learning, instead of the other way around. People would call us crazy a few years ago when we were saying that. But now we are realizing that you can actually code with machine learning. So machine learning is generating code. And we are starting to see that every software engineer can leverage machine learning through open models, through APIs, through different technology stacks. So yeah, it's not crazy anymore to think that maybe in a few years, there's going to be more people doing AI and machine learning. However you call it, right? Maybe you'll still call them software engineers, maybe you'll call them machine learning engineers. But there might be more of these people in a couple of years than there are software engineers today. >> I bring this up as more tongue in cheek as well, because Ankur, infrastructure as code is what made cloud great, right? That's kind of the DevOps movement. But here the shift is so massive, there will be a game-changing philosophy around coding. Machine learning as code, you're starting to see CodeWhisperer, you guys have had coding companions for a while on AWS. So this is a paradigm shift. How is the cloud playing into this for you guys? Because to me, I've been riffing on some interviews where it's like, okay, you got the cloud going next level. This is an example of that, where there is a DevOps-like moment happening with machine learning, whether you call it coding or whatever.
It's writing code on its own. Can you guys comment on what this means on top of the cloud? What comes out of the scale? What comes out of the benefit here? >> Absolutely, so- >> Well first- >> Oh, go ahead. >> Yeah, so I think as far as scale is concerned, I think customers are really relying on cloud to make sure that the applications that they build can scale along with the needs of their business. But there's another aspect to it, which is that until a few years ago, John, what we saw was that machine learning was a data scientist-heavy activity. There were data scientists who were taking the data and training models. And then as machine learning found its way more and more into production and actual usage, we saw MLOps become a thing, and MLOps engineers become more involved in the process. And then we now are seeing, as machine learning is being used to solve more business critical problems, we're seeing even legal and compliance teams get involved. We are seeing business stakeholders more engaged. So, more and more machine learning is becoming an activity that's not just performed by data scientists, but is performed by a team and a group of people with different skills. And for them, we as AWS are focused on providing the best tools and services for these different personas to be able to do their job and really complete that end-to-end machine learning story. So that's where, whether it's tools related to MLOps or even for folks who cannot code or don't know any machine learning. For example, we launched SageMaker Canvas as a tool last year, which is a UI-based tool which data analysts and business analysts can use to build machine learning models. So overall, the spectrum in terms of persona and who can get involved in the machine learning process is expanding, and the cloud is playing a big role in that process. >> Ori, Clem, can you guys weigh in too? 'Cause this is just another abstraction layer of scale. What's it mean for you guys as you look forward to your customers and the use cases that you're enabling? >> Yes, I think what's important is that the AI companies and providers and the cloud kind of work together. That's how you make a seamless experience and you actually reduce the barrier to entry for this technology. So that's what we've been super happy to do with AWS for the past few years. We actually announced not too long ago that we are doubling down on our partnership with AWS. We're excited to have many, many customers on our shared product, the Hugging Face deep learning container on SageMaker. And we are working really closely with the Inferentia team and the Trainium team to release some more exciting stuff in the coming weeks and coming months. So I think when you have an ecosystem and a system where AWS and the AI providers, AI startups can work hand in hand, it's to the benefit of the customers and the companies, because it makes it orders of magnitude easier for them to adopt this new paradigm of building technology with AI. >> Ori, this is a scale on reasoning too. The data's out there and making sense out of it, making it reason, getting comprehension, having it make decisions is next, isn't it? And you need scale for that. >> Yes. Just a comment about the infrastructure side. So I think really the purpose is to streamline and make these technologies much more accessible. And I think we'll see, I predict that we'll see in the next few years more and more tooling that makes this technology much simpler to consume.
And I think it plays a very important role. There's so many aspects, like monitoring the models and the kind of outputs they produce, and kind of containing and running them in a production environment. There's so much there to build on; the infrastructure side will play a very significant role. >> All right, that's awesome stuff. I'd love to change gears a little bit and get a little philosophy here around AI and how it's going to transform, if you guys don't mind. There's been a lot of conversations around, on theCUBE here as well as in some industry areas, where it's like, okay, all the heavy lifting is automated away with machine learning and AI, the complexity, there's some efficiencies, it's horizontal and scalable across all industries. Ankur, good point there. Everyone's going to use it for something. And a lot of stuff gets brought to the table with large language models and other things. But the key ingredient will be proprietary data or human input, or some sort of AI whisperer kind of role, or prompt engineering, people are saying. So with that being said, some are saying it's automating intelligence. And that creativity will be unleashed from this. If the heavy lifting goes away and AI can fill the void, that shifts the value to the intellect or the input. And so that means data's got to come together, interact, fuse, and understand each other. This is kind of new. I mean, old school AI was, okay, got a big model, I provisioned it for a long time, very expensive. Now it's all free flowing. Can you guys comment on where you see this going with this freeform, data flowing everywhere, heavy lifting, and then specialization? >> Yeah, I think- >> Go ahead. >> Yeah, I think, so what we are seeing with these large language models or generative models is that they're really good at creating stuff. But I think it's also important to recognize their limitations. They're not as good at reasoning and logic. And I think now we're seeing great enthusiasm, I think, which is justified. And the next phase would be how to make these systems more reliable. How to inject more reasoning capabilities into these models, or augment with other mechanisms that actually perform more reasoning so we can achieve more reliable results. And we can count on these models to perform for critical tasks, whether it's medical tasks, legal tasks. We really want to kind of offload a lot of the intelligence to these systems. And then we'll have to get back, we'll have to make sure these are reliable, we'll have to make sure we get some sort of explainability that we can understand the process behind the generated results that we received. So I think this is kind of the next phase of systems that are based on these generative models. >> Clem, what's your view on this? Obviously you're an open community, open source has been around, it's been a great track record, proven model. I'm assuming creativity's going to come out of the woodwork, and if we can automate open source contribution, and relationships, and onboarding more developers, there's going to be unleashing of creativity. >> Yes, it's been so exciting on the open source front. We all know BERT, BLOOM, GPT-J, T5, Stable Diffusion, that whole lineup. The previous and the current generation of open source models that are on Hugging Face. It has been accelerating in the past few months. So I'm super excited about ControlNet right now that is really having a lot of impact, which is kind of like a way to control the generation of images.
Super excited about Flan-UL2, which is like a new model that has been recently released and is open source. So yeah, it's really fun to see the ecosystem coming together. Open source has been the basis for traditional software, with like open source programming languages, of course, but also all the great open source that we've gotten over the years. So we're happy to see that the same thing is happening for machine learning and AI, and hopefully can help a lot of companies reduce a little bit the barrier to entry. So yeah, it's going to be exciting to see how it evolves in the next few years in that respect. >> I think the developer productivity angle that's been talked about a lot in the industry will be accelerated significantly. I think security will be enhanced by this. I think in general, applications are going to transform at a radical rate, accelerated, incredible rate. So I think it's not a big wave, it's the water, right? I mean, (chuckles) it's the new thing. My final question for you guys, if you don't mind, I'd love to get each of you to answer the question I'm going to ask you, which is, a lot of conversations around data. Data infrastructure's obviously involved in this. And the common thread that I'm hearing is that every company that looks at this is asking themselves, if we don't rebuild our company, start thinking about rebuilding our business model around AI, we might be dinosaurs, we might be extinct. And it reminds me of that scene in Moneyball when, at the end, it's like, if we're not building the model around your model, every company will be out of business. What's your advice to companies out there that are having those kinds of moments where it's like, okay, this is real, this is next gen, this is happening. I better start thinking and putting into motion plans to refactor my business, 'cause it's happening, business transformation is happening on the cloud. This kind of puts an exclamation point on, with the AI, as a next step function. Big increase in value. So it's an opportunity for leaders. Ankur, we'll start with you. What's your advice for folks out there thinking about this? Do they put their toe in the water? Do they jump right into the deep end? What's your advice? >> Yeah, John, so we talk to a lot of customers, and customers are excited about what's happening in the space, but they often ask us like, "Hey, where do we start?" So we always advise our customers to do a lot of proof of concepts, understand where they can drive the biggest ROI. And then also leverage existing tools and services to move fast and scale, and try and not reinvent the wheel where it doesn't need to be. That's basically our advice to customers. >> Get it. Ori, what's your advice to folks who are scratching their head going, "I better jump in here. "How do I get started?" What's your advice? >> So I actually think you need to think about it really economically. Both on the opportunity side and the challenges. So there's a lot of opportunities for many companies to actually gain revenue upside by building these new generative features and capabilities. On the other hand, of course, incorporating these capabilities could probably affect the COGS. So I think we really need to think carefully about both of these sides, and also understand clearly whether this is a project or an effort aimed at cost reduction, where the ROI is pretty clear, or a revenue amplifier, where there's, again, a lot of different opportunities.
So I think once you think about this in a structured way, I think, and map the different initiatives, then it's probably a good way to start and a good way to start thinking about these endeavors. >> Awesome. Clem, what's your take on this? What's your advice, folks out there? >> Yes, all of this is very good advice already. Something that you said before, John, that I disagree with a little bit, a lot of people are talking about the data moat and proprietary data. Actually, when you look at some of the organizations that have been building the best models, they don't have specialized or unique access to data. So I'm not sure that's so important today. I think what's important for companies, and it's been the same for the previous generation of technology, is their ability to build better technology faster than others. And in this new paradigm, that means being able to build machine learning faster than others, and better. So that's how, in my opinion, you should approach this. And kind of like how can you evolve your company, your teams, your products, so that you are able in the long run to build machine learning better and faster than your competitors. And if you manage to put yourself in that situation, then that's when you'll be able to differentiate yourself to really kind of be impactful and get results. That's really hard to do. It's something really different, because machine learning and AI is a different paradigm than traditional software. So this is going to be challenging, but I think if you manage to nail that, then the future is going to be very interesting for your company. >> That's a great point. Thanks for calling that out. I think this all reminds me of the cloud days early on. If you went to the cloud early, you took advantage of it when the pandemic hit. If you weren't native in the cloud, you got hamstrung by that, you were flatfooted. So just get in there. (laughs) Get in the cloud, get into AI, you're going to be good. Thanks for calling that out. Final parting comments, what's your most exciting thing going on right now for you guys? Ori, Clem, what's the most exciting thing on your plate right now that you'd like to share with folks? >> I mean, for me it's just the diversity of use cases and really creative ways of companies leveraging this technology. Every day I speak with about two, three customers, and I'm continuously being surprised by the creative ideas. And the future of what can be achieved here is really exciting. And also I'm amazed by the pace that things move in this industry. It's just, there's not a dull moment. So, definitely exciting times. >> Clem, what are you most excited about right now? >> For me, it's all the new open source models that have been released in the past few weeks, and that they'll keep being released in the next few weeks. I'm also super excited about more and more companies getting into this capability of chaining different models and different APIs. I think that's a very, very interesting development, because it creates new capabilities, new possibilities, new functionalities that weren't possible before. You can plug an API with an open source embedding model, with, like, an audio transcription model. So that's also very exciting. This capability of having more interoperable machine learning will also, I think, open a lot of interesting things in the future. >> Clem, congratulations on your success at Hugging Face. Please pass that on to your team. Ori, congratulations on your success, and continue on, it's just day one.
I mean, it's just the beginning. It's not even scratching the surface. Ankur, I'll give you the last word. What are you excited for at AWS? More cloud goodness coming here with AI. Give you the final word. >> Yeah, so as both Clem and Ori said, I think the research in the space is moving really, really fast, so we are excited about that. But we are also excited to see the speed at which enterprises and other AWS customers are applying machine learning to solve real business problems, and the kind of results they're seeing. So when they come back to us and tell us the kind of improvement in their business metrics and overall customer experience that they're driving and they're seeing real business results, that's what keeps us going and inspires us to continue inventing on their behalf. >> Gentlemen, thank you so much for this awesome high-impact panel. Ankur, Clem, Ori, congratulations on all your success. We'll see you around. Thanks for coming on. Generative AI, riding the wave, it's a tidal wave, it's the water, it's all happening. All great stuff. This is season three, episode one of AWS Startup Showcase closing panel. This is the AI ML episode, the top startups building generative AI on AWS. I'm John Furrier, your host. Thanks for watching. (mellow music)

Published Date : Mar 9 2023


Opening Panel | Generative AI: Hype or Reality | AWS Startup Showcase S3 E1


 

(light airy music) >> Hello, everyone, welcome to theCUBE's presentation of the AWS Startup Showcase, AI and machine learning. "Top Startups Building Generative AI on AWS." This is season three, episode one of the ongoing series covering the exciting startups from the AWS ecosystem, talking about AI machine learning. We have three great guests: Bratin Saha, Vice President of Machine Learning and AI Services at Amazon Web Services. Tom Mason, the CTO of Stability AI, and Aidan Gomez, CEO and co-founder of Cohere. Two practitioners doing startups, and AWS. Gentlemen, thank you for opening up this session, this episode. Thanks for coming on. >> Thank you. >> Thank you. >> Thank you. >> So the topic is hype versus reality. So I think we're all on the same page: the reality is great, hype is great, but the reality's here. I want to get into it. Generative AI's got all the momentum, it's going mainstream, it's kind of come out from behind the ropes, it's now mainstream. We saw the success of ChatGPT, which opened up everyone's eyes, but there's so much more going on. Let's jump in and get your early perspectives on what should people be talking about right now? What are you guys working on? We'll start with AWS. What's the big focus right now for you guys as you come into this market that's highly active, highly hyped up, but people see value right out of the gate? >> You know, we have been working on generative AI for some time. In fact, last year we released CodeWhisperer, which is about using generative AI for software development and a number of customers are using it and getting real value out of it. So generative AI is now something that's mainstream that can be used by enterprise users. And we have also been partnering with a number of other companies. So, you know, stability.ai, we've been partnering with them a lot. We want to be partnering with other companies as well, and you see us doing three things: you know, first is providing the most efficient infrastructure for generative AI. And that is where, you know, things like Trainium, things like Inferentia, things like SageMaker come in. And then next is the set of models and then the third is the kind of applications like CodeWhisperer and so on. So, you know, it's early days yet, but clearly there's a lot of amazing capabilities that will come out and something that, you know, our customers are starting to pay a lot of attention to. >> Tom, talk about your company and what your focus is and why the Amazon Web Services relationship's important for you? >> So yeah, we're primarily committed to making incredible open source foundation models and obviously Stable Diffusion's been our kind of first big model there, which we trained all on AWS. We've been working with them over the last year and a half to develop, obviously a big cluster, and bring all that compute to training these models at scale, which has been a really successful partnership. And we're excited to take it further this year as we develop the commercial strategy of the business and build out, you know, the ability for enterprise customers to come and get all the value from these models that we think they can get. So we're really excited about the future. We've got a hugely exciting pipeline for this year with new modalities and video models and wonderful things and trying to solve images once and for all and get the kind of general value and value proposition correct for customers. So it's a really exciting time and very honored to be part of it.
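For a concrete picture of what Tom's open source models look like from a developer's seat, here is a minimal sketch of text-to-image generation with a Stable Diffusion checkpoint through the open source diffusers library. The model ID, GPU assumption, and generation settings are illustrative, not anything specified in the conversation:

```python
# Minimal sketch: text-to-image with an open source Stable Diffusion
# checkpoint via the diffusers library. Assumes a CUDA-capable GPU;
# the model ID and settings below are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,  # half precision roughly halves GPU memory use
).to("cuda")

image = pipe(
    "a surfer riding a giant wave at sunset, digital art",
    num_inference_steps=30,  # fewer denoising steps trade quality for speed
    guidance_scale=7.5,      # how strongly the image follows the prompt
).images[0]

image.save("wave.png")
```

The same pipeline also accepts negative prompts and fixed seeds, which is roughly where the fine-tuning and controllability work discussed later in the panel picks up.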
It's great to see some of your customers doing so well out there. Congratulations to your team. Appreciate that. Aidan, let's get into what you guys do. What does Cohere do? What are you excited about right now? >> Yeah, so Cohere builds large language models, which are the backbone of applications like ChatGPT and GPT-3. We're extremely focused on solving the issues with adoption for enterprise. So it's great that you can make a super flashy demo for consumers, but it takes a lot to actually get it into billion user products and large global enterprises. So about six months ago, we released our command models, which are some of the best that exist for large language models. And in December, we released our multilingual text understanding models and that's on over a hundred different languages and it's trained on, you know, authentic data directly from native speakers. And so we're super excited to continue pushing this into enterprise and solving those barriers for adoption, making this transformation a reality. >> Just real quick, while I got you there on the new products coming out. Where are we in the progression? People see some of the new stuff out there right now. There's so much more headroom. Can you just scope out in your mind what that looks like? Like from a headroom standpoint? Okay, we see ChatGPT. "Oh yeah, it writes my papers for me, does some homework for me." I mean okay, yawn, maybe people say that, (Aidan chuckles) people are excited or people are blown away. I mean, it's helped theCUBE out, it helps me, you know, speed up a little bit with my write-ups but it's not always perfect. >> Yeah, at the moment it's like a writing assistant, right? And it's still super early in the technology's trajectory. I think it's fascinating and it's interesting but its impact is still really limited. I think in the next year, like within the next eight months, we're going to see some major changes. You've already seen the very first hints of that with stuff like Bing Chat, where you augment these dialogue models with an external knowledge base. So now the models can be kept up to date to the millisecond, right? Because they can search the web and they can see events that happened a millisecond ago. But that's still limited in the sense that when you ask the question, what can these models actually do? Well they can just write text back at you. That's the extent of what they can do. And so the real project, the real effort, that I think we're all working towards is actually taking action. So what happens when you give these models the ability to use tools, to use APIs? What can they do when they can actually effect change out in the real world, beyond just streaming text back at the user? I think that's the really exciting piece. >> Okay, so I wanted to tee that up early in the segment 'cause I want to get into the customer applications. We're seeing early adopters come in, using the technology because they have a lot of data, they have a lot of large language model opportunities and then there's a big fast follower wave coming behind it. I call that the people who are going to jump in the pool early and get into it. They might not be advanced. Can you guys share what customer applications are being used with large language and vision models today and how they're using it to transform on the early adopter side, and how is that a telltale sign of what's to come?
You know, one of the things we have been seeing both with the text models that Aidan talked about as well as the vision models that stability.ai does, Tom, is customers are really using it to change the way you interact with information. You know, one example of a customer that we have, is someone who's kind of using that to query customer conversations and ask questions like, you know, "What was the customer issue? How did we solve it?" And trying to get those kinds of insights that were previously much harder to do. And then of course software is a big area. You know, generating software, making that, you know, just deploying it in production. Those have been really big areas that we have seen customers start to do. You know, looking at documentation, like instead of you know, searching for stuff and so on, you know, you just have an interactive way, in which you can just look at the documentation for a product. You know, all of this goes to where we need to take the technology. One of which is, you know, the models have to be there but they have to work reliably in a production setting at scale, with privacy, with security, and you know, making sure all of this is happening, is going to be really key. That is what, you know, we at AWS are looking to do, which is work with partners like stability and others and in the open source and really take all of these and make them available at scale to customers, where they work reliably. >> Tom, Aidan, what are your thoughts on this? Where are customers landing on these first use cases or set of low-hanging fruit use cases or applications? >> Yeah, so I think like the first group of adopters that really found product-market fit were the copywriting companies. So one great example of that is HyperWrite. Another one is Jasper. And so for Cohere, that's the tip of the iceberg, like there's a very long tail of usage from a bunch of different applications. HyperWrite is one of our customers, they help beat writer's block by drafting blog posts, emails, and marketing copy. We also have a global audio streaming platform, which is using us to power a search engine that can comb through podcast transcripts, in a bunch of different languages. Then a global apparel brand, which is using us to transform how they interact with their customers through a virtual assistant, two dozen global news outlets who are using us for news summarization. So really like, these large language models, they can be deployed all over the place into every single industry sector, language is everywhere. It's hard to think of any company on Earth that doesn't use language. So it's, very, very- >> We're doing it right now. We got the language coming in. >> Exactly. >> We'll transcribe this puppy. All right. Tom, on your side, what do you see the- >> Yeah, we're seeing some amazing applications of it and you know, I guess that's partly been, because of the growth in the open source community and some of these applications have come from there that are then triggering this secondary wave of innovation, which is coming a lot from, you know, controllability and explainability of the model. But we've got companies like, you know, Jasper, which Aidan mentioned, who are using Stable Diffusion for image generation in blog creation, content creation. We've got Lensa, you know, which exploded, and is built on top of Stable Diffusion for fine tuning so people can bring themselves and their pets and you know, everything into the models.
So we've now got fine-tuned Stable Diffusion at scale, which has democratized, you know, that process, which is really fun to see. Lensa, you know, exploded. You know, I think it was the fastest-growing app in the App Store at one point. And lots of other examples like NightCafe and Lexica and Playground. So seeing lots of cool applications. >> So many applications, we'll probably be a customer for all you guys. We'll definitely talk after. But the challenges are there for people adopting, they want to get into what you guys see as the challenges that turn into opportunities. How do you see the customers adopting generative AI applications? For example, we have massive amounts of transcripts, timed up to all the videos. I don't even know what to do. Do I just, do I code my API there? So, everyone has this problem, every vertical has these use cases. What are the challenges for people getting into this and adopting these applications? Is it figuring out what to do first? Or is it a technical setup? Do they stand up stuff, they just go to Amazon? What do you guys see as the challenges? >> I think, you know, the first thing is coming up with where you think you're going to reimagine your customer experience by using generative AI. You know, we talked about Ada, and Tom talked about a number of these ones and you know, you pick up one or two of these to get that robust. And then once you have them, you know, we have models and we'll have more models on AWS, these large language models that Aidan was talking about. Then you go in and start using these models and testing them out and seeing whether they fit the use case or not. In many situations, like you said, John, our customers want to say, "You know, I know you've trained these models on a lot of publicly available data, but I want to be able to customize it for my use cases. Because, you know, there's some knowledge that I have created and I want to be able to use that." And then in many cases, and I think Aidan mentioned this. You know, you need these models to be up to date. Like you can't have it stay stale. And in those cases, you augment it with a knowledge base, you know you have to make sure that these models are not hallucinating. And so you need to be able to do the right kind of responsible AI checks. So, you know, you start with a particular use case, and there are a lot of them. Then, you know, you can come to AWS, and then look at one of the many models we have and you know, we are going to have more models for other modalities as well. And then, you know, play around with the models. We have a playground kind of thing where you can test these models on some data and then you can probably, you will probably want to bring your own data, customize it to your own needs, do some of the testing to make sure that the model is giving the right output and then just deploy it. And you know, we have a lot of tools.
So the typical use case would be, you know you're using these APIs a little bit for testing and getting familiar and then there will be an API that will allow you to train this model further on your data. So you use that API, you know, make sure you augment it with the knowledge base. So then you use those APIs to customize the model and then just deploy it in an application. You know, like Tom was mentioning, a number of companies are using these models. So once you have it, then you know, you again, use an endpoint API and use it in an application. >> All right, I love the example. I want to ask Tom and Aidan, because, like, most of my experience with Amazon Web Services in 2007, I would stand up EC2, put my code on there, play around, if it didn't work out, I'd shut it down. Is that a similar dynamic we're going to see with the machine learning where developers just kind of log in and stand up infrastructure and play around and then have a cloud-like experience? >> So I can go first. So I mean, we obviously, with AWS, work really closely with the SageMaker team, a fantastic platform there for ML training and inference. And you know, going back to your point earlier, you know, where the data is, is hugely important for companies. For many companies, bringing their models to their data in their AWS environment is hugely important. Having the models be, you know, open source makes them explainable and transparent to the adopters of those models. So, you know, we are really excited to work with the SageMaker team over the coming year to bring companies to that platform and make the most of our models. >> Aidan, what's your take on developers? Do they just need to have a team in place, if we want to interface with you guys? Let's say, can they start learning? What do they got to do to set up? >> Yeah, so I think for Cohere, our product makes it much, much easier for people to get started and start building, it solves a lot of the productionization problems. But of course with SageMaker, like Tom was saying, I think that lowers the barrier even further because it solves problems like data privacy. So I want to underline what Bratin was saying earlier around when you're fine tuning or when you're using these models, you don't want your data being incorporated into someone else's model. You don't want it being used for training elsewhere. And so the ability to solve for enterprises, that data privacy and that security guarantee has been hugely important for Cohere, and that's very easy to do through SageMaker. >> Yeah.
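Bratin's description of the consumption model, test through an API, optionally train the model further on your own data, then call an endpoint from the application, reduces to a few lines in practice. Here is a hedged sketch against Cohere's Python SDK, since Aidan's models are one of the hosted options discussed; the model name and parameters are illustrative and SDK details vary by version:

```python
# Minimal sketch: consuming a hosted large language model through an API,
# the pattern Bratin describes. Cohere's SDK is used as one example; the
# model name and parameters are illustrative and may differ by SDK version.
import cohere

co = cohere.Client("YOUR_API_KEY")  # key issued from the provider's dashboard

response = co.generate(
    model="command",  # a hosted generation model; the name is illustrative
    prompt="Draft a two-sentence release note announcing dark mode.",
    max_tokens=80,
    temperature=0.7,  # higher values make the output more varied
)

print(response.generations[0].text)
```

Fine tuning on proprietary data, where it is needed, runs through the same hosted surface, which is what keeps the data privacy guarantee Aidan mentions inside one boundary.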
Is there a lens to look through and say, okay, how do I see success? It could be just getting a win or is it a bigger picture? Bratin we'll start with you. How do you gauge success for generative AI? >> You know, ultimately it's about bringing business value to our customers. And making sure that those customers are able to reimagine their experiences by using generative AI. Now the way to get their ease, of course to deploy those models in a safe, effective manner, and ensuring that all of the robustness and the security guarantees and the privacy guarantees are all there. And we want to make sure that this transitions from something that's great demos to actual at scale products, which means making them work reliably all of the time not just some of the time. >> Tom, what's your gauge for success? >> Look, I think this, we're seeing a completely new form of ways to interact with data, to make data intelligent, and directly to bring in new revenue streams into business. So if businesses can use our models to leverage that and generate completely new revenue streams and ultimately bring incredible new value to their customers, then that's fantastic. And we hope we can power that revolution. >> Aidan, what's your take? >> Yeah, reiterating Bratin and Tom's point, I think that value in the enterprise and value in market is like a huge, you know, it's the goal that we're striving towards. I also think that, you know, the value to consumers and actual users and the transformation of the surface area of technology to create experiences like ChatGPT that are magical and it's the first time in human history we've been able to talk to something compelling that's not a human. I think that in itself is just extraordinary and so exciting to see. >> It really brings up a whole another category of markets. B2B, B2C, it's B2D, business to developer. Because I think this is kind of the big trend the consumers have to win. The developers coding the apps, it's a whole another sea change. Reminds me everyone use the "Moneyball" movie as example during the big data wave. Then you know, the value of data. There's a scene in "Moneyball" at the end, where Billy Beane's getting the offer from the Red Sox, then the owner says to the Red Sox, "If every team's not rebuilding their teams based upon your model, there'll be dinosaurs." I think that's the same with AI here. Every company will have to need to think about their business model and how they operate with AI. So it'll be a great run. >> Completely Agree >> It'll be a great run. >> Yeah. >> Aidan, Tom, thank you so much for sharing about your experiences at your companies and congratulations on your success and it's just the beginning. And Bratin, thanks for coming on representing AWS. And thank you, appreciate for what you do. Thank you. >> Thank you, John. Thank you, Aidan. >> Thank you John. >> Thanks so much. >> Okay, let's kick off season three, episode one. I'm John Furrier, your host. Thanks for watching. (light airy music)

Published Date : Mar 9 2023


SiliconANGLE News | Beyond the Buzz: A deep dive into the impact of AI


 

(upbeat music) >> Hello, everyone, welcome to theCUBE. I'm John Furrier, the host of theCUBE in Palo Alto, California. Also it's SiliconANGLE News. Got two great guests here to talk about AI, the impact of the future of the internet, the applications, the people. Amr Awadallah, the founder and CEO, and Ed Albanese of Vectara, a new startup that emerged out of the original Cloudera, I would say, 'cause Amr's known, famous for the Cloudera founding, which was really the beginning of the big data movement. And now as AI goes mainstream, there's so much to talk about, so much to go on. And plus the new company is one of the, now what I call the wave, this next big wave, I call it the fifth wave in the industry. You know, you had PCs, you had the internet, you had mobile. This generative AI thing is real. And you're starting to see startups come out in droves. Amr obviously was founder of Cloudera, Big Data, and now Vectara. And Ed Albanese, you guys have a new company. Welcome to the show. >> Thank you. It's great to be here. >> So great to see you. Now the story is theCUBE started in the Cloudera office. Thanks to you, and your friendly entrepreneurship views that you have. We got to know each other over the years. But Cloudera had Hadoop, which was the beginning of what I call the big data wave, which then became what we now call data lakes, data oceans, and data infrastructure that's developed from that. It's almost interesting to look back 12 plus years, and see that what AI is doing now, right now, is opening up the eyes to the mainstream, and the application's almost mind blowing. You know, Satya Nadella called it the Mosaic Moment, didn't say Netscape (laughing) but called it the Mosaic Moment. You're seeing companies and startups, kind of the alpha geeks running here, because this is the new frontier, and there's real meat on the bone, in terms of like things to do. Why? Why is this happening now? What is the confluence of forces happening that is making this happen? >> Yeah, I mean if you go back to the Cloudera days, with big data, and so on, that was more about data processing. Like how can we process data, so we can extract numbers from it, and do reporting, and maybe take some actions, like this is a fraud transaction, or this is not. And in the meantime, many of the researchers working in the neural network, and deep neural network space, were trying to focus on data understanding, like how can I understand the data, and learn from it, so I can take actual actions, based on the data directly, just like a human does. And we were only good at doing that at the level of somebody who was five years old, or seven years old, all the way until about 2013. And starting in 2013, which is only 10 years ago, a number of key innovations started taking place, and each one added on. There was no one major innovation that just took place. It was a couple of really incremental ones, but they added on top of each other, in a very exponentially additive way, that led to, by the end of 2019, we now have models, deep neural network models, that can read and understand human text just like we do. Right? And they can reason about it, and argue with you, and explain it to you. And I think that's what is unlocking this whole new wave of innovation that we're seeing right now. So data understanding would be the essence of it.
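Amr's distinction between data processing and data understanding is easy to make concrete: the earlier toolchain could count and aggregate text, while a modern model can answer questions about it. A minimal sketch with the open source transformers library follows; the default model the pipeline downloads is an implementation detail of the library, not something named in the interview:

```python
# Minimal sketch of "data understanding": a pretrained model answering a
# question about raw text, rather than just counting or filtering it.
# The default question-answering model the pipeline pulls down is a
# library choice, not something specified in the interview.
from transformers import pipeline

qa = pipeline("question-answering")

context = (
    "Vectara is a startup founded by Amr Awadallah, who previously "
    "co-founded Cloudera, an early company in the big data movement."
)

result = qa(question="Who founded Vectara?", context=context)
print(result["answer"], f"(score {result['score']:.2f})")
```

The pre-2013 equivalent would have been a keyword match; the model instead reads the passage and extracts the answer span, which is the "understanding" Amr is pointing at.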
I mean look at cloud computing, and look how fast it just crept up with AWS. I mean with AWS, you go back three, five years ago, I was talking to Swami yesterday, and their big news about AI, expanding Hugging Face's relationship with AWS. And just three, five years ago, there wasn't model training like this out there. But as compute comes out, and you get more horsepower, these large language models, these foundational models, they're flexible, they're not monolithic silos, they're interacting. There's a whole new, almost fusion of data happening. Do you see that? I mean is that part of this? >> Of course, of course. I mean this wave is building on all the previous waves. We wouldn't be at this point if we did not have hardware that can scale, in a very efficient way. We wouldn't be at this point, if we didn't have data that we're collecting about everything we do, that we're able to process in this way. So this movement, this motion, this phase we're in, absolutely builds on the shoulders of all the previous phases. For some of the observers from the outside, when they see chatGPT for the first time, for them it was like, "Oh my god, this just happened overnight." Like it didn't happen overnight. (laughing) GPT itself, like GPT3, which is what chatGPT is based on, was released a year ahead of chatGPT, and many of us were seeing the power it can provide, and what it can do. I don't know if Ed agrees with that. >> Yeah, Ed? >> I do. Although I would acknowledge that the possibilities now, because of what we've hit from a maturity standpoint, have just opened up in an incredible way, that just wasn't tenable even three years ago. And that's what makes it. It's true that it developed incrementally, in the same way that, you know, the possibilities of a mobile handheld device, you know, in 2006 were there, but when the iPhone came out, the possibilities just exploded. And that's the moment we're in. >> Well, I've had many conversations over the past couple months around this area with chatGPT. John Markoff told me the other day that he calls it "the five dollar toy," because it's not that big of a deal, in the context of what AI's doing behind the scenes, and all the work that's done on ethics, that's happened over the years, but it has woken up the mainstream, so everyone immediately jumps to ethics. "Does it work? It's not factual." And everyone who's inside the industry is like, "This is amazing." 'Cause you have two schools of thought there. One's the people that think this is now the beginning of next gen, this is now we're here, this ain't your grandfather's chatbot, okay? With NLP, it's got reasoning, it's got other things. >> I'm in that camp for sure. >> Yeah. Well I mean, everyone who knows what's going on is in that camp. And as the naysayers start to get through this, they go, "Wow, it's not just plagiarizing homework, it's helping me be better. Like it could rewrite my memo, bring the lead to the top." So the format of the user interface is interesting, but it's still a data-driven app. >> Absolutely. >> So where does it go from here? 'Cause I'm not even calling this the first inning. This is like pregame, in my opinion. What do you guys see this going, in terms of scratching the surface to what happens next? >> I mean, I'll start with, I just don't see how an application is going to look the same in the next three years. Who's going to want to input data manually, in a form field?
Who is going to want, or expect, to have to put in some text in a search box, and then read through 15 different possibilities, and try to figure out which one of them most closely resembles the question they asked? You know, I don't see that happening. Who's going to start with an absolute blank sheet of paper, and expect no help? That is not how an application will work in the next three years, and it's going to fundamentally change how people interact and spend time with opening any element on their mobile phone, or on their computer, to get something done. >> Yes. I agree with that. Like every single application, over the next five years, will be rewritten, to fit within this model. So imagine an HR application, I don't want to name companies, but imagine an HR application, and you go into the application and you're clicking on buttons, because you want to take two weeks of vacation, and menus, and clicking here and there, reasons and managers, versus just telling the system, "I'm taking two weeks of vacation, going to Las Vegas," book it, done. >> Yeah. >> And the system just does it for you. If you weren't complete in your input, in your description of what you want, then the system asks you back, "Did you mean this? Did you mean that? Were you trying to also do this as well? What was the reason?" And it will fill that in for you, and just do it for you. So I think the user interface that we have with apps is going to change to be very similar to the user interface that we have with each other. And that's why all these apps will need to evolve. >> I know we don't have a lot of time, 'cause you guys are very busy, but I want to definitely have multiple segments with you guys, on this topic, because there's so much to talk about. There's a lot of parallels going on here. I was talking again with Swami, who runs all the AI and database stuff at AWS, and I asked him, I go, "This feels a lot like the original AWS. You don't have to provision a data center." A lot of this heavy lifting on the back end is these large language models, these foundational models. So the bottleneck in the past was the energy, and cost, to actually do it. Now you're seeing it being stood up faster. So there's definitely going to be a tsunami of apps. I would see that clearly. What is it? We don't know yet. But also people are going to leverage the fact that I can get started building value. So I see a startup boom coming, and I see an application tsunami of refactoring things. >> Yes. >> So the replatforming is already kind of happening. >> Yes. >> OpenAI, chatGPT, whatever. So that's going to be a developer environment. I mean if Amazon turns this into an API, or a Microsoft, what you guys are doing. >> We're turning it into an API as well. That's part of what we're doing as well, yes. >> This is why this is exciting. Amr, you've lived the big data dream, and we used to talk, if you didn't have a big data problem, if you weren't full of data, you weren't really getting it. Now people have all the data, and they've got to stand this up. >> Yeah. >> So the analogy is again the mobile, I like the mobile movement, and using mobile as an analogy. Most companies were not building for a mobile environment, right? They were just building for the web, and the legacy way of doing apps. And as soon as the user expectations shifted, that my expectation now is, I need to be able to do my job on this small screen, on the mobile device with a touchscreen.
Everybody had to invest in re-architecting, and re-implementing, every single app, to fit within that model, and that model of interaction. And we are seeing the exact same thing happen now. And one of the core things we're focused on at Vectara is how to simplify that for organizations, because a lot of them are overwhelmed by large language models, and ML. >> They don't have the staff. >> Yeah, yeah, yeah. They're understaffed, they don't have the skills. >> But they got developers, they've got DevOps, right? >> Yes. >> So they have the DevSecOps going on. >> Exactly, yes. >> So our goal is to simplify it enough for them that they can start leveraging this technology effectively, within their applications. >> Ed, you're the COO of the company, obviously a startup. You guys are growing. You got great backing, and a good team. You've also done a lot of business development, and technical business development in this area. If you look at the landscape right now, and I agree the apps are coming, every company I talk to has had that chatGPT, you know, epiphany: "Oh my God, look how cool this is. Like magic." Like okay, it's code, settle down. >> Mm hmm. >> But everyone I talk to is using it in a very horizontal way. I talked to a very senior person, a real tech alpha geek, very senior in the industry, technically. They're using it for log data, they're using it for configuration of routers. And in other areas, they're using it for, every vertical has a use case. So this is horizontally scalable from a use case standpoint. When you hear horizontally scalable, the first thing that comes to my mind is cloud, right? >> Mm hmm. >> So cloud, and scalability that way. And the data is very specialized. So now you have this vertical specialization, horizontally scalable, everyone will be refactoring. What do you see, and what are you seeing from customers that you talk to, and prospects? >> Yeah, I mean put yourself in the shoes of an application developer, who is actually trying to make their application a bit more like magic. And to have that soon-to-be, honestly, expected experience. They've got to think about things like performance, and how efficiently they can actually execute a query, or a question. They've got to think about cost. Generative isn't cheap, like the inference of it. And so you've got to be thoughtful about how and when you take advantage of it. You can't use it as a, you know, everything looks like a nail, and I've got a hammer, and I'm going to hit everything with it, because that will be wasteful. Developers also need to think about how they're going to take advantage of, but not lose, their own data. So there has to be some controls around what they feed into the large language model, if anything. Like, should they fine tune a large language model with their own data? Can they keep it logically separated, but still take advantage of the powers of a large language model? And they've also got to be aware of the fact that when data is generated, it is a different class of data. It might not fully be their own. >> Yeah. >> And it may not even be fully verified. And so when the logical cycle starts, of someone making a request, the relationship between that request and the output, those things have to be stored safely, logically, and identified as such. >> Yeah. >> And taken advantage of in an ongoing fashion.
So these are mega problems, each one of them independently, that, you know, you can think of as things middleware companies need to take advantage of, and think about, to help the next wave of application development be logical, sensible, and effective. It's not just calling some raw API on the cloud, like openAI, and then just, you know, you get your answer and you're done, because that is a very brute force approach. >> Well, also, I will point out, first of all, I agree with your statement about the apps experience, that's going to be expected, form filling. Great point. The interesting thing about chatGPT. >> Sorry, it's not just form filling, it's any action you would like to take. >> Yeah. >> Instead of clicking, and dragging, and dropping, and doing it on a menu, or on a touch screen, you just say it, and it happens perfectly. >> Yeah. It's a different interface. And that's why I love that UI/UX experience, that's the people-falling-out-of-their-chair moment with chatGPT, right? But a lot of the things with chatGPT, if you feed it right, it works great. If you feed it wrong and it goes off the rails, it goes off the rails big. >> Yes, yes. >> So the Bing catastrophes. >> Yeah. >> And that's an example of garbage in, garbage out, the classic old school comp-sci phrase that we all use. >> Yep. >> Yes. >> This is about data injection, right? It reminds me of the old SQL days, if you can sling some SQL, you were a magician, you know, to get the right answer, and it's pretty much there. So you've got to feed the AI. >> You do. Some people call this prompt engineering, that's the early word to describe it. You know, old school, you know, search, or engagement with data would be, I have a question or I have a query. New school is, I have to issue it a prompt, because I'm trying to get, you know, an action or a reaction from the system. And in the act of engineering it, there are a lot of different ways you could do it, all the way from, you know, raw, just I'm going to send you whatever I'm thinking. >> Yeah. >> And you get the unintended outcomes. To more constrained, where I'm going to just use my own data, and I'm going to constrain the initial inputs, the data I already know that's first party, and I trust. To, you know, hyper constrained, where the application is actually looking for certain elements to respond to. >> It's interesting, Amr, this is why I love this, because, one, we are in the media, we're recording this video now, we'll stream it. But we've got all your linguistics, we're talking. >> Yes. >> This is data. >> Yep. >> So the data quality now becomes the new intellectual property, because, if you have that prompt source data, it makes data or content, in our case, the original content, intellectual property. >> Absolutely. >> Because that's the value. And that's where you see chatGPT fall down, it's because they're trying to crawl the web, and people think it's search. It's not necessarily search, it's giving you something that you wanted. It is a lot of that. I remember in Cloudera, you said, "Ask the right questions." Remember that phrase you guys had, that slogan? >> Mm hmm. And that's prompt engineering. That's exactly the reinvention of "Ask the right question." Prompt engineering is, if you don't give these models the question in the right way, and very few people know how to frame it in the right way with the right context, then you will get garbage out. Right? That is the garbage in, garbage out.
But if you specify the question correctly, and you provide with it the metadata that constrains what that question is going to be acted upon or answered upon, then you'll get much better answers. And that's exactly what we solved at Vectara. >> Okay. So before we get into the last couple minutes we have left, I want to make sure we get a plug in for the opportunity, and the profile of Vectara, your new company. Can you guys both share with me what you think the current situation is? So for the folks who are now having those moments of, "Ah, AI's bullshit," or, "It's not real, it's a lot of stuff," from, "Oh my god, this is magic," to, "Okay, this is the future." >> Yes. >> What would you say to that person, if you're at a cocktail party, or in the elevator, say, "Calm down, this is the first inning." How do you explain the dynamics going on right now, to someone who's in the industry, but doesn't know the ropes? How would you explain what this wave's about? How would you describe it, and how would you prepare them for how to change their life around this? >> Yeah, so I'll go first and then I'll let Ed go. Efficiency. Efficiency is the description. So we've figured out a way to be a lot more efficient, a way where you can write a lot more emails, create way more content, create way more presentations. Developers can develop 10 times faster than they normally would. And that is very similar to what happened during the Industrial Revolution. I always like to look at examples from the past, to predict what will happen now, and what will happen in the future. So during the Industrial Revolution, it was about efficiency with our hands, right? So I had to make a piece of cloth, like this piece of cloth for this shirt I'm wearing. Our ancestors, they had to spend months taking the cotton, making it into threads, taking the threads, making them into pieces of cloth, and then cutting it. And now a machine makes it just like that, right? And our ancestors turned from the people that do the thing into the people that manage the machines that do the thing. And I think the same thing is going to happen now, our efficiency as human beings will be multiplied enormously, and we'll be able to do a lot more. And many of us will be able to do things we couldn't do before. So another great example I always like to use is the example of Google Maps, and GPS. Very few of us knew how to drive a car from one location to another, and read a map, and get there correctly. But once you had that efficiency of an AI, and by the way, behind these things is very, very complex AI that figures out how to do that for us, all of us became amazing navigators that can go from any point to any point. So that's kind of how I look at the future. >> And that's a great real example of impact. Ed, your take on how you would talk to a friend, or colleague, or anyone who asks like, "How do I make sense of the current situation? Is it real? What's in it for me, and what do I do?" I mean every company's rethinking their business right now, around this. What would you say to them? >> You know, I usually like to show, rather than describe. And so, you know, the other day I just got access, I've been using an application for a long time, called Notion, and it's super popular. There's like 30 or 40 million users. And the new version of Notion came out, which has AI embedded within it. And it's AI that allows you primarily to create.
So if you could break down the world of AI into find and create, for a minute, just kind of logically separate those two things, find is certainly going to be massively impacted in our experiences as consumers on, you know, Google and Bing, and I can't believe I just said the word Bing in the same sentence as Google, but that's what's happening now (all laughing), because it's a good example of change. >> Yes. >> But also inside the business. But on the create side, you know, Notion is a wiki product, where you try to, you know, note down things that you are thinking about, or you want to share and memorialize. But sometimes you do need help to get it down fast. And just in the first day of using this new product, like my experience has really fundamentally changed. And I think that for anybody, say for example, that is using an existing app, I would show them: open up the app. Now imagine the possibility of getting a starting point right off the bat, in five seconds, instead of having to draft this thing whole cloth. Imagine getting a starting point you can then modify and edit, or just dispose of and retry again. And that's the potential for me. I can't imagine a scenario where, a few years from now, I'm going to be satisfied if I don't have a little bit of help, in the same way that I don't manually spell check every email that I send. I automatically spell check it. I love when I'm getting type-ahead support inside of Google, or anything, or when texting. Doesn't mean I always take it. >> That's efficiency too. I mean the cloud was about developers getting stuff up quick. >> Exactly. >> All that heavy lifting is there for you, so you don't have to do it. >> Right? >> And you get to the value faster. >> Exactly. I mean, if history taught us one thing, it's, you have to always embrace efficiency, and if you don't fast enough, you will fall behind. Again, looking at the Industrial Revolution, the companies that embraced the Industrial Revolution became the leaders in the world, and the ones who did not, they all fell behind. >> Well, the AI thing that we've got to watch out for is how it goes off the rails. If it doesn't have the right prompt engineering, or data architecture, infrastructure. >> Yes. >> It's a big part. So this comes back down to your startup, real quick, I know we got a couple minutes left. Talk about the company, the motivation, and we'll do a deeper dive on the company. But what's the motivation? What are you targeting for the market, business model? The tech, let's go. >> Actually, I would like Ed to go first. Go ahead. >> Sure. I mean, we're a developer-first, API-first platform. So the product is oriented around allowing developers, who may not be superstars, to be able to either leverage, or choose, or select their own large language models for appropriate use cases. But they want to be able to instantly add the power of large language models into their application set. We started with search, because we think it's going to be one of the first places that people try to take advantage of large language models, to help find information within an application context. And we've built our own large language models, focused on making it very efficient, and elegant, to find information more quickly. So what a developer can do is, within minutes, go up, register for an account, and get access to a set of APIs that allow them to send data, to be converted into a format that's easy to understand for large language models, vectors.
And then secondarily, they can issue queries, ask questions. And the questions that can be asked are very natural language questions. So we're talking about long form sentences, you know, drill down types of questions, and they can get answers that come back, depending upon the form factor of the user interface, in list form, or summarized form, where summarized equals the opportunity to kind of see a condensed, singular answer. >> All right. I have a... >> Oh okay, go ahead, you go. >> I was just going to say, I'm going to be a customer for you, because I want, my dream was to have a hologram of theCUBE host, me and Dave, and have questions be generated in the metaverse. So you know. (all laughing) >> There'll no longer be any guests here. They'll all be talking to you guys. >> Give a couple bullets, I'll spit out 10 good questions. Publish a story. This brings the automation, I'm sorry to interrupt you. >> No, no. No, no, I was just going to follow on on the same. So another way to look at exactly what Ed described is, we want to offer you chatGPT for your own data, right? So imagine taking all of the recordings of all of the interviews you have done, and having all of that content ingested by a system, where you can now have a conversation with your own data and say, "Oh, last time when I met Amr, which video games did we talk about? Which movie or book did we use as an analogy for how we should be embracing data science, and big data? Which is Moneyball," I know you use Moneyball all the time. And you start having that conversation. So now the data doesn't become a passive asset that you just have in your organization. No. It's an active participant that's sitting with you at the table, helping you make decisions. >> One of my favorite things to do with customers is to go to their site or application, and show them me using it. So for example, one of the customers I talked to was one of the biggest property management companies in the world, that lets people go and rent homes, and houses, and things like that. And you know, I went and I showed them me searching through reviews, looking for information, and trying different words, and trying to find out like, you know, is this place quiet? Is it comfortable? And then I put all the same data into our platform, and I showed them the world of difference you can have when you start asking that question wholeheartedly, and getting real information that doesn't have anything to do with the words you asked, but is really focused on the meaning. You know, when I asked like, "Is it quiet?" you know, answers would come back like, "The wind whispered through the trees peacefully," and you know, it's like nothing to do with quiet in the literal word sense, but in the meaning sense, everything to do with it. And that was magical even for them, to see that. >> Well, you guys are the front end of this big wave. Congratulations on the startup, Amr. I know you guys got great pedigree in big data, and you've got a great team, and congratulations. Vectara is the name of the company, check 'em out. Again, the startup boom is coming. This will be one of the major waves, generative AI is here. I think we'll look back, and it will be pointed out as a major inflection point in the industry. >> Absolutely. >> There's not a lot of hype behind that. People are seeing it, experts are. So it's going to be fun, thanks for watching. >> Thanks John. (soft music)
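To make the workflow Ed describes concrete, here is a minimal sketch of that ingest-then-ask pattern: send documents to a service that converts them to vectors, then issue natural language questions and get back meaning-based, optionally summarized answers. The endpoint URLs, field names, and credential handling below are hypothetical placeholders for illustration, not Vectara's published API; consult the real documentation before building on this.

import requests

API_KEY = "YOUR_API_KEY"                  # hypothetical credential
BASE_URL = "https://api.example.com/v1"   # hypothetical endpoint

def index_document(doc_id: str, text: str) -> None:
    # Send a document to the service, where it gets embedded as vectors.
    resp = requests.post(
        f"{BASE_URL}/index",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"id": doc_id, "text": text},
    )
    resp.raise_for_status()

def ask(question: str, summarize: bool = True) -> dict:
    # Issue a natural language question; get back matches or a condensed answer.
    resp = requests.post(
        f"{BASE_URL}/query",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"query": question, "summarize": summarize},
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # Point BASE_URL at a real service before running this demo.
    index_document("review-42", "The wind whispered through the trees peacefully.")
    print(ask("Is this place quiet?"))  # matches on meaning, not literal words

Note how the "Is it quiet?" example from the interview maps onto the query step: the answer comes back on meaning rather than word overlap, which is the whole point of converting documents to vectors first.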

Published Date : Feb 23 2023


AI Meets the Supercloud | Supercloud2


 

(upbeat music) >> Okay, welcome back everyone to the Supercloud 2 event, live here in Palo Alto, theCUBE Studios live stage performance, virtually syndicating it all over the world. I'm John Furrier with Dave Vellante, here as Cube alumni, and special influencer guest, Howie Xu, VP of Machine Learning at Zscaler, also part-time as a CUBE analyst 'cause he is that good. Comes on all the time. You're basically a CUBE analyst as well. Thanks for coming on. >> Thanks for inviting me. >> John: Technically, you're not really a CUBE analyst, but you're kind of like a CUBE analyst. >> Happy New Year to everyone. >> Dave: Great to see you. >> Great to see you, Dave and John. >> John: We've been talking about ChatGPT online. You wrote a great post about it being more like Amazon, not like Google. >> Howie: More than just Google Search. >> More than Google Search. Oh, it's going to compete with Google Search, which it kind of does a little bit, but more its infrastructure. So a clever point, good segue into this conversation, because this is kind of the beginning of these kinds of next gen things we're going to see. Things where it's like an obvious next gen, it's getting real. Kind of like seeing the browser for the first time, the Mosaic browser. Whoa, this internet thing's real. I think this is that moment, and Supercloud-like enablement is coming. So this has been a big part of the Supercloud kind of theme. >> Yeah, you talk about Supercloud, you talk about, you know, AI, ChatGPT. I really think ChatGPT is another Netscape moment, the browser moment. Because if you think about internet technology, right? It was brewing for 20 years before the early 90s. Not until you had a, you know, browser did people realize, "Wow, this is how wonderful this technology could be." Right? You know, all the wonderful things. Then you have Yahoo and Amazon. I think we had been brewing, you know, the AI technology for, you know, quite some time. Even then, you know, neural networks, deep learning. But not until ChatGPT came along did people realize, "Wow, you know, the user interface, user experience could be that great," right? So I really think, you know, if you look at the last 30 years, there is a browser moment, there is an iPhone moment. I think the ChatGPT moment is as big as those. >> Dave: What do you see as the intersection of things like ChatGPT and the Supercloud? Of course, the media's going to focus, journalists are going to focus on all the negatives and the privacy. Okay. You know we're going to get by that, right? Always do. Where do you see the Supercloud and sort of the distributed data fitting in with ChatGPT? Does it use that as a data source? What's the link? >> Howie: I think there are a number of use cases. One of the use cases, we talked about why we even have Supercloud: because of the complexity, because of the, you know, heterogeneous nature of different clouds. In order for me as a developer, in order for me to create applications, I have so many things to worry about, right? It's a complexity. But with ChatGPT, with the AI, I don't have to worry about it, right? Those kinds of details will be taken care of by, you know, the underlying layer. So we have been talking about on this show, you know, over the last, what, year or so, about the Supercloud, hey, defining that, you know, API layer spanning across, you know, multiple clouds. I think that will be happening. However, for a lot of the things, that will be more hidden, right? A lot of that will be automated by the bots.
You know, we were just talking about it right before the show. One of the most profound statements I heard from Adrian Cockcroft, about 10 years ago, was, "Hey Howie, you know, at Netflix, right? You know, IT is just one API call away." That's a profound statement I heard about a decade ago. I think next decade, right? You know, IT is just one English language away, right? So when it's one English language away, it's no longer as important, API this, API that. You still need APIs, just like hardware, right? You still need all of those things. That's going to be more hidden. The high level thing will be more, you know, English language, or the language, right? Any language for that matter. >> Dave: And so through language, you'll tap services that live across the Supercloud, is what you're saying? >> Howie: You just tell it what you want, what you desire, right? You know, the bots will help you figure out where the complexity is, right? You know, like you said, a lot of criticism about, "Hey, ChatGPT doesn't do this, doesn't do that." But if you think about how to break things down, right? For instance, right, you know, ChatGPT doesn't have the Microsoft stock price today, obviously, right? However, you can ask ChatGPT to write a program for you, retrieve the Microsoft stock price, (laughs) and then just run it, right? >> Dave: Yeah. >> So the thing to think about- >> John: It's only going to get better. It's only going to get better. >> The thing people kind of unfairly criticize ChatGPT for is it doesn't do this. But can you not break down a human's task into smaller things, and get complex things to be done by ChatGPT? I think we are there already, you know- >> John: That to me is the real game changer. That's the assembly of atomic elements at the top of the stack, whether the interface is voice or some programmatic gesture-based thing, you know, wave your hand or- >> Howie: One of the analogies I used in my blog was, you know, each person, each professional now is a quarterback. And we suddenly have, you know, a lot more linebackers, or you know, any backs, to work for you, right? For free even, right? You know, and that's sort of how you should think about it. You are the quarterback of your day-to-day job, right? Your job is not to do everything manually yourself. >> Dave: You call the play- >> Yes. >> Dave: And they execute. Do your job. >> Yes, exactly. >> Yeah, all the players are there. All the elves are in the North Pole making the toys, Dave, as we say. But this is the thing, I want to get your point. This change is going to require a new kind of infrastructure software relationship, a new kind of operating runtime, a new kind of assembler, a new kind of loader and linker. These are very operating systems kinds of concepts. >> Data intensive, right? How to process the data, how to, you know, process such gigantic data in parallel, right? That's actually a tough job, right? So if you think about ChatGPT, why OpenAI is ahead of the game, right? You know, Google may not want to acknowledge it, right? It's not necessarily that they don't have enough data scientists, it's the software engineering pieces, you know, behind it, right? To train the model, to actually do all those things in parallel, to do all those things in a cost effective way. So I think, you know, a lot of those still- >> Let me ask you a question. Let me ask you a question, because we've had this conversation privately, but I want to do it while we're on stage here.
Where are all the alpha geeks and developers and creators and entrepreneurs going to gravitate to? You know, in every wave, you see it: in crypto, all the alphas went into crypto. Now I think with ChatGPT, you're going to start to see, like, "Wow, it's that moment." A lot of people are going to, you know, scramble and do startups. CTOs will invent stuff. There's a lot of invention, a lot of computer science and customer requirements to figure out. That's new. Where are the alpha entrepreneurs going to go? What do you think they're going to gravitate to? If you could point to the next layer to enable this super environment, super app environment, Supercloud. 'Cause there's a lot to do to enable what you just said. >> Howie: Right. You know, if you think about using the internet as the analogy, right? You know, in the early 90s, the internet came along, the browser came along. You had two kinds of companies, right? One is Amazon, the other one is walmart.com. And then there were companies, like maybe GE or whatnot, right? That really didn't take advantage of the internet that much. I think, you know, for entrepreneurs, the opportunity is to suddenly create the Yahoo or Amazon of the ChatGPT-native era. That's what we should all be excited about. But for most of the Fortune 500 companies, your job is surviving sort of the big revolution. So you at least need to do your walmart.com sooner than later, right? (laughs) So not be like GE, right? You know, hand waving, hey, I do a lot of the internet, but you know, when you look back over the last 20, 30 years, what did they do much with leveraging the- >> So you think they're going to jump in, they're going to build service companies or SaaS tech companies or Supercloud companies? >> Howie: Okay, so there are two types of opportunities from that perspective. One is, you know, the OpenAI-ish kind of companies. I think for OpenAI, the game is still open, right? You know, it's really "Closed AI" today. (laughs) >> John: There's room for competition, you mean? >> There's room for competition, right. You know, you can still spend, you know, 50, $100 million to build something interesting. You know, there are companies like Cohere, and so on and so on. There are a bunch of companies, I think there is that. And then there are companies who are going to leverage those sort of new AI primitives. I think, you know, we have been talking about AI forever, but finally, finally, it's no longer just good, but also super useful. I think, you know, the time is now. >> John: And if you have the cloud behind you, what do you make Amazon do differently? 'Cause Amazon Web Services is only going to grow with this. It's not going to get smaller. There's more horsepower to handle, there's more needs. >> Howie: Well, Microsoft already showed what the future is, right? You know, yes, there is kind of the container, you know, the serverless, that will continue to grow. But the future is really not about- >> John: Microsoft's shown the future? >> Well, showing that, you know, working with OpenAI, right? >> Oh okay. >> They already said that, you know, we are going to have a ChatGPT service. >> $10 billion, I think, they're putting in. >> $10 billion they're putting in, and also opening up the OpenAI API services, right? You know, I actually made a prediction that Microsoft's future hinges on OpenAI. I think, you know- >> John: They believe it, that $10 billion bet. >> Dave: Yeah. $10 billion bet. So I want to ask you a question. It's somewhat academic, but it's relevant.
For a number of years, it looked like having first mover advantage wasn't an advantage. PCs, spreadsheets, the browser, right? Social media, Friendster, right? Mobile. Apple wasn't first to mobile. But that's somewhat changed. The cloud, AWS was first. You could debate whether or not, but AWS, okay, they have first mover advantage. Crypto, Bitcoin, first mover advantage. Do you think OpenAI will have first mover advantage? >> It certainly has its advantage today. I think it's year two. I mean, I think the game is still out there, right? You know, we're still in the first inning, the early innings of the game. So I don't think that the game is over for the rest of the players, whether the big players or the OpenAI sort of competitors. So one of the VCs actually asked me the other day, right? "Hey, how much money do I need to spend, invest, to get, you know, another shot at the OpenAI sort of level?" You know, I did a- (laughs) >> Line up. >> That's classic VC. "How much does it cost me to replicate?" >> I'm pretty sure he asked the question to a bunch of guys, right? >> Good luck with that. (laughs) >> So we kind of did some napkin- >> What'd you come up with? (laughs) >> $100 million is the order of magnitude that I came up with, right? You know, not a billion, not 10 million, right? So 100 million. >> John: Hundreds of millions. >> Yeah, yeah, yeah. 100 million order of magnitude is what I came up with. You know, we can get into details, you know, some other time, but- >> Dave: That's actually not that much if you think about it. >> Howie: Exactly. So when he heard me articulating why that is, you know, he's thinking, right? You know, he actually, you know, asked me, "Hey, you know, there's this company. Do you happen to know this company? Can I reach out?" You know, those things. So I truly believe it's not a billion or 10 billion issue, it's more like 100. >> John: And also, your other point about referencing the internet revolution as a good comparable. The other thing there is, online user population was a big driver of the growth of that. So what's the equivalent here for online user population for AI? Is it more apps, more users? I mean, we're still early on, it's the first inning. >> Yeah. We're kind of the, you know- >> What's the key metric for success of this sector? Do you have a read on that? >> I think, you know, the number of users is a good metric, but I think a lot of people are going to use AI services without even knowing they're using them, right? You know, I think a lot of the applications are already being built on top of OpenAI, and they kind of, you know, help people do marketing, legal documents, you know, so those people are already inherently OpenAI users. So I think, yeah. >> Well, Howie, we've got to wrap, but I really appreciate you coming on. I want to give you a last minute to wrap up here. In your experience, you've seen many waves of innovation. You've even had your hands in a lot of the big waves, the past three inflection points. And obviously, with the machine learning you're doing now, you're in the deep end. Why is this Supercloud movement, this wave of Supercloud and the discussion of this next inflection point, why is it so important? For the folks watching, why should they be paying attention to this particular moment in time? Could you share your super clip on Supercloud? >> Howie: Right. So this is simple from my point of view. So why do you even have cloud to begin with, right?
IT is too complex, too complex to operate, or too expensive. So there's a newer model. There is a better model, right? Let someone else operate it, there is elasticity out of it, right? That's great. Until you have multiple vendors, right? Many vendors even. You know, we're talking about kind of how to make multiple vendors look the same, but frankly speaking, even one vendor has, you know, a thousand services. Now it's kind of getting to what Kit was talking about, cloud chaos, right? It's the evolution. You know, history repeats itself, right? You know, you have, you know, the next great things, and then too many great things, and then people need to sort of abstract this out. So it's almost that you must do this. But I think how to abstract this out is something that, at this time, AI is going to help a lot with, right? You know, like I mentioned, right? A lot of the abstraction, you don't have to think about the API anymore. I bet 10 years from now, you know, IT is one language away, not one API away. So think about that world, right? So Supercloud, in my opinion, sure, you kind of abstract things out. You have, you know, consistent layers. But who's going to do that? Is it that we all agree upon the model, agree upon those APIs? Not necessarily. There is certain, you know, truth in that, but there are other truths to let the bots take care of, right? If I want some X to happen, whether it's going to be done by Azure, by AWS, by GCP, the bots will figure out at a given time, with certain context, with your security requirements, posture requirements. They'll think that out. >> John: That's awesome. And you know, Dave, you and I have been talking about this. We think scale is the new ratification. If you have first mover advantage, you'll see the benefit, but scale is a huge thing. OpenAI, AWS. >> Howie: Yeah. Every day, we are using OpenAI. Today, we are labeling data for them. So you know, that's a little bit of the- (laughs) >> John: Yeah. >> First mover advantage that other people don't have, right? So it's kind of scary. So I'm very sure that Google is a little bit- (laughs) >> When we do our super AI event, you're definitely going to be keynoting. (laughs) >> Howie: I think, you know, we're talking about Supercloud; you know, before long, we are going to talk about super intelligent cloud. (laughs) >> I'm super excited, Howie, about this. Thanks for coming on. Great to see you, Howie Xu. Always a great analyst for us, contributing to the community. VP of Machine Learning at Zscaler, industry legend, and friend of theCUBE. Thanks for coming on and sharing really, really great advice and insight into what this next wave means. This Supercloud is the next wave. "If you're not on it, you're driftwood," says Pat Gelsinger. So you're going to see a lot more discussion. We'll be back with more here live in Palo Alto after this short break. >> Thank you. (upbeat music)
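Howie's "one English language away" idea, and his stock price example, boil down to a simple loop: hand a plain English request plus your constraints to a completion model, let it pick the provider and the concrete call, then execute. Here is a minimal Python sketch of that routing pattern; the complete() stub and the provider rules inside it are invented stand-ins for a real LLM endpoint and real Supercloud policy, not any actual product API.

from dataclasses import dataclass

@dataclass
class Action:
    provider: str   # which cloud the bot chose, e.g. "aws", "azure", "gcp"
    api_call: str   # the concrete call it decided to make

def complete(prompt: str) -> str:
    # Stand-in for a real LLM completion call (OpenAI, Cohere, etc.).
    # This toy version only demonstrates the contract: prompt in, plan out.
    if "object storage" in prompt:
        return "aws: s3.create_bucket(name='demo')"
    return "azure: blob.create_container(name='demo')"

def fulfill(request: str, policy: dict) -> Action:
    # The bot, not the developer, resolves which provider and which API to
    # use, under the caller's security and posture constraints.
    plan = complete(
        f"Request: {request}\n"
        f"Constraints: {policy}\n"
        "Reply with exactly 'provider: api_call'."
    )
    provider, _, api_call = plan.partition(":")
    return Action(provider.strip(), api_call.strip())

if __name__ == "__main__":
    action = fulfill(
        "give me object storage for logs",
        {"region": "us-west", "encryption": "required"},
    )
    print(action)  # Action(provider='aws', api_call="s3.create_bucket(name='demo')")

The point of the sketch is the division of labor: the English request and the policy are the stable interface, while the API-level details stay hidden in the layer the bots automate, which is exactly the Supercloud abstraction Howie describes.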

Published Date : Feb 17 2023


Oracle Aspires to be the Netflix of AI | Cube Conversation


 

(gentle music playing) >> For centuries, we've been captivated by the concept of machines doing the job of humans. And over the past decade or so, we've really focused on AI and the possibility of intelligent machines that can perform cognitive tasks. Now in the past few years, with the popularity of machine learning models ranging from the recent ChatGPT to BERT, we're starting to see how AI is changing the way we interact with the world. How is AI transforming the way we do business? And what does the future hold for us there? At theCUBE, we've covered Oracle's AI and ML strategy for years, which has really been used to drive automation into Oracle's autonomous database. We've talked a lot about MySQL HeatWave, in-database machine learning, and AI pushed into Oracle's business apps. Oracle tends to lead in AI, but not by competing as a direct AI player per se, but rather by embedding AI and machine learning into its portfolio to enhance its existing products, and bring new services and offerings to the market. Now, last October at CloudWorld in Las Vegas, Oracle partnered with Nvidia, which is the go-to AI silicon provider for vendors. And they announced an investment, a pretty significant investment, to deploy tens of thousands more Nvidia GPUs to OCI, the Oracle Cloud Infrastructure, and build out Oracle's infrastructure for enterprise scale AI. Now, Oracle CEO Safra Catz said something to the effect of: this alliance is going to help customers across industries, from healthcare, manufacturing, telecoms, and financial services, to overcome the multitude of challenges they face. Presumably she was talking about just driving more automation and more productivity. Now, to learn more about Oracle's plans for AI, we'd like to welcome in Elad Ziklik, who's the vice president of AI services at Oracle. Elad, great to see you. Welcome to the show. >> Thank you. Thanks for having me. >> You're very welcome. So first let's talk about Oracle's path to AI. I mean, it's the hottest topic going. For years you've been incorporating machine learning into your products and services. Could you tell us what you've been working on, and how you got here? >> So great question. So as you mentioned, I think most of the original foray into AI was embedding AI, and using AI to make our applications and databases better. So inside MySQL HeatWave, inside our autonomous database, we've been driving AI, and of course our SaaS apps. So Fusion, our large enterprise business suite for HR applications and CRM and ERP and whatnot, has AI built inside it. Most recently, NetSuite, our small and medium business SaaS suite, started using AI for things like automated invoice processing and whatnot. And most recently, over the last, I would say, two years, we've started exposing and bringing these capabilities into the broader OCI, Oracle Cloud Infrastructure. So the developers, and ISVs, and customers can start using our AI capabilities to make their apps better, and their experiences and business workflows better, and not just consume these as embedded inside Oracle. And this recent partnership that you mentioned with Nvidia is another step in bringing the best AI infrastructure capabilities into this platform, so you can actually build any type of machine learning workflow or AI model that you want on Oracle Cloud. >> So when I look at the market, I see companies out there like DataRobot or C3 AI, there's maybe a half dozen that sort of pop up on my radar, anyway.
And my premise has always been that most customers, they don't want to become AI experts, they want to buy applications and have AI embedded, or they want AI to manage their infrastructure. So my question to you is, how does Oracle help its OCI customers support their business with AI? >> So it's a great question. So I think what most customers want is business AI. They want AI that works for the business. They want AI that works for the enterprise. I call it the last mile of AI. And they want this thing to work. The majority of them don't want to hire large and expensive data science teams to go and build everything from scratch. They just want the business problem solved by applying AI to it. My best analogy is Lego. So if you think of Lego, Lego has these millions of Lego blocks that you can use to build anything that you want. But the majority of people, like me or like my kids, they want the Lego Death Star kit, or the Lego Eiffel Tower thing. They want a thing that just works, and it's very easy to use. And it's still Lego blocks, you still need to put some things together, but it just works for the scenario that you're looking for. So that's our focus. Our focus is making it easy for customers to apply AI where they need to, in the right business context. Whether it's embedding it inside the business applications, like adding forecasting capabilities to your supply chain management or financial planning software, whether it's adding chat bots into the line of business applications, integrating these things into your analytics dashboards, even all the way to, we have a new platform piece we call ML applications, that allows you to take a machine learning model and scale it for the thousands of tenants that you would be serving. 'Cause this is a big problem for most of the ML use cases. It's very easy to build something for a proof of concept or a pilot or a demo. But then if you need to take this and deploy it across your thousands of customers or your thousands of regions or facilities, then it becomes messy. So this is where we spend our time, making it easy to take these things into production in the context of your business application, or your business use case that you're interested in right now. >> So you mentioned chat bots, and I want to talk about ChatGPT, but my question here is different, we'll talk about that in a minute. So when you think about these chat bots, the ones that are conversational, my experience anyway is they're just meh, they're not that great. But the ones that actually work pretty well, they have a conditioned response. Now they're limited, but they say, which of the following is your problem? And if that's one of the following, you can maybe solve your problem. But this is clearly a trend, and it helps the line of business. How does Oracle think about these use cases for your customers? >> Yeah, so I think the key here is exactly what you said. It's about task completion. The general purpose bots are interesting, but as you said, they are still limited. They're getting much better, and I'm sure we'll talk about ChatGPT. But I think what most enterprises want is around task completion. I want to automate my expense report processing. So today inside Oracle we have a chat bot where I submit my expenses, the bot asks a couple of questions, I answer them, and then I'm done. Like I don't need to go to our fancy application and manually submit an expense report. I do this via Slack.
And the key is around managing the right expectations of what this thing is capable of doing. Like, I have a story from, I think, five, six years ago, when the technology was far inferior to what it is today. One of the telco providers I was working with wanted to roll out a chat bot that does realtime translation. It was for a support center, for the call centers. And what they wanted to do is, hey, we have English speaking employees, whatever, 24/7. If somebody's calling, and their native tongue is different, like Hebrew in my case, or Chinese or whatnot, then we'll give them a chat bot that they will interact with, and it will translate this on the fly, and everything would work. And when they rolled it out, the feedback from customers was horrendous. Customers said, the technology sucks. It's not good. I hate it, I hate your company, I hate your support. And what they've done is they've changed the narrative. Instead of, you go to a support center, and you assume you're going to talk to a human, and instead you get a crappy chat bot, they're like, hey, if you want to talk to a Hebrew speaking person, there's a four hour wait, please leave your phone number and we'll call you back. Or you can try a new, amazing, Hebrew speaking, AI powered bot, and it may help your use case. Do you want to try it out? And some people said, yeah, let's try it out. Plus one to trying it out. And the feedback, even though it was the exact same technology, was amazing. People were like, oh my God, this is so innovative, this is great. Even though it was the exact same experience that they hated a few weeks earlier on. So I think the key lesson that I picked up from this experience is, it's all about setting the right expectations, and working around the right use case. If you are replacing a human, the level is different than if you are just helping or augmenting something that otherwise would take a lot of time. And I think this is the focus that we are taking: picking the tasks that people want to accomplish, or that enterprises want to accomplish for their customers, for their employees, and using chat bots to make those specific ones better, rather than, hey, this is going to replace all humans everywhere, and just be better than that. >> Yeah, I mean, to the point you mentioned expense reports. I'm in a Twitter thread, and one guy says, my favorite part of business travel is filling out expense reports. It's an hour of excitement to figure out which receipts won't scan. We can all relate to that. It's just the worst. When you think about companies that are building custom AI driven apps, what can they do on OCI? What are the best options for them? Do they need to hire an army of machine intelligence experts and AI specialists? Help us understand your point of view there. >> So over the last, I would say, two or three years, we've developed a full suite of machine learning and AI services for pretty much every use case that you would expect right now. From applying natural language processing to understanding customer support tickets or social media or whatnot, to computer vision platforms or computer vision services that can understand and detect objects, and count objects on shelves, or detect cracks in a pipe, or defective parts, all the way to speech services that can actually transcribe human speech. And most recently we've launched a new document AI service.
That can actually look at unstructured documents like receipts or invoices or government IDs, or even proprietary documents, loan applications, student application forms, patient intake forms and whatnot, and completely automate them using AI. So if you want to do one of the things that are, I would say, common bread and butter for any industry, whether it's financial services or healthcare or manufacturing, we have a suite of services that any developer can go and use and easily customize with their own data. You don't need to be an expert in deep learning or large language models. You can just use our AutoML capabilities and build your own version of the models. Just go ahead and use them. And if you do have proprietary, complex scenarios that you need to build custom from scratch, we actually have the most cost-effective platform for that. We have OCI Data Science, as well as machine learning built into the databases, inside Oracle Database and MySQL HeatWave, that give data scientists, the Python-wielding people who actually like to build and tweak and control and improve, everything they need to build machine learning models from scratch, deploy them, and monitor and manage them at scale in production environments. And most of it is brand new. We did not have these technologies four or five years ago; we've started building them, and they've reached enterprise scale over the last couple of years. >> So what are some of the state-of-the-art tools that AI specialists and data scientists need if they're going to go out and develop these new models? >> So I think it's on three layers. There's an infrastructure layer where the Nvidias of the world come into play. For some of these things, you want massively efficient, massively scaled infrastructure in place. So we are the most cost-effective and performant large-scale GPU training environment today. We're going to be first to onboard the new Nvidia H100s, the new super-powerful GPUs for large language model training. So we have that covered for you, in case you want to build these ginormous things. Then you need a data science platform, a platform where you can open a Python notebook, use all these fancy open source frameworks, create the models that you want, and then click on a button and deploy it. And it infinitely scales wherever you need it. And in many cases you just need what I call the applied AI services. You need the Lego sets, the Lego Death Star, the Lego Eiffel Tower. So we have a suite of these sets for typical scenarios, whether it's cognitive services, like understanding images or documents, all the way to solving particular business problems. An anomaly detection service, a demand forecasting service, those would be the equivalents of these Lego sets. If that is the business problem you're looking to solve, we have services out there where you can bring your data, call an API, train a model, get the model, and use it in your production environment. So wherever you want to play, from infrastructure at the bottom to SaaS at the top and everything in the middle, all the way to embedding this inside your applications, we have the tools for you to go and engage. >> So when you think about the data pipeline and the data life cycle, and the specialized roles that came out of kind of the (indistinct) era, if you will, I want to focus on two: developers and data scientists.
So the developers, they hate dealing with infrastructure, and they've got to deal with infrastructure. Now they're being asked to secure the infrastructure; they just want to write code. And the data scientists, they're spending all their time trying to figure out, okay, what's the data quality? They're wrangling data, and they don't spend enough time doing what they want to do. So there's been a lack of collaboration. Have you seen that change? Are these approaches allowing collaboration between data scientists and developers on a single platform? Can you talk about that a little bit? >> Yeah, that is a great question. One of the biggest sets of scars that I have on my back from building these platforms at other companies is exactly that. Every persona had a set of tools, those tools didn't talk to each other, and the handoff was painful. Most machine learning projects evaporate or die on the floor because of this problem. It's very rare that they're unsuccessful because the algorithm wasn't good enough. In most cases somebody builds something, and then you can't take it to production, you can't integrate it into your business application. You can't take the data out, train, create an endpoint, and integrate it back; it's too painful. So the way we are approaching this is focused on exactly this problem. We have a single set of tools, so that if you publish a model as a data scientist, developers and even business analysts sitting inside a business application are able to consume it. We have a single model store, a single feature store, a single management experience across the various personas that need to play in this. And we spend a lot of time building, to borrow a word the Cerner folks used that I really liked, "insight highways" to make it easier to bring these insights to where you need them inside applications, both inside our own SaaS applications, but also inside custom third-party and even first-party applications. And this is where a lot of our focus goes, just because we have dealt with so much pain doing this inside our own SaaS that we have now built the tools and are making them available to others, to make this process of building a machine-learning-driven insight in your app easier. And it's not just the model development, and it's not just the deployment; it's the entire journey of taking the data, building the model, training it, deploying it, looking at the real data that comes from the app, and creating this feedback loop in a more efficient way. That's our focus area. Exactly this problem. >> Well, thank you for that. So, last week we had our Supercloud 2 event, and I had Juan Loaiza on, and he spent a lot of time talking about how open Oracle is in its philosophy, and I got a lot of feedback. They were like, Oracle, open? I don't really think so. But the truth is, if you think about Oracle Database, it never met a hardware platform that it didn't like. So in that sense it's open. But my point is, a big part of machine learning and AI is driven by open source tools and frameworks. What's your open source strategy? What do you support from an open source standpoint? >> So I'm a strong believer that you don't actually know, nobody knows, where the next leapfrog, the next industry-shifting innovation in AI, is going to come from.
If you looked six months ago, nobody foresaw DALL-E, the magical text-to-image generation, and the explosion it brought to art and design type experiences. If you looked six weeks ago, I don't think anybody foresaw ChatGPT and what it can do for a whole bunch of industries. So to me, assuming that a customer or partner or developer would want to lock themselves into only the tools that a specific vendor can produce is ridiculous. 'Cause nobody knows; if anybody claims they know where the innovation is going to come from in a year or two, let alone in five or ten, they're just wrong or lying. So our strategy for Oracle is what I call the Netflix of AI. If you think about Netflix, they produced a bunch of high quality shows on their own. A few years ago it was House of Cards. Last month my wife and I binge-watched Ginny & Georgia. But they also curated a lot of shows that they found around the world and brought them to their customers. It started with things like Seinfeld or Friends, and most recently it was Squid Game, and there's a famous Israeli TV series called Fauda that Netflix brought in. They brought it in as-is and gave it the Netflix value. So you have captioning, you have the ability to change the playback speed, you have it inside your app, you can download it and watch it offline, and everything. But nobody at Netflix was involved in the production of those first seasons. Now, if these things hit and they're great, then the third season or the fourth season gets the full Netflix production value: big budget, high-value location shooting, or whatever. But you as a customer, you don't care whether the producer, the director, and the screenplay writer are Netflix employees or somebody else's employees. It is fulfilled by Netflix. I believe that we will become, or we are looking to become, the Netflix of AI. We are building a bunch of AI in a bunch of places where we think it's important and where we have some competitive advantage, like healthcare with the Cerner partnership or whatnot. But I want to bring the best AI software and hardware to OCI and do a fulfillment-by-Oracle on that. So you'll get the Oracle security and identity and single bill and everything you'd expect from a company like Oracle, but we don't have to be building the data science and the models for everything. So this means open source. We recently announced a partnership with Anaconda, the leading provider of Python distribution in the data science ecosystem, a joint strategic partnership to bring all that goodness to Oracle customers. We're in the process of doing the same with Nvidia and all those software libraries, not just the hardware: things like Triton, but also healthcare-specific stuff, as well as other leading AI ISVs that we are in the process of partnering with to get their stuff into OCI and into Oracle. So you can truly consume the best AI hardware and the best AI software in the world on Oracle. 'Cause that is what I believe our customers want: the ability to choose from any open source engine, and honestly from any ISV-type solution that is AI-powered, and use it in their experiences. >> So you mentioned ChatGPT. I want to talk about some of the innovations that are coming. As an AI expert, you see ChatGPT and, on the one hand, I'm sure you weren't surprised. On the other hand, maybe the reaction in the market and the hype are somewhat surprising.
You know, they say that we tend to over-hype things in the early stages and under-hype them in the long term; you kind of use the internet as the example. What's your take on that premise? >> So, I think that this type of technology is going to be an inflection point in how software is being developed. I truly believe this. I think this is an internet-style moment, and the way software interfaces and software applications are being developed will dramatically change over the next year, two, or three because of this type of technology. I think there will be industries that will be shifted. I think education is a good example. I saw this thing open on my son's laptop, so I think education is going to be transformed. The design industry, images and whatever, has already been transformed. But I think that for mass adoption, beyond the hype, beyond the peak of inflated expectations, if I'm using Gartner terminology, certain things need to happen. One is this thing needs to become more reliable. Right now it is a complete black box that sometimes produces magic and sometimes produces just nonsense. And it needs to have better explainability and better lineage: how did you get to this answer? 'Cause I think enterprises are going to really care about the things that they surface to customers or use internally. So I think that's one thing that's going to come. And the other thing that's going to come is industry-specific large language models, or industry-specific ChatGPTs, something like how OpenAI did Copilot for writing code. I think we will start seeing these types of apps solving specific business problems: understanding contracts, understanding healthcare, writing doctors' notes on behalf of doctors so they don't have to spend time manually recording and analyzing conversations. And I think that will become the sweet spot of this thing. There will be companies, whether it's OpenAI or Microsoft or Google or hopefully Oracle, that will use this type of technology to solve specific, very high-value business needs. And I think this will change how interfaces happen. Going back to your expense report: the world of, I'm going to go into an app and click on seven buttons in order to get some job done, that world is gone. I'm going to say, hey, please do this and that, and I expect an answer to come out. I've seen a recent demo about marketing and sales: a customer sends an email saying they're interested in something, and a ChatGPT-powered thing just produces the answer. I think this is how the world is going to evolve. Yes, there's a ton of hype; yes, it looks like magic, and right now it is magic, but it's not yet productive for most enterprise scenarios. In the next 6, 12, 24 months, though, this will start getting more dependable, and it's going to change how these industries are managed. I think it's an internet-level revolution. That's my take. >> It's very interesting. And it's going to change the way in which we work. Instead of accessing the data center through APIs, we're going to access it through natural language, and that opens up technology to a huge audience. Last question; it's a two-part question. The first part is what you guys are working on for the future, but the second part of the question is: we've got data scientists and developers in our audience. They love the new shiny toy.
So give us a little glimpse of what you're working on for the future, and what would you say to them to persuade them to check out Oracle's AI services? >> Yep. So I think there are two main things that we're doing. One is around healthcare. With a recent acquisition, we are putting significant effort into revolutionizing healthcare with AI, across many scenarios: from patient care using computer vision and cameras, through automating and improving insurance claims, to research and pharma. We are making the best models, from leading organizations and from internal work, available to hospitals, researchers, and insurance providers everywhere. We truly are looking to become the leader in AI for healthcare. So I think that's a huge focus area. And the second part is, again, going back to the enterprise AI angle. If you have a business problem that you want to apply AI to solve, we want to be your platform. You could use others if you want to build everything complicated from scratch, and we have a platform for that as well. But if you want to apply AI to solve a business problem, we want to be your platform. We want to be, again, the Netflix of AI, the place where the greatest AI innovations are accessible to any developer, any business analyst, any user, any data scientist on Oracle Cloud. And we're making a significant effort on these two fronts, as well as developing a lot of the missing pieces and building blocks that we see are needed in this space to make it a truly great experience for developers and data scientists. And what would I recommend? Get started, try it out. We actually have a shameless sales plug here: we have a free tier for all of our AI services, so it typically costs you nothing. I would highly recommend just going and trying these things out. Go play with it. If you are a Python-wielding developer and you want to try a little bit of AutoML, go down that path. If you're not even there, and you're just like, hey, I have these customer feedback things and I want to see if I can understand them, apply AI, visualize, and do some cool stuff, we have services for that. My recommendation is, and I think ChatGPT got us here, 'cause I see people that have nothing to do with AI, who can't even spell AI, going and trying it out: this is the time. Go play with these things, go play with these technologies, and find out what AI can do to you or for you. And I think Oracle is a great place to start playing with these things. >> Elad, thank you. Appreciate you sharing your vision of making Oracle the Netflix of AI. Love that, and really appreciate your time. >> Awesome. Thank you. Thank you for having me. >> Okay. Thanks for watching this CUBE conversation. This is Dave Vellante. We'll see you next time. (gentle music playing)
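Ziklik's applied AI services follow a bring-your-data, call-an-API, train-a-model pattern. The sketch below illustrates that flow against a hypothetical REST endpoint; the base URL, paths, payload fields, and state names are invented for illustration and are not the actual OCI Document AI or AutoML API.

```python
import time
import requests

# Hypothetical endpoint and schema, for illustration only; the real OCI
# services have their own SDK, authentication, and request shapes.
BASE = "https://api.example-cloud.com/document-ai"
HEADERS = {"Authorization": "Bearer <your-token>"}

def train_custom_model(training_data_uri: str) -> str:
    """Kick off an AutoML-style training job on labeled documents."""
    resp = requests.post(f"{BASE}/models",
                         json={"trainingDataUri": training_data_uri},
                         headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["modelId"]

def wait_until_ready(model_id: str) -> None:
    """Poll until the service reports the model is trained."""
    while True:
        state = requests.get(f"{BASE}/models/{model_id}",
                             headers=HEADERS).json()["state"]
        if state == "ACTIVE":
            return
        time.sleep(30)

def extract_fields(model_id: str, document_uri: str) -> dict:
    """Run inference: pull key fields out of an invoice or receipt."""
    resp = requests.post(f"{BASE}/models/{model_id}/analyze",
                         json={"documentUri": document_uri},
                         headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["fields"]

model = train_custom_model("object-storage://invoices/labeled/")
wait_until_ready(model)
print(extract_fields(model, "object-storage://invoices/incoming/inv-001.pdf"))
```

The point of the pattern is that the developer never touches model architecture or training loops; the service owns those, which is exactly the "Lego set" trade-off described in the interview.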

Published Date : Jan 24 2023

Breaking Analysis: AI Goes Mainstream But ROI Remains Elusive


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is "Breaking Analysis" with Dave Vellante. >> A decade of big data investments, combined with cloud scale, the rise of much more cost-effective processing power, and the introduction of advanced tooling, has catapulted machine intelligence to the forefront of technology investments. No matter what job you have, your operation will be AI-powered within five years, and machines may actually even be doing your job. Artificial intelligence is being infused into applications, infrastructure, equipment, and virtually every aspect of our lives. AI is proving to be extremely helpful at things like controlling vehicles, speeding up medical diagnoses, processing language, advancing science, and generally raising the stakes on what it means to apply technology for business advantage. But business value realization has been a challenge for most organizations, due to lack of skills, complexity of programming models, immature technology integration, sizable upfront investments, ethical concerns, and lack of business alignment. Mastering AI technology will not be a requirement for success, in our view. However, figuring out how and where to apply AI to your business will be crucial. That means understanding the business case, picking the right technology partner, experimenting in bite-sized chunks, and quickly identifying winners to double down on from an investment standpoint. Hello, and welcome to this week's Wikibon CUBE Insights powered by ETR. In this breaking analysis, we update you on the state of AI and what it means for the competition. And to do so, we invite into our studios Andy Thurai of Constellation Research. Andy covers AI deeply. He knows the players, he knows the pitfalls of AI investment, and he's a collaborator. Andy, great to have you on the program. Thanks for coming into our CUBE studios. >> Thanks for having me on. >> You're very welcome. Okay, let's set the table with a premise and a series of assertions we want to test with Andy. I'm going to lay 'em out, and then Andy, I'd love for you to comment. So, first of all, according to McKinsey, AI adoption has more than doubled since 2017, but only 10% of organizations report seeing significant ROI; that's a BCG and MIT study. And part of the challenge of AI is it requires data, it requires good data, data proficiency, which is not trivial, as you know. Firms that can master both data and AI, we believe, are going to have a competitive advantage this decade. Hyperscalers, as we'll show you, dominate AI and ML; we'll show you some data on that. And having said that, there's plenty of room for specialists. They need to partner with the cloud vendors for go-to-market productivity. And finally, organizations increasingly have to put data and AI at the center of their enterprises, and to do that, most are going to rely on vendor R&D to leverage AI and ML. In other words, Andy, they're going to buy it and apply it, as opposed to building it. What are your thoughts on that setup and that premise? >> Yeah, I see a lot of that happening in the field, right? So first of all, on only 10% realizing a return on investment: that's so true, because as we talked about earlier, most companies are still in the innovation cycle. So they're trying to innovate and see what they can do to apply.
A lot of the time, when you look at the solutions they come up with, the models they create, the experimentation they do, most times they don't even have a good business case to solve, right? They just experiment, and then they figure it out: "Oh my God, this model is working. Can we find something to solve with it?" So it's like you found a hammer and then you're trying to find the nail, right? That never works. >> 'Cause it's cool, or whatever it is. >> It is, right? So that's why I always advise, when they come to me and ask things like, "Hey, what's the right way to do it? What is the secret sauce?" And we talked about this. The first thing I tell them is, "Find the business case that's having the biggest problems, one that can be solved using some of the AI use cases." Not all of them can be solved. Even after you experiment, do the whole nine yards, spend millions of dollars on it, if later on it only ends up saving maybe $50,000 or $100,000 for the company, is it really even worth the experiment? So you've got to start by asking: where's the need? What's the business use case? It doesn't have to be about cost efficiency and saving money in existing processes. It could be a new thing. You want to bring in a new revenue stream? Then figure out what the business use case is and how much money you could potentially make off of it. The same way that startups go after it, right? >> Yeah. Pretty straightforward. All right, let's take a look at where ML and AI fit relative to the other hot sectors of the ETR dataset. This XY graph shows net score, spending velocity, on the vertical axis, and presence in the survey, they call it sector pervasion, on the horizontal, for the October survey; the January survey is in the field. That squiggly line on ML/AI represents the progression: since the January 21 survey, you can see the downward trajectory. And we position ML/AI relative to the other hot sectors in the big four: ML/AI, containers, cloud, and RPA. These have consistently performed above that magic 40% red dotted line for most of the past two years. Anything above 40% we think is highly elevated. And we've just included analytics and big data for context and relevant adjacency, if you will. Now note that green arrow moving toward the 40% mark on ML/AI. I got a glimpse of the January survey, which is in the field. It's got more than a thousand responses already, and it's trending up for the current survey. So Andy, what do you make of this downward trajectory over the past seven quarters, and the presumed uptick in the coming months? >> So one of the things you have to keep in mind is that when the pandemic happened, it was about survival mode, right? And when somebody's in survival mode, what happens? The luxuries and the innovations get cut. That's exactly what happened in this situation. So as you can see over the last seven quarters, which date back almost to the pandemic, everybody was trying to keep their operations alive, especially digital operations. How do I keep the lights on? That was the most important thing for them. So while the overall spend on AI/ML is lower, I still think AI/ML spend tied to things like employee experience or IT ops, AIOps, MLOps, as we talked about, some of those areas actually went up.
There are companies, we talked about it, Atlassian for example, that had a lot of platform issues, and still the amount of money people are spending on them is exorbitant, simply because they're offering a solution that wasn't available any other way. So there are companies out there; take AIOps or incident management, for that matter. A lot of companies have digital incidents they don't know how to properly manage. How do you find an incident and solve it immediately? That's all done using AI/ML, and some of those areas, and the companies in them, are actually growing unbelievably. >> So this is a really good point. If you can bring up that chart again; what Andy's saying is a lot of the companies in the ETR taxonomy that are doing things with AI might not necessarily show up in a granular fashion. And the other point I would make is, these are still highly elevated numbers. If you put up storage and servers, they would read way, way down the list. And look, in the pandemic we had to deal with work from home, we had to re-architect the network, we had to worry about security. So those are really good points that you made there. Let's unpack this a little bit and look at the ML/AI sector in the ETR data, specifically at the players, and get Andy to comment on this. This chart here shows the same XY dimensions, and it notes some of the players that specifically have services and products that people spend money on, that CIOs and IT buyers can comment on. The table insert shows how the companies are plotted: it's net score, and then the Ns in the survey. And Andy, the hyperscalers are dominant, as you can see. You see Databricks there showing strong as a specialist, and then you've got a pack of six or seven in there. And then Oracle and IBM, kind of the big whales of yesteryear, are in the mix. And to your point, companies like Salesforce, which you mentioned to me offline, aren't in that mix, but they do a lot in AI. But what are your takeaways from that data? >> If you could put the slide back on, please; I want to make quick comments on a couple of those. So the first one is the hyperscalers, right? As you and I talked about earlier, AWS is more about Lego blocks. We discussed that, right? >> Like what? Like a SageMaker as an example. >> We'll give you all the components you need, whether it's an MLOps component, or CodeWhisperer that we talked about, or a whole platform, or data, whatever you want. They'll give you the blocks, and then you build things on top of them, right? But Google took a different way. Matter of fact, if we had done these numbers a few years ago, Google would have been number one, because they did a lot of work with their acquisition of DeepMind and other things. They were way ahead of the pack when it came to AI for the longest time. Now, I think Microsoft's move of partnering, and taking a huge competitor out, is eye-opening; it's unbelievable. You see that everybody is talking about ChatGPT, right? The OpenAI tool, ChatGPT rather. Remember how Warren Buffett says that when my laundry lady comes and talks to me about the stock market, it's heated up? That's how heated up it is. Everybody's using ChatGPT. What that means, at the end of the day... it's still in beta, keep in mind. It's not fully... >> Can you play with it a little bit? >> I have a little bit. >> I have, but it's good and it's not good. You know what I mean?
>> Look, at the end of the day, you take the mass of all the available text in the world today and mash it all together. Then you ask a question, and it's going to basically search through that, figure it out, and answer back. Yes, it's good. But again, as we discussed, if there's no business use case, no problem you're going to solve, this is just building hype. Eventually they'll figure it out; for example, all your online chats could be aided by AI chatbots, which already exist, but not at that level. This could help build that, right? The other thing we talked about, one of the areas I'm more concerned about, is that it's able to produce convincing original text at the level that humans can produce. For example, ChatGPT, or the large language transformer behind it, can help you write stories as if Shakespeare wrote them, pretty close to it; it learns from that. So when it comes down to creating messages, articles, blogs, especially during political seasons, not necessarily just in the US but anywhere for that matter, if people are able to produce them at machine speed and throw them at consumers and confuse them, elections can be won and governments can be toppled. >> Because to your point about chatbots, chatbots have obviously reduced the number of bodies that you need to support chat. But they haven't solved the problem of serving consumers. Most of the chatbots are conditioned response: which of the following best describes your problem? >> The current chatbots. >> Yeah. Hey, did we solve your problem? No is the answer. So that has some real potential. But if you could bring up that slide again, Ken: I mean, you've got the hyperscalers that are dominant. You talked about Google, and Microsoft is ubiquitous; they seem to be dominant in every ETR category. But then you have these other specialists. How do those guys compete? And maybe you could even cite some of the guys that you know; how do they compete with the hyperscalers? What's the key there for, like, a C3 AI or some of the others that are on there? >> So I've spoken with at least two of the CEOs of the smaller companies that you have on the list. One of the things they're worried about is that if they continue to operate independently, without being part of a hyperscaler, either the hyperscalers will develop something to compete against them at full scale, or they'll become irrelevant. Because at the end of the day, look, cloud is dominant. Not many companies are going to do AI modeling and training and deployment, the whole nine yards, independently by themselves. They're going to depend on one of the clouds, right? So if the customers are already going to be in the cloud, pulling them out to come to you is going to be extremely difficult. So all these companies are saying, "You know what? We need to be in the hyperscalers." For example, look at DataRobot recently: they made announcements with Google and AWS, and they are all over the place. You need to go where the customers are, right? >> All right, before we go on, I want to share some other data from ETR on why people adopt AI, and get your feedback. The data historically shows that feature breadth and technical capabilities were the main decision points for AI adoption. That says to me that there's too much focus on technology. In your view, is that changing? Does it have to change? Will it change? >> Yes. Simple answer is yes.
So here's the thing. The data you're speaking from is from previous years. >> Yes. >> I can guarantee you, if you look at the latest data that's coming in now, those two will be secondary and tertiary points. The number one will be about ROI, and how do I achieve it? I've spent a ton of money on all of my experiments. This is the same theme I'm seeing across the board when talking to everybody who's spending money on AI: I've spent so much money on it. When can I get it live in production? How quickly can I get it? Because, you know, the board is breathing down their neck. You've already spent this much money; show me something that's valuable. So ROI is going to become, take it from me, I'm predicting this for 2023, number one. >> Yeah, and if people focus on it, they'll figure it out. Okay. Let's take a look at some of the top players, some of the names we just looked at, and double-click on that and break down their spending profile. So the chart here shows the net score and how net score is calculated. Pay attention to the second set of bars, that's Databricks, who was pretty prominent on the previous chart, and we've annotated the colors. The lime green is, we're bringing the platform in new. The forest green is, we're going to spend 6% or more relative to last year. The gray is flat spending. The pinkish is, our spending on AI and ML is going to be down 6% or worse. And the red is churn. You don't want big red. You subtract the reds from the greens and you get net score, which is shown by those blue dots that you see there. So AWS has the highest net score and very little churn; I mean, low single-digit churn. But notably, you see Databricks and DataRobot are next in line, with Microsoft and Google also showing very low churn. Andy, what are your thoughts on this data? >> So a couple of things stand out to me. Most of them are in line with my conversations with customers. A couple of them stood out, like how badly IBM Watson is doing. >> Yeah, bring that back up if you would. Let's take a look at that. IBM Watson is on the far right, and the red, that bright red, is churn; and again, you want low red here. Why do you think that is? >> Well, look, IBM has been at the forefront of innovating things for many, many years now, right? And over the course of years, we talked about this, they moved from a product-innovation-centric company into more of a services company. At one point they were making the majority of their money from services. Now things have changed. Arvind has taken over; he came from research. So he's doing a great job of trying to reinvent them as a company, but they have a long way to go to catch up. IBM Watson, if you think about it, played what, Jeopardy and chess years ago, like 15 years ago? >> It was jaw-dropping when you first saw it. And then they weren't able to commercialize it. >> Yeah. >> And you're making a good point. When Gerstner took over IBM, at the time John Akers wanted to split the company up. He wanted to have a database company, he wanted to have a storage company, because that's where the industry trend was. Gerstner said no. He came from AMEX, right? He came from American Express. He said, "No, we're going to have a single throat to choke for the customer." They bought PWC for relatively short money, I think it was $15 billion, completely transformed, and I would argue saved, IBM.
But the trade-off was, it sort of took them out of product leadership. And so from Gerstner to Palmisano to Rometty, it was really a services-led company. And I think Arvind is really bringing it back to a product company with strong consulting; I mean, that's one of the pillars. So I think they've got a strong story in data and AI; they've just got to bring it together better. Bring that chart up one more time. The other point is Oracle. Oracle sort of has the dominant lock-in for mission-critical database, and they're sort of applying AI there. But to your point, they're really not an AI company in the sense of taking unstructured data and doing new things with it. It's really about how to make Oracle better, right? >> Well, you've got to remember, Oracle is about the database for structured data. In yesterday's world, they were the dominant database. But if you start storing videos and text and audio and other things, and then start doing vector search and all that, Oracle is not necessarily the database company of choice. Their strongest thing is apps, and building AI into the apps; they're kind of surviving in that area. But again, I wouldn't name them as an AI company, right? But the other thing that surprised me in that list you showed me: yes, AWS is number one. >> Bring that back up if you would, Ken. >> AWS is number one, as it should be. But what actually caught me by surprise is how well DataRobot is holding up, you know? I mean, look at that. On either net new adoption or expansion, DataRobot seems to be doing equally well, even better than Microsoft and Google. That surprises me. >> DataRobot is, and again, this is a function of spending momentum. So remember from the previous chart that Microsoft and Google are much, much larger than DataRobot; DataRobot is more niche. But it has spending velocity, and it has always had strong spending velocity, despite some of the recent organizational challenges. And then you see these other specialists: H2O.ai, Anaconda, Dataiku, a little bit of red showing there for C3.ai. But these, again, to stress, are the specialists, other than obviously the hyperscalers. These are the specialists in AI. All right, so we hit the bigger names in the sector. Now let's take a look at the emerging technology companies. And one of the gems of the ETR dataset is the Emerging Technology Survey. It's called ETS. They used to do it just twice a year; it's now run four times a year. I just discovered it kind of mid-2022. And it's exclusively focused on private companies that are potential disruptors. They might be M&A candidates, and if they've raised enough money, they could be acquirers of companies as well. So Databricks would be an example; they've made a number of investments in companies. Snyk would be another good example. Companies that are private, but they're buyers, and they hope to go IPO at some point in time. So this chart here shows the emerging companies in the ML/AI sector of the ETR dataset. The dimensions of this are similar: net sentiment on the Y axis and mind share on the X axis. Basically, the ETS study measures awareness on the X axis, and intent to do something with, evaluate or implement or not, on that vertical axis. So it's like net score on the vertical, where negatives are subtracted from the positives. And again, mind share is vendor awareness; that's the horizontal axis.
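The net score arithmetic Vellante describes, greens minus reds across the five spending buckets, reduces to a few lines. A minimal sketch; the percentages below are illustrative numbers, not actual ETR survey results.

```python
def net_score(new_pct, more_pct, flat_pct, less_pct, churn_pct):
    """ETR-style net score: subtract the reds from the greens.

    Inputs are the share of respondents in each bucket: adopting new,
    spending 6% or more, flat, spending 6% or worse less, and churning.
    """
    total = new_pct + more_pct + flat_pct + less_pct + churn_pct
    assert abs(total - 100) < 1e-6, "buckets must sum to 100%"
    return (new_pct + more_pct) - (less_pct + churn_pct)

# Illustrative only: 12% new + 45% more, versus 5% less + 3% churn.
print(net_score(new_pct=12, more_pct=45, flat_pct=35,
                less_pct=5, churn_pct=3))  # prints 49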
Now, that inserted table shows net sentiment and the Ns in the survey, which inform the position of the dots. And you'll notice we're plotting TensorFlow as well. We know that's not a company, but it's there for reference, as open source tooling is an option for customers, and ETR sometimes likes to show that as a reference point. Now, we've also drawn a line for Databricks, to show how relatively dominant they've become in mind share across the past 10 ETS surveys, going back to late 2018. And you can see a dozen or so other emerging tech vendors. So Andy, I want you to share your thoughts on these players. Who are the ones to watch? Name some names. We'll bring that data back up as you comment. >> So Databricks, as you said; remember we talked about how Oracle is not necessarily the database of choice, you know? Databricks is kind of trying to solve some of those issues for AI/ML workloads, right? And the problem is also that there is no one company that can solve all of the problems. For example, if you look at the names in here, some of them are database names, some of them are platform names, some of them are MLOps companies like DataRobot (indistinct) and others, and some of them are feature store companies like, you know, Tecton and stuff. >> So it's a mix of those sub-sectors? >> It's a mix of those companies. >> We'll talk to ETR about that. They'd be interested in your input on how to make this more granular, into these sub-sectors. You've got Hugging Face in here. >> Which is NLP, yeah. >> Okay. So your take: are these companies going to get acquired? Are they going to go IPO? Are they going to merge? >> Well, most of them are going to get acquired. My prediction would be that most of them will get acquired, because look, at the end of the day, the hyperscalers need these capabilities, right? So they're going to either create their own, and AWS is very good at doing that, they have done a lot of those things, or the others, particularly Azure, are going to look at it and say, "You know what, it's going to take time for me to build this. Why don't I just go and buy you?" Right? And even the smaller players like Oracle or IBM Cloud, this dynamic will exist there too. They might even take a look at them, right? So at the end of the day, a lot of these companies are going to get acquired or merged with others. >> Yeah. All right, let's wrap with some final thoughts. I'm going to make some comments, Andy, and then ask you to dig in here. Look, despite the challenge of leveraging AI, you know, Ken, if you could bring up the next chart, we're not predicting a repeat of the AI winter of the 1990s. Machine intelligence is a superpower that's going to permeate every aspect of the technology industry. AI and data strategies have to be connected. Leveraging first-party data is going to increase AI competitiveness and shorten time to value. Andy, I'd love your thoughts on that. I know you've got some thoughts on governance and AI ethics. You know, we talked about ChatGPT, deepfakes; help us unpack all these trends. >> So there's so much information packed in there, right? The AI and data strategy, that's very, very important. If you don't have proper data... people don't realize that your AI is only as good as the models you build on, and those are predominantly based on the data you have. AI cannot predict something that's going to happen without knowing what it is. It needs to be trained; it needs to understand what it is you're talking about.
So 99% of the time, you've got to have good data to train on. This is where, as I mentioned to you, the problem is: a lot of these companies can't afford to collect the real-world data, because it takes too long and it's too expensive. So a lot of these companies are trying to go the synthetic data way. It has its own set of issues, because you can't use all... >> What's synthetic data? Explain that. >> Synthetic data is basically not real-world data; it's created or simulated data, equivalent to and based on real data. It looks, feels, smells, tastes like real data, but it's not exactly real data, right? This is particularly useful in the financial and healthcare industries. Because at the end of the day, if you have real data about your and my medical histories, even if you redact it, you can still reverse it. It's fairly easy, right? >> Yeah, yeah. >> With synthetic data, there is no correlation between the real data and the synthetic data. >> So that's part of AI ethics and privacy and, okay. >> So the issue with synthetic data is when you try to commingle it with real data. You can't create models based only on synthetic data, because synthetic data, as I said, is artificial data, so you'd basically be creating artificial models. You've got to blend it in properly, and that blend is the problem: how much real data and how much synthetic data can you use? You've got to use judgment, weighing efficiency, cost, and time. So that's one-- >> And risk. >> And the risk involved with it. And the secondary issue, which we talked about, is that when you're creating... okay, you take a business use case, you think about investing, you build the whole thing out, and you're trying to put it out into the market. Most companies that I talk to don't have proper governance in place. They don't have ethics standards in place. They don't worry about the biases in data. They just go on trying to solve a business case. >> It's the wild west. >> 'Cause that's where they start. It's the wild west! And then, at the end of the day, when they're close to some legal or litigation action, or something else happens, that's when the "Oh shit!" moment happens, right? And then they come in and say, "You know what, how do I fix this?" The governance, security, and all of those things, ethics, data bias, de-biasing, none of them can be an afterthought. It's got to start from the get-go. So you've got to start at the beginning, saying, "You know what, I'm going to do all of those AI programs, but before we get into this, we've got to set a framework for doing all these things properly." Right? And then the-- >> Yeah. So let's go back to the key points. I want to bring up the cloud again, because you've got to get cloud right. Getting that right matters in AI, to the points that you were making earlier. You can't just be out on an island, and the hyperscalers are going to obviously continue to do well. More and more data is going into the cloud, and they have the native tools. To your point, in the case of AWS, Microsoft's obviously ubiquitous, Google's got great capabilities here; they've got integrated ecosystem partners that are going to continue to strengthen through the decade. What are your thoughts here? >> So, a couple of things. One is the last mile of ML, or last mile of AI, that nobody's talking about. That needs to be attended to.
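Before following Thurai into the last mile, his synthetic data point can be pinned down with a minimal sketch. Production synthetic-data tools use far richer generators (copulas, GANs) and add formal privacy guarantees; this only illustrates the core idea of rows that match the statistics of the real data while corresponding to no actual individual.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Stand-in for a real tabular dataset (rows = patients, cols = numeric features).
real = rng.normal(loc=[54.0, 128.0, 6.1],
                  scale=[12.0, 15.0, 1.2],
                  size=(1000, 3))

# Fit a simple statistical model of the real data: means and covariance.
mu = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Sample synthetic rows that preserve those statistics but have no
# one-to-one link to any real record, so there is nothing to re-identify.
synthetic = rng.multivariate_normal(mu, cov, size=1000)

print("real means:     ", np.round(mu, 2))
print("synthetic means:", np.round(synthetic.mean(axis=0), 2))
```

The blending judgment Thurai describes then becomes a tunable ratio: how many synthetic rows to mix with real ones before the resulting model becomes, in his words, an artificial model.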
There are a lot of players coming up in the market there. When I talk about the last mile, I'm talking about, after you're done with the experimentation of the model, how fast and efficiently can you get it to production? So that's production being-- >> Compressing that time is going to put dollars in your pocket. >> Exactly. Right. >> So once, >> If you got it right. >> If you get it right, of course. So there are a couple of issues with that. Once you figure out that the model is working, that's perfect. But people don't realize: the moment that decision is made, it's like a new car. After you purchase it, the value decreases by the minute. Same thing with models. Once the model is created, you need to be in production right away, because it starts losing its value by the second. So issue number one: how fast can I get it over there? Your deployment, inferencing efficiently at the edge locations, your optimization, your security, all of this is at issue. But you know what's even more important in the last mile? Keeping the model up, continuing to work on it. Again, going back to the car analogy: at some point you figure out your car is costing more to operate than it's worth, so you get a new car, right? It's the same thing with models. If your model has reached that stage, it's actually a potential risk for your operation. To give you an idea: say Uber has a model, and the first time you get a car going from point A to point B it costs you $60. If the model decays and the next time it gives me a $40 rate, I would definitely take it, but it's a loss for the company. You should recognize the business risk of operating on a bad model immediately: pull the model out, retrain it, redeploy it. That is key. >> And that's got to be huge in security. Model recency, and security to the extent that you can get real time, is big. I mean, you see Palo Alto, CrowdStrike, a lot of other security companies injecting AI. Again, they won't show up in the ETR ML/AI taxonomy per se as a pure play, but ServiceNow is another company that you have mentioned to me offline. AI is just getting embedded everywhere. >> Yep. >> And I'm glad you brought up real-time inferencing, 'cause a lot of the AI today is modeling done in the cloud. The last point we wanted to make here, and I'd love to get your thoughts on this, is that real-time AI inferencing, for instance at the edge, is going to become increasingly important. It's going to usher in new economics, new types of silicon, particularly Arm-based. We've covered that a lot on "Breaking Analysis": new tooling, new companies, and that could disrupt the cloud model if new economics emerge, 'cause cloud is obviously very centralized and they're trying to decentralize it. But over the course of this decade we could see some real disruption there. Andy, give us your final thoughts on that. >> Yes and no. I mean, at the end of the day, cloud is kind of centralized now, but a lot of these companies, including AWS, are trying to decentralize it by putting in their own sub-centers and edge locations. >> Local zones, Outposts. >> Yeah, exactly, particularly the Outposts concept. And even if it becomes like a micro data center and stuff, it won't go down to the level of a single IoT device, but again, the cloud extends itself toward that level.
So if there's an opportunity or need for it, the hyperscalers will figure out a way to fit that model. So I wouldn't worry too much about deployment, where to have it, and what to do with it. But, you know, figure out the right business use case, get the right data, get the ethics and governance in place, make sure it gets to production, and make sure you pull the model out when it's not operating well. >> Excellent advice. Andy, I've got to thank you for coming into the studio today and helping us with this "Breaking Analysis" segment. Outstanding collaboration and insights and input in today's episode. Hope we can do more. >> Thank you. Thanks for having me. I appreciate it. >> You're very welcome. All right, I want to thank Alex Myerson, who's on production and manages the podcast, and Ken Schiffman as well. Kristen Martin and Cheryl Knight helped get the word out on social media and in our newsletters. And Rob Hof is our editor-in-chief over at SiliconANGLE; he does some great editing for us. Thank you all. Remember, all these episodes are available as podcasts; wherever you listen, all you've got to do is search "Breaking Analysis" podcast. I publish each week on wikibon.com and siliconangle.com, or you can email me at david.vellante@siliconangle.com to get in touch, or DM me @dvellante, or comment on our LinkedIn posts. Please check out ETR.AI for the best survey data in the enterprise tech business, and Constellation Research; Andy publishes some awesome information on AI and data there. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, everybody, and we'll see you next time on "Breaking Analysis". (gentle closing tune plays)
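Thurai's Uber example describes model decay; one common way to catch it is to compare the distribution of live inputs or predictions against the training window. A minimal sketch with simulated numbers; the threshold, window sizes, and single-feature setup are assumptions, and production systems track many features plus business metrics.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Feature values the model saw at training time vs. live traffic today.
training_window = rng.normal(loc=60.0, scale=8.0, size=5000)  # e.g. fare estimates
live_window = rng.normal(loc=48.0, scale=8.0, size=5000)      # decayed behavior

# Two-sample Kolmogorov-Smirnov test: has the distribution shifted?
statistic, p_value = stats.ks_2samp(training_window, live_window)

if p_value < 0.01:
    print(f"Drift detected (KS={statistic:.3f}); pull the model and retrain.")
else:
    print("Distributions still match; keep the model in production.")
```

Wired into a scheduler, a check like this turns "pull the model out, retrain it, redeploy it" from a judgment call into an automated trigger.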

Published Date : Dec 29 2022

HPE Compute Engineered for your Hybrid World: Containers to Deploy Higher Performance AI Applications


 

>> Hello, everyone. Welcome to theCUBE's coverage of "Compute Engineered for your Hybrid World," sponsored by HPE and Intel. Today we're going to discuss the new 4th Gen Intel Xeon Scalable processor's impact on containers and AI. I'm John Furrier, your host of theCUBE, and I'm joined by three experts to guide us along. We have Jordan Plum, Senior Director of AI and Products at Intel; Bradley Sweeney, Big Data and AI Product Manager, Mainstream Compute Workloads at HPE; and Gary Wang, Containers Product Manager, Mainstream Compute Workloads at HPE. Welcome to the program, gentlemen. Thanks for coming on. >> Thanks John. >> Thank you for having us. >> This segment is going to be talking about containers to deploy high performance AI applications. This is a really important area right now. We're seeing a lot more AI deployed, kind of next-gen AI coming. How is HPE supporting, testing, and delivering containers for AI? >> Yeah, so what we're doing from HPE's perspective is we're taking these container platforms and combining them with the next generation Intel servers to fully validate the deployment of the containers. So we're publishing the reference architectures, creating automation scripts, and also creating a monitoring and security strategy for these container platforms, so that customers can easily deploy these Kubernetes clusters and easily secure their Kubernetes environments. >> Gary, give us a quick overview of the new ProLiant DL360 and DL380 Gen 11 servers. >> Yeah, for container platforms, what we're seeing mostly is the DL360 and DL380 matching really well with container use cases, especially for AI. The DL360, with the expanded DDR5 memory and the new PCIe Gen5 slots, really, really helps speed up deploying these container environments, and also helps grow the data that needs to be stored within these container environments. So for example, with the DL380, if you want to deploy a data fabric, whether it's the Ezmeral Data Fabric or a different vendor's data fabric software, you can do so with the DL360 and DL380 with the new Intel Xeon processors. >> How does HPE help customers with Kubernetes deployments? >> Yeah, like I mentioned earlier, we do a full validation to ensure the container deployment is easy and fast. We create these automation scripts and publish them on GitHub for customers to use and reference, so they can take them and adjust as they need to. Following the deployment guide that we provide will make the Kubernetes deployment much easier and much faster. We also have demo videos published, and a reference architecture document that guides the customer step by step through the process. >> Great stuff. Thanks everyone. We're going to take a quick break here and come back. We're going to do a deep dive on the 4th Gen Intel Xeon Scalable processor and the impact on AI and containers. You're watching theCUBE, the leader in tech coverage. We'll be right back. (intense music) Hey, welcome back to theCUBE's continuing coverage of the "Compute Engineered for your Hybrid World" series. I'm John Furrier with theCUBE, joined by Jordan Plum with Intel, Bradley Sweeney with HPE, and Gary Wang from HPE. We're going to drill down and do a deeper dive into AI containers with the 4th Gen Intel Xeon Scalable processors. We appreciate your time coming in. Jordan, great to see you.
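Gary's published automations are, per the conversation, scripts and deployment guides on GitHub. As a rough illustration of the kind of step such automation performs, here is a minimal sketch using the official Kubernetes Python client; the image name, resource sizes, replica count, and namespace are placeholder assumptions, not HPE's actual values.

```python
from kubernetes import client, config

# Assumes kubeconfig already points at the cluster the automation stood up.
config.load_kube_config()

container = client.V1Container(
    name="inference",
    image="registry.example.com/ai/inference:latest",  # placeholder image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "4", "memory": "8Gi"},
        limits={"cpu": "8", "memory": "16Gi"},
    ),
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="ai-inference"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "ai-inference"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "ai-inference"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

# Submit the deployment to the cluster.
client.AppsV1Api().create_namespaced_deployment(namespace="default",
                                                body=deployment)
print("Deployment submitted.")
```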
I got to ask you right out of the gate, what is the view right now in terms of Intel's approach to containers for AI? It's hot right now. AI is booming. You're seeing kind of next gen use cases. What's your approach to containers relative to AI? >> Thanks John, and thanks for the question. With the 4th Gen Xeon Scalable processor launch, we have tested and validated this platform with over 400 deep learning and machine learning models and workloads. These models and workloads are publicly available in the framework repositories, and they can be downloaded by anybody. Yet customers are not only looking for model validation, they're looking for model performance, and performance is usually a combination of a given throughput at a target latency. And to do that in the data center, all the way to the factory floor, is not always delivered by these generic proxy models that are publicly available in the industry. >> You know, performance is critical. We're seeing more and more developers saying, "Hey, I want to go faster on a better platform, faster all the time." No one wants to run slower stuff, that's for sure. Can you talk more about the different container approaches Intel is pursuing? >> Sure. First, our approach is to meet the customers where they are and help them build and deploy AI everywhere. Some customers just want to focus on deployment; they have more mature use cases, and they just want to download a model that works, that's high performing, and run. Others are really focused more on development and innovation. They want to build and train models from scratch, or at least highly customize them. Therefore we have several container approaches to accelerate the customer's time to solution and help them meet their business SLA along their AI journey. >> So developers can just download these containers and just go? >> Yeah, so let me talk about the different kinds of containers we have. We start off with pre-trained containers. We'll have about 55 or more of these containers where the model is actually pre-trained and highly performant; some are optimized for low latency, others are optimized for throughput, and customers can just download these from Intel's website or from HPE and go into production right away. >> That's great. A lot of choice. People can jump right in. That's awesome. Good choice for developers; they want more, faster velocity. We know that. What else does Intel provide? Can you share some thoughts there? What else do you guys provide developers? >> Yeah, so we talked about how some customers are just focused on deployment and maybe have more mature use cases. Other customers really want to do more customization or optimization. So we have another class of containers called development containers, and this includes not just the model itself, but the model integrated with the framework and some other capabilities and techniques, like model serving. So now customers can download not only the model but an entire AI stack. They can do some optimizations, but they can also be sure that Intel has optimized that specific stack on top of the HPE servers. >> So it sounds simple to just get started using the models and containers. Is that it? What else are customers looking for? Can you take it a little bit deeper? >> Yeah, not quite. While the customer's ability to reproduce on their own site the performance that HPE and Intel have measured in our labs is fantastic, that's not all the customer is trying to do. They're actually building very complex end-to-end AI pipelines, okay? And a lot of data scientists are really good at building models, really good at building algorithms, but they're less experienced in building end-to-end pipelines, especially because the number of end-to-end use cases is kind of infinite. So we are building end-to-end pipeline containers for use cases like media analytics, sentiment analysis, and anomaly detection. Therefore a customer can download these end-to-end containers, right? They can either use them as a reference, just to see how we built them (maybe they have some differences in their own data center where they'd like to use different tools), but they can see, "Okay, this is what's possible with an end-to-end container on top of an HPE server." In other cases, if the overlap in the use case is pretty close, they can take our containers and go directly into production. So all three types of containers that I discussed provide developers an easy starting point to get them up and running quickly and make them productive. And that's a really important point. You talked a lot about performance, John, but really, when we talk to data scientists, what they want to be is productive, right? They're under pressure to change the business, to transform the business, and containers are a great way to get started fast. >> People take productivity seriously now; developer productivity is the hottest trend, and obviously they want performance too. Totally nailed it. Where can customers get these containers? >> Right. Great, thank you John. Our pre-trained model containers, our development containers, and our end-to-end containers are available at intel.com in the developer catalog. But we also post these on many third-party marketplaces that other people like to pull containers from, and they're frequently updated. >> Love the developer productivity angle. Great stuff. We've still got more to discuss with Jordan, Bradley, and Gary. We're going to take a short break here. You're watching theCUBE, the leader in high tech coverage. We'll be right back. (intense music) Welcome back to theCUBE's coverage of "Compute Engineered for your Hybrid World." I'm John Furrier with theCUBE, and we'll be wrapping up our discussion on containers to deploy high performance AI. This is a great segment on what is really a lot of demand for AI and the applications involved. And we've got the 4th Gen Intel Xeon Scalable processors with HPE Gen11 servers. Bradley, what is the top AI use case that Gen11 HPE ProLiant servers are optimized for? >> Yeah, thanks John. I would have to say intelligent video analytics. It's a use case that's applied across industries and verticals. For example, in a smart hospital solution that we conducted with Nvidia and Artisight in a previous customer success, we've seen 5% more hospital procedures and a 16 times return on investment using operating room coordination. With that IVA, with the Gen11 DL380 that we provide using the Intel 4th Gen Xeon processors, it can really support workloads at scale. Whether that is a smart hospital solution, manufacturing at the edge, or security camera integration, we can do it all with Intel.
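Bradley's intelligent video analytics example reduces to a familiar software shape: read frames, run a detection model, act on the results. The sketch below shows that skeleton only. It assumes OpenCV is installed, uses a hypothetical video file name, and stubs out the model with a placeholder detect() function; it is not the Artisight/Nvidia solution itself.

```python
import cv2  # pip install opencv-python

def detect(frame):
    """Placeholder for a containerized detection model.
    A real model would return a list of (x, y, w, h, label) boxes."""
    return []

cap = cv2.VideoCapture("operating_room_feed.mp4")  # hypothetical source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for (x, y, w, h, label) in detect(frame):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 1)
    # downstream logic (alerts, OR-coordination events) would consume detections here
cap.release()
```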
>> You know what's really great about AI right now: you're starting to see people figure out where the value is. It does a lot of the heavy lifting on setting things up to make humans more productive. This has clearly gone next level; you're seeing it all over the media now, and all these new tools coming out. How does HPE make it easier for customers to manage their AI workloads? I imagine there's going to be a surge in demand. How are you guys making it easier to manage AI workloads? >> Well, I would say the biggest way we do this is through GreenLake, which is our IT as a service model. So customers deploying AI workloads can get fully managed services to optimize not only their operations but also their spending and the cost they're putting towards it. In addition to that, we have our Gen11 ProLiant servers equipped with iLO 6 technology. What this does is allow customers to securely manage their complete server environment from anywhere in the world, remotely. >> Any last thoughts or message on the overall 4th Gen Intel Xeon based ProLiant Gen11 servers, and how they will improve workload performance? >> You know, with this generation, obviously the performance is only getting ramped up as the needs and requirements of customers grow. We partner with Intel to support that. >> Jordan, give me the last word on the effect of containers on AI applications. Your thoughts as we close out. >> Yeah, great. I think it's important to remember that containers themselves don't deliver performance, right? The AI stack is a very complex set of software that's compiled together, and what we're doing together is making it easier for customers to get access to that software, making sure it all works well together, and ensuring it can be easily installed and run on cloud native infrastructure that's hosted by HPE ProLiant servers. Hence the title of this talk: How to Use Containers to Deploy High Performance AI Applications. Thank you. >> Gentlemen, thank you for your time on "Compute Engineered for your Hybrid World," sponsored by HPE and Intel. Again, I love this segment: containers to deploy higher performance AI applications. This is a great topic. Thanks for your time. >> Thank you. >> Thanks John. >> Okay, I'm John. We'll be back with more coverage. See you soon. (soft music)
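Jordan's three container tiers map to a simple developer workflow: pull a pre-optimized image and serve it. Here is a minimal sketch using the Docker SDK for Python, with a deliberately hypothetical image name; the real images live in Intel's developer catalog and the third-party marketplaces he mentions.

```python
import docker  # pip install docker; assumes a local Docker daemon is running

client = docker.from_env()
container = client.containers.run(
    "example.registry/pretrained-vision-model:latest",  # hypothetical image name
    detach=True,
    ports={"8080/tcp": 8080},  # expose the model-serving endpoint
)
print(container.short_id)  # inference requests can now be POSTed to localhost:8080
```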

Published Date : Dec 27 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Jordan Plum | PERSON | 0.99+
Gary | PERSON | 0.99+
John | PERSON | 0.99+
Nvidia | ORGANIZATION | 0.99+
Gary Wang | PERSON | 0.99+
Bradley | PERSON | 0.99+
HPE | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
16 times | QUANTITY | 0.99+
5% | QUANTITY | 0.99+
Jordan | PERSON | 0.99+
Artisight | ORGANIZATION | 0.99+
DL 360 | COMMERCIAL_ITEM | 0.99+
Intel | ORGANIZATION | 0.99+
three experts | QUANTITY | 0.99+
DL 380 | COMMERCIAL_ITEM | 0.99+
HP | ORGANIZATION | 0.99+
Compute Engineered for your Hybrid World | TITLE | 0.98+
First | QUANTITY | 0.98+
Bradley Sweeney | PERSON | 0.98+
over 400 deep learning | QUANTITY | 0.97+
intel | ORGANIZATION | 0.97+
theCUBE | ORGANIZATION | 0.96+
Gen 11 DL 380 | COMMERCIAL_ITEM | 0.95+
Xeon | COMMERCIAL_ITEM | 0.95+
Today | DATE | 0.95+
fourth gen | QUANTITY | 0.92+
GitHub | ORGANIZATION | 0.91+
380 Gen 11 | COMMERCIAL_ITEM | 0.9+
about 55 or more | QUANTITY | 0.89+
four gen Xeon | COMMERCIAL_ITEM | 0.88+
Big Data | ORGANIZATION | 0.88+
Gen 11 | COMMERCIAL_ITEM | 0.87+
five slots | QUANTITY | 0.86+
Proliant | COMMERCIAL_ITEM | 0.84+
GreenLake | ORGANIZATION | 0.75+
Compute Engineered for your Hybrid | TITLE | 0.7+
Ezmeral | ORGANIZATION | 0.68+

ML & AI Keynote Analysis | AWS re:Invent 2022


 

>> Hey, welcome back everyone. Day three of AWS re:Invent 2022. I'm John Furrier with Dave Vellante, co-host of theCUBE. Ten years for us, "the leader in high tech coverage" is our slogan. Now 10 years of re:Invent, Dave. We've been to every single one except the original, which we would've come to if Amazon had actually marketed the event, but they didn't. It's more of a customer event. This is day three. It's the machine learning AI keynote, Swami's up there. A lot of announcements. We're gonna break this down. We've got Andy Thurai here, vice president and principal analyst at Constellation Research. Andy, great to see you. You've been on theCUBE before, one of our analysts, bringing the analysis and commentary to the keynote. This is your wheelhouse: AI. What do you think about Swami up there? I mean, he's awesome. We love him. Big fan. Oh yeah, on theCUBE we're fans of him. But he had 13 announcements. >> A lot. A lot. >> A lot. >> So, well, first of all, thanks for having me here, and I'm glad to have both of you on the same show attacking me. I'm just kidding. But some of the announcements really are game-changer announcements, and some of them are like, meh, you know, just plugging the holes in what they have, and a lot of golf claps in the meeting today. And you could also notice, when he was making the announcements, the difference in clapping volume, which tells you which ones are better, right? But some of the announcements are really, really good. Particularly one we talked about: Microsoft took the lead there, you know, having OpenAI in there doing the large language models, and then going after that, having the transformer available to them. And Amazon was a little bit weak in that area; they don't have a large language model. So they're taking a different route, saying, you know what, I'll help you train the large language model by yourself, customized models. I can provide the necessary instances, I can provide the instance volume, memory, the whole thing. So you can train the model by yourself without depending on them, kind of thing. >> So Dave and Andy, I wanna get your thoughts, 'cause first of all, we've been following Amazon's deep bench on the infrastructure and platform side. They've been doing a lot of machine learning and AI, a lot of data. It just seems that the sentiment is that there are other competitors doing a good job too, like Google, Dave. And I've heard folks in the hallway, even here, ex-Amazonians, saying, "Hey, they train their models on Google, then they bring up SageMaker 'cause it's a better interface." So you've got Google making a play for being that data cloud. Microsoft's obviously putting in a great kind of package to make it turnkey. How do they really stand versus the competition, guys? >> Good question. So they each have their own uniqueness and their own variation that they take to the field, right? So for example, if you were to look at it, Microsoft is known for the industry angle; industry verticals and whatnot are what they've been going after. So that's one of the things I looked at here: they had this Omics announcement, particularly towards that healthcare genomics space. That's a huge space for HPC-related AI/ML applications. And they have put a lot of things together in SageMaker and in their models, saying, how do you use this to do things like drug discovery, genomics analysis, cancer treatment, the whole thing, right? That's huge volumes of data, too. So they're going into that healthcare area. Google has taken a different route. I mean, they want to make everything simple: all I have to do is call an API, give it what I need, and get it done. But Amazon wants to go at a much deeper level, saying, you know what, I wanna provide everything you need, and you can customize the whole thing for what you need. >> So to me, the big picture here is, and Swami references this: "Hey, we are a data company." We started... he talked about books and how data informed them as to, you know, what books to place front and center. Here's the big picture, in my view: companies need to put data at the core of their business, and they haven't. They've generally put humans at the core of their business, and data, and now machine learning, sit at the outside, at the periphery. Amazon, Google, Microsoft, Facebook have put data at their core. So the question is, how do incumbent companies do that? You mentioned some: Toyota, Capital One, Bristol Myers Squibb. I don't know, are those data companies? You know, we'll see. But the challenge is most companies don't have the resources, as you well know, Andy, to actually implement what Google and Facebook and others have. >> So how are they gonna do that? Well, they're gonna buy it, right? So are they gonna build it with tools, that's kind of, like you said, the Amazon approach, or are they gonna buy it from Microsoft and Google? I pulled some ETR data to say, okay, who are the top companies that are showing up in terms of spending? Who's spending with whom? AWS number one, Microsoft number two, Google number three, Databricks number four, just in terms of, you know, presence. And then it falls down: DataRobot, Anaconda, Dataiku; Oracle popped up, actually, 'cause they're embedding a lot of AI into their products; and of course IBM, and then a lot of smaller companies. But do customers generally have the resources to do what it takes to implement AI into applications and into workflows? >> So a couple of things on that. One is, it's no surprise that the top three are the hyperscalers, because they all want you to bring your business to them to run the specific workloads, and the next biggest workloads, as Swami was saying in his keynote, are two things: one is the AI/ML workloads, and the other one is the heavy unstructured workloads that he was talking about. 80%, 90% of the data that's coming in is unstructured. So how do you analyze that, such as the geospatial data he was talking about? The volumes of data you need to analyze, and the deep neural nets you need to use: only hyperscalers can do it, right? So it's no wonder all of them are on top for the data. One of the things they announced, which not many people paid attention to, was the zero-ETL that they talked about. What that does is a little bit of a game-changing moment, in the sense that you don't have to... for example, if you were to train on the data, and the data is distributed everywhere, you'd have to bring it all together and integrate it, and that's a lot of work to do the ETL. So by taking Amazon Aurora and Redshift and combining them with zero or no ETL, and then having Apache Spark applications run on top for analytical applications and ML workloads: that's huge. So you don't have to move the data around; you use the data where it is. >> I think you said it: they're basically filling holes, right? Yeah. They created this, you know, suite of tools, let's call it. You might say it's a mess. It's not a mess, because they're really powerful, but they're not well integrated, and now they're starting to tape the seams, as I say. >> Well yeah, it's a great point. And I would double down and say, look, I think that boring is good. You know, we had that phase in the Kubernetes hype cycle where it got boring, and that was kind of like, boring is good. Boring means we're getting better, we're invisible. That's infrastructure; that's in-the-weeds, in-between-the-toes details. It's the stuff, you know, people have to get done. So you look at their 40 new data sources with Data Wrangler, 50 new AppFlow connectors, Redshift auto-copy: this is boring. Good, important shit, Dave. The governance, you gotta get it, and the governance is gonna be key. So to me, this may not jump off the page. Adam's keynote also felt a little bit of "we gotta get these gaps done," in a good way. >> Now going back to the bigger picture, I think the real question is, can there be another independent data cloud? And that's, to me, what I tried to get at in my story, and your Breaking Analysis kind of hit a home run on this: there's an interesting opportunity for an independent data cloud, meaning something that isn't AWS, that isn't Google, that isn't one of the big three, that could sit in between. And so let me give you an example. I had a conversation last night with a bunch of ex-Amazonian engineering teams. The conversation was interesting, Dave. They were talking: well, Databricks and Snowflake are basically batch, okay, not transactional. And you look at Aerospike, I can see their booth here; transactional databases are hot right now. Streaming data is different. Confluent is different than Databricks. Is Databricks good at hosting? No, Amazon's better. So you start to see these kinds of questions come up where, you know, Databricks is great, but maybe not good for this, that, and the other thing. So you start to see the formation of swim lanes, or visibility into where people might sit in the ecosystem. But what came out was transactional. Yep. And batch, the relationship there, and streaming, real time, versus, you know, the transactional data. So you're starting to see these new things emerge. Andy, what's your take on this? You're following this closely. This seems to be the alpha-nerd conversation, and it all points to who's gonna have the best data cloud; say, data superclouds, I call it. What's your take? >> Yes, the data cloud is important as well, but also the computation that goes on top of it, right? Because when the data is unstructured data, and that much of a huge volume of data, it's going to be hard to do that with low-powered compute. But going back to your data point: the training of the AI/ML models requires the batch data, right? That's when you need all the historical data to train your models. And then after that, when you do inference on it, that's where you need the streaming, real-time data that's available to you, so you can make an inference. One of the things they also announced, which is somewhat interesting, is that they have like 700 different instances geared towards every single workload, and some of them run very specifically on Amazon's new chips, the Inferentia2 and Trainium Trn1 chips, so they not only have specific instances but they also run on high-powered chips. And then, if you have the data to support that, both for the training as well as towards the inference, the efficiency... again, those numbers have to be proven. They claim it could be anywhere between 40 to 60% faster. >> Well, so a couple things. You're definitely right. I mean, Snowflake started out as a data warehouse that was simpler, and it's not architected, you know, in its first wave to do real-time inference, which is not now... how could they? The other second point is Snowflake's two or three years ahead when it comes to governance, data sharing. I mean, Amazon's doing what it always does: it's copying, you know, it's customer driven. 'Cause they probably walk into an account and they say, "Hey look, what's Snowflake doing for us? This stuff's kicking ass." And they go, "Oh, that's a good idea, let's do that too." You saw that with separating compute from storage, which is their tiering. You saw it today with extending data sharing, Redshift data sharing. So how do Snowflake and Databricks approach this? They deal with ecosystem. They bring in ecosystem partners, they bring in open source tooling, and that's how they compete. I think there's unquestionably an opportunity for a data cloud. >> Yeah, I think the super cloud conversation, and then, you know, Sky Cloud with the Berkeley paper and other folks talking about this kind of pre-multi-cloud era... I mean, that's what I would call us right now. We are kind of in the pre-era of multi-cloud, which, by the way, is not even yet defined. I think people use that term, Dave, to say, you know, some sort of magical thing that's happening. Yeah. People have multiple clouds. They end up there by default, not by design, as Dell likes to say, right? And they gotta deal with it. So it's more that they're inheriting multiple cloud environments; it's not necessarily what they want in the situation. So to me, that is a big, big issue. >> Yeah, I mean, again, going back to your Snowflake and Databricks announcements: they're data companies. That's how they made their mark in the market, saying, you know, I do all those things, therefore you have to have your data with me, because it's seamless data. And Amazon is catching up with that with a lot of the announcements they made. How far it's gonna get traction, you know, remains to be seen. >> Yeah, I mean, to me there's no doubt about it, Dave. I think what Swami is doing, if Amazon can corner the market on out-of-the-box ML and AI capabilities so that people can make it easier, that's gonna be the tell sign at the end of the day: can they fill in the gaps? Again, boring is good competition. I don't know, I mean, I'm not following the competition. Andy, this is a real question mark for me. I don't know where they stand. Are they more comprehensive? Are they deeper? Do they have deeper services? I mean, obviously it shows all the different, you know, capabilities. Where does Amazon stand? What's the process? >> So particularly when it comes to the models, they're going at it from a different angle: you know, I will help you create the models; we talked about the zero-ETL and the whole data piece. We'll get the data sources in, we'll create the model, we'll move the whole model... we are talking about the MLOps teams here, right? And they have the whole functionality that they built in over the years. So essentially they want to become the platform: when you come in, I'm the only platform you would use, from model training to deployment to inference to model versioning to management, the whole nine yards. That's the angle they're trying to take. So it's a one-source platform. >> What about this idea of technical debt? Adrian Cockcroft was on yesterday. John, I know you talked to him as well. He said, look, Amazon's Legos. You wanna buy a toy for Christmas, you can go out and buy a toy, or do you wanna build one? If you buy a toy, in a couple years it could break, and what are you gonna do? You're gonna throw it out. But if part of your Lego needs to be extended, you extend it. So, you know, George Gilbert was saying, well, there's a lot of technical debt. Adrian was countering that. Does Amazon have technical debt, or is that Lego blocks analogy the right one? >> Well, I talked to him about the debt, and one of the things we talked about was, what do you optimize for, EC2 APIs or Kubernetes APIs? It depends on what team you're on. If you're on the runtime side, you're gonna optimize for Kubernetes, but EC2 is the resources you want to use. So I think the idea of 15 years of technical debt, I don't believe that. I think the APIs are still hardened. The issue that he brings up that I think is relevant is that it's an "and" situation, not an "or." You can have the bag of Legos, which is the primitives, and build a durable application platform: monitor it, customize it, work with it, build it. It's harder, but the outcome is durability and sustainability. Buying a toy, having a toy with those Legos glued together for you, you can get it to play with, but it'll break over time. Then you gotta replace it. So there's gonna be a toy business, and there's gonna be a Legos business. Make your own. >> So who are the toys in AI? >> Well... >> Out of the box, and who's out of Legos? >> So you're asking about what toys Amazon is building? >> Or... yeah, I mean, Amazon clearly is Lego blocks. >> If people are gonna have out-of-the-box... >> What about Google? What about Microsoft? Are they basically more building toys, more solutions? >> So Google is more of the building-solutions angle, like, you know, "I give you an API" kind of thing. But if it comes to vertical industry solutions, Microsoft is ahead, right? Because they have had years of industry experience. I mean, there are other, smaller clouds trying to do that too, IBM being an example, but now they are starting to go after the specific industry use cases. They think that through. For example, you know, the medical one we talked about, right? They want to build the health lake, the secure health lake that they're trying to build, which will be HIPAA-compliant, and it'll cover the European regulations, the whole nine yards, and it'll help you, you know, personalize things as you need as well. For example, you know, if you go for a certain treatment, it could analyze you based on your genome profile, saying that the treatment for this particular person has to be individualized this way. But doing that requires enormous compute power, right? So if you do applications like that, you could bring in a lot of them, whether healthcare, finance, or what have you, and then make it easy for them to use. >> What's the biggest mistake customers make when it comes to machine intelligence, AI, machine learning? >> So many things, right? I could start out with even the model. Basically, when you build a model, you should be able to figure out how long that model is effective. Because as good as creating a model and going to the business and doing things the right way is, there are people that leave the model in much longer than it's needed, and it's hurting your business more than it's helping. It could be things like that. Or you're not building it responsibly; you have a bias in your model. There are so many issues. I don't know if I can pinpoint one, but there are many, many issues: responsible AI, ethical AI. >> All right, well, we'll leave it there. You're watching theCUBE, the leader in high tech coverage, here on day three at re:Invent. I'm John Furrier with Dave Vellante, and Andy joining us here for the critical analysis and breaking down the commentary. We'll be right back with more coverage after this short break.
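One concrete way to read Andy's zero-ETL point above: once Aurora transactions replicate into Redshift automatically, an analytics job simply queries them in place. Here is a hedged sketch using boto3's Redshift Data API; the cluster, database, user, and table names are hypothetical, and it assumes the zero-ETL integration is already configured.

```python
import boto3  # assumes AWS credentials are configured in the environment

rsd = boto3.client("redshift-data")
resp = rsd.execute_statement(
    ClusterIdentifier="analytics-cluster",  # hypothetical cluster
    Database="dev",
    DbUser="analyst",
    Sql="""
        SELECT product_id, COUNT(*) AS orders
        FROM aurora_zeroetl.orders  -- table replicated from Aurora, no pipeline
        WHERE order_ts > DATEADD(hour, -1, GETDATE())
        GROUP BY product_id
        ORDER BY orders DESC
        LIMIT 10;
    """,
)
print(resp["Id"])  # poll describe_statement / get_statement_result with this Id
```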

Published Date : Nov 30 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Jeff | PERSON | 0.99+
George Gilbert | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Microsoft | ORGANIZATION | 0.99+
Adrian | PERSON | 0.99+
Dave | PERSON | 0.99+
Andy | PERSON | 0.99+
Google | ORGANIZATION | 0.99+
IBM | ORGANIZATION | 0.99+
Facebook | ORGANIZATION | 0.99+
Adrian Carro | PERSON | 0.99+
Dave Volante | PERSON | 0.99+
Andy Thra | PERSON | 0.99+
90% | QUANTITY | 0.99+
15 years | QUANTITY | 0.99+
John | PERSON | 0.99+
Adam | PERSON | 0.99+
13 announcements | QUANTITY | 0.99+
Lego | ORGANIZATION | 0.99+
John Farmer | PERSON | 0.99+
Dave Ante | PERSON | 0.99+
two | QUANTITY | 0.99+
10 years | QUANTITY | 0.99+
AWS | ORGANIZATION | 0.99+
Dell | ORGANIZATION | 0.99+
Legos | ORGANIZATION | 0.99+
Bristol Myers Squibb | ORGANIZATION | 0.99+
Oracle | ORGANIZATION | 0.99+
Constellation Research | ORGANIZATION | 0.99+
One | QUANTITY | 0.99+
Christmas | EVENT | 0.99+
second point | QUANTITY | 0.99+
yesterday | DATE | 0.99+
Anaconda | ORGANIZATION | 0.99+
today | DATE | 0.99+
Berkeley Paper | ORGANIZATION | 0.99+
one | QUANTITY | 0.99+
eight | QUANTITY | 0.98+
700 different instances | QUANTITY | 0.98+
three years | QUANTITY | 0.98+
Swami | PERSON | 0.98+
Aerospike | ORGANIZATION | 0.98+
both | QUANTITY | 0.98+
Snowflake | ORGANIZATION | 0.98+
two things | QUANTITY | 0.98+
60% | QUANTITY | 0.98+

Snehal Antani, Horizon3.ai Market Deepdive


 

>> Welcome back, everyone, to our special presentation here on theCUBE with Horizon3.ai. I'm John Furrier, host of theCUBE, here in Palo Alto, back with Snehal Antani, CEO and co-founder of Horizon3, for a deep dive on what's under the hood of the big news, and also the platform: autonomous pen testing, changing the game in security. Great to see you. Welcome back. >> Thank you, John. I love what you guys have been doing with theCUBE. Huge fan, been here a bunch of times, and yeah, looking forward to the conversation. >> Let's get into it. So what does the market look like, and how do you see it evolving? We're in a down market relative to startups. Some say, and our data, we're reporting on SiliconANGLE and theCUBE, suggests there might be a bit of a downturn in the economy, with inflation, but the tech market is booming because the hyperscalers are still pumping out massive scale and still innovating. So, you know, for the first time in history, this is a recession or downturn where there are now cloud-scale players that are an economic engine. What's your view on this? Where's the market heading relative to the downturn, and how are you guys navigating that? >> So, I think about it like this. One, there's a lot of belief out there that we're going to hit a downturn, and we started to see that. We started to see deals get longer and longer to close back in May, across the board in the industry. We continue to see deals get at least back-loaded in the quarter as people understand their procurement, how much money they really have to spend, and what their earnings are going to be. So we're seeing this across the board: quarters are becoming lumpier for tech companies, and we think that's going to become kind of the norm over the next year. But what's interesting in our space of security testing is a very basic supply-and-demand problem. The demand for security testing has skyrocketed. When I was a CIO eight years ago, I only had to worry about my on-prem attack surface, my perimeter, and insider threat; those were my primary threat vectors. If I were a CIO now, I'd have to include multiple clouds, all of the data in my SaaS offerings, my Salesforce account and so on, as well as work-from-home threat vectors and other pieces. And I've got regulatory compliance in Europe, in Asia, in the U.S. Tons of demand for testing, and there's just not enough supply: there are only 5,000 certified pen testers in the United States. So I think, for starters, you have a fundamental supply-and-demand problem that plays to our strength, because we're able to bring a tremendous amount of pen-testing supply to the table. But now let's flip to: if you are the CEO of a large security company, whether it's a consulting shop or so on, you've got a whole bunch of deferred revenue in your business model around security testing services. And what we've done in my past, at previous companies I worked at, is that if we didn't think we were going to make the money for the quarter with product revenue, we would start to unlock some of that deferred services revenue to make the number, to hit what Wall Street expected of us. In testing, that's not possible, because there's not enough supply, except us. So if I'm the CEO of an MSSP or a large security company and I see a huge backlog of security testing revenue on the table, the easy button to convert that to recognized revenue is Horizon3. And when I think about the next six months and the amount of revenue misses we're going to see in security shops, especially those that can't fulfill their orders, I think there's a ripe opportunity for us to win. >> Yeah, one of the few opportunities where in any market you win, because the forces will drive your flywheel. >> That's exactly right. Very basic supply-and-demand forces that are only increasing with pressure, and there's no way around it: it takes 10 years just to build a master hacker. It's a very hard, complex space. We become the easy button to address that supply problem. >> Yeah, and the autonomous aspect makes appsec reviews keep up as new things get pushed. With cloud-native developers, they're shifting left, but the security policies still need to keep pace as these new threat vectors appear. I mean, that's what's happening: a new thing makes a vector possible. >> That's exactly right. I think there are two aspects. One is that as you increase change in your environment, you need to increase testing; they are absolutely correlated. The second thing, though, is that for 20 years we focused on remote code execution, or RCEs, as an industry: what was the latest RCE that gave an attacker access to my environment? But if you look over the past few years, that entire mindset has shifted. Credentials are the new code execution. What I mean by that is, if I have a large organization with a hundred, a thousand, ten thousand employees, all it takes is one of them to have a password I can crack in a credential spray and gain access as an attacker. And once I've gained access to a single user, I'm going to systematically snowball that into something of consequence. So I think the attackers have shifted away from looking for code execution and looked more towards harvesting credentials and cascading credentials from a regular domain user into an admin. >> This brings up the conversation I would like to deep dive into now: shift into more of the real landscape of the market, and your positioning and value proposition in that. And that is, managed services are becoming really popular as we move into this next wave of super cloud and multi-cloud and hybrid cloud. Because, I mean, multi-cloud and hybrid cloud sound good on paper, but the security ops become big. And one of the things we're reporting, here on theCUBE and SiliconANGLE over the past six months, is that DevOps has made the developer the IT team; they've essentially run IT now in the CI/CD pipeline, as they say. That means it's replaced by DataOps, or AIOps, or SecOps, and data and security kind of go hand in hand. So I can see that playing out. Do you believe that to be true, that that's kind of the new operational beachhead that's critical? And if so, if data is part of security, does that make security the new IT? >> Yeah, I think that if you think about organizations, hell, even for Horizon3 right now, I don't need to hire a CIO. I'll have a CSO, and that CSO will own IT and governance, risk, and compliance, and security operations. Because at the end of the day, the most pressing question for me to answer as a CEO is my security posture. IT is a supporting function of that security posture, and we see that at, say, a growth-stage company like Horizon3. But when I think about my time at GE Capital, we really shifted to this mindset of security by design, architecture as code, and it was very much a security-driven conversation. And I think that is the norm going forward. >> And how do you view the idea that you have to enable a managed service provider with security, which then manages the company, enabling them to have agile security, security as code? Because what you're getting at is this autonomous layer that's going to be automated away, to let the next talented layer, whether it's coder or architect, scale. So the question is: what gets abstracted away by automation? That seems to be the conversation coming out of this big cloud-native, or super cloud, next wave of cloud scale. >> I think there are two dimensions to that, and honestly, the more interesting dimension is not the technical side of it, but rather this: think of the Equifax hack a bunch of years ago. Had Equifax used a managed security services provider, would the CEO have been fired after the breach? And the answer is, probably not. I think the CEO would have transferred enough reputational risk and operational risk to the third-party MSSP to save his job from being, you know, from him being fired. You can look at that across the board. I think that if I were a CIO again, I would be hard-pressed to build my own internal security function, because I'm accepting that risk as an executive, and we saw what just happened at Uber. There's a ton of risk that comes with accepting that as a security person. So I think, in the future, the role of the MSSP becomes more significant, as a mechanism for transferring enough reputational, operational, and legal risk to a third party so that you, as the core company, are able to protect yourself and your people. >> And then, what do you think of super cloud principles and concepts being applied at MSSP scale? I think that becomes really interesting. Talk about the talent opportunity, because I think the managed service providers point to markets that are growing and changing. Also, having managed services means that the customers can't always hire talent, hence they go to a channel or a partner. This seems to be a key part of the growth in your area. Talk about the talent aspect of it. >> Yeah. Think back to what we saw in cloud. As cloud picked up, we saw IBM, HP, and other hardware companies sell more servers, but to fewer customers: Amazon, Google, and others, right? And I think something similar is going to happen in the security space, where you're going to see security tools providers selling more volume, but to fewer customers that are just really big MSSPs. So that is the path forward, and I think the underlying talent issue gives us economies of scale. That's what we saw with cloud, and we're going to see the same thing in the MSSP space: I've got a density of talent, plus a density of automation, plus a density of relationships and ecosystem, that give MSSPs a huge economies-of-scale advantage over everybody else. >> I mean, I want to get into the MSSP business; sounds like I'd make a lot of money. >> Yeah, definitely. It's profitable, no doubt about it. >> I've got to ask more about the burden side of it. Because if you're a partner, I don't need another training class, I don't need another tool, I don't need someone saying this is the highest-margin product. I need to actually downsize my tools. Right now there are hundreds of tools that MSSPs are dealing with all the time, and so does the customer. So, tools, platforms: we've kind of teased this out in previous conversations together, but what's more relevant to the MSSP is what they do for the customers. So talk about this burden of tools and the SOCs out there in the landscape. How do you view that, and what's the conversation like? >> On average, an organization has 130 different cybersecurity tools installed. None of those tools were designed to work together, none of those tools are from the same vendor, and in fact, oftentimes they're from vendors with competing products. And they're still getting breached. So in the industry, we don't have a tools problem; we have an effectiveness problem. We have to reduce the number of tools we have, get more effectiveness out of the existing infrastructure, build muscle memory in how to detect and respond to a breach, and continuously verify that posture. I think that's what the most successful security organizations have done: mastered the fundamentals. And they mastered that by making sure they were effective in detection and response, not by buying the next shiny AI tool on the defensive side. >> Okay, so you mentioned supply and demand earlier, and since you brought up economics, we'll get into the economic equations here. When you have great profits, that's going to attract more entrants into the marketplace. So as more MSSPs enter the market, you're going to start to see a little bit of competition: maybe some FUD, maybe some competitive price penetration, all kinds of different tactics. How does that impact you? Does that impact your price, or are they just competing on their own value? What does that mean for the channel as more entrants come in? "Hey, you know, I can compete against that other one": does that create conflict? Is that an opportunity? Are you neutral on that? What's the position? >> It's a great question, actually. I think the way it plays out is, one, we are neutral. Two, the MSSP has to stand on their own with their own unique value proposition; otherwise they're going to become commoditized. We saw this in the early cloud provider days: the cloud providers that were just basically wrapping existing hardware with a race-to-the-bottom pricing model didn't survive. Those that used the cloud infrastructure as a starting point to build higher-value capabilities are the ones that have succeeded to this day. The same model, I think, will occur in MSSPs: there's a base level of capability that they've got to be able to deliver, and it is the burden of the MSSP to innovate effectively to elevate their value proposition. >> It's an interesting dynamic, and I brought it up mainly because, if you believe this is going to be a growing new market, price erosion happens more in mature markets. So it's interesting to see that dynamic come up, and we'll see how that plays out on the economics and just the macro side of it. Getting more into the next-gen theme: autonomous pen testing is a leading indicator that a new kind of security assessment is here. If I said that to you, how do you respond? What does this new security assessment mean for the customer and for the partner, and for that relationship down the whole chain? >> Yeah. Back to wearing a CIO hat right now: don't tell me we're secure in PowerPoint. Show me we're secure today. Then show me we're secure again tomorrow, and then show me we're secure again next week. Because that's what matters to me. If you can show me we're secure, I can understand the risk I'm accepting and articulate it up to my board, to my regulators. Up until now, we've had a "PowerPoint tells me we're secure" culture in security, and I just don't think that's going to last all that much longer. So I think the future of security testing and assessment is this shift from a PowerPoint report to truly showing me that I'm secure enough. >> You guys auto-generate those statements now; you mentioned that earlier. >> That's exactly right. Because the other part is, the classic way to do security reports was garbage in, garbage out. You had a human theoretically fill out a spreadsheet that magically came up with a risk score or security posture. That doesn't work; that's a check-the-box mentality. What you want to have is an accurate, high-fidelity understanding of your blind spots, your threat vectors, what data is at risk, what credentials are at risk. You want to look at those results over time: how quickly did I find problems, how quickly did I fix them, how often did they reoccur? And that is how you get to a "show me we're secure" culture. >> Whether I'm a company or a channel partner working with Horizon3.ai, I have to put my name on the line and say, here's a service level agreement I'm going to stand behind. There are levels of compliance; you mentioned that earlier. How do you guys help in that area? Because that becomes what I call the below-the-line, "got to do it anyway" work. Usually they grind out that work, but it has to be fundamental, because if the threat vectors are increasing, and you're handling it like you say you are, in real time, today, tomorrow, the next day, you've got to have that other stuff flow into it. Can you describe how that works under the hood? >> Yeah, there are two parts to it. The first part is that attackers don't have to hack in with zero-days; they log in with credentials that they found. But often what attackers are doing is chaining together different types of problems. So if you have 10 different tactics, you can chain those together in a number of different ways. It's not just 10 to the 10th; because you don't have to use all the tactics at once, it's actually a very large number of combinations that an attacker can apply upon you. So at the base level, what you want to know is: what are the primary tactics being used (and those tactics are always being added to and evolving), and what are the primary outcomes an attacker is trying to achieve? Steal your data, disrupt your systems, become a domain admin and burrow in. And now what you have actually looks more like a chess-game algorithm than any sort of hard-coded automation or anything else: based on the pieces on the board, the IT infrastructure I've discovered, what is the next best action to become a domain admin or steal your data? And that's the underlying innovation and IP we've created: next-best-action knowledge graph analytics, and the adaptiveness to figure out how to combine different problems together to achieve an objective that an attacker cares about. >> So the 3D chess players out there, I'd say, are the practitioners implementing it. But when I think about compliance managers, I don't see 3D chess players; I see back-office accountants, in my mind. Do they actually even understand what comes out of that? How do you handle the compliance side? Do you guys just check the boxes there, or is it not part of it? I don't envision the compliance guys on the front lines identifying vectors; they don't even know what it means. >> Yeah, it's a great question. When you think about the market segmentation, we've seen three basic types of users. You've got the really mature, high-frequency security testing, purple-team-type folks, and for them, we are the force multiplier that helps them secure the environment. You then have the middle group, where the IT person and the security person are the same individual. They are barely treading water, they don't know what their attack surface is, and they don't know what to focus on. That's actually where we started, with the "barely treading water" persona, and that's why we had a product that helped those network engineers become superheroes. The third segment is those that view security and compliance as synonymous. They don't really care about continuous; they care about running and checking the box for PCI and whatever else. And those customers, while they use us, are better served by our partner ecosystem. So the first two categories tend to use us directly, with self-service pen tests as often as they want, and the compliance-minded folks end up going through our partners, because they're better served there. >> Snehal, great to have you on. Thanks for this deep dive, the under-the-hood section of the interview. Appreciate it. And I think autonomous is an indicator beyond pen testing. Pen testing has become, like, okay, penetration security, but this is not going away. Where do you see this evolving? What's next for Horizon3? Take a minute to give a plug for what's going on with the company. How do you see it? I know you've got good margins, you're raising capital, always raising money, you're not yet public, and looking good right now, as they say. >> Yeah. Well, I think, first, our company strategy is in three chapters. Chapter one is: become the best security testing platform in the industry, period. That's it. Be very good at helping you find and fix your security blind spots. That's chapter one. We've been crushing it there, with great customer traction and great partner traction. Chapter two, which we've started to enter, is looking at our results over time to help that GRC officer or auditor accurately assess the security posture of an organization, and we're going to enter that chapter about this time next year. Longer term, though, the big vision I have is: how do I use offense to inform defense? So for me, chapter three is how I get away from just security testing towards autonomous security overall, where you can use our security testing platform to identify ways to attack, and that informs defensive tools exactly where to focus and how to adjust, and so on. And now you've got an integrated learning loop between attack and defense. That's the future; it's never been done before. Master the art of attack to become a better defender. That is the bigger vision of the company. >> Love the new paradigm of security. Congratulations. We've been following you guys, and we will continue to follow you. Thanks for coming on this special report. Congratulations on the new market expansion, international, going indirect in a big way. Congratulations. >> Thank you, John. Appreciate it. >> Okay, this is a special presentation with theCUBE and Horizon3.ai. I'm John Furrier, your host. Thanks for watching. >> Thank you.
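Snehal's "next best action" description is essentially cheapest-path planning over a graph whose nodes are attacker states and whose edges are tactics. The toy sketch below illustrates that idea with Dijkstra's algorithm; the graph, tactics, and costs are all invented for illustration, and this is in no way Horizon3.ai's actual algorithm or data.

```python
import heapq

# Hypothetical toy graph: state -> [(next_state, tactic, cost)]
graph = {
    "foothold":     [("user_creds", "credential spray", 2)],
    "user_creds":   [("file_share", "share enumeration", 1),
                     ("domain_admin", "kerberoast and crack", 5)],
    "file_share":   [("domain_admin", "cached admin credentials", 2)],
    "domain_admin": [],
}

def next_best_actions(start, goal):
    """Dijkstra over the tactic graph; returns (cost, tactic chain) to the goal."""
    queue = [(0, start, [])]
    seen = set()
    while queue:
        cost, state, path = heapq.heappop(queue)
        if state == goal:
            return cost, path
        if state in seen:
            continue
        seen.add(state)
        for nxt, tactic, step_cost in graph[state]:
            heapq.heappush(queue, (cost + step_cost, nxt, path + [tactic]))
    return None

print(next_best_actions("foothold", "domain_admin"))
# -> (5, ['credential spray', 'share enumeration', 'cached admin credentials'])
```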

Published Date : Oct 11 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
10 years | QUANTITY | 0.99+
Snehal Antani | PERSON | 0.99+
Equifax | ORGANIZATION | 0.99+
20 years | QUANTITY | 0.99+
Europe | LOCATION | 0.99+
John | PERSON | 0.99+
Palo Alto | LOCATION | 0.99+
GE Capital | ORGANIZATION | 0.99+
Uber | ORGANIZATION | 0.99+
next week | DATE | 0.99+
Tony | PERSON | 0.99+
PowerPoint | TITLE | 0.99+
two parts | QUANTITY | 0.99+
10 different tactics | QUANTITY | 0.99+
tomorrow | DATE | 0.99+
U.S | LOCATION | 0.99+
first part | QUANTITY | 0.99+
United States | LOCATION | 0.99+
John Furrier | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
GRC | ORGANIZATION | 0.99+
third segment | QUANTITY | 0.99+
IBM | ORGANIZATION | 0.99+
two aspects | QUANTITY | 0.99+
10th | QUANTITY | 0.99+
Asia | LOCATION | 0.99+
first two categories | QUANTITY | 0.99+
three basic types | QUANTITY | 0.99+
May | DATE | 0.99+
10 | QUANTITY | 0.98+
first time | QUANTITY | 0.98+
today | DATE | 0.98+
second thing | QUANTITY | 0.98+
Cloud | TITLE | 0.97+
eight years ago | DATE | 0.97+
Horizon 3 | TITLE | 0.96+
hundreds of tools | QUANTITY | 0.95+
next year | DATE | 0.95+
single user | QUANTITY | 0.95+
horizon | ORGANIZATION | 0.94+
Horizon 3.ai | TITLE | 0.93+
one | QUANTITY | 0.93+
past six months | DATE | 0.93+
hundred a thousand ten thousand employees | QUANTITY | 0.92+
5 000 certified pen testers | QUANTITY | 0.92+
zero days | QUANTITY | 0.92+
130 different cyber security tools | QUANTITY | 0.91+
next day | DATE | 0.9+
wave | EVENT | 0.89+
Horizon 3.a | ORGANIZATION | 0.88+
three | QUANTITY | 0.87+
next six months | DATE | 0.87+
SAS | ORGANIZATION | 0.87+
chapter three | OTHER | 0.86+
Horizon 3 | ORGANIZATION | 0.85+
lot of money | QUANTITY | 0.82+
first thing | QUANTITY | 0.77+
CEO | PERSON | 0.74+
niho | PERSON | 0.72+
chapter one | OTHER | 0.71+
of years ago | DATE | 0.7+
chapter two | OTHER | 0.7+
two Dimensions | QUANTITY | 0.7+
past few years | DATE | 0.7+
Street | LOCATION | 0.7+
Horizon | ORGANIZATION | 0.7+
3 | TITLE | 0.65+
Salesforce | TITLE | 0.64+
Wall Street | ORGANIZATION | 0.63+
two | QUANTITY | 0.61+
Google | ORGANIZATION | 0.61+
HP | ORGANIZATION | 0.61+
3.ai | TITLE | 0.6+
CSO | TITLE | 0.59+
users | QUANTITY | 0.5+
Wall | ORGANIZATION | 0.5+
Today | DATE | 0.47+

Horizon3.ai Signal | Horizon3.ai Partner Program Expands Internationally


 

(music)
>> Hello, I'm John Furrier with theCUBE, and welcome to this special presentation of theCUBE and Horizon 3.ai. They're announcing a global partner-first approach, expanding their successful pen testing product, NodeZero. You're going to hear from leading experts on their staff and their CEO, positioning themselves for a successful channel distribution expansion internationally in Europe, the Middle East, Africa, and Asia Pacific. In this CUBE special presentation, you'll hear about the expanded partner program, giving partners a unique opportunity to offer NodeZero to their customers. Innovation in pen testing is going international with Horizon 3.ai. Enjoy the program. (music)
>> Welcome back, everyone, to theCUBE and Horizon 3.ai special presentation. I'm John Furrier, host of theCUBE. We're here with Jennifer Lee, head of channel sales at Horizon 3.ai. Jennifer, welcome to theCUBE. Thanks for coming on.
>> Great, well, thank you for having me.
>> So, big news around Horizon 3.ai driving a channel-first commitment. You guys are expanding the channel partner program to include all kinds of new rewards, incentives, and training programs to help educate partners and really drive more recurring revenue. Certainly cloud and cloud scale have done that, and you've got a great product that fits into that kind of channel model, great services you can wrap around it. Good stuff. So let's get into it: what are you guys doing with this news, and why is it so important?
>> Yeah, for sure. Like you said, we recently expanded our channel partner program. The driving force behind it was really to align with our channel-first commitment and to create awareness around the importance of our partner ecosystem, because that's really how we go to market: through the channel.
>> And a great international focus. I've talked with the CEO, so we know about the solution, and he broke down all the action on why it's important on the product side. But why now on the go-to-market change? What's the "why" behind this big news on the channel?
>> For sure. We're doing this now to align with our business strategy, which is built on the concept of enabling our partners to create a high-value, high-margin business on top of our platform. We offer a solution called NodeZero: it provides autonomous pen testing as a service, and it allows organizations to continuously verify their security posture. Our company vision has this tagline: our pen testing enables organizations to see themselves through the eyes of an attacker, and we use the attacker's perspective to identify exploitable weaknesses and vulnerabilities. We created this partner program from the partner's perspective — we've built it through the eyes of our partner — so we're prioritizing what the partner is looking for, and that will ensure mutual success for us.
>> The partners always want to get in front of the customers and bring new stuff to them. Pen tests have traditionally been really expensive, so bringing it down to a service level that's affordable and has flexibility to it allows a lot of capability, so I imagine people are getting excited by it. I have to ask you about the program. What specifically are you guys doing? Can you share any details around what it means for the partners, what they get, what's in it for them? Can you break down some of the mechanics and mechanisms, the details?
>> Yep. We're really looking to create business alignment and, like I said, establish mutual success with our partners. There are two key elements we bring to the partners. The first is the opportunity for profit margin expansion, and a way for our partners to really differentiate themselves and stay relevant in the market. We've restructured our discount model to highlight and maximize profitability. This includes deal registration — we've created a deal registration program — we've increased the discount for partners who take part in our partner certification trainings, and we've created some other partner incentives that are going to help out there. We've also recently gone live with our partner portal. It's a consolidated experience for our partners where they can access our sales tools. We really view our partners as an extension of our sales and technical teams, so we've taken all the training material we use internally and made it available to our partners through the partner portal. The portal also holds our partner certification information — all the content delivered during that training can be found there — plus deal registration, co-branded marketing materials, and pipeline management, so it gives our partners a one-stop place to find all of that information. Then, really quickly, on the second element I mentioned: our technology is genuinely disruptive to the market. Autonomous pen testing is still a relatively new topic for security practitioners, and it's proven to be really disruptive. On top of that, we recently found a MarketsandMarkets report showing that the global pen testing market is expanding, expected to grow to about 2.7 billion dollars by 2027.
>> So the market's there: it's expanding, it's growing. For our partners, it allows them to grow their revenue across their customer base, expand their customer base, and offer this high-profit-margin product while getting to market early on a disruptive technology.
>> Big market, a lot of opportunities to make some money. People love to put more margin on those deals, especially when you can bring a great solution that everyone knows is hard to do, so I think that's going to provide a lot of value. Is there a type of partner that you see emerging, or that you're aligning with? You mentioned the alignment with the partners — I can see how the training and the incentives are all there, sounds like it's all going well — but is there a type of partner that's resonating the most, or categories of partners that can take advantage of this?
>> Absolutely. We work with all different kinds of partners. We work with our traditional resale partners, we're working with systems integrators, we have a really strong MSP/MSSP program, and we've got consulting partners — especially the ones that offer pen test services, where we act as a force multiplier, offering them profit margin expansion. We've got technology partners we work with for co-sell opportunities, and then our cloud partners, which you mentioned earlier: we're in AWS Marketplace, we have CPPO partners, we're part of the ISV Accelerate program, so we're doing a lot there with our cloud partners. And of course, we go to market with distribution partners as well.
>> Gotta love the opportunity for more margin expansion; every kind of partner wants to put more gross profit on their deals. Is there a certification involved? I have to ask: do people get certified, or is it just training? Is it self-paced? Is it in person? How are you doing the whole training and certification thing, and is it a requirement?
>> Absolutely, we do offer a certification program, and it's been very popular. It includes a seller's portion and an operator portion, and it's at no cost to our partners. We run it virtually but live — it's not self-paced — and we also have in-person sessions. We can customize these for any partner that has a large group of people, and do one, in person or virtual, just for that partner.
>> And any incentive opportunities and marketing opportunities? Everyone loves to get the deals rolling in, the leads. From what we can see in our early reporting, this looks like a hot product, price-wise, service-level-wise. What incentives are you thinking about, and joint marketing? You mentioned co-sell earlier, and pipeline, so I was honing in on that piece.
>> Sure. Following on from our partner certification program, we incentivize our partners there: if they have a certain number of people certified, their discount increases. We have our deal registration program that increases discount as well, and we have some partner incentives wrapped around meeting setting and moving opportunities along to proof of value.
>> Gotta love the education driving value. I have to ask you: you've been around the industry, you've seen the channel relationships out there, companies old school and new school. Horizon 3.ai is kind of that new school — very cloud-specific, a lot of leverage with, as we mentioned, AWS and all the clouds. Why is the company so hot right now? Why did you join them, and why are people attracted to this company? What's the attraction, what's the vibe, what did you see in this company?
>> Well, like I said, it's very disruptive, and it's in really high demand right now, because it's new to market, a newer technology. We can collaborate with a manual pen tester, we can allow our customers to run their pen tests with no specialty teams, and our partners can actually build profitable businesses: they can use our product to increase their services revenue and build their business model around our services.
>> What's interesting about the pen test thing is that it's very expensive and time consuming, and the people who do them are very talented people who could be working on bigger things for customers. So bringing this into the channel — if you look at the price delta between a traditional pen test and what you guys are offering, that's a huge margin gap between the street price of today's pen test and what you offer. When you show people that, do they say it's too good to be true? What do they say when you show them? Do they scratch their heads, like, come on, what's the catch here?
>> Right, so the cost savings is huge for us. And then also, like I said, working as a force multiplier with a pen testing company that offers the services: they can do their annual manual pen tests that may be required around compliance regulations, and then we can act as the continuous verification of their security, which they can run weekly. So it's an addition to what they're offering already, and an expansion.
>> So, Jennifer, thanks for coming on theCUBE. Really appreciate you coming on and sharing the insights on the channel. What's next? What can we expect from the channel group? What are you thinking?
>> Right, so we're really looking to expand our channel footprint, very strategically. We've got some big plans for Horizon 3.ai.
>> Awesome. Well, thanks for coming on, really appreciate it. You're watching theCUBE, the leader in high tech enterprise coverage. (music)
>> Hello, and welcome to theCUBE's special presentation with Horizon 3.ai, with Rainer Richter, Vice President of EMEA — Europe, Middle East, and Africa — and APAC, Asia Pacific, for Horizon 3. Welcome to this special CUBE presentation. Thanks for joining us.
>> Thank you for the invitation.
>> So Horizon 3.ai is driving global expansion: big international news with a partner-first approach. You guys are expanding internationally, so let's get into it. You're driving this newly expanded partner program to new heights. Tell us about it. What are you seeing in the momentum? Why the expansion? What's all the news about?
>> Well, I would say internationally we have a similar situation to the US. There is a global shortage of well-educated penetration testers on the one hand; on the other hand, we have a rising demand for network and infrastructure security. With our approach of autonomous penetration testing, I believe we are totally on top of the game, especially as we are now starting with an international instance. That means, for example, if a customer in Europe is using our service NodeZero, he will be connected to a NodeZero instance located inside the European Union, and therefore he doesn't have to worry about the conflict between the European GDPR regulations and the U.S. CLOUD Act. So there we have a very good package for our partners, one they can use as a differentiator for their customers.
>> You know, we've had great conversations here on theCUBE with the CEO and founder of the company around the leverage of the cloud and how successful that's been for the company, and I can connect the dots here, but I'd like you to weigh in more on how that translates into the go-to-market, because you've got great cloud scale with the security product, and I've seen a lot of success there. What's the momentum on the channel partner program internationally? Why is it so important to you? Is it just the regional segmentation? Is it the economics? Why the momentum?
>> Well, there are multiple issues. First of all, there is a rising demand for penetration testing, and don't forget that internationally we have a much higher percentage of SMB and mid-market customers. Most of these customers typically didn't have a pen test done even once a year; for them, pen testing was simply too expensive. Now, with our offering together with our partners, we can provide different ways customers can get autonomous pen testing done more than once a year, at even lower cost than a traditional manual pen test. That's because we have our Consulting Plus package, which is typically for pen testers: they can go out and do much faster, much quicker pen tests at many customers, one after another, so they can do more pen tests at a lower, more attractive price. On the other side, there are others — even the same partners — who provide NodeZero as an MSSP service, so they can go after SMB customers, saying, okay, you only have a couple of hundred IP addresses, no worries, we have the perfect package for you. And then you have the mid-market, say thousands of employees and more, where they might even have a very traditional annual subscription. But for all of them it's the same: the customer or the service provider doesn't need a piece of hardware; they only need to install a small Docker container, and that's it. That makes it so smooth to go in and say, okay, Mr. Customer, we just put this virtual attacker into your network, and that's it — all the rest is done. Within three clicks, they can act like a pen tester with 20 years of experience.
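That Docker-based deployment model is simple enough to sketch. As a rough illustration only — the image name and run token below are hypothetical placeholders, not Horizon 3.ai's actual artifact names or API — launching such a "virtual attacker" container from a host inside the network might look like this:

```python
import subprocess

# Hypothetical image name and one-time token -- placeholders for
# illustration, not Horizon 3.ai's actual registry or interface.
IMAGE = "registry.example.com/nodezero:latest"
RUN_TOKEN = "<one-time-token-from-portal>"

def launch_attacker_container() -> None:
    """Pull and start the pen-test container on a host inside the network.

    Host networking lets the container see the same subnets the host sees,
    which is what an assumed-breach starting position requires.
    """
    subprocess.run(["docker", "pull", IMAGE], check=True)
    subprocess.run(
        [
            "docker", "run", "--detach", "--rm",
            "--network", "host",                # see the internal network
            "--env", f"RUN_TOKEN={RUN_TOKEN}",  # tie the run to a portal session
            IMAGE,
        ],
        check=True,
    )

if __name__ == "__main__":
    launch_attacker_container()
```

The point of the "three clicks" claim is that everything after this — reconnaissance, exploitation, reporting — happens without further operator input.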
>> And that's going to be very channel friendly and partner friendly, I can almost imagine. So I have to ask you — and thank you for calling out that breakdown and segmentation, that was very helpful for me to understand — but I want to follow up, if you don't mind: what type of partners are you seeing the most traction with, and why?
>> Well, at the beginning you typically have the innovators, the early adopters, typically boutique-sized partners. They start because they are always looking for innovation, so those are the ones who start at the beginning. We have a wide range of partners, mostly managed by the owner of the company, so they immediately understand, okay, there is the value, and they can change their offering. They're changing their offering in terms of penetration testing because they can do more pen tests, and they can then add other services. Or we have those who offered pen test services but did not have their own pen testers, so they had to go out on the open market and source pen testing experts to get the pen test at a particular customer done. Now, with NodeZero, they're totally independent. They can go out and say, okay, Mr. Customer, here's the service, that's it, we turn it on, and within an hour you're up and running.
>> Yeah, and those pen tests are usually expensive and hard to do. Now it's right in line with the sales delivery — pretty interesting for a partner.
>> Absolutely. But on the other hand, we are not killing the pen testers' business. With NodeZero we provide something like the foundational work: the ongoing penetration testing of the infrastructure and the operating systems. The pen testers themselves can then concentrate on things like application pen testing, for example — services we are not touching. So we're not killing the pen tester market; we're just taking over the ongoing, let's say, foundational work, call it that.
>> Yeah, that was one of my questions. There's a lot of interest in this autonomous pen testing: one, because it's expensive to do, and because the skills required are in demand and expensive. So you cover the entry level and the blockers that are in there — I've seen people say to me that the pen test becomes a blocker for getting things done. So there's been a lot of interest in autonomous pen testing and in organizations having that posture, and it's an overseas issue too, because now you have that ongoing thing. Can you explain that particular benefit: continuously verifying an organization's posture?
>> Certainly. Typically, you have to do your patches, you have to bring in new versions of operating systems, of different services and components, and they are always bringing new vulnerabilities. The difference here is that with NodeZero we tell the customer, or the partner, which are the executable vulnerabilities. Previously, they might have had a vulnerability scanner. That vulnerability scanner brought up hundreds or even thousands of CVEs, but didn't say anything about which of them are really executable. Then you need an expert digging into one CVE after the other, finding out whether it is really executable, yes or no — and that is where you need highly paid experts, of which we have a shortage. With NodeZero, we can now say: okay, we tell you exactly which ones you should work on, because those are the ones that are executable, and we rank them according to the risk level and how easily they can be used. And then the good thing, in contrast to the traditional penetration test: they don't have to wait a year for the next pen test to find out if the fixing was effective. They just run the next scan and see: yes, the closed vulnerability is gone.
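The triage logic Richter describes — keep only what was proven executable, then rank by how cheaply an attacker can use it rather than by raw scanner severity — fits in a few lines. A minimal sketch with invented findings data, not the product's actual scoring model:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    host: str
    cvss: float               # scanner-assigned severity
    proven_exploitable: bool  # did an actual attack step succeed?
    ease: float               # 0..1, how cheaply an attacker can use it

# Made-up results: a scanner would report all four; only two were
# actually executed against this environment.
findings = [
    Finding("CVE-2021-44228", "app01", 10.0, True,  0.9),
    Finding("CVE-2019-0708",  "rdp07",  9.8, False, 0.2),
    Finding("CVE-2020-1472",  "dc01",   9.0, True,  0.8),
    Finding("CVE-2017-0144",  "file02", 8.1, False, 0.3),
]

# Keep only what was proven exploitable, then rank by ease of use
# instead of raw CVSS score.
worklist = sorted(
    (f for f in findings if f.proven_exploitable),
    key=lambda f: f.ease,
    reverse=True,
)
for f in worklist:
    print(f"fix now: {f.cve} on {f.host} (ease={f.ease})")
```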
>> The time is really valuable, and if you're doing any DevOps or cloud-native work, you're always pushing new things, so ongoing pen testing is actually a benefit just in general, as a kind of hygiene. Really interesting solution, and bringing that global scale is going to be a new coverage area for us, for sure. I have to ask, if you don't mind answering: what particular region are you focused on, or plan to target, for this next phase of growth?
>> At this moment we are concentrating on the countries inside the European Union, plus the United Kingdom. I'm based in the Frankfurt area, so logically we cover more or less the countries just around: the DACH region — Germany, Switzerland, Austria — plus the Netherlands. But we also already have partners in the Nordics, in Finland and Sweden, and we have partners in the UK, so it's growing rapidly. For example, we are now starting some activities in Singapore, and also in the Middle East area. Very importantly, depending on the way business is done, we currently try to concentrate on those countries where we can have at least English as an accepted business language.
>> Great. Is there any particular region you're having the most success with right now? It sounds like the European Union is kind of the first wave. What's hot?
>> Yes, that's definitely the first wave, and now we're also getting the European instance up and running. It's clearly our commitment to the market: we know there are certain dedicated requirements, and we take care of them. We're just launching it — we're building up this instance in the AWS service center here in Frankfurt, also with some dedicated hardware, in a data center in Frankfurt where, with DE-CIX, by the way, we have the highest internet interconnection bandwidth on the planet, so we have very short latency to wherever you are on the globe.
>> That's a great call-out benefit too; I was going to ask that. What are some of the benefits your partners are seeing in EMEA and Asia Pacific?
>> I would say the benefit is clearly that they can talk with customers and offer them penetration testing that those customers didn't even think about before, because penetration testing done the traditional way was simply too expensive for them, too complex, the preparation time was too long, and they didn't even have the capacity to support an external pen tester. Now with this service you can go in and say: Mr. Customer, we can do a test with you in a couple of minutes. Once we have installed the Docker container, within 10 minutes we have the pen test started, that's it, and then we just wait. And I would say we are seeing so many aha moments now, because on the partner side, when they see NodeZero working for the first time, it's like: wow, that is great. Then they go out to customers and show it — at the beginning mostly to the friendly customers — and it's like: wow, that's great, I need that. The feedback from the partners is that this is a service where they do not have to evangelize the customer. Everybody understands penetration testing; I don't have to describe what it is. The customer understands immediately: yes, penetration testing — good, I know I should do it, but it was too complex, too expensive. Now, for example as an MSSP service provided by one of our partners, it's getting easy.
>> Yeah, it's a great benefit there. I gotta say, I'm a huge fan of what you guys are doing. I like this continuous automation; that's a major benefit to anyone doing DevOps or any kind of modern application development. This is just a godsend for them. And like you said, the pen testers that were doing it were kind of coming down from their expertise to do things that should have been automated. They get to focus on the bigger-ticket items — that's a really big point.
>> So we free the pen testers for the higher-level elements of the penetration testing segment, and that is typically application testing, which is currently far away from being automated.
>> Yeah, and that's where the most critical workloads are, and I think this is the nice balance. Congratulations on the international expansion of the program, and thanks for coming on this special presentation. I really appreciate it.
>> Thank you. You're welcome.
>> Okay, this is theCUBE special presentation. Check out pen test automation, international expansion, Horizon 3.ai — really innovative solution. In our next segment, Chris Hill, sector head for strategic accounts, will discuss the power of Horizon 3.ai and Splunk in action. You're watching theCUBE, the leader in high tech enterprise coverage. (music)
>> Welcome back, everyone, to theCUBE and Horizon 3.ai special presentation. I'm John Furrier, host of theCUBE. We're with Chris Hill, sector head for strategic accounts and federal at Horizon 3.ai, a great innovative company. Chris, great to see you. Thanks for coming on theCUBE.
>> Yeah, great to meet you, John. Long time listener, first time caller, so excited to be here with you guys.
>> We were talking before camera: you were at Splunk back in 2013, and I think 2012 was our first Splunk .conf, and boy, talk about being in the right place at the right time. Now we're at another inflection point, and Splunk continues to be relevant, continuing to have that data driving security and that interplay. And your CEO, a former CTO at Splunk as well, here at Horizon, who's been on before — really innovative product you guys have. But, you know, don't wait for a breach to find out if you're logging the right data: that's the topic of this thread. Splunk is very much part of this new international expansion announcement with you guys. Tell us, what are some of the challenges that you see where this is relevant for Splunk and Horizon 3.ai, as you guys expand NodeZero out internationally?
>> Yeah, well, my role within Splunk was working with our most strategic accounts, and so I look back to 2013 and think about the sales process, like working with our smaller customers. It was still very siloed back then: I was selling to an IT team that was using this for IT operations, and we would generally even say, yeah, although we do security, we weren't really designed for it, we're a log management tool. I'm sure you remember back then, John, we were sort of stepping into the security space, and in the public sector domain that I was in, security was 70 percent of what we did.
>> When I look back at the transformation I was witnessing — that digital transformation — and I look at, say, 2019 to today, you see how the IT teams and the security teams have been forced to break down those barriers where they used to be siloed away and would not communicate. The security guys would be like, oh, this is my box, IT, you're not allowed in. Today you can't get away with that. Splunk has been a huge leader in that space and continues to innovate across the board, but I think what we're seeing in the space — and I was talking with Patrick Coughlin, the SVP of security markets, about this — is that what we've been able to do with Splunk is build a purpose-built solution that allows Splunk to eat more data. Splunk itself, as you know, is an ingest engine: the great reason people bought it was that you could build these really fast dashboards and grab intelligence out of it, but without data it doesn't do anything. So how do you bring more data in, and most importantly, from a customer perspective, how do you bring the right data in? If you think about what NodeZero is, and what we're doing at Horizon 3: sure, we do pen testing, but because we're an autonomous pen testing tool, we do it continuously. This whole thought of, oh crud, we've got a pen test coming up, it's going to be six weeks, everyone's going to sit on their hands, call me back in two months, Chris, we'll talk to you then — that's not a real efficient way to test your environment. And shoot, we saw that with Uber this week.
>> Right, can you explain the Uber thing? Because it was a contractor. Just give a quick highlight of what happened, so you can connect the dots.
>> Yeah, no problem. It was one of those situations where they test an environment, and what the attacker did was keep calling the MFA people, saying, I need to reset my password, and eventually the customer service guy said, okay, I'm resetting it. Once he had reset it and bypassed the multi-factor authentication, he was able to get in and gain access to part of that network. He then pivoted over to what I would assume was a VMware host or some virtual machine that had notes with all of the credentials for logging into various domains, and so within minutes they had access. That's the sort of stuff that we test for. You know, think about the cacophony of tools that are out there in a zero-trust architecture: I'm going to get a Zscaler, I'm going to have an Okta, I have a Splunk, maybe CrowdStrike or SentinelOne in there — I don't mean to name names — it's a cacophony of things that don't work together; they weren't designed to work together. And we have seen so many times in our business, through our customer support and just working with customers when we do their pen tests, that there will be 5,000 servers out there, three are misconfigured, and those three misconfigurations will create the open door. Because remember: the hacker only needs to be right once; the defender needs to be right all the time. That's the challenge, and that's what I'm really passionate about in what we're doing here at Horizon 3. I see this digital transformation, migration, and security going on, and we're at the tip of the spear. It's why I joined Snehal on this journey, and I'm just super excited about where the path is going, and super excited about the relationship with Splunk. I'll get into more details on some of the specifics of that.
>> Well, you're nailing it. We've been doing a lot of things on supercloud and this next-gen environment. You're really seeing DevOps — obviously DevSecOps has already won — the IT role has moved to the developer. Shift left is an indicator of that; it's one of many examples. Higher velocity code, software supply chain: you hear these things. That means it's now in the developer's hands, replaced by the new ops, DataOps teams, and security, where there's a lot of horizontal thinking. To your point about access, there's no more perimeter, so the attacker only has to be right one time: get in there once, and then you can hang out, move around, move laterally. Big problem. Okay, so we get that. Now, the challenge for these teams as they transition organizationally: how do they figure out what to do? They already have Splunk, so now they're kind of in transition while protecting for a hundred percent success ratio. How would you look at that and describe the challenges? What do the teams face with their data, and what action do they take?
>> So let's use some vernacular that folks will know. We both know what DevSecOps means: I'm going to build security into the app. SecDevOps is more about how I'm building security around the perimeter of what's going on inside my ecosystem. If you think about what we're able to do with somebody like Splunk, we can pen test the entire environment from soup to nuts: I'm going to test the endpoints all the way through, I'm going to look for misconfigurations, I'm going to look for exposed credentials, I'm going to look for anything I can in the environment, and I'm going to do it at light speed. What we're doing for that SecDevOps space is: did you detect that we were in your environment? Did we alert Splunk or the SIEM that there's someone in the environment, laterally moving around? More importantly, did they log us in their environment, and when that log was generated, did they alert on us? And then finally, most importantly, for every CISO out there: did they stop us? That's how we do this, and, speaking with Snehal, we've boiled it down to what we call find-fix-verify. What we do is go in and act as the attacker, in a production environment — without credentials, without agents, but with an assumed-breach model, which means we're going to put a Docker container in your environment, and then we're going to fingerprint the environment: we go out and do an asset survey. Now, that's not something that Splunk does super well: can Splunk see all the assets? Do the same assets marry up? We're going to log all that data and load it into the Splunk SIEM or the Splunk logging tools, just to have it in the enterprise — that's an immediate value-add they've got. Then we've got the fix: once we've completed our pen test, we generate a report, and we can talk about these a little later, but the reports show an executive summary, the assets that we found — your asset discovery aspect — and a fix report. The fix report, I think, is probably the most important one: it goes down and identifies what we did, how we did it, and then how to fix it. From that, the pen tester or the organization should fix those items, then go back, run another test, and validate, like change detection, to see: hey, did those fixes take place? Snehal, when he was the CTO of JSOC, shared with me a number of times: man, there would be 15 more items on next week's punch sheet that we didn't know about, and it had to do with how they were prioritizing the CVEs, because they would take all CVEs, whether critical or non-critical. We are able to create context in that environment that feeds better information into Splunk, and that brings up the efficiency for Splunk specifically.
>> And the teams out there — by the way, the burnout thing is real. This whole "I just finished my list and I got 15 more" — the list just keeps growing. How does NodeZero specifically help Splunk teams be more efficient? That's the question I want to get at, because this seems like a very scalable way for Splunk customers and service teams to be more efficient.
>> So today, in our early interactions with customers, we've seen five things, and I'll start with identifying the blind spots — kind of what I just talked about with you: did we detect, did we log, did we alert, did they stop NodeZero? To put it in more layman's terms, we can be the sparring partner for a Splunk Enterprise customer, a Splunk Essentials customer, someone using Splunk SOAR, or even just an enterprise Splunk customer that may be a small shop with three people and just wants to know where they're exposed. By generating these reports, and then having the API that actually generates the dashboard, they can take all of these events that we've logged and log them in. Number two is: how do we prioritize those logs? How do we create visibility into the logs that have critical impact? Because, as I mentioned earlier, not all CVEs are high impact, and not all are low either: if you daisy-chain a bunch of low CVEs together, boom, I've got a mission-critical CVE that needs to be fixed now — such as a credential leading to an NT box that's got a text file with a bunch of passwords on it. That would be very bad. And then third would be verifying that you have all of the hosts. One of the things Splunk's not particularly great at — and they'll admit it themselves — is asset discovery: what assets do we see, and what are they logging from?
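That event flow — pen-test findings landing in Splunk so the SOC can dashboard and alert on them — is typically done through Splunk's HTTP Event Collector. Here is a minimal, generic sketch of that step; the host, token, sourcetype, and event shape are assumptions for illustration, not Horizon 3.ai's actual integration:

```python
import json
import urllib.request

# Assumed HEC endpoint and token -- replace with your deployment's values.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "<hec-token>"

def send_pentest_event(event: dict) -> None:
    """Forward one pen-test finding to Splunk via the HTTP Event Collector."""
    payload = json.dumps({
        "sourcetype": "pentest:finding",  # assumed sourcetype naming
        "event": event,
    }).encode()
    req = urllib.request.Request(
        HEC_URL,
        data=payload,
        headers={
            "Authorization": f"Splunk {HEC_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()  # Splunk answers 200 with {"text": "Success"} on acceptance

# Example finding: a lateral move the SIEM should have alerted on.
send_pentest_event({
    "action": "credential_reuse",
    "src_host": "10.0.4.17",
    "dest_host": "dc01",
    "detected_by_siem": False,  # the "did we log it / alert on it?" question
})
```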
>> Fourth, for every event they're able to identify, one of the cool things we can do is create this low-code/no-code environment, where Splunk customers can use Splunk SOAR to actually triage and prioritize events, routing them to optimize the SOC team's time to triage any given event, obviously reducing MTTR. And then finally, I think one of the neatest things you'll see us develop is our ability to build glass tables. Behind me you'll see one of our triage events and how we build a Lockheed Martin kill chain on it with a glass table, which is very familiar to the community. In the not-too-distant future, we're going to have the ability to let people search in Splunk on those IOCs — if people aren't familiar with the term, an IOC is an indicator of compromise. That's a vector we want to drill into, and who's better at drilling into the data than Splunk?
>> Yeah, this is an awesome synergy. I can see a Splunk customer going: man, this just gives me so much more capability, actionability, and real understanding. And I think this is what I want to dig into, if you don't mind: understanding that critical impact. Okay, you've got the data, data ingest — data's data — but the question is what not to log, where things are misconfigured. These are critical questions, so can you talk about what it means to understand critical impact?
>> Yeah. Going back to what I just spoke about: a lot of those CVEs where you'll see low, low, low, and then you daisy-chain them together and suddenly it's, oh, this is high now. Then there's the other side of impact: if you're a Splunk customer — and I had several — I had one customer with terabytes of McAfee data being brought in, and it was like, all right, there's a lot of other data that you probably also want to bring in, but they could only afford certain data sets, and they didn't know how to prioritize or filter them. So we provide that opportunity to say: hey, these are the critical ones to bring in, and there are also ones you don't necessarily need, because a low CVE in this case really does mean low — say an iLO server, or the print server where your admin credentials are sitting on the device. There will be credentials on that, something a hacker might go in to look at. So although the CVE on it is low, if you daisy-chain it with something that lets somebody get into it, you might say, ah, that's high, and we would then potentially rank it, using our AI logic, as a moderate, put it on the scale, and prioritize it — versus all of these scanners that just give you a bunch of CVEs and "good luck."
>> And translating that, if I can — tell me if I'm wrong — that kind of speaks to that whole lateral movement challenge, right? The print server is a great example: looks stupid, low-end, who's going to want to deal with the print server? Oh, but it's connected into a critical system; there's a path. Is that what you're getting at?
>> Yeah. I use "daisy chain" — I think that came from the community — but it's just lateral movement, and it's exactly what they're doing. Those low-level, low-criticality lateral movements are where the hackers are getting in. That's the interesting thing about the Uber example: who would have thought? I've got my multi-factor authentication going, and a human made a mistake. We can't expect humans not to make mistakes; we're fallible. The reality is, once the attackers were in the environment, the company could have protected themselves by running enough pen tests to know that they had certain exposed credentials that would have stopped the breach, and they had not done that in their environment. And I'm not poking at them.
>> Yeah, but it's an interesting trend, though. Sometimes those low-end items are also not protected well, so they're easy to get at from a hacker standpoint, but also the people in charge of them can be phished easily, or spear-phished, because they're not paying attention — because they don't have to. No one ever told them: hey, be careful.
>> Yeah. For the community that I came from, John, that's exactly how they would do it: meet you at an international event, introduce themselves as a graduate student — these are nation-state actors — "would you mind reviewing my thesis on such and such?" I was at Adobe at the time I was working on this. They'd have you open the PDF, and whoever the target was, it launches. I don't know if you remember back in the 2008 time frame, there were a lot of issues around IP being stolen from the United States by nation-states, and that's exactly how they did it.
>> Or LinkedIn: hey, we want to hire you, double the salary. Oh, I'm going to click on that for sure.
>> Exactly. The one thing I would say to you is this: we did 10,000 pen tests last year — it's probably over that now — and we have a sort of top 10 of the ways we find people coming into the environment. The funniest thing is that only one of them is a CVE-related vulnerability. Something like two percent of the attacks are occurring through the CVEs, yet there's all that attention spent on them, and very little attention spent on this pen testing side — this continuous threat monitoring space, this vulnerability space — where I think we play such an important role. I'm so excited to be a part of the tip of the spear on this one.
>> Yeah, I'm old enough to know the movie Sneakers, which I loved: professional hackers testing, always testing the environment. I love this. I've got to ask you, as we wrap up here, Chris, if you don't mind: the benefits to professional services from this alliance. Big news — Splunk and you guys work well together, we see that clearly. What other benefits do professional services teams see from the Splunk and Horizon 3.ai alliance?
>> I think, for both of our partner bases, as we bring these together — and many of them are already the same partner — first off, the licensing model is probably one of the key areas where we really excel. If you're an end user, you can buy for the enterprise by the number of IP addresses you're using. If you're a partner, there are other ways to go: we license to MSPs, with a business model for what that looks like. But the unique thing we do here is the Consulting Plus license. The Consulting Plus license allows anyone from a small-to-mid-sized firm up to very large, Fortune 100 consulting firms to buy into a license where they have unlimited access to as many IPs as they want — but you can only run one test at a time. And as you can imagine, when we're going in and hacking passwords, checking hashes, and decrypting hashes, that can take a while. For the right customer, it's a perfect tool. So I'm excited about our ability to go to market with our partners, so that we understand how not just to sell to, or sell through, but how to sell with them, as a good vendor partner. I think that's one thing we've done a really good job of building as we bring this to market.
>> Yeah, and Splunk has had great success with how they've enabled partners and professional services. The services that layer on top of Splunk are multi-fold, tons of great benefits, so you guys vector right into that, ride that wave without friction.
>> And the cool thing is that our reports can be totally customized with someone else's logo. You know, I used to work in another organization — it wasn't Splunk — where we did pen testing for customers, and my pen testers would come on site, do the engagement, and leave. Then, a release later, someone would go, oh shoot, we got another sector that was breached, and they'd call you back four weeks later. By August, our entire pen testing team would be sold out, and it would be like, well, maybe March, and they're like, no, no, I've got a breach now. And when they do go in, they do the pen test, hand over a PDF, pat you on the back, and say: there's where your problems are, you need to fix them. The reality is that what we generate — completely autonomously, with no human interaction — is all the permutations of anything we found and the fix for those permutations. Once you've fixed everything, you just go back and run another pen test. For what people pay for one pen test, they can have a tool that does it after every Patch Tuesday: patch on Tuesday, test on Wednesday, triage throughout the week — green, yellow, red. I want to see the colors. Show me green; green is good, not red.
>> And what CIO doesn't want that dashboard, right?
>> It's exactly it, and we can help bring that. I'm really excited about helping drive this with the Splunk team, because they get it: they understand the green-yellow-red dashboard, and how we help them find more green so that the other guys are in the red.
>> Yeah, and get in the data and do the right thing, be efficient with how you use the data, know what to look at — so many things to pay attention to. The combination of both, and then the go-to-market strategy: real brilliant. Congratulations, Chris. Thanks for coming on and sharing this news with the detail around Splunk in action around the alliance.
>> Thanks for having me, John. My pleasure. Look forward to seeing you soon.
>> All right, great. We'll follow up and do another segment on DevOps and IT and security teams as the new ops, and supercloud, and a bunch of other stuff. So thanks for coming on. In our next segment, the CEO of Horizon 3.ai will break down all the new news for us here on theCUBE. You're watching theCUBE, the leader in high tech enterprise coverage. (music)
>> Yeah, the partner program for us has been fantastic. I think prior to that, most partners, most MSSPs, might not necessarily have a bench at all for penetration testing — maybe they subcontract the work out, or maybe they do it themselves — but trying to staff that kind of position can be incredibly difficult. For us, this was a differentiator: a new partnership that allowed us not only to perform services for our customers, but to provide a product by which they can do it themselves. So we work with our customers in a variety of ways. Some of them want more routine testing and perform it themselves, but we're also a certified service provider of Horizon 3, able to perform penetration tests, help review the data, provide color, and provide analysis for our customers in a broader sense — not just the black-and-white elements of what's critical, what's high, what's medium, what's low, and what you need to fix, but whether there are systemic issues. This has allowed us to onboard new customers, and it has allowed us to migrate some penetration testing services to us from competitors in the marketplace. But ultimately, this is happening because the product and the outcome are special, unique, and effective. Our customers like what they're seeing; they like the routineness of it. Many of them, again, like doing this themselves, being able to pen test parts of their networks. And then there are the new use cases: I'm a large organization, I have eight to ten acquisitions per year — wouldn't it be great to have a tool that can perform a penetration test, both internal and external, of that acquisition before we integrate the two companies and maybe bring on some risk? It's a very effective partnership, one that has really taken our engineers and our account executives by storm. This is a partnership that's been very valuable to us. (music)
>> A key part of the value and business model at Horizon 3 is enabling partners to leverage NodeZero to make more revenue for themselves. Our goal is that sixty percent of our revenue this year will be originated by partners, and that 95 percent of our revenue next year will be originated by partners, so a key to that strategy is making us an integral part of your business model as a partner. A key quote from one of our partners is that we enable every one of their business units to generate revenue. Let's talk about that in a little more detail. First, if you have a pen test consulting business — take Deloitte as an example — what was six weeks of human labor per pen test has been cut down to four days of labor, using NodeZero to conduct reconnaissance, find all the juicy, interesting areas of the enterprise that are exploitable, and assess the entire organization; then all of those details get served up to the human, who can look at them, understand them, and determine where to probe deeper. What you see in that pen test consulting business is that NodeZero becomes a force multiplier: those consulting teams are able to cover way more accounts, and way more IPs within those accounts, with the same or fewer consultants, and that directly leads to profit margin expansion for the pen testing business itself. The second business model: the MSSP. As an MSSP, you're already making money providing defensive cyber security operations for a large volume of customers, so what they do is license NodeZero and use us as an upsell to their MSSP business, to start delivering continuous red teaming, continuous verification, or purple teaming as a service. In that business model, they've got an additional line of revenue where they can increase the spend of their existing customers by bolting on NodeZero as a purple-team-as-a-service offering. The third business model, or customer type, is the IT services provider. As an IT services provider, you make money installing and configuring security products like Splunk or CrowdStrike or Humio; you also make money reselling those products; and you make money generating follow-on services to continue hardening your customer environments. What those IT service providers will do is use us to verify that they've installed Splunk correctly, prove to their customer that Splunk — or CrowdStrike — was installed correctly using our results, and then use our results to drive follow-on services and revenue. Finally, we've got the value-added reseller, which is a straight-up reseller. Because of how fast our sales cycles are, these VARs are typically able to go from cold email to deal close in six to eight weeks. At Horizon 3, a single sales engineer is able to run 30 to 50 POCs concurrently, because our POCs are very lightweight and don't require any on-prem customization or heavy pre-sales and post-sales activity. As a result, we're able to have a small number of sellers driving a lot of revenue and volume for us, and the same thing applies to VARs: there isn't a lot of effort required to sell the product or prove its value, so VARs are able to sell a lot more Horizon 3 NodeZero product without having to build up a huge specialist sales organization. So what I'm going to do is talk through scenario three here — the IT service provider — and just how powerful NodeZero can be in driving additional revenue. Think of it this way: for every one dollar of NodeZero license purchased by the IT service provider to do their business, it will generate ten dollars of additional revenue for that partner. In this example, Kidney Group uses NodeZero to verify that they have installed and deployed Splunk correctly. Kidney Group is a Splunk partner: they sell IT services to install, configure, deploy, and maintain Splunk. As they deploy Splunk, they're going to use NodeZero to attack the environment and make sure that the right logs, alerts, and monitoring are being handled within the Splunk deployment. It's a way of doing QA — verifying that Splunk has been configured correctly — used internally by Kidney Group to prove the quality of the services they've just delivered. Then they're going to show, and leave behind, that NodeZero report with their client, and that creates a resell opportunity for Kidney Group to resell NodeZero to their client, because the client is seeing the reports and the results and saying: wow, this is pretty amazing. Those reports can be co-branded: it's a pen testing report branded with Kidney Group, but it says "powered by Horizon 3" under it. From there, Kidney Group is able to take the fix actions report that's automatically generated with every pen test through NodeZero and use it as the starting point for a statement of work to sell follow-on services to fix all of the problems that NodeZero identified — fixing LLMNR misconfigurations, fixing or patching VMware, updating credential policies, and so on. So what happens is: NodeZero has found a bunch of problems, the client often lacks the capacity to fix them, and Kidney Group can use that lack of capacity as a follow-on sales opportunity for follow-on services. And finally, based on the findings from NodeZero, Kidney Group can look at that report and say to the customer: you know, if you bought CrowdStrike, you'd be able to prevent NodeZero from attacking and succeeding the way that it did; or if you bought Humio, or Palo Alto Networks, or some privileged access management solution, because of what NodeZero was able to do with credential harvesting and attacks. As a result, Kidney Group is able to resell other security products within their portfolio — CrowdStrike Falcon, Humio, Palo Alto Networks, Demisto, Phantom, and so on — based on the gaps identified by NodeZero in that pen test. And what that creates is another feedback loop, where Kidney Group will then use NodeZero to verify that the CrowdStrike product has actually been installed and configured correctly. This becomes the cycle: using NodeZero to verify a deployment, using that verification to drive follow-on services and resell opportunities, which then further drives more usage of the product. Now, the way we license is a usage-based licensing model, so the partner will grow their NodeZero Consulting Plus license as they grow their business. For example, if you're Kidney Group, in week one you use NodeZero to verify your Splunk install; in week two, if you have a pen testing business, you use NodeZero as a force multiplier for your pen testing client opportunity; and in week three, if you have an MSSP business, you use NodeZero to execute a purple team MSSP offering for your clients. And not necessarily a Kidney Group — if you're a Deloitte or an AT&T, these larger companies with multiple lines of business, or if you're an Optiv, for instance — all you have to do is buy one Consulting Plus license, and you're able to run as many pen tests as you want, sequentially. You can buy a single license and use it to meet your week-one client commitments, then week two, then week three. As you grow your business, you start to run multiple pen tests concurrently: in week one you've got to verify a Splunk install, run a pen test, and do a purple team opportunity — you simply expand from one Consulting Plus license to three licenses. So as you systematically grow your business, you're able to grow your NodeZero capacity with it, giving you predictable COGS, predictable margins, and, once again, a 10x additional revenue opportunity for that investment in the NodeZero Consulting Plus license.
>> My name is Snehal, I'm the co-founder and CEO here at Horizon 3.
I'm going to talk to you today about why it's important to look at your enterprise through the eyes of an attacker. The challenge I had when I was a CIO in banking, the CTO at Splunk, and serving within the Department of Defense, is that I had no idea whether I was secure until the bad guys showed up. Am I logging the right data? Am I fixing the right vulnerabilities? Are the security tools I've paid millions of dollars for actually working together to defend me? The answer is: I don't know. Does my team actually know how to respond to a breach in the middle of an incident? I don't know. I've got to wait for the bad guys to show up. So the challenge I had was: how do we proactively verify our security posture? I tried a variety of techniques. The first was vulnerability scanners, and the challenge with vulnerability scanners is that being vulnerable doesn't mean you're exploitable. I might have a hundred thousand findings from my scanner, of which maybe five or ten can actually be exploited in my environment. The other big problem with scanners is that they can't chain weaknesses together from machine to machine. If you've got a thousand machines in your environment, or more, a vulnerability scanner will tell you that you have a problem on machine one, and separately a problem on machine two, but what it can't tell you is that an attacker could use a low from machine one plus a low from machine two to equal a critical in your environment. And what attackers do in their tactics is chain together misconfigurations, dangerous product defaults, harvested credentials, and exploitable vulnerabilities into attack paths across different machines. To address the attack paths across different machines, I tried layering in consulting-based pen testing, and the issue is that when you've got thousands of hosts, or hundreds of thousands of hosts, in your environment, human-based pen testing simply doesn't scale to test an infrastructure of that size. Moreover, when they do execute a pen test and you get the report, oftentimes you lack the expertise within your team to quickly retest and verify that you've actually fixed the problem. So you end up with pen test reports that are incomplete snapshots, quickly going stale. Then, to mitigate that problem, I tried using breach and attack simulation tools, and the struggle with those tools is: one, I had to install credentialed agents everywhere; two, I had to write my own custom attack scripts, which I didn't have much talent for and also had to maintain as my environment changed; and three, these types of tools were not safe to run against production systems, which was the majority of my attack surface. So that's why we went off to start Horizon 3.
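The chaining idea — two "low" weaknesses on different machines composing into a critical path — is naturally a graph search problem, which is something a scanner that scores hosts in isolation structurally cannot see. A toy sketch of the concept, with invented hosts, edges, and severities (not the product's actual path-finding engine):

```python
from collections import deque

# Each edge is a weakness an attacker can use to move from one host to
# another; individually, each is rated "low" by a scanner.
edges = {
    "workstation":  [("print-server", "default admin credential", "low")],
    "print-server": [("file-share", "readable password file", "low")],
    "file-share":   [("domain-controller", "reused domain credential", "low")],
}

def attack_path(start: str, crown_jewel: str):
    """BFS for a chain of individually-low weaknesses reaching the target."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        host, path = queue.popleft()
        if host == crown_jewel:
            return path
        for nxt, weakness, sev in edges.get(host, []):
            if nxt not in seen:
                seen.add(nxt)
                hop = f"{host} -> {nxt}: {weakness} ({sev})"
                queue.append((nxt, path + [hop]))
    return None

path = attack_path("workstation", "domain-controller")
if path:
    print("critical chain found despite no single critical finding:")
    for hop in path:
        print(" ", hop)
```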
So to address the attack paths across different machines, I tried layering in consulting-based pen testing, and the issue is that when you've got thousands of hosts, or hundreds of thousands of hosts, in your environment, human-based pen testing simply doesn't scale to test an infrastructure of that size. Moreover, when they actually do execute a pen test and you get the report, oftentimes you lack the expertise within your team to quickly retest and verify that you've actually fixed the problem. And so what happens is you end up with these pen test reports that are incomplete snapshots, quickly going stale. Then, to mitigate that problem, I tried using breach and attack simulation tools, and the struggle with these tools is: one, I had to install credentialed agents everywhere; two, I had to write my own custom attack scripts, which I didn't have much talent for but also had to maintain as my environment changed; and three, these types of tools were not safe to run against production systems, which was the majority of my attack surface. So that's why we went off to start Horizon3.

So Tony and I met when we were in Special Operations together, and the challenge we wanted to solve was: how do we do infrastructure security testing at scale, by putting the power of a 20-year pen testing veteran into the hands of an IT admin or a network engineer, in just three clicks? The whole idea is that we enable these fixers, the blue team, to run NodeZero, our pen testing product, to quickly find problems in their environment. That blue team will then go off and fix the issues that were found, and then they can quickly rerun the attack to verify that they fixed the problem. And the whole idea is delivering this without requiring custom scripts to be developed, without requiring credentialed agents to be installed, and without requiring the use of external third-party consulting services or professional services: self-service pen testing to quickly drive find, fix, verify.

There are three primary use cases that our customers use us for. The first is the SOC manager, who uses us to verify that their security tools are actually effective: to verify that they're logging the right data in Splunk or in their SIEM; to verify that their managed security services provider is able to quickly detect and respond to an attack, and to hold them accountable for their SLAs; or that the SOC understands how to quickly detect and respond, and to measure and verify that; or that the variety of tools you have in your stack (most organizations have 130-plus cybersecurity tools, none of which are designed to work together) are actually working together. The second primary use case is proactively hardening and verifying your systems. This is when the IT admin or network engineer is able to run self-service pen tests to verify that their Cisco environment is installed, hardened, and configured correctly, or that their credential policies are set up right, or that their vCenter or WebSphere or Kubernetes environments are actually designed to be secure. What this allows the IT admins and network engineers to do is shift from running one or two pen tests a year to 30, 40, or more pen tests a month. And you can actually wire those pen tests into your DevOps process, or into your detection engineering and change management processes, to automatically trigger pen tests every time there's a change in your environment. The third primary use case is for those organizations lucky enough to have their own internal red team: they'll use NodeZero to do reconnaissance and exploitation at scale, and then use the output as a starting point for the humans to step in and focus on the really hard, juicy stuff that gets them on stage at DEF CON. So those are the three primary use cases.
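Use case two above mentions wiring pen tests into DevOps and change management. As a hedged sketch of what such a trigger might look like (the API endpoint, token, and response fields here are assumptions, not NodeZero's actual interface, and the call is treated as synchronous for brevity):

```python
# Sketch: trigger a scoped pen test from a deployment pipeline and fail the
# stage if new critical attack paths appear. Endpoint and schema are made up.
import os
import sys
import requests

API = os.environ.get("PENTEST_API", "https://pentest.example.com/api/v1")
TOKEN = os.environ["PENTEST_TOKEN"]  # assumed bearer token for the service

def run_scoped_test(scope_cidr: str) -> dict:
    resp = requests.post(
        f"{API}/pentests",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"scope": scope_cidr, "trigger": "post-deploy"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # assume the job blocks and returns findings

if __name__ == "__main__":
    result = run_scoped_test("10.20.0.0/24")  # hypothetical deploy subnet
    criticals = [w for w in result.get("weaknesses", [])
                 if w.get("contextual_severity") == "critical"]
    if criticals:
        print(f"{len(criticals)} critical attack paths found; failing stage.")
        sys.exit(1)
```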
What we'll do is zoom into the find, fix, verify loop, because what I've found in my experience is that find, fix, verify is the future operating model for cybersecurity organizations. What I mean here is that in the find, using continuous pen testing, what you want to enable is on-demand, self-service pen tests. You want those pen tests to find attack paths at scale, spanning your on-prem infrastructure, your cloud infrastructure, and your perimeter, because attackers don't only stay in one place: they will find ways to chain together a perimeter breach and a credential from your on-prem to gain access to your cloud, or some other permutation. And the third part in continuous pen testing is that attackers don't focus on critical vulnerabilities anymore. They know we've built vulnerability management programs to reduce those vulnerabilities, so attackers have adapted: what they do is chain together misconfigurations in your infrastructure, software, and applications with dangerous product defaults, with exploitable vulnerabilities, and with credentials collected through a mix of techniques, at scale.

Once you've found those problems, the next question is what to do about them. Well, you want to be able to prioritize fixing problems that are actually exploitable in your environment, the ones that truly matter, meaning they're going to lead to domain compromise, or domain user compromise, or access to your sensitive data. The second thing you want to fix is making sure you understand what risk your crown jewels data is exposed to. Where is your crown jewels data? Is it in the cloud? Is it on-prem? Has it been copied to a share drive that you weren't aware of? If a domain user was compromised, could they access that crown jewels data? You want to be able to use the attacker's perspective to secure the critical data you have in your infrastructure. And then finally, as you fix these problems, you want to quickly remediate and retest to confirm that you've actually fixed the issue, and this find, fix, verify cycle becomes the accelerator that drives purple team culture. The third part here is verify, and what you want to be able to do in the verify step is confirm that your security tools, processes, and people can effectively detect and respond to a breach. You want to integrate that into your detection engineering processes, so that you know you're catching the right security rules or that you've deployed the right configurations. You also want to make sure that your environment is adhering to best practices around systems hardening and cyber resilience. And finally, you want to be able to prove your security posture over time to your board, to your leadership, and to your regulators.

So what I'll do now is zoom into each of these three steps. When we zoom into find, here's the first example, using NodeZero and autonomous pen testing. What an attacker will do is find a way to break through the perimeter. In this example, it's very easy to misconfigure Kubernetes to allow an attacker to gain remote code execution in your on-prem Kubernetes environment and break through the perimeter. From there, what the attacker is going to do is conduct network reconnaissance and then find ways to gain code execution on other machines in the environment. As they get code execution, they start to dump credentials, collect a bunch of NTLM hashes, and crack those hashes using open source and dark-web-available data as part of those attacks, and then reuse those credentials to log in and laterally maneuver throughout the environment. And as they laterally maneuver, they can reuse those credentials, use credential spraying techniques, and so on, to compromise your business email or to log in as admin into your cloud. This is a very common attack, and rarely is a CVE actually needed to execute it: often it's just a misconfiguration in Kubernetes, with a bad credential policy or password policy, combined with bad practices of credential reuse across the organization.
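One well-known instance of the Kubernetes misconfiguration class described above is a kubelet that answers its read API without authentication. A defensive check for that specific default, run on your own network with permission, might look like the following sketch (the node IPs are placeholders; port 10250 and the /pods endpoint are the kubelet defaults, which you should confirm for your cluster version):

```python
# Defensive sketch: flag hosts whose kubelet read API responds anonymously.
# An unauthenticated 200 on /pods is a strong sign the kubelet is exposed.
import requests
import urllib3

urllib3.disable_warnings()  # kubelets commonly use self-signed certificates

def kubelet_exposed(host: str, port: int = 10250) -> bool:
    try:
        r = requests.get(f"https://{host}:{port}/pods",
                         verify=False, timeout=5)
        return r.status_code == 200  # 401/403 means auth is being enforced
    except requests.RequestException:
        return False

for host in ["10.0.0.11", "10.0.0.12"]:  # hypothetical node addresses
    if kubelet_exposed(host):
        print(f"{host}: kubelet /pods readable anonymously, investigate")
```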
Here's another example of an internal pen test, and this is from an actual customer. They had 5,000 hosts within their environment, they had EDR and UBA tools installed, and they initiated an internal pen test on a single machine. From that single initial access point, NodeZero enumerated the network, conducted reconnaissance, and found five thousand hosts were accessible. What NodeZero does under the covers is organize all of that reconnaissance data into a knowledge graph that we call the cyber terrain map, and that cyber terrain map becomes the key data structure we use to efficiently maneuver, attack, and compromise your environment. So what NodeZero will do is try to find ways to get code execution, reuse credentials, and so on. In this customer example, they had Fortinet installed as their EDR, but NodeZero was still able to get code execution on a Windows machine. From there, it was able to successfully dump credentials, including sensitive credentials from the LSASS process on the Windows box, and then reuse those credentials to log in as domain admin in the network. And once an attacker becomes domain admin, they have the keys to the kingdom: they can do anything they want. So what happened here? Well, it turns out Fortinet was misconfigured on three out of 5,000 machines: bad automation. The customer had no idea this had happened; they would have had to wait for an attacker to show up to realize it was misconfigured. The second question is: why didn't Fortinet stop the credential pivot and the lateral movement? It turned out the customer didn't buy the right modules or turn on the right services within that particular product. And we see this not only with Fortinet, but with Trend Micro and all the other defensive tools, where it's very easy to miss a checkbox in the configuration that would do things like prevent credential dumping.

The next story I'll tell you is that attackers don't have to hack in, they log in. So, another infrastructure pen test. A typical technique attackers will use is man-in-the-middle attacks that collect hashes. In this case, what an attacker will do is leverage a tool or technique called Responder to collect NTLM hashes that are being passed around the network, and there's a variety of reasons why these hashes are passed around; it's a pretty common misconfiguration. As an attacker collects those hashes, they start to apply techniques to crack them. They'll pass the hash, and from there they will use open source intelligence, common password structures and patterns, and other types of techniques to try to crack those hashes into cleartext passwords. So here, NodeZero automatically collected hashes, it automatically passed the hashes and cracked those credentials, and then from there it started to take the domain user IDs and passwords it had collected and tried to access different services and systems in your enterprise. In this case, NodeZero was able to successfully gain access to the Office 365 email environment because three employees didn't have MFA configured. So now NodeZero has placement and access in the business email system, which sets up the conditions for fraud, lateral phishing, and other techniques. But what's especially insightful here is that 80 percent of the hashes collected in this pen test were cracked in 15 minutes or less. Eighty percent. Twenty-six percent of the user accounts had a password that followed a pretty obvious pattern: first initial, last initial, and four random digits. The other thing that was interesting is that 10 percent of service accounts had their user ID the same as their password: vmware admin / vmware admin, websphere admin / websphere admin, and so on and so forth. So attackers don't have to hack in, they just log in with credentials that they've collected.
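Those password findings (username-equals-password service accounts, first initial plus last initial plus four digits) are easy to audit for once credentials have been recovered in an authorized test. A minimal sketch, assuming a hypothetical cracked-credentials CSV with username and password columns:

```python
# Sketch: audit cracked credentials from an authorized pen test for the two
# patterns described above. The CSV layout (username,password) is assumed.
import csv
import re

INITIALS_PLUS_DIGITS = re.compile(r"^[A-Za-z]{2}\d{4}$")

with open("cracked_creds.csv", newline="") as f:
    for row in csv.DictReader(f):
        user, pw = row["username"], row["password"]
        if user.lower() == pw.lower():
            print(f"{user}: password equals username (service-account smell)")
        elif INITIALS_PLUS_DIGITS.match(pw):
            print(f"{user}: matches the initials-plus-four-digits pattern")
```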
The next story here is becoming AWS admin. In this example, once again an internal pen test, NodeZero gets initial access, and it discovers 2,000 hosts are network reachable from that environment. It fingerprints and organizes all of that data into a cyber terrain map. From there, it fingerprints that HP iLO, the Integrated Lights-Out service, is running on a subset of hosts. iLO is a service that is often not instrumented or observed by security teams, nor is it easy to patch; as a result, attackers know this and immediately go after those types of services. In this case, that iLO service was exploitable, and we were able to get code execution on it. iLO stores all the user IDs and passwords in cleartext in a particular set of processes, so once we gained code execution, we were able to dump all of the credentials and then, from there, laterally maneuver to log in to the Windows box next door as admin. On that admin box, we were able to gain access to the share drives, and we found a credentials file saved on a share drive. It turned out that credentials file was the AWS admin credentials file, giving us full admin authority to their AWS accounts. Not a single security alert was triggered in this attack, because the customer wasn't observing the iLO service, and every step thereafter was a valid login in the environment. So what do you do? Step one: patch the server. Step two: delete the credentials file from the share drive. And step three: get better instrumentation on privileged access users and logins.
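Since this compromise hinged on a credentials file sitting on a share drive, a periodic defensive sweep for that specific artifact is cheap to run. Here is a hedged sketch that scans a mounted share for AWS access key material (the AKIA-prefixed key-ID shape is AWS's documented format; the mount point is hypothetical):

```python
# Sketch: scan a mounted share drive for stray AWS credentials. Looks for the
# standard access-key-ID shape (AKIA + 16 uppercase alphanumerics) and the
# aws_secret_access_key field used by ~/.aws/credentials-style files.
import os
import re

KEY_ID = re.compile(rb"AKIA[0-9A-Z]{16}")
SECRET = re.compile(rb"aws_secret_access_key\s*=", re.IGNORECASE)

def scan_share(root: str) -> None:
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as fh:
                    blob = fh.read(1_000_000)  # first 1 MB is plenty here
            except OSError:
                continue
            if KEY_ID.search(blob) or SECRET.search(blob):
                print(f"possible AWS credentials: {path}")

scan_share("/mnt/corp-share")  # hypothetical mount point for the share
```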
The final story I'll tell is a typical pattern that we see across the board, one that combines the various techniques I've described. An attacker is going to go off and use open source intelligence to find all of the employees that work at your company. From there, they're going to look up those employees in dark web breach databases and other sources of information, and then use that as a starting point to password spray and compromise a domain user. All it takes is one employee to reuse a breached password for their corporate email, or all it takes is a single employee to have a weak password that's easily guessable. All it takes is one. And once the attacker is able to gain domain user access, in most shops the domain user is also the local admin on their laptop. Once you're local admin, you can dump SAM and get local NTLM hashes, and you can use those to reuse credentials again as local admin on neighboring machines. Attackers will rinse and repeat, and eventually they get to a point where they can dump LSASS, by unhooking the antivirus, defeating the EDR, or finding a misconfigured EDR as we talked about earlier, to compromise the domain. What's consistent is that the fundamentals are broken at these shops: they have poor password policies, they don't have least-privilege access implemented, Active Directory groups are too permissive (domain admin or domain user is also the local admin), AV or EDR solutions are misconfigured or easily unhooked, and so on. And what we found in 10,000 pen tests is that user behavior analytics tools never caught us in that lateral movement, in part because those tools require pristine logging data in order to work, and also because it becomes very difficult to establish a baseline of normal versus abnormal credential login usage. Another interesting insight is that there were several marquee brand-name MSSPs defending our customers' environments, and for them it took seven hours to detect and respond to the pen test. Seven hours. The pen test was over in less than two hours. So what you had was an egregious violation of the service level agreements that the MSSP had in place, and the customer was able to use us to get service credit and drive accountability of their SOC and of their provider. The third interesting thing is that in one case it took us seven minutes to become domain admin in a bank. That bank had every Gucci security tool you could buy, yet in 7 minutes and 19 seconds, NodeZero started as an unauthenticated member of the network and was able to escalate privileges, through chaining misconfigurations, lateral movement, and so on, to become domain admin. If it's seven minutes today, we should assume it'll be less than a minute a year or two from now, making it very difficult for humans to detect and respond to that type of blitzkrieg attack.

So that's the find. It's not just about finding problems, though: the bulk of the effort should be what to do about them, the fix and the verify. As you find those problems, back to Kubernetes as an example, we will show you the path: here is the kill chain we took to compromise that environment. We'll show you the impact, or here's the proof of exploitation that we were able to use to compromise it, and there's the actual command that we executed, so you could copy and paste that command and compromise that kubelet yourself if you want. And then we'll actually show you the impact: this is a critical, here's why, it enabled a perimeter breach, these are the affected applications, these are the specific IPs where you've got the problem, here's how it maps to the MITRE ATT&CK framework, and here's exactly how to fix it. We'll also show you what this problem enabled, so you can accurately prioritize why this is important, or why it's not.

The next part is accurate prioritization. The hardest part of my job as a CIO was deciding what not to fix. So if you take "SMB signing not required" as an example, by default that CVSS score is a one out of 10. But this misconfiguration (it's not a CVE, it's a misconfig) enabled an attacker to gain access to 19 credentials, including one domain admin and two local admins, and access to a ton of data. Because of that context, this is really a 10 out of 10.
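That SMB-signing example is context-based scoring in a nutshell: the base score stays a one, but what the weakness actually enabled drives the real priority. A toy version of that rescoring logic (the weights and impact fields are illustrative, not Horizon3's actual model) might look like:

```python
# Toy context-based rescoring: start from the base score, then escalate based
# on what the weakness actually enabled in this specific environment.
def contextual_score(base: float, impact: dict) -> float:
    score = base
    if impact.get("domain_admin_creds", 0) > 0:
        score = max(score, 10.0)  # a direct path to domain compromise
    elif impact.get("credentials", 0) > 0:
        score = max(score, 7.0 + min(impact["credentials"] / 10, 2.0))
    if impact.get("sensitive_data_access"):
        score = max(score, 9.0)
    return min(score, 10.0)

smb_signing = {"credentials": 19, "domain_admin_creds": 1,
               "sensitive_data_access": True}
print(contextual_score(1.0, smb_signing))  # 10.0, despite a base score of 1.0
```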
You'd better fix this as soon as possible. However, of the seven occurrences that we found, it's only a critical in three out of the seven: these are the three specific machines, we'll tell you the exact way to fix it, and you'd better fix those as soon as possible. For these four machines over here, the issue didn't allow us to do anything of consequence. So, because the hardest part is deciding what not to fix, you can justifiably choose not to fix those four issues right now, just add them to your backlog, and surge your team to fix the three critical ones as quickly as possible. And once you fix those three, you don't have to re-run the entire pen test: you can select those three and then, with one click, verify, running a very narrowly scoped pen test that tests only this specific issue. What that creates is a much faster cycle of finding and fixing problems.

The other part of fixing is verifying that you don't have sensitive data at risk. Once we become a domain user, we're able to use those domain user credentials to try to gain access to databases, file shares, S3 buckets, Git repos, and so on, and help you understand what sensitive data you have at risk. In this example, a green checkbox means we logged in as a valid domain user and were able to get read-write access on the database. This is how many records we could have accessed. We don't actually look at the values in the database, but we'll show you the schema so you can quickly characterize that PII data was at risk, and we'll do that for your file shares and other sources of data. So now you can accurately articulate the data you have at risk and prioritize cleaning that data up, especially data that would lead to a fine or a big news issue.

So that's the find, and that's the fix. Now we're going to talk about the verify. The key part in verify is embracing and integrating with detection engineering practices. When you think about your layers of security tools, you've got lots of tools in place, on average 130 tools at any given customer, but these tools were not designed to work together. So when you run a pen test, what you want to ask is: did you detect us, did you log us, did you alert on us, did you stop us? And from there, what you want to see is which techniques are commonly used to actually compromise an environment. If you look at the top 10 techniques we use (and there are far more than just these 10, but these are the most often executed), nine out of ten have nothing to do with CVEs. It has to do with misconfigurations, dangerous product defaults, and bad credential policies, and how we chain those together to become a domain admin or compromise a host. So what customers will do is this: every single attacker command we executed is provided to you as an attack activity log, so you can see every attacker command we ran, the timestamp it was executed, the host it executed on, and how it maps to the MITRE ATT&CK tactics. Our customers will have these attacker logs on one screen, and then they'll go look into Splunk, or Exabeam, or SentinelOne, or CrowdStrike, and ask: did you detect us, did you log us, did you alert on us, or not? And to make that even easier, take this example: hey Splunk, what logs did you see at this time on the VMware host, because that's when NodeZero was able to dump credentials. That allows you to identify and fix your logging blind spots. And to make that easier still, we've got app integration: this is an actual Splunk app in the Splunk App Store, and inside the Splunk console itself you can fire up the Horizon3 NodeZero app.
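The "hey Splunk, what did you see at this time on that host" step can also be scripted against Splunk's REST search API. A hedged sketch (the Splunk host, credentials, and index are placeholders, and the five-minute window is a judgment call):

```python
# Sketch: pull Splunk events in a +/- 5 minute window around an attacker
# command from the attack activity log, to spot logging blind spots.
# Uses Splunk's REST export endpoint; URL, auth, and index are placeholders.
import requests
import urllib3

urllib3.disable_warnings()
SPLUNK = "https://splunk.example.com:8089"
AUTH = ("svc_detection", "change-me")

def events_around(host: str, epoch: int, window: int = 300) -> str:
    search = (f"search index=main host={host} "
              f"earliest={epoch - window} latest={epoch + window} "
              f"| stats count by sourcetype")
    r = requests.post(f"{SPLUNK}/services/search/jobs/export",
                      auth=AUTH, verify=False,
                      data={"search": search, "output_mode": "json"},
                      timeout=60)
    r.raise_for_status()
    return r.text  # newline-delimited JSON result rows

# e.g. the credential dump on the VMware host at a known attack timestamp:
print(events_around("vmware-host-01", 1664380800))
```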
All of the pen test results are there, so you can see all of the results in one place and you don't have to jump out of the tool. And what we'll show you, as I skip forward, is: here's a pen test, here are the critical issues we've identified, for that weak default issue here are the exact commands we executed, and then we will automatically query Splunk for all terms, between these times, on that endpoint, that relate to this attack. So you can now, within the Splunk environment itself, quickly figure out whether you're missing logs or appropriately catching this issue, and that becomes incredibly important in the detection engineering cycle I mentioned earlier.

So how do our customers end up using us? They shift from running one pen test a year to 30, 40 pen tests a month, oftentimes wiring us into their deployment automation to automatically run pen tests. The other thing they'll do is that, as they run more pen tests, they find more issues, but eventually they hit an inflection point where they're able to rapidly clean up their environment. That inflection point comes because the red and the blue teams start working together in a purple team culture, proactively hardening their environment. The other thing our customers will do is run us from different perspectives. They'll first run with an RFC 1918 scope to see, once the attacker gained initial access in a part of the network that had wide access, what could they do. Then they'll run us within a specific network segment: okay, from within that segment, could the attacker break out and gain access to another segment? Then they'll run us from their work-from-home environment: could they traverse the VPN and do something damaging, and once they're in, could they traverse the VPN and get into my cloud? Then they'll break in from the outside. All of these perspectives are available to you in Horizon3 and NodeZero as a single SKU, and you can run as many pen tests as you want. If you run a phishing campaign and find that an intern in the finance department had the worst phishing behavior, you can then inject their credentials and actually show the end-to-end story of how an attacker phished, gained the credentials of an intern, and used that to gain access to sensitive financial data. So what our customers end up doing is running multiple attacks from multiple perspectives and looking at those results over time.

I'll leave you with two things. One is: what is the AI in Horizon3.ai? Those knowledge graphs are the heart and soul of everything that we do, and we use machine learning and reinforcement learning techniques, Markov decision models, and so on, to efficiently maneuver through and analyze the paths in those really large graphs. We also use context-based scoring to prioritize weaknesses, and we're able to drive collective intelligence across all of the operations, so the more pen tests we run, the smarter we get. All of that is based on the knowledge graph analytics infrastructure that we have.
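The Markov decision model mention deserves a concrete, if toy, illustration. This little value-iteration sketch (not Horizon3's planner; the states, probabilities, and rewards are invented) shows how an attack-planning agent can rank which technique to attempt from each foothold:

```python
# Toy attack-planning MDP solved by value iteration. A failed action ends the
# attempt (value 0); reaching domain admin is worth 1. All numbers invented.
actions = {  # state -> list of (next_state, p_success, technique)
    "foothold":    [("user_creds", 0.7, "password spray"),
                    ("local_admin", 0.2, "exploit exposed service")],
    "user_creds":  [("local_admin", 0.6, "credential reuse")],
    "local_admin": [("domain_admin", 0.5, "dump LSASS and pivot")],
}
gamma = 0.9  # discount: shorter paths to the goal are preferred
V = {s: 0.0 for s in list(actions) + ["domain_admin"]}
V["domain_admin"] = 1.0

for _ in range(100):  # plenty of sweeps to converge on this tiny graph
    for s, acts in actions.items():
        V[s] = max(p * gamma * V[nxt] for nxt, p, _ in acts)

policy = {s: max(acts, key=lambda a: a[1] * gamma * V[a[0]])[2]
          for s, acts in actions.items()}
print(V)       # expected value of holding each foothold
print(policy)  # best technique to attempt from each state
```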
Finally, I'll leave you with this: here was my decision criteria when I was a buyer for my security testing strategy. What I cared about was coverage. I wanted to be able to assess my on-prem, cloud, perimeter, and work-from-home environments, and be safe to run in production. I wanted to be able to do that as often as I wanted. I wanted to be able to run pen tests in hours or days, not weeks or months, so I could accelerate that find, fix, verify loop. I wanted my IT admins and network engineers, with limited offensive experience, to be able to run a pen test in a few clicks through a self-service experience, without having to install agents and without having to write custom scripts. And finally, I didn't want to get nickeled and dimed on having to buy different types of attack modules or different types of attacks. I wanted a single annual subscription that allowed me to run any type of attack as often as I wanted, so I could look at my trends and directions over time. So I hope you found this talk valuable. We're easy to find, and I look forward to seeing you use the product and letting our results do the talking.

When you look at the way our pen testing algorithms work, we dynamically select how to compromise an environment based on what we've discovered, and the goal is to become a domain admin, compromise a host, compromise domain users, find ways to encrypt data, steal sensitive data, and so on. But when you look at the top 10 techniques that we ended up using to compromise environments, the first nine have nothing to do with CVEs. And that's the reality: CVEs are, yes, a vector, but less than two percent of CVEs are actually used in a compromise. Oftentimes it's some sort of credential collection, credential cracking, or credential pivoting, using that to become an admin, and then compromising environments from that point on. So I'll leave this up for you to read through, and you'll have the slides available, but I found it very insightful that organizations, ourselves at GE included, invested heavily in just standard vulnerability management programs. When I was at DoD, all DISA cared about asking us about was our CVE posture. But the attackers have adapted to not rely on CVEs to get in, because they know that organizations are actively looking at and patching those CVEs, and instead they're chaining together credentials from one place with misconfigurations and dangerous product defaults in another to take over an environment. A concrete example: by default, vCenter backups are not encrypted, and so if an attacker finds vCenter, what they'll do is find the backup location, and there are specific vCenter MTD files where the admin credentials are stored within the binaries. So you can actually, as an attacker, find the right MTD file, parse out the binary, and now you've got the admin credentials for the vCenter environment, and you can start to log in as admin. There's also a bad habit by signal officers and signal practitioners, in the Army and elsewhere, where the VM notes section of a virtual image has the password for the VM. Well, those VM notes are not stored encrypted, and attackers know this: they're able to go off and find the VMs that are unencrypted, find the notes section, pull out the passwords for those images, and then reuse those credentials across the board.
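The VM-notes habit is also easy to audit for once you export your inventory's annotations. A hedged sketch over a hypothetical CSV export (vm_name and notes columns), flagging notes that look like they embed credentials:

```python
# Sketch: flag VM annotation/notes fields that appear to contain credentials.
# Assumes inventory notes exported to a CSV of vm_name,notes; the detection
# regex is deliberately simple and will need tuning for your environment.
import csv
import re

CRED_HINT = re.compile(r"(password|passwd|pwd)\s*[:=]\s*\S+", re.IGNORECASE)

with open("vm_notes_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        match = CRED_HINT.search(row["notes"] or "")
        if match:
            print(f"{row['vm_name']}: possible credential in notes "
                  f"({match.group(0)[:40]!r})")
```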
So I'll pause here. Patrick, I'd love to get some commentary on these techniques and other things that you've seen, and what we'll do in the last, say, 10 to 15 minutes is roll through a little bit more on what to do about it. >> Yeah, no, I love it. I think this is pretty exhaustive. What I like about what you've done here is, you know, we've seen double-digit increases in the number of organizations that are reporting actual breaches year over year for the last three years, and often, in the zeitgeist, we peg that on ransomware, which of course is incredibly important and very top of mind. But what I like about what you have here is that you're reminding the audience that the attack surface area, the vectors that matter, have to be more comprehensive than just thinking about ransomware scenarios. >> Yeah, right on. So let's build on this. When you think about your defense in depth, you've got multiple security controls that you've purchased and integrated, and you've got that redundancy if a control fails. But the reality is that these security tools aren't designed to work together. So when you run a pen test, what you want to ask yourself is: did you detect NodeZero, did you log NodeZero, did you alert on NodeZero, and did you stop NodeZero? And when you think about how to do that, every single attacker command executed by NodeZero is available in an attacker log. So you can now see, you know, at the bottom here, a vCenter exploit, at that time, on that IP, and how it aligns to MITRE ATT&CK. What you want to be able to do is go figure out whether your security tools caught this or not, and that becomes very important in using the attacker's perspective to improve your defensive security controls. And so the way we've tried to make this easier (you know, I bleed green in many ways, still, from my Splunk background) is that what our customers do is look at the attacker logs on one screen, look at what Splunk saw or missed on another screen, and then use that to figure out what their logging blind spots are. Where that becomes really interesting is that we've actually built out an integration into Splunk. There's a Splunk app you can download off of Splunkbase, and you'll get all of the pen test results right there in the Splunk console. From that Splunk console, you're going to be able to see all the pen tests that were run and the issues that were found. So you can look at a particular pen test: here are all of the weaknesses that were identified for that pen test, and how they categorize out. For each of those weaknesses, you can click on any one of them that is critical, in this case, and then we'll tell you, for that weakness (and this is where the punch line comes in, so I'll pause the video here): these are the commands that were executed, on these endpoints, at this time. And then we'll actually query Splunk for that IP address, or anything containing that IP, and these are the source types that surface any sort of activity. So what we try to do is help you, as quickly and efficiently as possible, identify the logging blind spots in your Splunk environment based on the attacker's perspective. So as this video plays through, you can see it. Patrick, I'd love to get your thoughts, seeing so many Splunk deployments and the effectiveness of those deployments, on how this is going to help elevate the effectiveness of all of your Splunk customers. >> Yeah, I'm super excited about this. I mean, I think these kinds of purpose-built integrations really move the needle for our customers. At the end of the day, when I think about the power of Splunk, I think about a product I was first introduced to 12 years ago that was an on-prem piece of software, you know, and at the time it sold on sort of perpetual and term licenses. But what made it special was that it could eat data at a speed that nothing else I'd ever seen could. You can ingest massively scalable amounts of data,
it did cool things like schema-on-read, which facilitated that, there was this language called SPL that you could nerd out about, and you went to a conference once a year and talked about all the cool things you were Splunking, right? But now, as we think about the next phase of our growth, we live in a heterogeneous environment where our customers have so many different tools and data sources that are ever expanding. And as you look at the role of the CISO, it's mind-blowing to me the number of sources, services, and apps that have come into the CISO's span of, let's just call it a span of influence, in the last three years. You know, we're seeing things like infrastructure-service-level visibility and application performance monitoring, stuff that just never made sense for the security team to have visibility into, at least not at the size and scale we're demanding today. And that's different, and this is why it's so important that we have these joint, purpose-built integrations that really provide more prescription to our customers about how to walk on that journey towards maturity. What does zero to one look like? What does one to two look like? Whereas, you know, 10 years ago customers were happy with platforms, today they want integration, they want solutions, and they want to drive outcomes. And I think this is a great example of how, together, we are stepping up to the evolving nature of the market, and also the ever-evolving nature of the threat landscape, and, what I would say is, the maturing needs of the customer in that environment. >> Yeah, for sure. I think especially as we all anticipate budget pressure over the next 18 months, due to the economy and elsewhere: while the security budgets are not going to get cut, I don't think, they're not going to grow as fast, and there's a lot more pressure on organizations to extract more value from their existing investments, as well as extracting more value and more impact from their existing teams. And so security effectiveness, fierce prioritization, and automation, I think, become the three key themes of security over the next 18 months. So what I'll do very quickly is run through a few other use cases. Every host that we identified in the pen test, we're able to score and say: this host allowed us to do something significant, therefore it's really critical, and you should be increasing your logging here; or, hey, these hosts down here, we couldn't really do anything with as an attacker, so if you do have to make trade-offs, you can trade off your logging resolution at the lower end in order to increase logging resolution on the upper end. So you've got that level of justification for where to increase or adjust your logging resolution. Another example: every host we've discovered as an attacker, we expose and you can export, and what we want to make sure is that every host we found as an attacker is being ingested from a Splunk standpoint. A big issue I had as a CIO and user of Splunk and other tools is that I had no idea if there were rogue Raspberry Pis on the network, or if a new box was installed and whether Splunk was installed on it or not. So now you can quickly start to correlate the hosts we saw and how that reconciles with what you're logging from.
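That reconciliation is a straightforward set difference once you have both lists, one from the pen test's discovered hosts and one from what Splunk reports it is ingesting. A small sketch, with file formats assumed to be one hostname or IP per line:

```python
# Sketch: diff pen-test-discovered hosts against hosts Splunk is ingesting
# (the Splunk list could come from a "| metadata type=hosts" style export).
def load_hosts(path: str) -> set[str]:
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

discovered = load_hosts("pentest_discovered_hosts.txt")
ingested = load_hosts("splunk_ingesting_hosts.txt")

silent = discovered - ingested  # reachable by the attacker, invisible to Splunk
print(f"{len(silent)} hosts reachable by the attacker but not logging:")
for h in sorted(silent):
    print("  ", h)  # rogue Raspberry Pis and unlogged new boxes land here
```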
Finally, or second to last, on the Splunk integration side: for every single problem we've found, we give multiple options for how to fix it. This becomes a great way to prioritize which fix actions to automate in your SOAR platform, and what we want to get to, eventually, is being able to automatically trigger SOAR actions to fix well-known problems, like automatically invalidating poor passwords in our credential findings, amongst a whole bunch of other things we could go off and do.
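As a hedged sketch of that eventual automation (the webhook URL and payload schema are hypothetical; real SOAR platforms each define their own), turning a well-understood finding into a SOAR action request could be as small as:

```python
# Sketch: turn a well-known finding into a SOAR playbook request, e.g. force
# a reset on an account whose cracked password was weak. Schema is made up.
import requests

SOAR_WEBHOOK = "https://soar.example.com/api/playbook/reset-credential"

def trigger_reset(finding: dict) -> None:
    if finding["type"] != "weak_credential":
        return  # only automate the well-understood, low-risk case
    payload = {
        "account": finding["account"],
        "reason": f"cracked during pen test {finding['pentest_id']}",
        "action": "invalidate_password_and_notify",
    }
    requests.post(SOAR_WEBHOOK, json=payload, timeout=15).raise_for_status()

trigger_reset({"type": "weak_credential", "account": "svc_vmware",
               "pentest_id": "pt-2022-09-14"})
```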
And then finally, if there is a well-known kill chain or attack path, one of the things I really wish I could have done when I was a Splunk customer was take this type of kill chain, one that actually shows a path to domain admin that I'm sincerely worried about, and use it as a glass table over which I could start to layer possible indicators of compromise. Now you've got a great starting point for glass tables and IOCs for actual kill chains that we know are exploitable in your environment, and that becomes some super cool integrations that we've got on the roadmap between us and the Splunk security side of the house. So, actually, Patrick, before I wrap up, I'd love to get your comments, and then I'll leave with one last slide on this wartime security mindset, assuming there are no other questions. >> No, I love it. I mean, I think this kind of glass tables approach to how you visualize these workflows, and then use things like SOAR and orchestration and automation to operationalize them, is exactly where we see all of our customers going: getting away from, I think, an over-engineered approach to SOAR, where it has to be super technical-heavy, with, you know, Python programmers, and getting more to this visual view of workflow creation that really demystifies the power of automation and also democratizes it. So you don't have to have these programming languages on your resume in order to start really moving the needle on workflow creation, policy enforcement, and ultimately driving automation coverage across more and more of the workflows that your team is seeing. >> Yeah, I think that between us being able to visualize the actual kill chain or attack path, and, you know, the SOAR market, I think, going towards this no-code, low-code, configurable SOAR versus coded SOAR, that's going to really be a game changer in giving security teams a force multiplier. So what I'll leave you with is this: a peacetime mindset of security is no longer sustainable. We really have to get out of checking the box and then waiting for the bad guys to show up to verify whether our security tools are working or not. And the reason we've got to do that quickly is that there are over a thousand companies that withdrew from the Russian economy over the past nine months due to the war in Ukraine. You should expect every one of them to be punished by the Russians for leaving, and punished from a cyber standpoint. And this is no longer about the financial extortion that is ransomware: this is about punishing and destroying companies. You can punish any one of these companies by going after them directly, or by going after their suppliers and their distributors. So suddenly your attack surface is no longer just your own enterprise: it's how you bring your goods to market and it's how you get your goods created. Because while I may not be able to disrupt your ability to harvest fruit, if I can get those trucks stuck at the border, I can increase spoilage and have the same effect. And what we should expect to see is this idea of cyber-enabled economic warfare, where if we issue a sanction, like banning the Russians from traveling, there is a cyber-enabled counterpunch: corrupt and destroy the American Airlines database. That is below the threshold of war; it's not going to trigger the 82nd Airborne to be mobilized, but it's going to achieve the right effect. Ban the sale of luxury goods? Disrupt the supply chain and create shortages. Ban Russian oil and gas? Attack refineries to cause a 10x spike in gas prices three days before the election. This is the future, and therefore I think what we have to do is shift towards a wartime mindset, which is: don't trust your security posture, verify it. See yourself through the eyes of the attacker. Build that incident response muscle memory, and drive better collaboration between the red and the blue teams, your suppliers and distributors, and whatever information sharing organization you have in place. And what was really valuable for me as a Splunk customer is that when a router crashes, at that moment you don't know if it's due to an IT administration problem or an attacker, and what you want to have are different people asking different questions of the same data. You want that integrated triage process, an IT lens on the problem and a security lens on the problem, and from there figuring out whether this is an IT workflow to execute or a security incident to execute. And you want to have all of that as an integrated team, an integrated process, an integrated technology stack. This is something I cared very deeply about as both a Splunk customer and a Splunk CTO, and that I see time and time again across the board. So Patrick, I'll leave you with the last word, the final three minutes here, and I don't see any open questions, so please take us home. >> Oh man, and you'd think we spent hours and hours prepping for this together; that last 40 seconds of your talk track is probably one of the things I'm most passionate about in this industry right now. I think NIST has done some really interesting work here around building cyber-resilient organizations, and that has really, I think, helped the industry see that incidents can come from adverse conditions, you know, stress, performance taxations in the infrastructure, service, or app layer, and they can come from malicious compromises, insider threats, external threat actors. And the more that we look at this from the perspective of a broader cyber resilience mission, in a wartime mindset, I think we're going to be much better off. And, as you talk about with operationally minded ISACs, information sharing and intelligence sharing become so important in these wartime situations. And, you know, we know not all ISACs are created equal, but we're also seeing a lot more ad hoc information sharing groups popping up. So look, I think you framed it really, really well. I love the concept of a wartime mindset, and I like the idea of applying a cyber resilience lens, like one more layer on top of that bottom-right cake, you know: I think the IT lens and the security lens roll up to this concept of cyber resilience, and I think NIST has done some great work there for us. >> Yeah, you're spot on, and that's going to be, I think, the next terrain that you're going to see vendors try to get after, but I think Splunk is best positioned to win. >> Okay, that's a wrap for this special CUBE presentation. You heard all about the global expansion of Horizon3.ai's partner program. Their partners have a unique opportunity to take advantage of their NodeZero product,
international go-to-market expansion, North America channel partnerships, and just overall relationships with companies like Splunk, to make things more comprehensive in this disruptive cybersecurity world we live in. Hope you enjoyed this program. All the videos are available on thecube.net, and check out Horizon3.ai for their pen test automation and, ultimately, the defense system they use for always testing the environment that you're in. Great innovative product, and I hope you enjoyed the program. Again, I'm John Furrier, host of theCUBE. Thanks for watching.

Published Date : Sep 28 2022


Jennifer Lee, Horizon3.ai | Horizon3.ai Partner Program Expands Internationally


 

(upbeat music) >> Welcome back everyone to theCUBE and Horizon3.ai special presentation. I'm John Furrier, host of theCUBE. We're here with Jennifer Lee, head of channel sales at Horizon3.ai. Jennifer, welcome to theCUBE, thanks for coming on. >> Great, well thank you for having me >> So big news around Horizon3.ai driving channel-first commitment, you guys are expanding the channel partner program to include all kinds of new rewards, incentives, training programs to help educate, you know, partners, really drive more recurring revenue, certainly cloud and cloud scale has done that. You got a great product that fits into that kind of channel model, great services you can wrap around it, good stuff. So let's get into it. What are you guys doing? What are you guys doing with this news? Why is this so important? >> Yeah, for sure. So, yeah, we, like you said, we recently expanded our channel partner program. The driving force behind it was really just to align our, like you said, our channel-first commitment and creating awareness around the importance of our partner ecosystems. So that's, it's really how we go to market, is through the channel. >> And a great international focus. I've talked with the CEO, you know, about the solution and he broke down all the action on why it's important on the product side, but why now on the go to market change? What's the why behind this big, this news on the channel? >> Yeah, for sure. So we are doing this now, really to align our business strategy, which is built on the concept of enabling our partners to create a high value, high margin business on top of our platform. And so we offer a solution called NodeZero. It provides autonomous pen testing as a service and it allows organizations to continuously verify their security posture. So our, we, our company vision, we have this tagline that states that our pen testing enables organizations to see themselves through the eyes of an attacker. And we use the, like the attacker's perspective to identify exploitable weaknesses and vulnerabilities. So we created this partner program from a perspective of the partner. So the partner's perspective, and we've built it through the eyes of our partner, right? So we're prioritizing really what the partner is looking for, and that will ensure, like, mutual success for us. >> Yeah, the partners always want to get in front of the customers and bring new stuff to them. Pen tests have traditionally been really expensive. And so bringing it down and in one, to a service level that's, one, affordable and has flexibility to it allows a lot of capabilities. So I imagine people are going to get excited by it. So I have to ask you about the program. What specifically are you guys doing? Can you share any details around what it means for the partners, what they get, what's in it for them? Can you just break down some of the mechanics and mechanisms or details? >> Yeah. Yep, so, you know, we're really looking to create business alignment. And like I said, establish mutual success with our partners, so we've got 2 key elements that we were really focused on that we bring to the partners. So the opportunity, the profit margin expansion is one of 'em, and a way for our partners to really differentiate themselves and stay relevant in the market. So we've restructured our discount model, really, you know, highlighting profitability and maximizing profitability. And this includes our deal registration. We've created a deal registration program.
We've increased discount for partners who take part in our partner certification trainings, and we've, we have some other partner incentives that we've created that's going to help out there. We've put this all, so we've recently gone live with our partner portal. It's a consolidated experience for our partners where they can access our sales tools, and we really view our partners as an extension of our sales and technical teams. And so we've extended all of our training material that we use internally, we've made it available to our partners through our partner portal. We've, I'm trying, I'm thinking now back, what else is in that partner portal here? We've got our partner certification information, so all the content that's delivered during that training can be found in the portal. We've got deal registration, co-branded marketing materials, pipeline management. And so this portal gives our partners a one-stop place to go to find event information. And then just really quickly on the second part of that, that I mentioned, is our technology really is really disruptive to the market. So, you know, like you said, autonomous pen testing, it's still, it's, well, it's still a relatively new topic for security practitioners and it's proving to be really disruptive. So that on top of just, well, recently we found an article by MarketsandMarkets that reports that the global pen testing market's really expanding. And so it's expected to grow to like 2.7 billion by 2027. So the market's there, right? The market's expanding, it's growing. And so for our partners, it just really allows them to grow their revenue across their customer base, expand their customer base, and offer this high profit margin while, you know, getting in early to market on this disruptive technology. >> Big market, a lot of opportunities to make some money. People love to put more margin on those deals, especially when you can bring a great solution that everyone knows is hard to do. So I think that's going to provide a lot of value. Is there a type of partner that you guys see emerging or you aligning with, you mentioned the alignment with the partners. I can see how that, the training and the incentives are all there. Sounds like it's all going well. Is there a type of partner that's resonating the most or is there categories of partners that can take advantage of this? >> Yeah, absolutely. So we work with all different kinds of partners. We work with our traditional resale partners. We're working with systems integrators. We have a really strong MSP, MSSP program. We've got consulting partners, and the consulting partners, especially the ones that offer pen test services, use us as, we act as, a force multiplier, just really offering them profit margin expansion opportunity there. We've got some technology partners that we really work with for co-sell opportunities. And then we've got our cloud partners. You had mentioned that earlier, and so we are in AWS Marketplace, our CPPO partners, we're part of the ISV Accelerate program. So we're doing a lot there with our cloud partners. And of course we go to market with distribution partners as well. >> Got to love the opportunity for more margin expansion. Every kind of partner wants to put more gross profit on their deals. Is there a certification involved, I have to ask? Is there like, do you get, do people get certified or is it just, you get trained? Is it self-paced training? Is it in person?
How are you guys doing the whole training, certification thing? Is that a requirement, or not? >> Yeah, absolutely. So we do offer a certification program and it's been very popular. This includes a seller's portion and an operator portion. And so this is at no cost to our partners, and we offer it both virtually, it's live, it's virtual but live, it's not self-paced. And we also have in-person, you know, sessions as well. And we also can customize these for any partners that have a large group of people. And we can just, we can do one in person or virtual just specifically for that partner. >> Well, any kind of incentive opportunities and marketing opportunities? Everyone loves to get the deals just kind of rolling in: leads, from what we can see out of early reportings. This looks like a hot product, price-wise, service-level-wise. What incentives do you guys start thinking about, and joint marketing? You mentioned co-sell earlier in pipeline, so I was kind of honing in on that piece. >> Sure, and yes, and then to follow along with our partner certification program, we do incentivize our partners there. If they have a certain number certified, their discount increases. So that's part of it. We have our deal registration program that increases discount as well. And then we do have some partner incentives that are wrapped around meeting setting and moving opportunities along to proof of value. >> Got to love the education driving value. I have to ask you, so you do, you've been around the industry, you've seen the channel relationships out there. You've seen companies, old school, new school, you know, Horizon3.ai is kind of like that new school: very cloud-specific, a lot of leverage with, well, you mentioned AWS and all the clouds. Why is the company so hot right now? Why did you join them? And what's, why are people attracted to this company? What's the attraction, what's the vibe? What do you see and what did you see in this company? >> Well, this is just, you know, like I said, it's very disruptive. It's really in high demand right now, just because it's new to market and a newer technology. So we can collaborate with a manual pen tester, we can, you know, we can allow our customers to run their pen tests with no specialty teams. And so, like, you know, like I said, our partners can actually build profitable businesses: they can use our product to increase their services revenue and build their business model, you know, around our services. >> What's interesting about the pen testing is that it's very expensive and time consuming. And the people who do them are very talented people that could be working on really bigger things in the- >> Absolutely. >> In the customers. So bringing this into the channel allows them, if you look at the price delta between a pen test and then what you guys are offering, I mean, that's a huge margin gap between the street price of, say, today's pen test and what you guys offer. When you show people that, do they say, too good to be true? I mean, what are some of the things that people say when you kind of show 'em that? Do they like, scratch their head, like, come on, what's the catch here? >> Right, so the cost savings is huge, is huge for us. And then also, you know, like I said, working as a force multiplier with a pen testing company that offers the services, so they can do their annual manual pen test that may be required around compliance regulations.
And then we can act as the continuous verification of their security, you know, that they can run weekly. And so it's just, you know, it's just an addition to what they're offering already and an expansion. >> So, Jennifer, thanks for coming on theCUBE, really appreciate you coming on, sharing the insights on the channel. What's next? What can we expect from the channel group? What are you thinking, what's going on? >> Right, so we're really looking to expand our channel footprint and very strategically, we've got some big plans for Horizon3.ai. >> Awesome, well, thanks for coming on. Really appreciate it, you're watching theCUBE, the leader in high tech enterprise coverage. (upbeat music)

Published Date : Sep 27 2022


Rainer Richter, Horizon3.ai | Horizon3.ai Partner Program Expands Internationally


 

(light music) >> Hello, and welcome to theCUBE's special presentation with Horizon3.ai, with Rainer Richter, Vice President of EMEA (Europe, Middle East and Africa) and Asia Pacific, APAC, for Horizon3.ai. Welcome to this special CUBE presentation. Thanks for joining us. >> Thank you for the invitation. >> So Horizon3.ai, driving global expansion, big international news with a partner-first approach. You guys are expanding internationally. Let's get into it. You guys are driving this newly expanded partner program to new heights. Tell us about it. What are you seeing in the momentum? Why the expansion? What's all the news about? >> Well, I would say in international, we have, I would say, a similar situation as in the US. There is a global shortage of well-educated penetration testers on the one hand. On the other side, we have a rising demand for network and infrastructure security. And with our approach of autonomous penetration testing, I believe we are totally on top of the game, especially as we are also now starting with an international instance. That means, for example, if a customer in Europe is using our service, NodeZero, he will be connected to a NodeZero instance which is located inside the European Union. And therefore, he doesn't have to worry about the conflict between the European GDPR regulations versus the US CLOUD Act. And I would say there we have a really good package for our partners, so that they can provide differentiators to their customers. >> You know, we've had great conversations here on theCUBE with the CEO and the founder of the company around the leverage of the cloud and how successful that's been for the company. And obviously, I can just connect the dots here, but I'd like you to weigh in more on how that translates into the go-to-market here, because you got great cloud scale with the security product you guys are having success with. Great leverage there, I'm seeing a lot of success there. What's the momentum on the channel partner program internationally? Why is it so important to you? Is it just the regional segmentation? Is it the economics? Why the momentum? >> Well, there are multiple issues. First of all, there is a rising demand for penetration testing. And don't forget that internationally, we have a much higher number, or percentage, of SMB and mid-market customers. These customers, typically, most of them didn't even have a pen test done once a year. So for them, pen testing was just too expensive. Now with our offering, together with our partners, we can provide different ways for customers to get autonomous pen testing done more than once a year, at even lower cost than they had with a traditional manual pen test. And that is because we have our Consulting PLUS package, which is typically for pen testers: they can go out and do a much faster, much quicker pen test at many customers, one after the other. So they can do more pen tests at a lower, more attractive price. On the other side, there are others, or even the same ones, who are providing NodeZero as an MSSP service. So they can go after SMB customers, saying, "Okay, you only have a couple of hundred IP addresses. No worries, we have the perfect package for you." And then you have, let's say, the mid-market, let's say the thousand-and-more-employee companies; then they might even have an annual subscription. Very traditional, but for all of them, it's all the same: the customer or the service provider doesn't need a piece of hardware.
They only need to install a small Docker container and that's it. And that makes it so smooth to go in and say, "Okay, Mr. Customer, we just put this virtual attacker into your network, and that's it, all the rest is done." And within three clicks, they can act like a pen tester with 20 years of experience. >> And that's going to be very channel-friendly and partner-friendly, I can almost imagine. So I have to ask you, and thank you for calling out that breakdown and segmentation. That was good, that was very helpful for me to understand, but I want to follow up, if you don't mind. What type of partners are you seeing the most traction with, and why? >> Well, I would say at the beginning you typically have the innovators, the early adopters, typically boutique-sized partners. They start because they are always looking for innovation; those are the ones who start in the beginning. So we have a wide range of partners, mostly managed by the owner of the company. They immediately understand, okay, there is the value, and they can change their offering. They're changing their offering in terms of penetration testing because they can do more pen tests, and they can then add other ones. Or we have the ones who offered pen test services but did not have their own pen testers, so they had to go out on the open market and source pen testing experts to get the pen test done at a particular customer. Now, with NodeZero, they're totally independent. They can go out and say, "Okay, Mr. Customer, here's the service. That's it, we turn it on, and within an hour you are up and running." >> Yeah, and those pen tests are usually expensive and hard to do. Now it's right in line with the sales delivery. Pretty interesting for a partner. >> Absolutely, but on the other hand, we are not killing the pen tester's business. With NodeZero, we are providing what I would call the foundational work: the foundational work of having ongoing penetration testing of the infrastructure and the operating systems. And the pen testers themselves can concentrate in the future on things like application pen testing, for example, services which we are not touching. So we are not killing the pen tester market. We are just taking away the ongoing, let's say, foundation work, call it that way. >> Yeah, yeah. That was one of my questions. I was going to ask, there's a lot of interest in this autonomous pen testing. One, because it's expensive to do, because the skills that are required are in demand and they're expensive. (chuckles) So you kind of cover the entry level and the blockers that are in there. I've seen people say to me, "This pen test becomes a blocker for getting things done." So there's been a lot of interest in the autonomous pen testing and for organizations to have that posture. And it's an overseas issue too, because now you have that ongoing thing. So can you explain that particular benefit for an organization, continuously verifying the organization's posture? >> Certainly. So I would say typically you have to do your patches, you have to bring in new versions of operating systems, of different services, of components, and they are always bringing new vulnerabilities. The difference here is that with NodeZero, we are telling the customer or the partner which vulnerabilities are the executable ones, because previously they might have had a vulnerability scanner. Such a vulnerability scanner brings up hundreds or even thousands of CVEs, but doesn't say anything about which of them are really executable. And then you need an expert digging into one CVE after the other, finding out: is it really executable, yes or no? That is where you need highly paid experts, of which we have a shortage. With NodeZero now, we can say, "Okay, we tell you exactly which ones you should work on, because those are the ones which are executable. We rank them according to risk level, by how easily they can be used." And then the good thing, in contrast to the traditional penetration test: they don't have to wait a year for the next pen test to find out if the fix was effective. They just run the next scan and say, "Yes, closed. Vulnerability is gone."
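Rainer's distinction between a raw scanner dump and proven-executable findings, and his "run the next scan and say: closed" verification loop, are easy to make concrete. A minimal sketch in Python; the `Finding` record and the ranking rule here are invented for illustration and are not NodeZero's actual data model:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str       # e.g. "CVE-2021-44228"
    cvss: float       # raw severity score from the scanner
    executable: bool  # hypothetical flag: did the pen test actually execute it?
    path_length: int  # hops the attacker needed to reach it (shorter = easier)

def prioritize(findings):
    """Keep only findings the pen test proved executable, then rank them
    by severity and by how easily they were reached."""
    proven = [f for f in findings if f.executable]
    return sorted(proven, key=lambda f: (-f.cvss, f.path_length))

def verify_fixed(before, after):
    """Diff two runs: a finding that no longer shows up as executable on
    the re-run is treated as closed."""
    return ({f.cve_id for f in before if f.executable}
            - {f.cve_id for f in after if f.executable})
```

The point of `verify_fixed` is exactly the workflow described above: fix, re-run, and treat a finding's disappearance on the next run as evidence the fix landed.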
>> The time is really valuable. And if you're doing any DevOps, cloud-native, you're always pushing new things. So ongoing pen testing is actually a benefit just in general, as a kind of hygiene. Really, really interesting solution. Bringing that global scale is going to be a new coverage area for us, for sure. I have to ask you, if you don't mind answering: what particular region are you focused on, or plan to target, for this next phase of growth? >> Well, at this moment we are concentrating on the countries inside the European Union, plus the United Kingdom. And of course, logically, I'm based in the Frankfurt area; that means we cover more or less the countries just around, the so-called DACH region: Germany, Switzerland, Austria, plus the Netherlands. But we also already have partners in the Nordics, like in Finland and Sweden, and we have partners already in the UK, and it's rapidly growing. For example, we are now starting some activities in Singapore and also in the Middle East area. Very important, depending on, let's say, the way business is done: currently we try to concentrate on those countries where we can have at least English as an accepted business language. >> Great. Is there any particular region you're having the most success with right now? Sounds like the European Union is kind of the first wave. What's the most- >> Yes, that's the first. Definitely, that's the first wave. And now, with also getting the European instance up and running, it's clearly our commitment to the market, saying, "Okay, we know there are certain dedicated requirements, and we take care of this." We are just launching it; we are building up this instance in the AWS service center here in Frankfurt, also with some dedicated hardware and internet in a data center in Frankfurt, where we have, with the DE-CIX, by the way, the highest internet interconnection bandwidth on the planet. So we have very short latency to wherever you are on the globe. >> That's a great callout benefit too. I was going to ask that. What are some of the benefits your partners are seeing in EMEA and Asia Pacific? >> Well, I would say the benefit for them is clearly that they can talk with customers and offer them penetration testing which those customers didn't even think about before, because penetration testing the traditional way was simply too expensive for them, too complex; the preparation time was too long; they didn't even have the capacity to support an external pen tester. Now, with this service, you can go in and even say, "Mr. Customer, we can do a test with you in a couple of minutes. We have installed a Docker container."
Within 10 minutes, we have the pen test started. That's it, and then we just wait. And I would say we are seeing so many aha moments then. On the partner side, when they see NodeZero working for the first time, they say, "Wow, that is great." And then they walk out to customers and show it, typically at the beginning to the friendly customers, who say, "Wow, that's great, I need that." And I would say the feedback from the partners is that this is a service where they do not have to evangelize the customer. Everybody understands penetration testing; I don't have to describe what it is. The customer understands immediately: "Yes, penetration testing, heard about that. I know I should do it, but too complex, too expensive." Now, for example, as an MSSP service provided by one of our partners, it's getting easy. >> Yeah, and there's great benefit there. I mean, I've got to say I'm a huge fan of what you guys are doing. I like this continuous automation. That's a major benefit to anyone doing DevOps or any kind of modern application development. This is just a godsend for them, this is really good. And like you said, the pen testers that are doing it were kind of coming down from their expertise to do things that should have been automated. They get to focus on the bigger-ticket items. That's a really big point. >> Exactly. So we free the pen testers for the higher-level elements of the penetration testing segment, and that is typically the application testing, which is currently far away from being automated. >> Yeah, and that's where the most critical workloads are, and I think this is the nice balance. Congratulations on the international expansion of the program, and thanks for coming on this special presentation. I really appreciate it. Thank you very much. >> You're welcome. >> Okay, this is theCUBE special presentation, you know, checking in on pen test automation, international expansion, Horizon3.ai. A really innovative solution. In our next segment, Chris Hill, Sector Head for Strategic Accounts, will discuss the power of Horizon3.ai and Splunk in action. You're watching theCUBE, the leader in high tech enterprise coverage. (steady music)
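As a side note on the "small Docker container" deployment model described in this interview: with the Docker SDK for Python, dropping such a virtual attacker into a network could look roughly like the sketch below. The image name, token variable, and container settings are assumptions for illustration; the vendor's documentation governs the real procedure.

```python
import docker  # pip install docker

# Connect to the local Docker daemon.
client = docker.from_env()

# Launch a hypothetical attacker container inside the network under test.
# "example/virtual-attacker:latest" and OP_TOKEN are placeholders, not a
# real image or credential; consult the vendor's docs for the real values.
container = client.containers.run(
    "example/virtual-attacker:latest",
    detach=True,                      # run in the background
    network_mode="host",              # give it the host's view of the network
    environment={"OP_TOKEN": "<token-from-portal>"},
    name="pentest-runner",
)

print(f"Started {container.name}; the pen test now runs on its own.")
```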

Published Date : Sep 27 2022


Chris Hill, Horizon3.ai | Horizon3.ai Partner Program Expands Internationally


 

>> Welcome back everyone, to theCUBE and Horizon3.ai special presentation. I'm John Furrier, host of theCUBE. We're with Chris Hill, Sector Head for Strategic Accounts and Federal at Horizon3.ai. Great innovative company. Chris, great to see you. Thanks for coming on theCUBE. >> Yeah, like I said, you know, great to meet you, John. Long time listener, first time caller. So excited to be here with you guys. >> Yeah, we were talking before camera. You had Splunk back in 2013, and I think 2012 was our first .conf. Yep. And boy, man, talk about being in the right place at the right time. Now we're at another inflection point, and Splunk continues to be relevant and continues to have that data driving security and that interplay. And your CEO, former CTO of Splunk as well, Snehal at Horizon3, who's been on before. Really innovative product you guys have, but you know, yeah, don't wait for a breach to find out if you're logging the right data. This is the topic of this thread. Splunk is very much part of this new international expansion announcement with you guys. Tell us, what are some of the challenges that you see where this is relevant for Splunk and Horizon3.ai as you guys expand NodeZero out internationally? >> Yeah, well, so you know, my role within Splunk was working with our most strategic accounts. And so I look back to 2013 and I think about the sales process, like working with our small customers. You know, it was still very siloed back then. I was selling to an IT team that was using us for IT operations, and we generally would even say, yeah, although we do security, we weren't really designed for it; we're a log management tool. And I'm sure you remember back then, John, we were sort of stepping into the security space, and in the public sector domain that I was in, security was 70% of what we did. When I look back at the transformation I was witnessing in that digital transformation, from 2019 to today, you look at how the IT team and the security teams have been forced to break down those barriers. They used to be siloed away and would not communicate; the security guys would be like, "Oh, this is my box, you're not allowed in." Today, you can't get away with that. And I think the value that we bring, and of course Splunk has been a huge leader in that space and continues to do innovation across the board, but what we're seeing in the space (I was talking with Patrick Coughlin, the SVP of security markets, about this) is that what we've been able to do with Splunk is build a purpose-built solution that allows Splunk to eat more data. So Splunk itself, as you well know, is an ingest engine, right? The great reason people bought it was that you could build these really fast dashboards and grab intelligence out of it, but without data it doesn't do anything, right? So how do you drive and how do you bring more data in? And most importantly, from a customer perspective, how do you bring the right data in? And so if you think about what NodeZero and what we're doing at Horizon3 is: sure, we do pen testing, but because we're an autonomous pen testing tool, we do it continuously. So this whole thought of, "Oh, crud, we've got a pen test coming up, it's gonna be six weeks, the wait..." You know, and everyone's gonna sit on their hands. "Call me back in two months, Chris, we'll talk to you then." Right? Not a real efficient way to test your environment. And shoot, we saw that with Uber this week, right? You know, and that's a case where we could have helped. >> Well, just real quick, explain the Uber thing, 'cause it was a contractor. Just give a quick highlight of what happened so you can connect the dots. >> Yeah, no problem. So there it was, I think it was one of those games where they would try and test an environment. And what the attacker did was keep calling the MFA guys, saying, "I need to reset my password, reset my password." And eventually the customer service guy said, "Okay, I'm resetting it." Once he had it reset and had bypassed the multifactor authentication, he was then able to get in and gain access to a partial part of the network. He then pivoted over to what I would assume was a VMware virtual machine that had notes with all of the credentials for logging into various domains. And so within minutes they had access. And that's the sort of stuff that we uncover underneath a lot of these tools. Like, you think about the cacophony of tools that are out there in a zero trust architecture, right? I'm gonna get a Zscaler, I'm gonna have Okta, I'm gonna have a Splunk, I'm gonna do this SOAR system. I mean, I don't mean to name names, and we're gonna have CrowdStrike or SentinelOne in there. It's just a cacophony of things that don't work together; they weren't designed to work together. And so we have seen so many times in our business, through our customer support and just working with customers when we do their pen test, that there will be 5,000 servers out there, three are misconfigured, and those three misconfigurations will create the open door. 'Cause remember, the hacker only needs to be right once; the defender needs to be right all the time. And that's the challenge. And so that's why I'm really passionate about what we're doing here at Horizon3. I see this digital transformation, migration, and security going on, and we're at the tip of the spear. It's why I joined Snehal in coming on this journey, and I'm just super excited about where the path's going and super excited about the relationship with Splunk. I can get into more details on some of the specifics of that, but you know... >> I mean, well, you're nailing it. I mean, we've been doing a lot of things around supercloud and this next-gen environment; we're calling it NextGen. You're really seeing DevOps, and obviously DevSecOps has already won; the IT role has moved to the developer. Shift left is an indicator of that; it's one of the many examples. Higher velocity code, software supply chain, you hear these things. That means that it is now in the developer's hands; it is replaced by the new ops, DataOps teams, and security, where there's a lot of horizontal thinking. To your point about access, there's no more perimeter. >> There is no perimeter. >> Huge, a hundred percent right, is really right on. I don't think it's one time, you know, to get in there. Once you're in, then you can hang out, move around, move laterally. Big problem. Okay, so we get that. Now, the challenges for these teams as they are transitioning organizationally, how do they figure out what to do? Okay, this is the next step.
They already have Splunk, so now they're kind of in transition while protecting for a hundred percent success rate. So how would you look at that and describe the challenges? What do they do? What are the teams facing with their data, and what's next? What action do they take? >> So let's use some vernacular that folks will know. If I think about DevSecOps, right, we both know what that means: I'm gonna build security into the app. But no one really talks about SecDevOps, right? How am I building security around the perimeter of what's going on inside my ecosystem, and what is it doing? And so what we're able to do with somebody like Splunk is pen test the entire environment from soup to nuts, right? I'm gonna test the endpoints all the way through. I'm gonna look for misconfigurations, I'm gonna look for exposed credentials, I'm gonna look for anything I can in the environment. Again, I'm gonna do it at light speed. And what we're doing for that SecDev space is asking: did you detect that we were in your environment? Did we alert Splunk or the SIEM that there's someone in the environment laterally moving around? More importantly, did they log us in their environment, and when did that activity trigger a log? Did they alert on us? And then finally, and most importantly for every CISO out there: did they stop us? And that's how we do this. When speaking with Snehal before, you know, we took Boyd's OODA loop, but we call it find, fix, verify. So what we do is we go in and act as the attacker, right? We act in a production environment, so we're a passive attacker, but we will go in uncredentialed and agentless. But we do have an assumed breach model, which means we're gonna put a Docker container in your environment and then we're going to fingerprint the environment. So we're gonna go out and do an asset survey. Now, that's not something that Splunk does super well, you know; so can Splunk see all the assets, and do the same assets marry up? We're gonna log all that data and then load it into the Splunk SIEM or the Splunk logging tools, just to have it in the enterprise, right? That's an immediate feature add that they've got. And then we've got the fix. So once we've completed our pen test, we are then gonna generate a report, and we can talk about these a little bit later. The reports will show an executive summary, the assets that we found (which would be your asset discovery aspect of that), and a fix report. And the fix report, I think, is probably the most important one. It will go down and identify what we did, how we did it, and then how to fix that. And then from that, the pen tester or the organization should fix those, then go back and run another test, and then validate, through a change-detection environment, to see: hey, did those fixes actually take place? And you know, Snehal, when he was the CTO of JSOC, shared with me a number of times: he's like, man, there would be 15 more items on next week's punch sheet that we didn't know about. And it has to do with how they were prioritizing the CVEs and whatnot, because they would take all CVEs as critical or non-critical. And it's like, we are able to create context in that environment that feeds better information into Splunk and whatnot. >> That was a lot. That brings up the efficiency for Splunk specifically, the teams out there. By the way, the burnout thing is real. I mean, this whole "I just finished my list and I got 15 more": the list just keeps growing. How did NodeZero specifically help Splunk teams be more efficient? That's the question I want to get at, because this seems like a very scalable way for Splunk customers and teams, service teams, to be more efficient. So the question is, how does NodeZero help make Splunk, specifically their service teams, more efficient? >> So today, in our early interactions with Splunk customers, what we've seen are five things, and I'll start with sort of identifying the blind spots, right? So, kind of what I just talked about with you: did we detect, did we log, did we alert, did they stop NodeZero, right? And to put that in layman's terms, like beating a fifth grader at this game: we can be the sparring partner for a Splunk Enterprise customer, a Splunk Essentials customer, someone using Splunk SOAR, or even just an enterprise Splunk customer that may be a small shop with three people and just wants to know where am I exposed. So by creating and generating these reports, and then having the API that actually generates the dashboard, they can take all of these events that we've logged and log them in. And then number two is: how do we prioritize those logs, right? How do we create visibility into logs that have critical impacts? And again, as I mentioned earlier, not all CVEs are high-impact, and also not all are low, right? So if you daisy-chain a bunch of low CVEs together, boom, I've got a mission-critical CVE that needs to be fixed now, such as a credential moving to an NT box that's got a text file with a bunch of passwords on it. That would be very bad. And then third would be verifying that you have all of the hosts. So one of the things that Splunk's not particularly great at, and they say so themselves, is asset discovery: what assets do we see, and what are they logging from them? And then, for every event that they are able to identify, one of the cool things that we can do is actually create this low-code, no-code environment. So Splunk SOAR customers can use Splunk to actually triage events and prioritize those events, or route them within it, to optimize the SOC team's time to triage any given event, obviously reducing MTTR. And then finally, I think one of the neatest things that you'll be seeing us develop is our ability to build glass tables. So behind me you'll see one of our triage events and how we build a Lockheed Martin kill chain on that with a glass table, which is very familiar to the Splunk community. We're going to have the ability, in the not-too-distant future, to allow people to search and observe on those IOCs. And if people aren't familiar with an IOC, it's an indicator of compromise: that's a vector that we want to drill into. And of course, who's better at drilling into data than Splunk? >> Yeah, this is critical; this is awesome synergy there. I mean, I can see a Splunk customer going, man, this just gives me so much more capability, actionability, and also real understanding. And I think this is what I wanna dig into, if you don't mind: understanding that critical impact, okay, is kind of where I see this coming. I got the data, data ingest, now data's data. But the question is what not to log, you know, where are things misconfigured? These are critical questions. So can you talk about what it means to understand critical impact? >> Yeah, so I think, you know, going back to those things that I just spoke about: a lot of those CVEs where you'll see low, low, low, and then you daisy-chain them together and you're suddenly like, oh, this is high now. But then, to your other point: if you're a Splunk customer (and I had several of them; I had one customer with terabytes of McAfee data being brought in), it was like, all right, there's a lot of other data that you probably also wanna bring in, but they could only afford, or wanted to do, certain data sets, because they didn't know how to prioritize or filter those data sets. And so we provide the opportunity to say, hey, these are the critical ones to bring in, but there are also the ones that you don't necessarily need to bring in, because a low CVE in this case really does mean a low CVE. An iLO server would be one, or the print server where your admin credentials are on, like, a printer. And so there will be credentials on that; that's something a hacker might go in to look at. So although the CVE on it is low, if you can daisy-chain it with something that's able to get into it, you might say, ah, that's high. And we would then, using our AI logic, potentially rank it and say that's a moderate. So we put it on a scale and we prioritize, versus a vulnerability scanner that's just gonna give you a bunch of CVEs and good luck. >> And translating that, if I can, and tell me if I'm wrong: that kind of speaks to that whole lateral movement challenge, right? Print server, great example: looks stupid, low end, who's gonna wanna deal with the print server? Oh, but it's connected into a critical system. There's a path. Is that kind of what you're getting at? >> Yeah. I used "daisy chain"; I think that's from the community I came from. But it's just lateral movement; it's exactly what they're doing. And those low-level, low-critical lateral movements are where the hackers are getting in, right? That's the beautiful thing about the Uber example: who would've thought? You know, I've got my multifactor authentication going, and a human made a mistake. We can't expect humans not to make mistakes; we're fallible, right? The reality is that once they were in the environment, they could have protected themselves by running enough pen tests to know that they had certain exposed credentials that would've stopped the breach. And they had not done that in their environment. And I'm not poking at them.
>> It's an interesting trend, though. I mean, it's obvious that sometimes those low-end items are also not protected well, so it's easy to get at them from a hacker standpoint, but also the people in charge of them can be phished easily, or spear-phished, because they're not paying attention. 'Cause they don't have to; no one ever told them, hey, be careful of what you collect. >> Yeah. For the community that I came from, John, that's exactly how they would meet you: at an international event, introduce themselves as a graduate student (these are nation-state actors), "Would you mind reviewing my thesis on such and such?" And I was at Adobe at the time, when I was working on this, and it starts off: you get the PDF, they open the PDF, and whatever that payload was launches. And I don't know if you remember, back in the 2002 to 2008 time frame there were a lot of issues around IP being stolen from the United States by nation states, and that's exactly how they did it. And John, that's- >> Or LinkedIn. "Hey, I've got a job for you; we wanna hire you at double the salary." Oh, I'm gonna click on that for sure. You know? Yeah. >> Right, exactly. Yeah. The one thing I would say to you is, when we look at, you know ('cause I think we did 10,000 pen tests last year, and it's probably over that now), we have this sort of top 10 list of ways we find people getting into the environment. The funniest thing is that only one of them is a CVE-related vulnerability. It's like 2% of the attacks are occurring through the CVEs, yet there's all that attention spent on that, and very little attention spent on this pen testing side, which is sort of this continuous threat monitoring space, this vulnerability space, where I think we play such an important role. And I'm so excited to be a part of the tip of the spear on this one. >> Yeah, I'm old enough to know the movie "Sneakers," which I love; you know, watching that movie, professional hackers are testing, testing, always testing the environment. I love this. I gotta ask you, as we kind of wrap up here, Chris, if you don't mind: the benefits to professional services teams from this alliance. Big news: Splunk and you guys work well together; we see that clearly. What other benefits do professional services teams see from the Splunk and Horizon3.ai alliance? >> So I think, for both of our partners, as we bring these guys together (and many of them already are the same partners), first off, the licensing model is probably one of the key areas where we really excel. If you're an end user, you can buy for the enterprise by the number of IP addresses you're using. But if you're a partner working with this, there are solution ways you can go: we'll license to MSSPs, and there's what that business model for our MSSPs looks like. But the unique thing that we do here is this Consulting Plus license. The Consulting Plus license allows somebody, from small to midsize up to some very large, you know, Fortune 100 consulting firms, to buy into a license where they can have unlimited access to as many IPs as they want, but you can only run one test at a time. And as you can imagine, when we're going in and hacking passwords, and checking hashes and decrypting hashes, that can take a while. So, for the right customer, it's a perfect tool. And I'm so excited about our ability to go to market with our partners, so that we understand how to not just sell to, or sell through, but how to sell with them, as a good vendor partner. I think that's one thing that we've done a really good job of bringing into market. >> Yeah, and I think also Splunk has had great success in how they've enabled partners and professional services. Absolutely. The services that layer on top of Splunk are multifold; tons of great benefits. So you guys vector right into that, ride that wave, frictionless. >> And the cool thing is that in one of our reports, which can be totally customized with someone else's logo, we're going to generate... You know, I used to work at another organization (it wasn't Splunk), but we did pen testing for customers, and my pen testers would come on site, do the engagement, and leave. And then, oh shoot, another sector was breached, and they'd call you back four weeks later. And so by August our entire pen testing team would be sold out, and it would be like, wow. And in March, maybe. And they'd be like, no, no, no, I've got a breach now. And then, when they do go in, they go through, do the pen test, and they hand over a PDF and pat you on the back and say, "There's where your problems are, you need to fix them." And the reality is that what we're gonna generate, completely autonomously, with no human interaction, is: we're gonna go and find all the permutations of anything we found, and the fix for those permutations, and then, once you've fixed everything, you just go back and run another pen test. It's, you know, for what people pay for one pen test, they could have a tool that does that. Patch on Tuesday, pen test on Wednesday, you know, triage throughout the week. >> Green, yellow, red. I wanted to see colors. Show me green; green is good, right? Not red. >> And what CIO doesn't want that dashboard, right? It is exactly it. And we can help bring that. I think that, you know, I'm really excited about helping drive this with the Splunk team, 'cause they get that. They understand that it's the green, yellow, red dashboard, and how do we help them find more green so that the other guys are in the red. >> Yeah. And get in the data and do the right thing and be efficient with how you use the data; know what to look at. So many things to pay attention to, you know. The combination of both, and then the go-to-market strategy: real brilliant. Congratulations, Chris. Thanks for coming on and sharing this news, with the detail around Splunk in action around the alliance. Thanks for sharing. >> John, my pleasure. Thanks. Look forward to seeing you soon. >> All right, great. We'll follow up and do another segment on DevOps and IT and security teams as the new ops, and supercloud, a bunch of other stuff. So thanks for coming on. And in our next segment, the CEO of Horizon3.ai will break down all the new news for us here on theCUBE. You're watching theCUBE, the leader in high tech enterprise coverage.
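The "did they detect, did they log, did they alert, did they stop us" exercise Chris describes earlier in this segment amounts to joining the attacker's command timeline against the SIEM's event stream. A rough sketch of that cross-check, with invented record shapes (neither Horizon3's nor Splunk's actual schemas):

```python
from datetime import datetime, timedelta

# Invented shapes: each attacker action and each SIEM event carries a
# technique label and a timestamp.
attacker_actions = [
    {"technique": "credential_dump", "ts": datetime(2022, 9, 27, 10, 0)},
    {"technique": "lateral_move",    "ts": datetime(2022, 9, 27, 10, 5)},
]
siem_events = [
    {"technique": "credential_dump", "ts": datetime(2022, 9, 27, 10, 1)},
]

def detection_gaps(actions, events, window=timedelta(minutes=15)):
    """Return attacker actions with no matching SIEM event inside the
    window, that is, the logging and alerting blind spots to chase down."""
    gaps = []
    for action in actions:
        seen = any(e["technique"] == action["technique"]
                   and abs(e["ts"] - action["ts"]) <= window
                   for e in events)
        if not seen:
            gaps.append(action)
    return gaps

for gap in detection_gaps(attacker_actions, siem_events):
    print(f"Blind spot: {gap['technique']} at {gap['ts']} was never logged")
```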

Published Date : Sep 27 2022


Snehal Antani, Horizon3.ai | AWS Startup Showcase S2 E4 | Cybersecurity


 

(upbeat music) >> Hello and welcome to theCUBE's presentation of the AWS Startup Showcase. This is season two, episode four of the ongoing series covering the exciting hot startups from the AWS ecosystem. Here we're talking about cybersecurity in this episode. I'm your host, John Furrier. Here we're excited to have a CUBE alumni who's back: Snehal Antani, the CEO and co-founder of Horizon3.ai, talking about exploitable weaknesses and vulnerabilities with autonomous pen testing. Snehal, it's great to see you. Thanks for coming back. >> Likewise, John. I think it's been about five years since you and I were on the stage together. And I've missed it, but I'm glad to see you again. >> Well, before we get into the showcase about your new startup, that's extremely successful, amazing margins, great product. You have a unique journey. We talked about this prior to you doing the journey, but you have a great story. You left the startup world to go into the world of self-defense, public defense, NSA. What group did you go to in the public sector before you became a private partner? >> My background: I'm a software engineer by education and trade. I started my career at IBM. I was a CIO at GE Capital, and I think we met once when I was there, and then I became the CTO of Splunk. And we spent a lot of time together when I was at Splunk. And at the end of 2017, I decided to take a break from industry and really kind of solve problems that I cared deeply about, solve problems that mattered. So I left industry and joined the US Special Operations community, and spent about four years in US Special Operations, where I grew more personally and professionally than in anything I'd ever done in my career. And exiting that time, I had met my co-founder in special ops, and then as he retired from the Air Force, we started Horizon3. >> So I want to bring that up, 'cause it's fascinating: not a lot of people in Silicon Valley and tech would do that. So thanks for the service. And I know everyone who's out there in the public sector knows that this is a really important time for the tactical edge in our military, a lot of things going on around the world. So thanks for the service, and a great journey. But there's a storyline with the company you're running now, that you started. I know you've got the jacket on there; I noticed it's got a little military vibe to it. Cybersecurity, I mean, every company's on their own now. They have to build their own militia. There is no government supporting companies anymore. There's no militia; no one's on the shores of our country defending the citizens and the companies. They've got to fend for themselves. So every company has to have their own military. >> In many ways, you don't see anti-aircraft rocket launchers on top of the JP Morgan building in New York City, because they rely on the government for air defense. But in cyber it's very different: every company is on their own to defend themselves. And what's interesting is this blend. If you look at the Ukraine-Russia war as an example, a thousand companies have decided to withdraw from the Russian economy, and those thousand companies we should expect to be in the ire of the Russian government and their proxies at some point. And it's not just those companies, but their suppliers, their distributors. And it's no longer about cyber attack for extortion through ransomware, but rather cyber attack for punishment and retaliation for leaving. Those companies are on their own to defend themselves.
There's no government that is dedicated to supporting them. So yeah, the reality is that cybersecurity is the burden of the organization. And your attack surface has expanded to be not just your footprint: if an adversary wants to punish you for leaving their economy, they can. If you're in agriculture, they could disrupt your ability to farm, or they could get all your fruit to spoil at the border, because they disrupted your distributors, and so on. So I think the entire world is going to change over the next 18 to 24 months, and I think this idea of cybersecurity is going to become truly a national problem, and a problem that breaks down any corporate barriers that we saw previously. >> What are some of the things that inspired you to start this company? And I love your approach of thinking about the customer, your customer, as defending themselves in the context of threats: really leaning into it, being ready and able to defend. Horizon3 has a lot of that kind of military thinking, for the good of the company. What's the motivation? Why this company? Why now? What's the value proposition? >> So there are two parts to why the company and why now. The first part was my observation when I left industry: my exposure to the military realm was watching "Jack Ryan" and "Tropic Thunder," and I didn't come from the military world. And so when I entered the special operations community, step one was to keep my mouth shut, learn, listen, and really observe and understand what made that community so impressive. And obviously it's the people. It's not about them being fast runners or great shooters or awesome swimmers; rather, they are learn-it-alls that can solve any problem as a team under pressure, which is the exact culture you want to have in any startup: early-stage companies are learn-it-alls that can solve any problem under pressure as a team. So I had this immediate advantage when we started Horizon3, where a third of Horizon3's employees came from that special operations community. So one is this awesome talent. But the second part: I remember this quote from a special operations commander, who said we use live rounds in training, because if we used fake rounds or rubber bullets, everyone would act like Medal of Honor winners. And the whole idea there is you train like you fight: you build that muscle memory for crisis and response and so on up front, so when you're in the thick of it, you already know how to react. And this aligns to a pain I had in industry: I had no idea whether I was secure until the bad guy showed up. I had no idea if I was fixing the right vulnerabilities, logging the right data in Splunk, or if my CrowdStrike EDR platform was configured correctly; I had to wait for the bad guys to show up. I didn't know if my people knew how to respond to an incident. So what I wanted to do was proactively verify my security posture and proactively harden my systems. And I needed to do that by continuously pen testing myself, continuously testing my security posture. And there just wasn't any way to do that where an IT admin or a network engineer could, in three clicks, have the power of a 20-year pen testing expert. And that was really what we set out to do: not build an autonomous pen testing platform for security people, but build it so that anybody can quickly test their security posture and then use the output to fix problems that truly matter. >> So the value proposition, if I get this right, is: there's a lot of companies out there doing pen tests, and I know, I hate pen tests.
They're like... 'cause you do DevOps, things change, you've got to do another pen test. So it makes sense to do autonomous pen testing. So congratulations on seeing that; it's obvious. But a lot of others have consulting tied to it, which seems like you need to train someone, and you guys are taking a different approach. >> Yeah. We actually, as a company, have zero consulting, zero professional services. And the whole idea is to build a true software-as-a-service offering where (in fact, we've got a video of a nine-year-old who, in three clicks, can run pen tests against themselves) and, because of that, you can wire pen tests into your DevOps toolchain; you can run multiple pen tests today. In fact, I've got customers running 40, 50 pen tests a month against their organization. And what that does is completely lower the barrier of entry for being able to verify your posture. If you have consulting: on average, when I was a CIO, it was at least a three-month lead time to schedule consultants to show up. And then they'd show up, they'd embarrass the security team, they'd make everyone look bad ('cause they're going to get in), and leave behind a report. And that report was almost identical to what they found last year, because the older that report gets, the more stale the data itself gets; the context changes, and so on. And then eventually you just don't even bother fixing things, or if you fix a problem, you don't have the skills to verify that it has been fixed. So I think that consulting-led model was acceptable when you viewed security as a compliance checkbox, where once a year was sufficient to meet your PCI requirements. But if you're really operating with a wartime mindset and you actually need to harden and secure your environment, you've got to be running pen tests regularly against your organization, from different perspectives: inside, outside, from the cloud, from work-from-home environments, and everything in between. >> So for the CISOs out there, for the CSOs and the CXOs, what's the pitch to them? Because I see your jacket that says "Horizon3 AI, trust but verify," but the "trust" is crossed out: just "verify." What's the product that you guys are offering, the service? Describe what it is and why they should look at it. >> Yeah, sure. So one, back when I was the CIO: don't tell me we're secure in PowerPoint. Show me we're secure right now. Show me we're secure again tomorrow, and then show me we're secure again next week, because my environment is constantly changing and the adversary always has a vote, and they're always evolving. And this whole idea of "show me we're secure": don't trust that your security tools are working; verify that they can detect and respond to and stifle an attack, and then verify tomorrow, verify next week. That's the big mind shift. Now, what we do is-- >> John: How do they respond to that, by the way? Like, they don't believe you at first, or what's the story? >> I think there's actually a very bifurcated response. There is still a decent chunk of CIOs and CSOs that have a "security is a compliance checkbox" mindset. My attitude with them is: I'm not going to convince you. You believe it's a checkbox; I'll just wait for you to get breached and sell to your replacement, 'cause you'll get fired. And in the meantime, I spend all my energy with those that actually care about proactively securing and hardening their environments.
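Picking up Snehal's earlier point about wiring pen tests into the DevOps toolchain: a pipeline gate might look like the sketch below. The REST endpoints, response fields, and token handling are hypothetical stand-ins for illustration, not Horizon3.ai's documented API.

```python
import sys
import time
import requests  # pip install requests

API = "https://pentest.example.com/api/v1"   # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <ci-token>"}

# Kick off a pen test against the freshly deployed environment.
run = requests.post(f"{API}/pentests", headers=HEADERS,
                    json={"target": "staging"}).json()

# Poll until the run finishes.
while True:
    status = requests.get(f"{API}/pentests/{run['id']}",
                          headers=HEADERS).json()
    if status["state"] in ("done", "failed"):
        break
    time.sleep(60)

# Fail the build if any proven-exploitable critical path was found.
criticals = [f for f in status["findings"] if f["severity"] == "critical"]
if criticals:
    print(f"{len(criticals)} critical attack path(s) found; blocking deploy")
    sys.exit(1)
print("Pen test clean; promoting build")
```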
>> That's true. People do get fired. Can you give an example of what you're saying about this environment being ready, proving that you're secure today, tomorrow, and a few weeks out? Give me an example. >> Yeah, I'll give you an actual customer example. There was a healthcare organization, and they had about 5,000 hosts in their environment, and they did everything right. They had Fortinet as their EDR platform; they had user behavior analytics in place that they had purchased and tuned. And when they ran a pen test, self-service, our product NodeZero immediately started to discover every host on the network. It then fingerprinted all those hosts and was able to get code execution on three machines. So it got code execution, dumped credentials, laterally maneuvered, and became a domain administrator; and in IT, if an attacker becomes a domain admin, they've got the keys to the kingdom. So at first the question was: how did the NodeZero pen test become domain admin? How did it get code execution? Fortinet should have detected and stopped it. Well, it turned out Fortinet was misconfigured on three boxes out of 5,000, and these guys had no idea; it's just automation that went wrong, and so on. And they would only have known they had misconfigured their EDR platform on three hosts if the attacker had shown up. The second question, though, was: why didn't they catch the lateral movement, which all their marketing brochures say they're supposed to catch? It turned out that that customer had purchased the wrong Fortinet modules. Again, they had no idea; they thought they were doing the right thing. So don't trust that just installing your tools is good enough. You've got to exercise and verify them. We've got tons of stories, from patches that didn't actually apply, to being able to find the AWS admin credentials on a local file system and then using them to log in and take over the cloud. In fact, I gave this talk at Black Hat on war stories from running 10,000 pen tests. And that's just the reality: you don't know that these tools and processes are working for you until the bad guys show up. >> The velocity's there. You can accelerate through logs, you know, from the days you've been there. This is now the threat: being, I won't say lazy, but just not careful, or just not thinking. >> Well, I'll give an example. We have a lot of customers that are both Horizon3 customers and Splunk customers. And what you'll see in their behavior is that they'll have Horizon3 up on one screen, and every single attacker command executed, with its timestamp, is up on that screen. And then they'll look at Splunk and say: hey, we were able to dump vCenter credentials from VMware products at this time, on this host; what did Splunk see, or what didn't it see? Why were no logs generated? And it turns out they had some logging blind spots. So what they'll actually do is run us to almost stimulate the defensive tools, and then see: what did the tools catch? What did they miss? What are those blind spots, and how do they fix them? >> So your product's called NodeZero. You mentioned that. Is it specifically a suite, a tool, a platform? How do people consume and engage with you guys? >> So the way that we work: the whole product is designed to be self-service.
So once again, while we have a sales team, the whole intent is that you don't have to talk to a sales rep to start using the product. You can log in right now, go to Horizon3.ai, run a trial, log in with your Google ID or your LinkedIn ID, and start running pen tests against your home or against your organization right now, without talking to anybody. The whole idea is self-service: run a pen test in three clicks, and get the power of that 20-year pen testing expert. And then what'll happen is NodeZero will execute, and then it'll provide you a full report of all of the different attack paths or sequences where we were able to become an admin in your environment. And then, for every attack path: here is the path, or the kill chain, with the proof of exploitation for every step along the way; here's exactly what you've got to do to fix it; and then, once you've fixed it, here's how you verify that you've truly fixed the problem. And this whole aha moment is: run us to find problems, fix them, then rerun us to verify that the problem has been fixed. >> Talk about the company. How many people do you have? Give us some stats. >> Yeah, so we started writing code in January of 2020, right before the pandemic hit, and then, about 10 months later, at the end of 2020, we launched the first version of the product. We've been in the market for about two and a half years total, from the start of the company till present. We've got 130 employees, and we've got more customers than we do employees, which is really cool. And we've seen our customers shift from running one pen test a year to 40, 50 pen tests. >> John: And it's full SaaS. >> The whole product is full SaaS. So no consulting, no pro serve. You run as often as you-- >> Who's downloading? Who's buying the product? >> What's amazing is we have customers in almost every sector now. So we're not overly rotated towards, like, healthcare or financial services. We've got state and local, K-through-12 education, state and local government, a number of healthcare companies, financial services, manufacturing; we've got organizations up to large enterprises. >> John: Security's diverse. >> It's very diverse. >> I mean, ransomware must be a big driver. Is that something that you're seeing a lot? >> It is. And the thing about ransomware is, if you peel back the outcome of ransomware, which is extortion: at the end of the day, what ransomware organizations, or criminals, or APTs will do is find out who all your employees are online. They will then figure out that, if you've got 7,000 employees, all it takes is one of them to have a bad password. And then attackers are going to credential-spray to find that one person with a bad password, or whose Netflix password that's on the dark web is also their same password to log in here, 'cause most people reuse. And from there they're most likely a domain user in your organization. When you log in, you probably have local admin on your laptop; if you're a Windows machine and I've got local admin on your laptop, I'm going to be able to dump credentials, get the admin credentials, and then start to laterally maneuver. Attackers don't have to hack in using zero-days like you see in the movies; often they're logging in with valid user IDs and passwords that they've found and collected from somewhere else. And then they maneuver by making a low plus a low equal a high. And the other thing: in financial services, we spend all of our time fixing critical vulnerabilities, and attackers know that. So they've adapted to finding ways to chain together low-priority vulnerabilities, misconfigurations, and dangerous defaults to become admin. So while we've over-rotated towards just fixing the highs and the criticals, attackers have adapted. And once again, they have a vote; they're always evolving their tactics.
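The password-reuse risk Snehal describes can be screened for proactively. One widely used technique, sketched here, is the k-anonymity range query against the public Pwned Passwords API; this illustrates the general idea and is not how Horizon3 harvests dark web credentials:

```python
import hashlib
import requests  # pip install requests

def times_breached(password: str) -> int:
    """k-anonymity check against the Pwned Passwords API: only the first
    five hex characters of the SHA-1 hash ever leave this machine."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}",
                        timeout=10)
    resp.raise_for_status()
    # Each response line is "SUFFIX:COUNT"; match our suffix locally.
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

# A password seen thousands of times in breach corpora is exactly the
# kind an attacker will try first in a credential spray.
print(times_breached("Winter2022!"))
```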
And the other thing, in financial services, we spend all of our time fixing critical vulnerabilities; attackers know that. So they've adapted to finding ways to chain together low-priority vulnerabilities and misconfigurations and dangerous defaults to become admin. So while we've over-rotated towards just fixing the highs and the criticals, attackers have adapted. And once again, they have a vote; they're always evolving their tactics. >> And how do you prevent that from happening? >> So we actually apply those same tactics. Rarely do we actually need a CVE to compromise your environment. We will harvest credentials, just like an attacker. We will find misconfigurations and dangerous defaults, just like an attacker. We will combine those together. We'll make use of exploitable vulnerabilities as appropriate and use that to compromise your environment. So in many ways, we've built a digital weapon, and the tactics we apply are the exact same tactics that are applied by the adversary. >> So you guys basically simulate hacking. >> We actually do the hacking. Simulate means there's a fakeness to it. >> So you guys do hack. >> We actually compromise. >> Like Sneakers the movie, that Sneakers movie, for the old folks like me. >> And in fact, that was my inspiration. I've had this idea for over a decade now, which is: I want to be able to look at anything, that laptop, this Wi-Fi network, gear in a hospital, or a truck driving by, and know I can figure out how to gain initial access, rip that environment apart, and be able to own it. >> Okay, Chuck, he's not allowed in the studio anymore. (laughs) No, seriously. Some people are exposed. I mean, some companies don't have anything. But there's always passwords. Most people have that argument: well, there's nothing to protect here, not a lot of sensitive data. How do you respond to that? Do you see that as kind of putting the head in the sand, or? >> Yeah, it's actually less "there's no sensitive data" and more "we've installed or applied multi-factor authentication, attackers can't get in now." Well, MFA does not apply to lower-level protocols. So I can find a user ID and password, log in through SMB, which isn't protected by multi-factor authentication, and still own your environment. So unfortunately, I think as a security industry, we've become very good at giving a false sense of security to organizations. >> John: Compliance drives that behavior. >> Compliance drives that. And what we need is, back to "don't tell me we're secure, show me." We've got to, I think, change that from trust-but-verify to getting rid of the trust piece of it: just verify. >> Okay, we've got a lot of CISOs and CSOs watching this showcase, looking at the hot startups. What's the message to the executives there? Should they become more leaning-in, more hawkish, if you will, to use the military term, on security? I mean, I heard one CISO say, security first, then compliance, 'cause compliance can make you complacent, and then you're unsecure at that point. >> I actually say that. I agree. One, definitely security is different from and more important than being compliant. I think there's another emerging concept, which is: I'd rather be defensible than secure. What I mean by that is, security is a point-in-time state. I am secure right now; I may not be secure tomorrow 'cause something's changed. But if I'm defensible, then what I have is that muscle memory to detect, respond to, and stifle an attack. And that's what's more important. Can I detect you?
How long did it take me to detect you? Can I stifle you from achieving your objective? How long did it take me to stifle you? What did you use to get in, to gain access? How long did that sit in my environment? How long did it take me to fix it? So on and so forth. But I think being defensible, and being able to rapidly adapt to changing tactics by the adversary, is more important. >> This is the evolution of how the red line never moved. You got the adversaries in our networks and our banks. Now they hang out and they wait. So everyone thinks they're secure. But when they start getting hacked, they're not really in a position to defend. The alarms go off. Where's the playbook? Team springs into action. I mean, you kind of get the visual there, but this is really the issue: being defensible means having your own, essentially, military for your company. >> Being defensible, I think, has two pieces. One is you've got to have this culture and process in place of training like you fight, because you want to build that incident response muscle memory ahead of time. You don't want to have to learn how to respond to an incident in the middle of the incident. So that is proactively verifying your posture, and continuous pen testing is critical there. The second part is having the actual fundamentals in place so you can detect and stifle as appropriate. And also, when you are continuously verifying your posture, you need to verify your entire posture, not just your test systems, which is what most people do. You have to be able to safely pen test your production systems, your cloud environments, your perimeter. You've got to assume that the bad guys are going to get in; once they're in, what can they do? So don't just say that my perimeter's secure and I'm good to go. It's the soft, squishy center that attackers are going to get into. And from there, can you detect them, and can you stop them? >> Snehal, take me through the use case. You've got me sold on this, I love this topic. Alright, pen test. What am I buying? Just pen test as a service? You mentioned dark web. Are you actually buying credentials online on behalf of the customer? What is the product? What am I buying, if I'm the CISO, from Horizon3? What's the service? What's the product? Be specific. >> So very specifically, and it starts with principles. The first principle is: when I was a buyer, I hated being nickel-and-dimed by vendors, where I had to buy 15 different modules in order to achieve an objective. Just give me one line item, make it super easy to buy, and don't nickel-and-dime me. Because I've spent time as a buyer, that has very much permeated throughout the company. So there is a single SKU from Horizon3. It is an annual subscription based on how big your environment is. And it is inclusive of on-prem internal pen tests, external pen tests, cloud attacks, work-from-home attacks, our ability to harvest credentials from the dark web and from open source sources, and being able to crack those credentials and compromise. All of that is included as a single SKU. All you get as a CISO is a single SKU, an annual subscription, and you can run as many pen tests as you want. Some customers still stick to maybe one pen test a quarter, but most customers shift when they realize there's no limit; we don't nickel-and-dime. They can run 10, 20, 30, 40 a month. >> Well, it's not nickel-and-dime in the sense that it's more like dollars and hundreds, because they know what to expect; it's classic cloud consumption.
They kind of know their environment. Can people try it? Let's just say I have a huge environment: I have a cloud, I have an on-premise private cloud. Can I dabble and set parameters around pricing? >> Yes, you can. So one is you can dabble and set parameters around scope. Manufacturing does this: do not touch the production line that's on at the moment. We've got a hospital that says every time they run a pen test, any machine that's actually connected to a patient must be excluded. So you can actually set the parameters for what's in scope and what's out of scope up front; and again, we're designed to be safe to run against production, so you can set the parameters for scope. You can set the parameters for cost if you want. But our recommendation is, I'd rather figure out what you can afford and let you test everything in your environment than try to squeeze every penny from you by only making you buy what you can afford as a smaller-- >> So the variable, if you will, is how much they spend, driven by the size of their environment and usage. >> Just the size of the environment. >> So it could be a big-ticket item for a CISO then. >> It could, if you're really large, but for the most part-- >> What's large? >> I mean, if you were Walmart, well, let me back up. What I heard is, global 10 companies spend anywhere from 50 to a hundred million dollars a year on security testing. So they're already spending a ton of money, but they're spending it on consultants that show up maybe a couple of times a year. Humans can't scale to test a million hosts in your environment. And so you're already spending that money; spend a fraction of that, use us, and run as much as you want. And that's really what it comes down to. >> John: All right. So what's the response from customers? >> What's really interesting is there are three use cases. The first is that SOC manager that is using us to verify that their security tools are actually working: their Splunk environment is logging the right data, it's integrating properly with CrowdStrike, it's integrating properly with their Active Directory services and their password policies. So the SOC manager is using us to verify the effectiveness of their security controls. The second use case is the IT director that is using us to proactively harden their systems. Did they install VMware correctly? Did they install their Cisco gear correctly? Are they patching right? And then the third are the companies that are lucky enough to have their own internal pen test and red teams, where they use us like a force multiplier. So if you've got 10 people on your red team and you still have a million IPs or hosts in your environment, you still don't have enough people for that coverage. So they'll use us to do recon at scale and attack at scale, and let the humans focus on the really juicy, hard stuff that humans are successful at. >> Love the product. Again, I'm trying to think about how I engage on the test. Are there pilots? Is there a demo version? >> There are free trials. So we do 30-day free trials. The output can actually be used to meet your SOC 2 requirements. So in many ways, you can just use us to get a free SOC 2 pen test report right now, if you want. Go to the website, log in for a free trial, you can log in with your Google ID or your LinkedIn ID, run a pen test against your organization, and use that to answer your PCI segmentation test requirements, your SOC 2 requirements. But you will be hooked. You will want to run us more often. And you'll get a Horizon3 tattoo.
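The scoping controls described earlier in this exchange (the untouchable production line, the patient-connected hospital machines) are easy to picture as configuration. Here is a minimal sketch of what such a scope definition might look like; the field names, CIDRs, and tags are illustrative assumptions, not Horizon3's actual schema:

```python
# Hypothetical scope definition for a self-service pen test. Everything
# here (field names, CIDRs, tags) is invented for illustration.
import ipaddress

pentest_scope = {
    "name": "quarterly-internal",
    "include_cidrs": ["10.0.0.0/16", "192.168.10.0/24"],
    "exclude_cidrs": ["10.0.50.0/24"],                     # live production line
    "exclude_tags": ["patient-connected", "ot-critical"],  # never touch these
    "max_parallel_hosts": 50,                              # throttle blast radius
}

def host_in_scope(host_ip, host_tags, scope):
    """Return True only if a host may be tested under the given scope."""
    ip = ipaddress.ip_address(host_ip)
    if any(ip in ipaddress.ip_network(c) for c in scope["exclude_cidrs"]):
        return False
    if set(host_tags) & set(scope["exclude_tags"]):
        return False
    return any(ip in ipaddress.ip_network(c) for c in scope["include_cidrs"])

print(host_in_scope("10.0.50.14", [], pentest_scope))                   # False: excluded subnet
print(host_in_scope("10.0.7.2", ["patient-connected"], pentest_scope))  # False: excluded tag
print(host_in_scope("10.0.7.2", [], pentest_scope))                     # True
```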
>> The first hit's free, as they say in the drug business. >> Yeah. >> I mean, so you're seeing that kind of response then, trial converts. >> Exactly. In fact, we have a very well-defined aha moment, which is: you run us to find, you fix, you run us to verify. We have a 100% technical win rate when our customers hit a find, fix, verify cycle; then it's about budget and urgency. But 100% technical win rate because of that aha moment, 'cause people realize, holy crap, I don't have to wait six months to verify that my problems have actually been fixed. I can just come in, click, verify: rerun the entire pen test, or rerun a very specific part of it against what I just patched in my environment. >> Congratulations, great stuff. You're here as part of the AWS Startup Showcase. So I have to ask, what's the relationship with AWS? You're on their cloud. What kind of actions are going on there? Is there secret sauce in there? What's going on? >> So one is we are AWS customers ourselves; our brain's command and control infrastructure and all of our analytics are running on AWS. It's amazing: when we run a pen test, we are able to use AWS, and we'll spin up a virtual private cloud just for that pen test. It's completely ephemeral; it's all Lambda functions and graph analytics and other techniques. When the pen test ends, there's a single-use Docker container that gets deleted from your environment, so you have nothing on-prem to deal with, and the entire virtual private cloud tears itself down. So at any given moment, if we're running 50 pen tests or a hundred pen tests, self-service, there are a hundred virtual private clouds being managed in AWS that are spinning up, running, and tearing down. It's an absolutely amazing underlying platform for us to make use of. Two is that many customers have hybrid environments. So they've got a cloud infrastructure, an Office 365 infrastructure, and an on-prem infrastructure. We are a single attack platform that can test all of that together. No one else can do it. And so the AWS customers, especially the AWS hybrid customers, are the ones that we do really well targeting. >> Got it. And that's awesome. And that's the benefit of cloud? >> Absolutely. And the AWS Marketplace. What's absolutely amazing is the competitive advantage being part of the marketplace has for us, because the simple thing is, my customers, if they already have dedicated cloud spend, they can use their approved cloud spend to pay for Horizon3 through the marketplace. So if you already have that budget dedicated, you can use it through the marketplace. The other is you've already got the vendor processes in place, so you can purchase through your existing AWS account. So what I love about AWS is one, the infrastructure we use for our own pen tests, two, the marketplace, and then three, the customers that span that hybrid cloud environment. That's right in our strike zone. >> Awesome. Well, congratulations. And thanks for being part of the showcase, and I'm sure your product is going to do very, very well. It's built for exactly what people want: self-service, get in, get the value quickly. >> No agents to install, no consultants to hire, safe to run against production. It's what I wanted. >> Great to see you, and congratulations, and what a great story. And we're going to keep following you. Thanks for coming on. >> Snehal: Phenomenal. Thank you, John. >> This is the AWS Startup Showcase. I'm John Furrier, your host.
This is season two, episode four on cybersecurity. Thanks for watching. (upbeat music)

Published Date : Sep 7 2022

Snehal Antani, Horizon3.ai | CUBE Conversation


 

(upbeat music) >> Hey, everyone. Welcome to theCUBE's presentation of the AWS Startup Showcase, season two, episode four. I'm your host, Lisa Martin. This topic is cybersecurity: detect and protect against threats. Very excited to welcome a CUBE alumni back to the program. Snehal Antani, the co-founder and CEO of Horizon3, joins me. Snehal, it's great to have you back in the studio. >> Likewise, thanks for the invite. >> Tell us a little bit about Horizon3. What is it that you guys do? You were founded in 2019, got a really interesting group of folks with interesting backgrounds, but talk to the audience about what it is that you guys are aiming to do. >> Sure, so maybe back to the problem we were trying to solve. So my background: I was an engineer by trade, I was a CIO at GE Capital, CTO at Splunk, and helped grow and scale that company. And then I took a break from industry to serve within the Department of Defense. And in every one of my jobs where I had cybersecurity in my responsibility, I suffered from the same problem: I had no idea whether I was secure, or whether we were fixing the right vulnerabilities, or logging the right data in Splunk, or whether our tools and processes and people worked together well, until the bad guys showed up. And by then it was too late. And what I wanted to do was proactively verify my security posture, make sure that my security tools were actually effective, and that my people knew how to respond to a breach before the bad guys were there. And so this whole idea of continuously verifying my security posture through security testing and pen testing became a passion project of mine for over a decade. And through my time in the DOD, I found the right group of early people that had offensive cyber experience, that had defensive cyber experience, that knew how to build and ship and deliver software at scale. And we came together at the end of 2019 to start Horizon3. >> Talk to me about the current threat landscape. We've seen so much change and flux in the last couple of years. Globally, we've seen the threat actors are just getting more and more sophisticated, as are the different types of attacks. What are you seeing kind of horizontally across the threat landscape? >> Yeah, the biggest thing is attackers don't have to hack in using zero-days like you see in the movies. Often they're able to just log in with valid credentials that they've collected through some mechanism. As an example, if I wanted to compromise a large organization, say United Airlines, one of the things that an attacker's going to do is go to LinkedIn and find all of the employees that work at United Airlines. Now you've got, say, 7,000 pilots. Of those pilots, you're going to figure out quickly that their user IDs, at least, are first name, last initial @united.com. Cool, now I have 7,000 potential logins, and all it takes is one of them to reuse a compromised password for their corporate email, and now you've got an initial user in the system. And most likely, that initial user has local admin on their laptop. And from there, an attacker can dump credentials and find a path to becoming a domain administrator. And what happens oftentimes is, security tools don't detect this, because it looks like valid behavior in the organization. And this is pretty common: this idea of collecting information on an organization or a target using open source intelligence, using a mix of credential spraying and kind of low-priority or low-severity exploitations or misconfigurations to get in.
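That OSINT step is simple enough to sketch. The snippet below (the names, domain, and extra patterns are all invented for illustration) derives the candidate login list an attacker would build from public employee names; a defender can generate the same list to audit their own password and MFA exposure:

```python
# Sketch of the OSINT step described above: derive likely corporate logins
# from public employee names so defenders can audit their own exposure.
employees = ["Jane Doe", "John Smith", "Maria Garcia"]  # e.g., scraped from LinkedIn

def candidate_logins(full_name, domain="united.com"):
    """First name + last initial @domain, plus two common variants."""
    parts = full_name.lower().split()
    first, last = parts[0], parts[-1]
    return [
        f"{first}{last[0]}@{domain}",  # janed@united.com (the pattern described above)
        f"{first}.{last}@{domain}",    # jane.doe@united.com
        f"{first[0]}{last}@{domain}",  # jdoe@united.com
    ]

targets = [login for name in employees for login in candidate_logins(name)]
print(f"{len(targets)} candidate logins from {len(employees)} public names")
```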
And then from there, systematically dumping credentials, reusing those credentials, and finding a path towards compromise. And less than 2% of CVEs are actually used in exploits. Most of the time, attackers chain together misconfigurations and bad product defaults. And so really the threat landscape is: attackers don't hack in, they log in. And organizations have to focus on getting the basics and fundamentals right first, before they layer on some magic easy button, some security AI tool, hoping that that's going to save their day. And that's what we found systemically across the board. >> So you're finding that across the board, probably pan-industry, that a lot of companies need to go back to basics. We talk about that a lot when we're talking about security. Why do you think that is? >> I think it's because, one, most organizations are barely treading water. When you look at the early rapid adopters of Horizon3's pen testing product, autonomous pen testing, the early adopters tended to be teams where the IT team and the security team were the same person, and they were barely treading water. And the hardest part of my job as a CIO was deciding what not to fix. Because the bottleneck in the security process is the actual capacity to fix problems. And so fiercely prioritizing issues becomes really important. But the tools and the processes don't focus on prioritizing what's exploitable; they prioritize by some arbitrary score from some arbitrary vulnerability scanner. And so we have a fundamental breakdown: the small group of folks with the expertise to fix problems tend to be the most overworked and tend to have the most noise to sift through. So they don't even have time to get to the basics. They're just barely treading water doing their day jobs, and they're often sacrificing their nights and weekends. All of us at Horizon3 were practitioners at one point in our career; we've all been called in on the weekend. So that's why what we did was fiercely focus on helping customers and users fix problems that truly matter, and allowing them to quickly re-attack and verify that the problems were truly fixed. >> So when it comes to today's threat landscape, what is it that organizations across the board should really be focused on? >> I think, systemically, what we see are bad password or credential policies, and least-privilege access management type processes not being well implemented. The domain user tends to be the local admin on the box; there's no ability to understand what is a valid login versus a malicious login. Those are some of the basics that we see systemically. And layered on that, it's very easy to, say, misconfigure vCenter, or misconfigure a piece of Cisco gear, or fail to install and monitor security observability tools on that HPE Integrated Lights Out server, and so on. What you'll find is that you've got people overworked that don't have the capacity to fix, you have the fundamentals or the basics not well implemented, and you have a whole bunch of blind spots in your security posture. And defenders have to be right every time; attackers only have to be right once. And so what we have is this asymmetric fight where attackers are very likely to get in, and we see this on the news all the time. >> So, and nobody, of course, wants to be the next headline, right?
Talk to me a little bit about autonomous pen testing as a service: what you guys are delivering, and what makes it unique and different from other tools that have been out that, as you're saying, clearly have gaps. >> Yeah. So first and foremost was the approach we took in building our product. What we set upfront was: our primary users should be IT administrators, network engineers, and that IT intern who, in three clicks, should have the power of a 20-year pen testing expert. So the whole idea was to empower and enable all of the fixers to find, fix, and verify their security weaknesses continuously. That was the design goal. Most other security products are designed for security people, but we already know they're task-saturated; they've got way too many tools under the belt. So first and foremost, we wanted to empower the fixers to fix problems that truly matter. The second part was, we wanted to do that without having to install credentialed agents all over the place, or write your own custom attack scripts, or do a bunch of configuration, and we wanted to make sure it's safe to run against production systems, so that you could test your entire attack surface: your on-prem, your cloud, your external perimeter. And this is where AWS comes in to be very important, especially for hybrid customers, where you've got a portion of your infrastructure on AWS and a portion on-prem, and you use Horizon3 to attack your complete attack surface. So we can start on-prem and we will find, say, the AWS credentials file that was mistakenly saved on a shared drive, and then reuse that to become admin in the cloud. AWS didn't do anything wrong, the cloud team didn't do anything wrong; a developer happened to share a password or save a password file locally. That's how attackers get in. So we can start from on-prem and show how we can compromise the cloud, start from the cloud and show how we can compromise on-prem, start from the outside and break in. And we're able to show that complete attack surface at scale for hybrid customers. >> So showing that complete attack surface, sort of from the eyes of the attacker? >> That's exactly right, because while blue teams, or the defenders, have a very specific view of their environment, you have to look at yourself through the eyes of the attacker to understand what your blind spots are, what they see that you don't see. And it's actually a discipline that is well entrenched within military culture. And that's also important for us as a company. About a third of Horizon3 served in US special operations or the intelligence community, and then DOD writ large. And a lot of that red team mindset, viewing yourself through the eyes of the attacker, and this idea of training like you fight and building muscle memory so you know how to react to the real incident when it occurs, is just ingrained in how we operate, and we disseminate that culture through all of our customers as well. >> And at this point in time, every business needs to assume an attacker's going to get in. >> That's right. There are way too many doors and windows in the organization. Attackers are going to get in, whether it's a single customer that reused their Netflix password for their corporate email, a patch that didn't get applied properly, or a new zero-day that just gets published. A piece of Cisco software that was misconfigured, not by anything more than the fact that it's easy to misconfigure these complex pieces of technology. Attackers are going to get in.
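The shared-drive credentials file mentioned above is one of the easiest footholds to sweep for yourself. Here is a minimal defender-side sketch (the share path and filename patterns are assumptions for illustration) that walks a mounted file share looking for stray AWS credential files:

```python
# Defender-side sweep for stray AWS credential files on a shared drive,
# the exact foothold described above. The mount path is hypothetical.
import os
import re

SHARE_ROOT = "/mnt/team-share"
KEY_PATTERN = re.compile(r"aws_secret_access_key\s*=", re.IGNORECASE)

def find_leaked_aws_creds(root):
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as f:
                    head = f.read(65536)  # only inspect the first 64 KB
            except OSError:
                continue  # unreadable file, skip it
            if name == "credentials" or KEY_PATTERN.search(head):
                hits.append(path)
    return hits

for path in find_leaked_aws_creds(SHARE_ROOT):
    print("possible AWS credentials:", path)
```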
And what we want to understand as customers is, once they're in, what could they do? Could they get to my crown jewel data and systems? Could they burrow in and prepare for a much more complicated attack down the road? If you assume breach, now you want to understand what they can get to, how quickly you can detect that breach, and what your ways are to stifle their ability to achieve their objectives. And culturally, we need a shift from talking about how secure I am to how defensible we are. Security is kind of a point-in-time state of your organization. Defensibility is how quickly you can adapt to the attacker to stifle their ability to achieve their objective. >> As things are changing constantly. >> That's exactly right. >> Yeah. Talk to me about a typical customer engagement. You mentioned folks treading water, and obviously there's the huge cybersecurity skills gap that we've been talking about for a long time now; that's another factor there. But when you're in customer conversations, who are you talking to? What are they typically coming to you for help with? >> Yeah. One big thing is, you're not going to win a customer by taking 'em out to steak dinners. Not anymore. The way we focus on our go-to-market and our sales motion is cultivating champions. At the end of the proof of concept, our internal measure of success is: is that person willing to get a Horizon3 tattoo? And you do that not through steak dinners, not through cool swag, not through marketing, but by letting your results do the talking. Now, part of those results should not require professional services or consulting. The whole experience should be self-service, frictionless, and insightful. And that really is how we've designed the product and designed the entire sales motion. So a prospect will learn or discover about us, whether it's through LinkedIn, through social, through the website, but often because one of their friends or colleagues heard about us, saw our results, and is advocating on our behalf when we're not in the room. From there, they're going to be able to self-service: just log in to our product through their LinkedIn ID, their Google ID. They can engage with a salesperson if they want to. They can run a pen test right there on the spot against their home without any interaction with a sales rep. Let those results do the talking, and use that as a starting point to engage in a more complicated proof of value. And the whole idea is, we don't charge for these; we let our results do the talking. And at the end, after they've run us to find problems, they've gone off and fixed those issues, and they've rerun us to verify that what they've fixed was properly fixed, then they're hooked. And we have a hundred percent technical win rate with our prospects when they hit that find-fix-verify cycle, which is awesome. And then we get the tattoo for them, or at least give them the template. And then we're off to the races. >> Sounds like you're making the process more simple. There's so much complexity behind it, but allowing users to be able to actually test it out themselves in a simplified way is huge. Allowing them to really focus on becoming defensible. >> That's exactly right. And the value is, especially now in security, there's so much hype and so much noise. There's a lot more time being spent self-discovering and researching technologies before you engage in a commercial discussion.
And so what we try to do is optimize that entire buying experience around enabling people to discover and research and learn. The other part, remember, is that offensive cyber and ethical hacking and so on are very mysterious and magical to most defenders. It's such a complicated topic, with many nuanced tools, that they don't have the time to understand or learn. And so if you surface the complexity of all those attacker tools, you're going to overwhelm a person that is already overwhelmed. So we needed the experience to be incredibly simple and to optimize that find-fix-verify aha moment. And once again, be frictionless and be insightful. >> Frictionless and insightful. Excellent. Talk to me about results, you mentioned results. We love talking about outcomes. When a customer goes through the PoC, the PoV that you talked about, what are some of the results that they see that hook them? >> Yeah, the biggest thing is, what attackers do today is they will find a low from machine one plus a low from machine two equals compromised domain. What they're doing is chaining together issues across multiple parts of your system or your organization to own your environment. What attackers don't do is find a critical vulnerability and exploit that single machine. It's always a chain, always multiple steps in the attack. And so the entire product and experience, and actually our underlying tech, is built around attack paths: here is the path, the attack path an attacker could have taken, that node zero, our product, took. Here is the proof of exploitation for every step along the way, so you know this isn't a false positive. In fact, you can copy and paste the attacker command from the product, rerun it yourself, and see it for yourself. And then here is exactly what you have to go fix, and why it's important to fix. So that path, proof, impact, and fix action is what the entire experience is focused on. And that is the results doing the talking, because remember, these folks are already overwhelmed; they're dealing with a lot of false positives. And if you tell them you've got another critical to fix, their immediate reaction is, "Nope, I don't believe you. This is a false positive. I've seen this plenty of times, that's not important." So you have to, in your product experience and sales process and adoption process, immediately cut through that defensive reflex. And it's path, proof, impact: here's exactly what you fix, here are the exact steps to fix it, and then you're off to the races. What I learned at Splunk was, you win hearts and minds of your users through an amazing product experience, amazing documentation. >> Yes. >> And a vibrant community of champions. Those are the three ingredients of success, and we've really made that the core of the product. So we win on our documentation, we win on the product experience, and we've cultivated a pretty awesome community. >> Talk to me about some of those champions. Is there a customer story that you think really articulates the value of node zero and what it is that you are doing? >> Yeah, I'll tell you a couple. Actually, I just gave this talk at Black Hat on war stories from running 10,000 pen tests. And I'll try to be gentle on the vendors that were involved here, but the reality is, you've got to be honest and authentic. So a customer, a healthcare organization, ran a pen test, and they were using a very well-known managed security services provider as their security operations team.
And so they initiate the pen test, and they wanted to audit the response time of their MSSP. So they run the pen test, and we're in and out. The whole pen test runs two hours or less. And in those two hours, the pen test compromises the domain, gets access to a bunch of sensitive data, laterally maneuvers, rips the entire environment apart. It took seven hours for the MSSP to send an email notification to the IT director that said, "Hey, we think something suspicious is going on." >> Wow. >> Seven hours! >> That's a long time. >> We were in and out in two; seven hours for notification. And the issue with that healthcare company was, they thought they had hired the right MSSP, but they had no way to audit their performance. And so we gave them the details and the ammunition to get services credits, to hold them accountable, and also to have a conversation about switching to somebody else. >> Accountability is key, especially when we're talking about the threat landscape and how it's evolving day to day. >> That's exactly right. Accountability of your suppliers or your security vendors, accountability of your people and your processes, and not having to wait for the bad guys to show up to test your posture. That's what's really important. Another story that's interesting: this customer did everything right. It was a banking customer, a large environment, and they had Fortinet installed as their EDR-type platform. And they initiate us as a pen test, and we're able to get code execution on one of their machines, and from there, laterally maneuver to become a domain administrator, which in security is a really big deal. So they came back and said, "This is absolutely not possible. Fortinet should have stopped that from occurring." And it turned out, because we showed the path and the proof and the impact, that Fortinet was misconfigured on three machines out of 5,000. And they had no idea. >> Wow. >> So it's one of those things: don't trust that your tools are working, don't trust your processes; verify them. Show me we're secure today. Show me we're secure tomorrow. And then show me again we're secure next week. Because my environment's constantly changing, and the adversary always has a vote. >> Right, the constant change and flux is a huge challenge for organizations, but those results clearly speak for themselves. You talked about speed in terms of time: how quickly can a customer deploy your technology, and identify and remedy problems in their environment? >> Yeah, this find-fix-verify aha moment, if you will. So traditionally, a customer would maybe run one or two pen tests a year. And then they'd go off and fix things. They have no capacity to test the fixes, 'cause they don't have the internal attack expertise. So they'd wait for the next pen test and figure out that they were still exploitable. Usually, this year's pen test results look identical to last year's. That isn't sustainable. So our customers shift from running one or two pen tests a year to 40 pen tests a month. And they're in this constant loop of finding, fixing, and verifying all of the weaknesses in their infrastructure. Remember, there's infrastructure pen testing, which is what we are really good at, and then there's application-level pen testing, which humans are much better at solving. >> Okay. >> So we focus on the infrastructure side, especially at scale. But can you imagine? 40 pen tests a month. They run from the perimeter, the inside from a specific subnet, from work-from-home machines, from the cloud.
And they're running these pen tests from many different perspectives to understand what the attacker sees from each of these locations in their organization, and how they systemically fix those issues. And what they look at is: how many critical problems were found, how quickly were they fixed, and how often do they recur. And that third metric is important, because you might fix something, but if it shows up again next week because you've got bad automation, you're in a rat race. So you want to look at that recurrence rate also. >> The recurrence rate. What are you most excited about? Obviously, the threat landscape continues to evolve, but what are you most excited about for the company, and what is it that you're able to help organizations across industries achieve in such tumultuous times? >> Yeah. One of the coolest things is, because I was a customer for many of these products, I despised threat intelligence products. I despised them, because they were basically generic blog posts, maybe delivered as a data feed to my Splunk environment or something. But they were always really generic, like, "You may have a problem here." And as a result, they weren't very actionable. So one of the really cool things that we do, it's just part of the product, is this concept of flares, flares that we shoot up. And the idea is not to cause angst or anxiety or panic, but rather, we look at threat intelligence, and then, because of all the insights we have from your pen test results, we connect those two together and say, "Your VMware Horizon instance at this IP is exploitable. You need to fix it as fast as possible, or it is very likely to be exploited. And here is the threat intelligence, in the news from CISA and elsewhere, that shows why it's important." So I think what is really cool is we're able to take threat intelligence out in the wild, combined with a very precise understanding of your environment, to give you very accurate and actionable starting points for what you need to go fix or test or verify. And when we do that, what we see is, imagine a ball bouncing: that is the first drop of the ball, and then that drives the first major pen test. And then they'll run all these subsequent pen tests to continue to find and fix and verify. And so what we see is this tremendous amount of excitement from customers that we're actually giving them accurate, detailed information to take advantage of, and we're not causing panic and we're not causing alert fatigue as a result.
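Those three numbers (criticals found, time to fix, recurrence) fall straight out of the run history. Here is a small sketch of how they might be computed; the result format below is a made-up stand-in, not node zero's actual output:

```python
# Compute found / time-to-fix / recurrence from a series of pen test runs.
# Each run is a date plus the set of weakness IDs proven exploitable.
from datetime import date

runs = [
    (date(2022, 6, 1), {"CRED-REUSE-01", "VCENTER-CFG-02", "SMB-NOMFA-03"}),
    (date(2022, 6, 8), {"VCENTER-CFG-02"}),   # two findings fixed
    (date(2022, 6, 15), {"SMB-NOMFA-03"}),    # one finding came back
]

first_seen, fixed_on, reoccurred = {}, {}, set()
for day, findings in runs:
    for f in findings:
        if f in fixed_on:
            reoccurred.add(f)       # was gone, showed up again: bad automation
        first_seen.setdefault(f, day)
    for f in list(first_seen):
        if f not in findings and f not in fixed_on:
            fixed_on[f] = day       # first run where it no longer appears

for f, seen in sorted(first_seen.items()):
    days = (fixed_on[f] - seen).days if f in fixed_on else None
    print(f, "| days to fix:", days, "| reoccurred:", f in reoccurred)
print(f"recurrence rate: {len(reoccurred)}/{len(first_seen)}")
```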
But then from there, we're able to upsell or increase value to our customers and start to compete and take out companies like Security Scorecard or RiskIQ and other companies like that, where there tended to be, I was a user of all those tools, a lot of garbage in, garbage out. Where you can't fill out a spreadsheet and get an accurate understanding of your risk posture. You need to look at your detailed pen test results over time and use that to accurately understand what are your hotspots, what's your recurrence rate and so on. And being able to tell that story to your auditors, to your regulators, to the board. And actually, it gives you a much more accurate way to show return on investment of your security spend also. >> Which is huge. So where can customers and those that are interested go to learn more? >> So horizonthree.ai is the website. That's a great starting point. We tend to very much rely on social channels, so LinkedIn in particular, to really get our stories out there. So finding us on LinkedIn is probably the next best thing to go do. And we're always at the major trade shows and events also. >> Excellent. Snehal, it's been a pleasure talking to you about Horizon3, what it is that you guys are doing, why, and the greater vision. We appreciate your insights and your time. >> Thank you, likewise. >> All right. For my guest, I'm Lisa Martin. We want to thank you for watching the AWS Startup Showcase. We'll see you next time. (gentle music)

Published Date : Aug 30 2022

The New Data Equation: Leveraging Cloud-Scale Data to Innovate in AI, CyberSecurity, & Life Sciences


 

>> Hi, I'm Natalie Ehrlich and welcome to the AWS startup showcase presented by The Cube. We have an amazing lineup of great guests who will share their insights on the latest innovations and solutions and leveraging cloud scale data in AI, security and life sciences. And now we're joined by the co-founders and co-CEOs of The Cube, Dave Vellante and John Furrier. Thank you gentlemen for joining me. >> Hey Natalie. >> Hey Natalie. >> How are you doing. Hey John. >> Well, I'd love to get your insights here, let's kick it off and what are you looking forward to. >> Dave, I think one of the things that we've been doing on the cube for 11 years is looking at the signal in the marketplace. I wanted to focus on this because AI is cutting across all industries. So we're seeing that with cybersecurity and life sciences, it's the first time we've had a life sciences track in the showcase, which is amazing because it shows that growth of the cloud scale. So I'm super excited by that. And I think that's going to showcase some new business models and of course the keynotes Ali Ghodsi, who's the CEO Data bricks pushing a billion dollars in revenue, clear validation that startups can go from zero to a billion dollars in revenues. So that should be really interesting. And of course the top venture capitalists coming in to talk about what the enterprise dynamics are all about. And what about you, Dave? >> You know, I thought it was an interesting mix and choice of startups. When you think about, you know, AI security and healthcare, and I've been thinking about that. Healthcare is the perfect industry, it is ripe for disruption. If you think about healthcare, you know, we all complain how expensive it is not transparent. There's a lot of discussion about, you know, can everybody have equal access that certainly with COVID the staff is burned out. There's a real divergence and diversity of the quality of healthcare and you know, it all results in patients not being happy, and I mean, if you had to do an NPS score on the patients and healthcare will be pretty low, John, you know. So when I think about, you know, AI and security in the context of healthcare in cloud, I ask questions like when are machines going to be able to better meet or make better diagnoses than doctors? And that's starting. I mean, it's really in assistance putting into play today. But I think when you think about cheaper and more accurate image analysis, when you think about the overall patient experience and trust and personalized medicine, self-service, you know, remote medicine that we've seen during the COVID pandemic, disease tracking, language translation, I mean, there are so many things where the cloud and data, and then it can help. And then at the end of it, it's all about, okay, how do I authenticate? How do I deal with privacy and personal information and tamper resistance? And that's where the security play comes in. So it's a very interesting mix of startups. I think that I'm really looking forward to hearing from... >> You know Natalie one of the things we talked about, some of these companies, Dave, we've talked a lot of these companies and to me the business model innovations that are coming out of two factors, the pandemic is kind of coming to an end so that accelerated and really showed who had the right stuff in my opinion. 
So you were either on the wrong side or right side of history when it comes to the pandemic and as we look back, as we come out of it with clear growth in certain companies and certain companies that adopted let's say cloud. And the other one is cloud scale. So the focus of these startup showcases is really to focus on how startups can align with the enterprise buyers and create the new kind of refactoring business models to go from, you know, a re-pivot or refactoring to more value. And the other thing that's interesting is that the business model isn't just for the good guys. If you look at say ransomware, for instance, the business model of hackers is gone completely amazing too. They're kicking it but in terms of revenue, they have their own they're well-funded machines on how to extort cash from companies. So there's a lot of security issues around the business model as well. So to me, the business model innovation with cloud-scale tech, with the pandemic forcing function, you've seen a lot of new kinds of decision-making in enterprises. You seeing how enterprise buyers are changing their decision criteria, and frankly their existing suppliers. So if you're an old guard supplier, you're going to be potentially out because if you didn't deliver during the pandemic, this is the issue that everyone's talking about. And it's kind of not publicized in the press very much, but this is actually happening. >> Well thank you both very much for joining me to kick off our AWS startup showcase. Now we're going to go to our very special guest Ali Ghodsi and John Furrier will seat with him for a fireside chat and Dave and I will see you on the other side. >> Okay, Ali great to see you. Thanks for coming on our AWS startup showcase, our second edition, second batch, season two, whatever we want to call it it's our second version of this new series where we feature, you know, the hottest startups coming out of the AWS ecosystem. And you're one of them, I've been there, but you're not a startup anymore, you're here pushing serious success on the revenue side and company. Congratulations and great to see you. >> Likewise. Thank you so much, good to see you again. >> You know I remember the first time we chatted on The Cube, you weren't really doing much software revenue, you were really talking about the new revolution in data. And you were all in on cloud. And I will say that from day one, you were always adamant that it was cloud cloud scale before anyone was really talking about it. And at that time it was on premises with Hadoop and those kinds of things. You saw that early. I remember that conversation, boy, that bet paid out great. So congratulations. >> Thank you so much. >> So I've got to ask you to jump right in. Enterprises are making decisions differently now and you are an example of that company that has gone from literally zero software sales to pushing a billion dollars as it's being reported. Certainly the success of Data bricks has been written about, but what's not written about is the success of how you guys align with the changing criteria for the enterprise customer. Take us through that and these companies here are aligning the same thing and enterprises want to change. They want to be in the right side of history. What's the success formula? >> Yeah. I mean, basically what we always did was look a few years out, the how can we help these enterprises, future proof, what they're trying to achieve, right? 
They have, you know, 30 years of legacy software and, you know, baggage, and they have compliance and regulations. How do we help them move to the future? So we try to identify those kinds of secular trends, the ones that you maybe see a little bit right now. Cloud was one of them, but it gets more and more and more. So we identified those, and there were sort of three or four of them that we kind of latched onto. And then every year that passes, we're a little bit more right, 'cause it's a secular trend in the market. And then eventually, it becomes a force that you can't kind of fight anymore. >> Yeah. And I just want to put in a plug for your Clubhouse talks with Andreessen Horowitz. You're always on Clubhouse talking about, you know, I won't say the killer instinct, but being a CEO in a time where there's so much change going on, you're constantly under pressure. It's a lonely job at the top, I know that, but you've made some good calls. What were some of the key moments that you can point to, where you were like, okay, the wave is coming in now, we'd better get on it? What were some of those key decisions? 'Cause a lot of these startups want to be in your position, and a lot of buyers want to take advantage of the technology that's coming. They've got to figure it out. What were some of those key inflection points for you? >> So if you're just listening to what everybody's saying, you're going to miss those trends. Then you're just going with the stream. So, John, you mentioned cloud. Cloud was a thing at the time; we thought it's going to be the thing that takes over everything. Today it's actually multi-cloud. So multi-cloud is a thing. More and more people are thinking, wow, I'm paying a lot to the cloud vendors; do I want to buy more from them, or do I want to have some optionality? So that's one. Two, open. They're worried about lock-in; you know, lock-in has happened for many, many decades. So they want open architectures, open source, open standards. So that's the second one that we bet on. The third one, which, you know, initially wasn't sort of super obvious, was AI and machine learning. Now it's super obvious; everybody's talking about it. But when we started, it was kind of called artificial intelligence, and it referred to robotics, and machine learning wasn't a term that people really knew about. Today, sort of everybody's doing machine learning and AI. So betting on those future trends, those secular trends as we call them, was super critical. >> And one of the things that I want to get your thoughts on is this idea of re-platforming versus refactoring. You see a lot being talked about in some of these. What does that even mean? People are trying to figure that out. Re-platforming I get, the cloud scale. But as you look at the cloud benefits, what do you say to customers out there, and enterprises that are trying to use the benefits of the cloud, say data, for instance? How could they be thinking about refactoring? And how can they make a better selection on suppliers? I mean, how do you know? It used to be an RFP: you deliver these speeds and feeds and you get selected. Now I think there's a little bit different science and methodology behind it. What's your thoughts on this refactoring as a buyer? What do I got to do? >> Well, I mean, let's start with what you said, RFPs and so on. Times have changed. Back in the day, you had to kind of sign up for something, and then much later you're going to get it. So then you have to go through this arduous process.
In the cloud, with the pay-as-you-go model, elasticity, and so on, you can kind of try your way to it. You can try before you buy. And you can use more and more; you can go gradually, you don't need to go all in and, you know, say we commit to $50 million, and then six months later find out that, wow, this stuff is shelfware, it doesn't work. So that's one thing that has changed, and it's beneficial. But the second thing is, don't just mimic what you had on prem in the cloud. That's what this refactoring is about. If you had, you know, a Hadoop data lake, now you're just going to have an S3 data lake. If you had an on-prem data warehouse, now you're just going to have a cloud data warehouse. You're just repeating what you did on prem in the cloud. Instead, architect for the future. And you know, for us, the most important thing that we say is that this lakehouse paradigm is a cloud-native way of organizing your data. That's different from how you would do things on premises. So think through what's the right way of doing it in the cloud. Don't just try to copy-paste what you had on premises into the cloud. >> It's interesting, one of the things that we're observing, and I'd love to get your reaction to this, Dave Vellante and I have been reporting on it, is that two personas in the enterprise are changing their organization. One is what I call IT ops, where there's an SRE role developing. And the data teams are being dismantled and kind of sprinkled through into other teams; there's this notion of data pipelining being part of workflows, not just the department. Are you seeing organizational shifts in how people are organizing their resources, their human resources, to take advantage of, say, the data problems that need to be solved with machine learning and whatnot at cloud scale? >> Yeah, absolutely. So you're right. SRE became a thing, lots of DevOps people. It was because when the cloud vendors launched their infrastructure as a service, to stitch all these things together and get it all working you needed a lot of DevOps people. But now things are maturing. So, you know, with vendors like Databricks and other multi-cloud vendors, you can actually get much higher-level services, where you don't necessarily need lots and lots of DevOps people who are themselves trying to stitch together lots of services to make this work. So that's one trend. But secondly, you're seeing data teams becoming sort of completely ubiquitous in these organizations. Before, it used to be you have one data team, and then we'll have data and AI and we'll be done. It's a one and done. But that's not how it works. That's not how Google, Facebook, Twitter did it; they had data throughout the organization. Every BU was empowered. It's sales, it's marketing, it's finance, it's engineering. So how do you embed all those data teams and make them actually run fast? And you know, there's this concept of a data mesh, which is super important, where you can actually decentralize and enable all these teams to focus on their domains and run super fast. And that's really enabled by this lakehouse paradigm in the cloud that we're talking about, where you're open, you're basing it on open standards, you have flexibility in the data types and how teams are going to store their data. So you provide a lot of that flexibility, but at the same time you have sort of centralized governance over it. So absolutely, things are changing in the market. >> Well, you're just the professor; the masterclass right here is amazing.
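Ali's lakehouse point is easier to see with a concrete sketch. What follows is a minimal, hypothetical example, written in plain PySpark against an invented S3 bucket rather than Databricks' own APIs: keep one copy of the data in an open format on object storage and query it in place, instead of copying it into a separate proprietary warehouse the way a lift-and-shift would.

```python
# Minimal lakehouse-style sketch. Bucket, path, and column names are
# hypothetical; assumes a Spark install with the S3 connector configured.
# The point: one open-format copy of the data, queried directly, instead of
# replicating the on-prem pattern of loading data into a separate warehouse.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lakehouse-sketch").getOrCreate()

# Read the open-format (Parquet) data in place on S3.
events = spark.read.parquet("s3a://example-lake/events/")  # hypothetical path

# Expose it to SQL; BI dashboards, data science, and ML all hit the same copy.
events.createOrReplaceTempView("events")

daily = spark.sql("""
    SELECT to_date(event_time) AS day, count(*) AS n
    FROM events
    GROUP BY to_date(event_time)
    ORDER BY day
""")
daily.show()
```

The column name `event_time` is assumed purely for illustration; the design choice being sketched is that every downstream team queries the same open copy of the data rather than maintaining its own silo.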
Thanks for sharing that insight. You're always on top of it, and that's why we have you on here. You're an amazing, great resource for the community. Ransomware is a huge problem; it's now the government's focus. We're being attacked and we don't know where it's coming from. These business models around cyber are expanding rapidly. There's real revenue behind it. There's a data problem. It's not just a security problem. So one of the themes in all of these startup showcases is that data is ubiquitous in the value propositions. One of them is ransomware. What are your thoughts on ransomware? Is it a data problem? Does cloud help? Some are saying that cloud's got better security against ransomware than, say, on premises. What's your vision of how this ransomware problem gets addressed, besides the government taking over? >> Yeah, that's a great question. Let me start by saying, you know, we're a data company, right? And if you say you're a data company, you might as well have said, we're a privacy company, right? It's like some people say, well, what do you think about privacy? Do you guys even do privacy? We're a data company. So yeah, we're a privacy company as well. Like, you can't talk about data without talking about privacy, with every customer, with every enterprise. So that's obviously top of mind for us. I do think that in the cloud, security is much better, because, you know, vendors like us are investing so many resources into security and making sure that we harden the infrastructure. And, you know, by actually having all of this infrastructure, we can monitor it, detect if something, you know, an attack, is happening, and we can immediately sort of stop it. That's different from on prem, where you have kind of the separated duties: the software vendor, which would have been us, doesn't really see what's happening in the data center. So, you know, there's an IT team that didn't develop the software that is responsible for the security. So I think things are much better now. I think we're much better set up. But of course, things like cryptocurrencies and so on are making it easier for people to sort of hide. There are decentralized networks. So, you know, the attackers are getting more and more sophisticated as well. So that's definitely something that's super important. It's super top of mind. We're all investing heavily into security and privacy because, you know, that's going to be super critical going forward. >> Yeah, we've got to move that red line, and figure that out and get more intelligence. The decentralized trend's not going away; it's going to be more of that, less of the centralized. But centralized does come into play with data. It's a mix; it's not mutually exclusive. And I'll get your thoughts on this, an architectural question: with, you know, 5G and the edge coming, Amazon's got Outposts and Wavelength, and you're seeing Mobile World Congress coming up this month. The focus on processing data at the edge is a huge issue, and enterprises are now going to be a commercial part of that. So architecture decisions are being made in enterprises right now, and this is a big issue. You mentioned multi-cloud, so tools versus platforms. Now I'm an enterprise buyer and there's no more RFPs. I've got all these new choices of startups and growing companies that are cloud native to choose from. I've got all kinds of new challenges and opportunities. How do I build my architecture so I don't foreclose a future opportunity?
>> Yeah, as I said, look, you're actually right. Cloud is more and more something that everybody's adopting, but at the same time, the edge is also more and more important, and so is the connectivity between those two and making sure that you can really do that efficiently. My ask of enterprises, and I think this is top of mind for all the enterprise architects, is: choose open, because that way you can avoid locking yourself in. So that's one thing that's really, really important. In the past, you know, all these vendors that locked you in, that you then tried to move off of, they were highly innovative back in the day. In the '80s and the '90s, they were the best companies. You gave them all your data and it was fantastic. But then, because you were locked in, they didn't need to innovate anymore. And you know, they focused on margins instead. And then over time the innovation stopped, and now you were kind of locked in. So I think openness is really important. I think preserving optionality with multi-cloud matters, because we see the different clouds have different strengths and weaknesses, and it changes over time. Early on, AWS was the only game in town. Then Azure showed up with much better security, Active Directory, and so on. Now Google with AI capabilities. Which one's going to win, which one's going to be better? Actually, probably all three are going to be around. So have that optionality so you can pick between the three. And then artificial intelligence; I think that's going to be the key to the future. You know, you asked about security earlier. That's how people detect zero-day attacks, right? You asked about the edge; same thing there, that's where the predictions are going to happen. So make sure that you invest in AI and artificial intelligence very early on, because it's not something you can just bolt on later, with a little data team somewhere, and now you have AI and it's one and done. >> All right. Great insight. I've got to ask you, the folks may or may not know, but you're a professor at Berkeley as well, and you've done a lot of great work. That's where you kind of came out of when Databricks was formed. And Berkeley basically invented distributed computing back in the '80s. I remember, I was breaking in when Unix was proprietary, when software wasn't open and you actually had to deal under the table to get code. Now it's all open. The internet now is distributed computing, with interconnects happening everywhere. I mean, the internet didn't break during the pandemic, which proves the benefit of the internet. And that's a positive. But as you start seeing edge, it's essentially distributed computing. So I've got to ask you, from a computer science standpoint: what do you see as the key learnings, or connect the dots for us on how this distributed model will work? I see hybrid clearly; hybrid cloud is clearly the operating model. But if you take it to the next level of distributed computing, what are some of the key things that you look for in the next five years as this starts to be completely interoperable? Obviously software is going to drive a lot of it. What's your vision on that?
And that was actually the mistake. Was that they were so early that people said that that stuff doesn't work. And then 20 years later you were invented. So I think 2009, Berkeley published just above the clouds saying the cloud is the future. At that time, most industry leaders said, that's just, you know, that doesn't work. Today, recently they published a research paper called, Sky Computing. So sky computing is what you get above the clouds, right? So we have the cloud as the future, the next level after that is the sky. That's one on top of them. That's what multi-cloud is. So that's a lot of the research at Berkeley, you know, into distributed systems labs is about this. And we're excited about that. Then we're one of the sky computing vendors out there. So I think you're going to see much more innovation happening at the sky level than at the compute level where you needed all those DevOps and SRE people to like, you know, build everything manually themselves. I can just see the memes now coming Ali, sky net, star track. You've got space too, by the way, space is another frontier that is seeing a lot of action going on because now the surface area of data with satellites is huge. So again, I know you guys are doing a lot of business with folks in that vertical where you starting to see real time data acquisition coming from these satellites. What's your take on the whole space as the, not the final frontier, but certainly as a new congested and contested space for, for data? >> Well, I mean, as a data vendor, we see a lot of, you know, alternative data sources coming in and people aren't using machine learning< AI to eat out signal out of the, you know, massive amounts of imagery that's coming out of these satellites. So that's actually a pretty common in FinTech, which is a vertical for us. And also sort of in the public sector, lots of, lots of, lots of satellites, imagery data that's coming. And these are massive volumes. I mean, it's like huge data sets and it's a super, super exciting what they can do. Like, you know, extracting signal from the satellite imagery is, and you know, being able to handle that amount of data, it's a challenge for all the companies that we work with. So we're excited about that too. I mean, definitely that's a trend that's going to continue. >> All right. I'm super excited for you. And thanks for coming on The Cube here for our keynote. I got to ask you a final question. As you think about the future, I see your company has achieved great success in a very short time, and again, you guys done the work, I've been following your company as you know. We've been been breaking that Data bricks story for a long time. I've been excited by it, but now what's changed. You got to start thinking about the next 20 miles stair when you look at, you know, the sky computing, you're thinking about these new architectures. As the CEO, your job is to one, not run out of money which you don't have to worry about that anymore, so hiring. And then, you got to figure out that next 20 miles stair as a company. What's that going on in your mind? Take us through your mindset of what's next. And what do you see out in that landscape? >> Yeah, so what I mentioned around Sky company optionality around multi-cloud, you're going to see a lot of capabilities around that. Like how do you get multi-cloud disaster recovery? How do you leverage the best of all the clouds while at the same time not having to just pick one? 
So there's a lot of innovation there that, you know, we haven't announced yet, but you're going to see a lot of it over the next many years: things that you can do when you have that optionality across the different clouds. And the second thing that's really exciting for us is bringing AI to the masses, democratizing data and AI. So how can you actually apply machine learning to machine learning? How can you automate machine learning? Today machine learning is still quite complicated and pretty advanced. It's not going to be that way 10 years from now. It's going to be very simple. Everybody's going to have it at their fingertips. So how do we apply machine learning to machine learning? It's called AutoML, automated, you know, machine learning. So that's an area, and it's not something that's done yet, right? But the goal is to eventually be able to automate away the whole machine learning engineer and the machine learning data scientist altogether. >> You know, what's really fun talking with you is that, you know, for years we've been talking about this inside the ropes, inside the industry, around the future. Now people are starting to get some visibility; the pandemic forced that. You're seeing the bad projects being exposed. It's like the tide pulled out and you see all the scabs, the bad projects that were justified by old guard technologies. If you get it right, you're on a good wave. And this is clearly what we're seeing, and you guys are an example of that. So as enterprises realize this, that they're going to have to double down on the right projects and probably trash the bad projects, new criteria: how should people be thinking about buying? Because again, we talked about the RFP before. I want to kind of circle back, because this is something that people are trying to figure out. You're seeing, you know, organic, freemium models come in as cloud scale becomes the advantage, and the lock-in frankly seems to be the value proposition. The more value you provide, the more lock-in you get. Which sounds like the way it should be, versus proprietary, you know, protocols. The protocol is value. How should enterprises organize their teams? Is it end-to-end workflows? And how should they evaluate the criteria for these technologies that they want to buy? >> Yeah, that's a great question. So, you know, it's very simple: try to future-proof your decision-making. Make sure that whatever you're doing is not locking you in. So whatever decision you're making, ask what if the world changes in five years; make sure that if you're making a mistake now, it's not going to bite you about five years later. So how do you do that? Well, open source is great. If you're leveraging open source, you can try it out already. You don't even need to talk to any vendor. Your teams can already download it, try it out, and get some value out of it. If you're in the cloud, with these pay-as-you-go models, you don't have to do a big RFP and commit big. You can try it, pay the vendor as you go, $10, $15. It doesn't need to be a million-dollar contract, and you slowly grow as it provides value. And then make sure that you're not just locking yourself in to one cloud or, you know, one particular vendor. As much as possible preserve your optionality, because then it's not a one-way door. If it turns out later you want to do something else, you can, you know, pick other things as well. You're not locked in. So that's what I would say.
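To ground Ali's AutoML point: even a simple automated hyperparameter search captures the idea of applying automation to the machine learning workflow itself. The sketch below uses generic scikit-learn on synthetic data; it illustrates the concept only and is not Databricks' AutoML product.

```python
# Toy sketch of the AutoML idea: automate the model-tuning loop instead of
# having an engineer hand-pick hyperparameters. Generic scikit-learn,
# synthetic data; real AutoML systems also search model families, features,
# and preprocessing, but the automation principle is the same.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "automated" part: search the configuration space instead of guessing.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100, 200], "max_depth": [4, 8, None]},
    cv=3,
)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("holdout accuracy:", search.best_estimator_.score(X_test, y_test))
```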
Keep that top of mind: you're not locking yourself into a particular decision that you made today, that you might regret in five years. >> I really appreciate you coming on and sharing your insights with our community and theCUBE. And as always, great to see you. I really enjoy your Clubhouse talks, and I really appreciate how you give back to the community. And I want to thank you for coming on and taking the time with us today. >> Thanks John, always appreciate talking to you. >> Okay, Ali Ghodsi, CEO of Databricks, a success story that proves the validation of cloud scale, of being open and creating value. Value is the new lock-in. So Natalie, back to you for continuing coverage. >> That was a terrific interview, John, but I'd love to get Dave's insights first. What were your takeaways, Dave? >> Well, if we had more time I'd tell you how Databricks got to where they are today, but I'll say this: the most important thing to me that Ali said was he conveyed a very clear understanding of what data companies are getting right. He talked about four things. There's not one data team, there's many data teams. And he talked about data being decentralized, and data having to have context, and that context lives in the business. He said, look, think about it: the way the data companies got it right, they've got data teams in sales and marketing and finance and engineering. They all have their own data and data teams. And he referred to that as a data mesh. That's a term that Zhamak Dehghani coined, and the data warehouse or the data lake is merely a node in that global mesh. The mesh is discoverable; he talked about federated governance. And Databricks, they're breaking the model of shoving everything into a single repository and trying to make that the so-called single version of the truth. Rather, what they're doing, which is right on, is putting data in the hands of the business owners. And that's what true data companies do. And the last thing he talked about was sky computing, which I loved. It's that future layer, we talked about multi-cloud a lot, that abstracts the underlying complexity of the technical details of the cloud and creates additional value on top. I always say that the cloud players like Amazon have given the world the gift of the 100 billion dollars a year they spend in CapEx. Thank you. Now we're going to innovate on top of it. Yeah. And I think the refactoring... >> How about you, John? >> That was great insight and I totally agree. The refactoring piece too was key; he brought that home. But to me, with what Ali shared there, and why he's been open and sharing a lot of his insights with the community, what he's not saying, 'cause he's humble and polite, is that they cracked the code on the enterprise, Dave. And to Dave's point, that's exactly why they did it: they saw an opportunity to make it easier. At that time Hadoop was the rage, and they just made it easier. They were smart, they made good bets, they had a good formula, and they cracked the code with the enterprise. They brought it in and they brought value. And see, that's the key to the cloud, as Dave pointed out. You replatform with the cloud, then you refactor. And I think he pointed out the multi-cloud piece, and that really teases out the whole future and landscape, which is essentially distributed computing. And I think, you know, companies are starting to figure that out with hybrid, and this on-premises piece, and now the super edge, I call it, with 5G coming. So it's just pretty incredible. >> Yeah.
Databricks' IPO is coming, and people should know: I mean, they created Spark, as you know, John, and what everybody thought they were going to do is mimic Red Hat and sell subscriptions and support. They didn't; they developed a managed service and they embedded AI tools to simplify data science. So to your point, enterprises could buy instead of build. We know this: enterprises will spend money to make things simpler. They don't have the resources, and so what they got right was really embedding that, building a managed service, not mimicking the Red Hat model, but actually creating a new value layer there. And that's a big part of their success. >> If I could just add one thing, Natalie: what Dave is saying is really right on. And on the other side of the equation, as an enterprise buyer, it used to be that you had to be a known company, get PR, fill out RFPs, and meet all the speeds and feeds. It's like going to the airport to get a swab test, and get a COVID test, and all kinds of mechanisms to, like, block you and filter you. Most of the biggest success stories that have created the most value for enterprises have been the companies that nobody understood. And Andy Jassy's famous quote of, you know, being misunderstood is actually a good thing. Databricks was very misunderstood at the beginning, and no one really knew who they were, but they did it right. And so, enterprise buyers out there: don't be afraid to test the startups, because you know the next Databricks is out there. And I think that's where I see the psychology changing from the old IT buyers, Dave. It's like, okay, let's test this company. And there's plenty of ways to do that. He illuminated those: freemium, small pilots; you don't need to go in on these big things. So I think that is going to be a shift in how companies evaluate startups. >> Yeah. Think about it this way: why should the large banks and insurance companies and big manufacturers and pharma companies and governments burn resources managing containers and figuring out data science tools, if they can just tap into solutions like Databricks, which is an AI platform in the cloud, and let the experts manage all that stuff? Think about how much money and time that saves enterprises. >> Yeah, I mean, we've got 15 companies we're showcasing in this batch, this season, whatever we're going to call it. They're awesome, right? And the next 15 will be the same. And these companies could be the next billion-dollar revenue generators, because the cloud enables that, Dave. I think that's the exciting part. >> Well, thank you both so much for these insights. Really appreciate it. The AWS Startup Showcase highlights the innovation that helps startups succeed, and no one knows that better than our very next guest, Jeff Barr. Welcome to the show, and I will send this interview now to Dave and John and see you in just a bit. >> Okay, hey Jeff, great to see you. Thanks for coming on again. >> Great to be back. >> So this is a regular community segment with Jeff Barr, who's a legend in the industry. Everyone knows your name. Everyone knows that. Congratulations on your recent blog posts, which we have been reading. Tons of news; I want to get your update, because 5G has been all over the news and Mobile World Congress is right around the corner. I know Bill Vass was a keynote out there, a virtual keynote. There's a lot of Amazon discussion around the edge with Wavelength. Specifically, this is the Outposts piece.
And I know there is news I want to get to, but top of mind is the massive Amazon expansion, and the cloud is going to the edge; it's here. What's up with Wavelength? Take us through the, I call it the power edge, the super edge. >> Well, I'm really excited about this, mostly because it gives a lot more choice and flexibility and options to our customers. This idea of Wavelength, which we announced quite some time ago, at least quite some time ago if we think in cloud years: we announced that we would be working with 5G providers all over the world to basically put AWS in the telecom providers' data centers or telecom centers, so that as their customers build apps, those apps would take advantage of the low latency, the high bandwidth, the reliability of 5G, and be able to get to some compute and storage services that are incredibly close, geographically and latency-wise. That is just going to give customers this new power to say, well, what are the cool things we can build? >> Do you see any correlation between Wavelength and some of the early Amazon services? Because to me, my gut feels like there's so much headroom there. I mean, I was just riffing on the notion of low-latency packets. I mean, just think about the applications: gaming and VR, and metaverse kind of cool stuff like that, where having the edge be that close, there's so much power there. It just feels like a new, it feels like a new AWS. I mean, what's your take? You've seen the evolution and the growth of a lot of the key services, like EC2 and S3. >> So welcome to my life. And so to me, the way I always think about this is, it's like when I go to a home improvement store and I wander through the aisles, and I often wander through with no particular thing that I actually need, but I just go there and say, wow, they've got this, and they've got this, and they've got this other interesting thing. And I just let my creativity run wild. And instead of trying to solve a problem, I'm saying, well, if I had these different parts, what could I actually build with them? And I really think that with this breadth of different services and locations and options and communication technologies, I suspect a lot of our customers and customers-to-be are in this same mode, where they're saying, I've got all this awesomeness at my fingertips, what might I be able to do with it? >> It reminds me of when Fry's was around in Palo Alto. That store is no longer here, but back in the day when it was good, you'd go in and just kind of spend hours, and then the next thing you know, you'd built a computer. Like, what? I just came in here to get some cables. Now I've got a motherboard. >> I clearly remember Fry's, and before that there was the Weird Stuff Warehouse, which was another really cool place to hang out, if you remember that. >> Yeah, I do. >> I wonder if I could jump in. You guys are talking about the edge, and Jeff, I wanted to ask you about something that I think people are starting to really understand and appreciate: what you did with the Annapurna acquisition, what you've done with Nitro and Graviton, really driving costs down, driving performance up. I mean, there's like a compute renaissance. And I wonder if you could talk about the importance of that at the edge, because it's got to be low power, it has to be low cost. You've got to be doing processing at the edge. What's your take on how that's evolving?
>> Certainly. So you're totally right that we started working with, and then ultimately acquired, Annapurna Labs in Israel a couple of years ago. I've worked directly with those folks, and it's really awesome to see what they've been able to do. Just really saying, let's look at all of these different aspects of building the cloud that were once effectively somewhat software intensive, and ask, where does it make sense to actually design, build, fabricate, and deploy custom silicon? So from booting up the system, to doing all kinds of additional security checks, to running local IO devices, to running NVMe as fast as possible to support EBS. Each of those things has been a contributing factor to not just the power of the hardware itself, but what I'm seeing, and have seen for the last probably two or three years at this point, is that the pace of innovation on instance types just continues to get faster and faster. And it's not just cranking out new instance types because we can; it's because our awesomely diverse base of customers keeps coming to us and saying, well, we're happy with what we have so far, but here's this really interesting new use case, and we need a different ratio of memory to CPU, or we need more cores relative to the amount of memory, or we need a lot of IO bandwidth. And having Nitro as the base lets us really, I don't want to say plug and play, 'cause I haven't actually built this myself, but it seems like they can actually put the different elements together very, very quickly and come up with new instance types where our customers say, yeah, that's exactly what I asked for. And we're able to do this entire range, from micro and nano sized all the way up to incredibly large. To me, when we talk about terabytes of memory that are actually just RAM, that's just an inconceivably large number by the standards of where I started out in my career. So it's all about putting this power in customers' hands. >> You used the term plug and play, but it does give you that... Nitro gives you that optionality. And the other thing that to me is really exciting is the way in which ISVs are writing to whatever's underneath. So you're making that, you know, transparent to the users, so I can choose as a customer the best price performance for my workload, and that's just going to grow that ISV portfolio. >> I think it's really important to be accurate and detailed and as thorough as possible as we launch each one of these new instance types: what kind of processor is in there, what clock speed it runs at, how much memory we have, just the ins and outs, and whether it's Intel or Arm or AMD based. It's such an interesting contrast to me. I can still remember back in the very, very early days, you know, going back almost 15 years at this point, and effectively everybody said... well, not everybody. A few people looked and said, yeah, we kind of get the value here. Some people said, this just sounds like a bunch of generic hardware, just kind of generic hardware in a rack. And even back then, it was something that we were very careful with, to design and optimize for use cases. But this idea that it's generic is so, so, so incredibly inaccurate that I think people are now getting this. And it's fine-tuned, not just for the cloud, but for very specific kinds of workloads and use cases.
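Jeff's memory-to-CPU ratio point is something customers can act on directly: instance-type metadata is exposed through the EC2 DescribeInstanceTypes API. Below is a hedged sketch using boto3; the ratio threshold is an arbitrary example, and AWS credentials are assumed to be configured in the environment.

```python
# Sketch: shortlist EC2 instance types by memory-to-vCPU ratio using the
# public DescribeInstanceTypes API via boto3. The 8 GiB/vCPU threshold is
# an arbitrary example of a "memory-heavy" workload requirement.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

paginator = ec2.get_paginator("describe_instance_types")
for page in paginator.paginate():
    for it in page["InstanceTypes"]:
        vcpus = it["VCpuInfo"]["DefaultVCpus"]
        mem_gib = it["MemoryInfo"]["SizeInMiB"] / 1024
        # Keep shapes offering roughly 8+ GiB of memory per vCPU.
        if mem_gib / vcpus >= 8:
            print(f'{it["InstanceType"]}: {vcpus} vCPU, {mem_gib:.0f} GiB')
```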
>> And you guys have announced, obviously, the performance improvements on Lambda, it's getting faster, and you've got the per-second billing on Windows and SQL Server on EC2. So I mean, obviously everyone kind of gets that; that's been your DNA: keep making it faster, cheaper, better, easier to use. But the other area I want to get your thoughts on, because this is also more on the footprint side, is the regions and local regions. So you've got more region news. Take us through the update on the expansion of the AWS footprint, because, you know, a startup can come in, and these 15 companies that are here, they're global with AWS, right? So this is a major benefit for customers around the world. And you know, Ali from Databricks mentioned privacy. Everyone's a privacy company now. It's a huge issue, so take us through the news on the regions. >> Sure. So the two most recent regions that we announced are in the UAE and in Israel. And we generally like to pre-announce these anywhere from six months to two years at a time, because we know that customers want to start making longer-term plans about where they can do their computing and where they can store their data. I think at this point we now have seven regions under construction. And again, it's all about customer choice. Sometimes it's because they have very specific reasons where, based on local laws, based on national laws, they must compute and store within a particular geographic area. Other times we say, well, a lot of our customers are in this part of the world; why don't we pick a region that is as close to that part of the world as possible? And one really important thing that I always like to remind our customers of and my audience is: anything that you choose to put in a region stays in that region, unless you very explicitly take an action that says, I'd like to replicate it somewhere else. So if someone says, I want to store data in the US, or I want to store it in Frankfurt, or in Sao Paulo, or in Tokyo or Osaka, they get to make that very specific choice. We give them a lot of tools to help copy and replicate and do cross-region operations of various sorts. But at the heart, the customer gets to choose those locations. In the early days, I think there was this weird sense that you'd put things in the cloud and they would just mysteriously kind of propagate all over the world. That's never been true, and we're very, very clear on that. And I just always like to reinforce that point. >> That's great stuff, Jeff. Great to have you on again as a regular update here. Just for the folks watching who don't know Jeff: he'd been blogging and sharing, he'd been the one-man media band for Amazon in its early days. Now he's got departments, he's got people doing videos. It's a media franchise in and of itself, and without your early days of blogging we wouldn't have gotten all the great news we subscribe to. We watch all the blog posts. It's essentially the flow coming out of AWS, which is just a tsunami of new announcements. Always great to read, a must-read. Jeff, thanks for coming on, really appreciate it. That's great. >> Thank you, John, great to catch up as always. >> Jeff Barr with AWS again; follow his stuff. He's got a great audience and community. They talk back, they collaborate, and they're highly engaged. So check out Jeff's blog and his social presence. All right, Natalie, back to you for more coverage. >> Terrific.
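Jeff's region point shows up directly in the APIs: you name the region when you create a resource, and any cross-region copy is a separate, explicit step. A minimal hedged sketch with boto3, using a hypothetical bucket name:

```python
# Sketch: region choice is explicit in AWS APIs. A bucket is created in a
# specific region, and objects stay there unless you configure replication
# yourself. Bucket name is hypothetical; credentials assumed configured.
import boto3

s3 = boto3.client("s3", region_name="eu-central-1")

# Data written here lives in eu-central-1 (Frankfurt) and nowhere else.
s3.create_bucket(
    Bucket="example-frankfurt-data",
    CreateBucketConfiguration={"LocationConstraint": "eu-central-1"},
)

# Cross-region copies happen only when explicitly requested, e.g. by
# configuring replication (put_bucket_replication, which also requires
# bucket versioning and an IAM role) or copying to a bucket elsewhere.
```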
Well, did you guys know that Jeff took a three-week AWS road trip across 15 cities in America to meet with cloud computing enthusiasts? 5,500 miles he drove, really incredible; I didn't realize that. Let's unpack that interview though. What stood out to you, John? >> I think Jeff Barr's an example of what I call a direct-to-audience business model. He's been doing it from the beginning, and I've been following his career. I remember back in the day when Amazon was started; he was always building stuff. He's a builder, he's classic. And he's been there from the beginning. At the beginning it was just the blog, and it became a huge audience. It's now morphed into more: he was power blogging so hard, and now he has support, and he still does it. It's basically the conduit for information coming out of Amazon. I think Jeff has single-handedly made Amazon so successful at the community developer level, and that's where the startup action happened, and that got them going. And I think he deserves a lot of credit for the success of AWS. >> And Dave, how about you? What is your reaction? >> Well, I think, you know, everybody knows about the cloud and the CapEx-to-OpEx shift and agility, and, you know, eliminating the undifferentiated heavy lifting and all that stuff. And one of the things that's often overlooked, which is why I'm excited to be part of this program, is the innovation. And the innovation comes from startups, and startups start in the cloud. And so I think that that's part of the flywheel effect. You just don't see a lot of startups these days saying, okay, I'm going to do something that's outside of the cloud. There are some, but for the most part, you know, if you're starting in software, you're starting in the cloud; it's so capital efficient. I think that's one thing. Throughout my career I've been obsessed with every part of the stack, whether it's, you know, close to the business process with the applications, and right now I'm really obsessed with the plumbing, which is why I was excited to talk about, you know, the Annapurna acquisition. Amazon bought Annapurna for $350 million, it's reported, you know, maybe a little bit more, and that was an amazing acquisition. And the reason why that's so important is because Amazon is continuing to drive costs down, drive performance up, and, in my opinion, leaving a lot of the traditional players in their dust, especially when it comes to power and cooling, the often overlooked things. And the other piece of the interview was that Amazon is actually getting ISVs to write to these new platforms, so that you don't have to worry about whether the software runs on this chip or that chip, or x86 or Arm or whatever it is. It runs. And so I can choose the best price performance. And that's where people get it wrong; you always say it, John, you just said that people are misunderstood. I think they misunderstand, they confuse, you know, the price of the cloud with the cost of the cloud. They ignore all the labor costs that are associated with it. And so, you know, there's a lot of discussion now about the cloud tax. I just think the pace is accelerating. The gap is not closing, it's widening.
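Dave's price-versus-cost distinction is easy to make concrete with back-of-the-envelope arithmetic. Every figure below is invented purely for illustration; the shape of the comparison, where labor dominates total cost, is the point.

```python
# Back-of-the-envelope sketch of the price-vs-cost point. All numbers are
# invented for illustration only: the cloud's visible "price" can exceed
# on-prem hardware spend while total cost, labor included, comes out lower.
on_prem = {
    "hardware_amortized": 400_000,   # per year, hypothetical
    "power_cooling_space": 150_000,  # hypothetical
    "ops_labor": 900_000,            # people to rack, patch, and operate
}
cloud = {
    "service_price": 800_000,        # the visible "price of the cloud"
    "ops_labor": 300_000,            # smaller team using higher-level services
}

print("on-prem total cost:", sum(on_prem.values()))   # 1,450,000
print("cloud total cost:  ", sum(cloud.values()))     # 1,100,000
```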
But like, you go into a store and he's a builder. So he sees opportunity. And this comes back down to the Martine Casada paradox posts he wrote about do you optimize for CapEx or future revenue? And I think the tell sign is at the wavelength edge piece is going to be so creative and that's going to open up massive opportunities. I think that's the place to watch. That's the place I'm watching. And I think startups going to come out of the woodwork because that's where the action will be. And that's just Amazon at the edge, I mean, that's just cloud at the edge. I think that is going to be very effective. And his that's a little TeleSign, he kind of revealed a little bit there, a lot there with that comment. >> Well that's a to be continued conversation. >> Indeed, I would love to introduce our next guest. We actually have Soma on the line. He's the managing director at Madrona venture group. Thank you Soma very much for coming for our keynote program. >> Thank you Natalie and I'm great to be here and will have the opportunity to spend some time with you all. >> Well, you have a long to nerd history in the enterprise. How would you define the modern enterprise also known as cloud scale? >> Yeah, so I would say I have, first of all, like, you know, we've all heard this now for the last, you know, say 10 years or so. Like, software is eating the world. Okay. Put it another way, we think about like, hey, every enterprise is a software company first and foremost. Okay. And companies that truly internalize that, that truly think about that, and truly act that way are going to start up, continue running well and things that don't internalize that, and don't do that are going to be left behind sooner than later. Right. And the last few years you start off thing and not take it to the next level and talk about like, not every enterprise is not going through a digital transformation. Okay. So when you sort of think about the world from that lens. Okay. Modern enterprise has to think about like, and I am first and foremost, a technology company. I may be in the business of making a car art, you know, manufacturing paper, or like you know, manufacturing some healthcare products or what have you got out there. But technology and software is what is going to give me a unique, differentiated advantage that's going to let me do what I need to do for my customers in the best possible way [Indistinct]. So that sort of level of focus, level of execution, has to be there in a modern enterprise. The other thing is like not every modern enterprise needs to think about regular. I'm competing for talent, not anymore with my peers in my industry. I'm competing for technology talent and software talent with the top five technology companies in the world. Whether it is Amazon or Facebook or Microsoft or Google, or what have you cannot think, right? So you really have to have that mindset, and then everything flows from that. >> So I got to ask you on the enterprise side again, you've seen many ways of innovation. You've got, you know, been in the industry for many, many years. The old way was enterprises want the best proven product and the startups want that lucrative contract. Right? Yeah. And get that beach in. And it used to be, and we addressed this in our earlier keynote with Ali and how it's changing, the buyers are changing because the cloud has enabled this new kind of execution. I call it agile, call it what you want. 
Developers are driving modern applications, and for enterprises the playbook's still evolving, right? We saw that with the pandemic: people had needs, urgent needs, and they tried new stuff and it worked. The parachute opened, as they say. So how do you look at this, as you look at the startups you're investing in and coaching? What's the playbook? What's the secret sauce of how to crack the enterprise code today? And if you're an enterprise buyer, what do I need to do? I want to be more agile. Is there a clear path? Is there, like, a TSA line to let stuff go through faster? I mean, what is the modern playbook for buying and being a supplier? >> That's a fantastic question, John, because I think that sort of playbook is changing, even as we speak here currently. A couple of key things to understand: first of all, you know, decision-making inside an enterprise is getting more and more decentralized, particularly decisions around what technology to use and what solutions to use to be able to do what people need to do. That decision-making is no longer sort of, you know, all done in the CEO's office or the CTO's office kind of thing. Developers are more and more, like you rightly said, sort of central to the workflow and the decision-making process. So it behooves both the enterprises as well as the startups to really understand that. So what does it mean now from a startup perspective? From a startup perspective, it means, in addition to thinking about, hey, do I go create an enterprise sales force and sell to the enterprise like what I might have done in the past, is that the best way of moving forward, or should I be thinking about a product-led growth go-to-market initiative? You know, build a product that is easy to use, that is self-serve and really works; get the developers to start using it, to see the value, to fall in love with the product; and then you think about, hey, how do I go translate that into a contract with the enterprise, right? And more and more, what I call, particularly, you know, startups and technology companies that are focused on the developer audience are thinking about, you know, how do I have a bottom-up go-to-market motion? And sometimes I may, you know, overlap that with the top-down enterprise sales motion that we know has been going on for many, many years or decades kind of thing. But really this product-led growth, bottom-up go-to-market motion is something that we are seeing on the rise. I would say more than half the startups that we come across today have that in some way, shape or form. And so the enterprise also needs to understand this. The CIO or the CTO needs to know, hey, decision-making is getting decentralized. I need to empower my engineers and my engineering managers and my engineering leaders to be able to make the right decision, and trust them. I'm going to give them some guardrails so that I don't find myself in a soup, you know, sometime down the road. But once I give them the guardrails, I'm going to enable the people who are closer to the problem to make the right decisions. >> Well, Soma, what are some of the ways that startups can accelerate their enterprise penetration? >> I think that's another good question. First of all, you need to think about, hey, what are enterprises really after? Okay.
If you take, like, two steps back and think about what the enterprise is really thinking, it's: hey, I'm a software company, but I'm really manufacturing paper. What do I do? Right? The core thing that most enterprises care about is, hey, how do I better engage with my customers? How do I better serve my customers? And how do I do it in the most optimal way? At the end of the day, that's what most enterprises really care about. So startups need to understand: what are the problems that the enterprise is trying to solve? What kind of tools and platform technologies and infrastructure support, and, you know, everything else, do they need to be able to do what they need to do, and what only they can do, in the most optimal way? Right? So to the extent you are providing either a tool or a platform or some technology that is going to enable the enterprise to make progress on what it wants to do, you're going to get more traction within the enterprise. In other words, stop thinking about technology and start thinking about the customer problem they want to solve. And the more you anchor your company, and the more you anchor your conversation with the customer, around that, the more the enterprise is going to get excited about wanting to work with you. >> So I've got to ask you about the enterprise and developer equation, because CSOs and CXOs, depending on who you talk to, have that same answer: oh yeah, in the '90s and 2000s we kind of throttled down, we were using the legacy developer tools, and then cloud came and we had to rebuild, and we didn't really know what to do. So you're seeing a shift, and this has been going on for at least the past five to eight years: a lot more developers being hired. I mean, FinTech is clearly a vertical; they always had developers, everyone had developers, but there's a fast ramp-up of developers now, and the role of open source has changed. Just look at the participation: they're not just consuming open source; open source is part of the business model for mainstream enterprises. First of all, do you agree? And if so, how has this changed enterprise human resource selection and how they're organized? What's your vision on that? >> Yeah. So as I mentioned earlier, John, in my mind the first thing is, and, you know, like you said, financial services has always been hiring people [Indistinct]. And this is like a five-year-old story, so bear with me; I'll tell you the story and then come to the point. I was talking to the CIO of Goldman Sachs, okay? And this was five years ago, when people were still like, hey, is this cloud thing real, and is cloud going to take over the world? You know, am I really ready to put my data in the cloud? So there were a lot of questions and conversations. The CIO of Goldman Sachs told me two things that I remember to this day. One is, hey, we've got an internal edict: we made a decision that in the next five years, everything in Goldman Sachs is going to be on the public cloud. And I literally jumped out of the chair and said, like, how are you going to get there? And he laughed and said, it really doesn't matter whether we get there or not. We want to set the tone, set the direction for the organization: that, hey, the public cloud is here, the public cloud is the future, and we need to, you know, move as fast as we realistically can, while thinking about all the financial regulations and security and privacy,
and all these things that we care about deeply. But given all of that, the world is going towards the public cloud, and we'd better be on the leading edge as opposed to the lagging edge. And the second thing he said, when we were talking about, hey, how are you hiring, you know, engineers at Goldman Sachs: he said, hey, my team goes out to the top 20 schools in the US. And as for the people we really compete with, he was saying, hey, we don't compete with JP Morgan or Morgan Stanley, or pick any of your favorite financial institutions. We really think about, hey, we want to get the best talent into Goldman Sachs out of these schools. And we compete head to head with Google. We compete head to head with Microsoft. We compete head to head with Facebook. And we know that the caliber of people that we want to get is no different than what these companies want, if you want to continue being a successful, leading, you know, financial services player. That sort of tells you what's going on. You also talked a little bit about, hey, open source is here to stay; what does that really mean, kind of thing. In my mind, like, given my pedigree at Microsoft, I can tell you that we were not the first embracers of open source in this world; I'll say that right off the bat. But having said that, we did turn around and say, hey, this open source is real, this open source is going to be great; how can we embrace it and how can we participate? And you fast-forward to today: Microsoft is probably as good at open source as any other large company, I would say, right? Including the work that the company has done in terms of acquiring GitHub and letting it stay true to its original promise of open source and community, right? I think Microsoft has come a long way, kind of thing. But the thing that all these enterprises need to think about is: you want your developers to have access to the latest and greatest tools, to the latest and greatest that software can provide. And you really don't want your engineers to be reinventing the wheel all the time. So if there is something available in the open source world, go ahead, please, and think about whether it makes sense for you to use it. And likewise, if you think there is something you can contribute to the open source world, go ahead and do that. So it's really a two-way symbiotic relationship that enterprises need to have, and they need to enable their developers to want to have that symbiotic relationship. >> Soma, fantastic insights. Thank you so much for joining our keynote program. >> Thank you, Natalie, and thank you, John. It was always fun to chat with you guys. Thank you. >> Thank you. >> John, we would love to get your quick insight on that. >> Well, I think, first of all, he's a prolific investor from the great Madrona Venture Partners, which is well known in tech circles. They're in Seattle, which is the hub of what I call cloud city. You've got Amazon and Microsoft there. He was at Microsoft and he knows the developer ecosystem. And the reason why I like his perspective is that he understands the value of having developers as a core competency at Microsoft. That's their DNA. You look at Microsoft: their number one thing from day one, besides software, was developers. That was their army, the thousand centurions that won everything for them. That has shifted.
And he brought up open source and .NET, and how they've embraced Linux. But Satya, before he became CEO, we interviewed him in theCUBE at an Accel Partners event at Stanford, and he was open before he was CEO. He was talking about opening up. They opened up a lot of their infrastructure projects to the Open Compute Foundation early. So they already had that going, and since that time the stock price of Microsoft has skyrocketed, because, as Ali said, open always wins. And I think that is what you see here. And as an investor now, he's picking startups and investing in them. He's got to read the tea leaves. He's got to be on the right side of history. So he brings a great perspective, because he sees the old way and he understands the new way. That is the key to the success we've seen in the enterprise and with the startups: the people who get the future and can create the value are going to win. >> Yeah, really excellent point. And just really quickly, what do you think were some of our greatest hits on this hour of programming? >> Well, first of all, I'm really impressed that Ali took the time to come join us, because I know he's super busy. I think they're at a $28 billion valuation now, and they're pushing a billion dollars in revenue, GAAP revenue. And again, just a few short years ago they had zero software revenue. So of these 15 companies we're showcasing today, you know, there's a next Databricks in there. They're all going to be successful. They already are successful. And they're all on this rocket-ship trajectory. Ali is smart. He's also got the advantage of being part of that Berkeley community, which is early on a lot of things. Being early means you're wrong a lot, but you're also right, and you're right big. So Berkeley and Stanford are obviously the big research areas here in the Bay Area. He is smart, he's got a great team, and he's really open. So having him share his best practices, I thought that was a great highlight. Of course, there was Jeff Barr highlighting some of the insights that he brings, and honestly, having the perspective of a VC: we're going to have Peter Wagner from Wing VC, who's a classic enterprise investor, super smart, so he'll add some insight. Of course, there's the community session, where our influencers come on at the end, as well as Katie Drucker, another Madrona person, who's going to talk about growth hacking and growth strategies. But yeah, insights galore coming on. >> Terrific. Well, thank you so much for those insights, and thank you to everyone who is watching the first hour of our live coverage of the AWS Startup Showcase. For myself, Natalie Ehrlich, John Furrier, and Dave Vellante, we want to thank you very much for watching, and do stay tuned for more amazing content, as well as a special live segment that John Furrier is going to be hosting. It takes place at 12:30 PM Pacific time, and it's called Cracking the Code: Lessons Learned on How Enterprise Buyers Evaluate New Startups. Don't go anywhere.

Published Date : Jun 24 2021


Rick Farnell, Protegrity | AWS Startup Showcase: The Next Big Thing in AI, Security, & Life Sciences


 

(gentle music) >> Welcome to today's session of the AWS Startup Showcase, The Next Big Thing in AI, Security, & Life Sciences. Today we're featuring Protegrity for the life sciences track. I'm your host for theCUBE, Natalie Erlich, and now we're joined by our guest, Rick Farnell, the CEO of Protegrity. Thank you so much for being with us. >> Great to be here. Thanks so much, Natalie, great to be on theCUBE. >> Yeah, great. And so we're going to talk today about the ransomware game, and how it has changed with kinetic data protection. So, the title of today's video segment makes a bold claim: how are kinetic data and ransomware connected? >> So first off, kinetic data: data is in use, it's moving, it's not static, it's no longer sitting still, and your data protection has to adhere to those same standards. And I think if you look at what's happening in the ransomware kind of attacks, there's a couple of different things going on. Number one, bad actors are getting access to data in the clear, and they're holding that data ransom, and threatening to release that data. So from a Protegrity standpoint, with our protection capabilities, that data would be rendered useless to them in that scenario. And there are lots of ways in which data protection and backup can be mixed together, and that combination really is a wonderful solution to the threat of ransomware. And it's a serious issue, and it's not just targeting the most highly regulated industries and customers. We're seeing attacks on pipeline and ferry companies, and really there is no end to where some of these bad actors are focusing; the damages can be in the hundreds of millions of dollars and last for years after, from a brand reputation standpoint. So I think if you look at how data is used today, there are those kind of opposing forces, where the business wants to use data at the speed of light to produce more machine learning and more artificial intelligence, and predict where customers are going to be, and have wonderful services at their fingertips. But at the same time, they really want to protect their data, and sometimes those architectures can be at odds. At Protegrity, we're really focusing on solving that problem: free up your data to be used in artificial intelligence and machine learning, while making sure that it is absolutely bulletproof against some of these ransomware attacks. >> Yeah, I mean, you bring up a really fascinating point that's really central to your business. Could you tell us more about how you're actually making that data worthless? I mean, that sounds really revolutionary. >> So, it sounds novel, right? To make your data worthless in the wrong hands. And I think from a Protegrity perspective, our policy and protection capability follows the individual piece of data no matter where it lives in the architecture. And we do a ton of work, as the world does, with Amazon Web Services, so helping customers really blend their hybrid cloud strategies, their on-premise, and their use of AWS is something that we thrive at. So we're protecting that data not just at rest or while it's in motion; it's a continuous protection policy, so that we can basically preserve the privacy of the data but still keep it unique for use in downstream analytics and machine learning. >> Right, well, traditional security is rather stifling, so how can we fix this, and what are you doing to amend that?
>> Well, I think if you look at cybersecurity, and we certainly play a big role in the cybersecurity world, but like any industry, there are many layers. And traditional cybersecurity investment has been at the perimeter level, at the network level, keeping bad actors out, and once people do get through some of those fences, if your data is not protected at a fine-grain level, they have access to it. And I think from our standpoint, yes, we're the last line of defense, but at the same time, we partner with folks in the cybersecurity industry, and with AWS, and with others in backup and recovery, to give customers that level of protection, but still allow their kinetic data to be utilized in downstream analytics. >> Right, well, I'd love to hear more about the types of industries that you're helping, and specifically healthcare, obviously a really big subject for the year and probably now for years to come. How is this industry using kinetic protection at the moment? >> So certainly, as you mentioned, some of the most highly regulated industries are our sweet spot. So financial services, insurance, online retail, and healthcare, or any industry that has sensitive data and sensitive customer data. So think first name, last name, credit card information, national ID number, social security number, blood type, cancer type. That's all sensitive information that you as an organization want to protect. So in the healthcare space specifically, some of the largest healthcare organizations in the world rely on Protegrity to provide that level of protection, but at the same time, give them the business flexibility to utilize that data. So one of our customers, one of the leaders in online prescriptions and an AWS customer, relies on us to allow a wonderful service to be delivered to all of their customers while maintaining protection. If you think about sharing data on your watch with your insurance provider, we have lots of customers that bridge that gap and have that personal data coming in to the insurance companies. All the way to, in a use case in the future, looking at the pandemic: if you have to prove that you've been vaccinated, we're talking about some sensitive information, so you want to be able to show that information but still have the confidence that it's not going to be used for nefarious purposes. >> Right, and what is next for Protegrity? >> Well, I think continuing on our journey. We've been around for 17 years now, and I think in the last couple, there's been an absolute renaissance in fine-grained data protection, or that kinetic data protection, and organizations are recognizing that continuing to protect your perimeter, continuing to protect your firewalls, your access points, your points of vulnerability, to keep bad actors out, that's not going to go away anytime soon. But at the same time, they're recognizing that the data itself needs to be protected, with that balance of utilizing it downstream for analytic purposes, for machine learning, for artificial intelligence. Keeping the data of hundreds of millions if not billions of people safe, that's what we do. If you were to add up the customers of all of our customers — the largest banks, the largest insurance companies, the largest healthcare companies in the world — globally, we're protecting the private data of billions of human beings. And it doesn't just stop there. I think you asked a great question about kind of the industry, and yes, insurance, healthcare, retail, where there's a lot of sensitive data, that certainly can be a focus point.
But in the IoT space, if you think about GPS location or geolocation, if you think about a device, and what it does, and the intelligence that it has, and the decisions that it makes on the fly, protecting data and keeping that safe is not just a personal thing. We're stepping into intellectual property and some of the most valuable assets that companies have, which is their decision-making on how they use data and how they deliver an experience, and I think that's why there's been such a renaissance, if you will, in kind of that fine-grain data protection that we provide. >> Yeah, well, what is Protegrity's role now in future proofing businesses against cyber attacks? I mean, you mentioned really the ramifications of that and the impact it can have on businesses, but also on governments. I mean, obviously this is really critical. >> So there's kind of a three-step approach, and this is something that we have certainly felt for a long, long time, and that we work on with our customers. One is having that fine-grain data protection: tokenizing your data, so that if someone were to get your data, it's worthless, unless they have the ability to unlock every single individual piece of data. So that's number one, and that's what Protegrity provides. Number two, having a wonderful backup capability to run kind of an active-active setup — AWS being one of the major clouds in the world where we deploy our software regularly and work with our customers — having multi-regions, multi-capabilities for an active-active scenario where, if something goes down or happens, you can bring that environment down and bring a new environment up. And then third is kind of malware detection, in the rest of the cyber world, to make sure that you rinse your architecture of some of those agents. And I think when you look at it: ransomware, they take your data, they encrypt your data, so they force you to give them Bitcoin or whatnot, or they'll release some of your data. And if that data is rendered useless, that's one huge step in your discussions with these nefarious actors: you could release it, but there's nothing there, you're not going to see anything. And then second, if you have a wonderful backup capability, where you wind down the environment that has been infiltrated, prove that the new environment is safe, have your production data rolling, and then wind that back up, you're back in business. You don't have to notify your customers, you don't have to deal with the ransomware players. So it's really a three-step process, but ultimately it starts with protecting your data and tokenizing your data, and that's something that Protegrity does really, really well. >> So you're basically able to eliminate the financial impact of a breach? >> Honestly, we dramatically reduce the risk of our customers being exposed to ransomware attacks, 100%.
Now, tokenizing data and moving in that direction is not trivial. We are literally replacing production data with a token, and then making sure that all downstream applications have the ability to utilize that, and making sure that the analytic systems, machine learning systems, and artificial intelligence applications that are built downstream on that data have the ability to execute. But that is something that, from our patent portfolio and what we provide to our customers — again, some of the largest organizations in retail, in financial services, in banking, and in healthcare — we've been doing for a long time. We're not just saying that we can do this and we're in version one of our product; we've been doing this for years, supporting the largest organizations with a 24 by seven capability. >> Right, and tell us a bit about the competitive landscape. Where do you see your offering compared to your competitors? >> So, kind of historically, let's call it an era ago, maybe even before cloud and hybrid cloud became a thing, there were a handful of players that got acquired into much larger organizations. Those organizations have been dusting off those acquired assets, and we're seeing them come back in. There are some new entrants into our space that have some protection mechanisms, whether it be encryption or whether it be anonymization, but unless you're doing fine-grain tokenization, you're not going to be able to allow that data to participate in the artificial intelligence world. So we see kind of a range of competition there. And then I'd say probably the biggest competitor, Natalie, is customers not doing tokenization. They're saying, "No, we're okay, we'll continue protecting our firewall, we'll continue protecting our access points, we'll invest a little bit more in maybe some governance, but that fine-grain data protection, maybe it's not for us." And that is the big shift that's happening. You look at the beginning of this year with the SolarWinds attack, and the vulnerability that it caused: very large and important organizations found themselves exposed. And in the last few weeks, with all the ransomware attacks that are happening on meat processing plants and facilities, shutting down meat production, and on pipelines, stopping oil and gas. So we're seeing a complete shift in the types of organizations and the industries that need to protect their data. It's not just the healthcare organizations, or the banks, or the credit card companies; it is every single industry, every single size company. >> Right, and I got to ask you this question: what is your defining contribution to the future of cloud scale? >> Well, ultimately we have a charge here at Protegrity where we feel like we protect the world's most sensitive data, and when we come into work every day, that's what every single employee thinks at Protegrity. We are standing behind billions of individuals who are customers of our customers, and that's a cultural thing for us, and we take that very seriously. We have maniacal customer support, supporting our biggest customers with a follow-the-sun, 24 by seven global capability. So that's number one. So I think our part in this is really helping to educate the world that there is a solution for this ransomware, and for some of these things that don't have to happen.
Now, naturally with any solution, there's going to be some investment, there's going to be some architecture changes, but with partnerships like AWS, and our partnership with pretty much every data provider, data storage provider, and data solution provider in the world, we want to provide fine-grain data protection: any data, in any system, on any platform. And that's our mission. >> Well, Rick Farnell, this has been a really fascinating conversation with you, thank you so much. The CEO of Protegrity, really great to have you on this program for the AWS Startup Showcase, talking about how the ransomware game has changed with kinetic data protection. Really appreciate it. Again, I'm your host Natalie Erlich, thank you again very much for watching. (light music)
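To make the tokenization idea from this conversation concrete, here is a minimal sketch of vault-based tokenization. It is illustrative only — the `TokenVault` class, its methods, and the sample fields are hypothetical assumptions, not Protegrity's implementation — but it shows how a sensitive value can be swapped for a stable token that stays useful for downstream joins and counts while being worthless to anyone who steals it.

```python
import hashlib
import hmac
import secrets

class TokenVault:
    """Hypothetical, minimal token vault -- not Protegrity's implementation.

    Tokens are deterministic per value, so downstream joins and group-bys
    still work on tokenized data, but the original value is only
    recoverable through the vault itself.
    """

    def __init__(self, key=None):
        self._key = key or secrets.token_bytes(32)  # per-deployment secret
        self._vault = {}  # token -> original value, held behind access control

    def tokenize(self, value):
        # Deterministic token: the same input always yields the same token.
        digest = hmac.new(self._key, value.encode(), hashlib.sha256).hexdigest()
        token = "tok_" + digest[:16]
        self._vault[token] = value
        return token

    def detokenize(self, token):
        # Authorized reverse lookup; stolen tokens without the vault are worthless.
        return self._vault[token]

vault = TokenVault()
record = {"name": "Jane Doe", "blood_type": "O-", "visit_count": 7}
protected = {k: (vault.tokenize(v) if isinstance(v, str) else v)
             for k, v in record.items()}
print(protected)  # tokens flow to analytics; raw values stay in the vault
```

An attacker who exfiltrates `protected` gets only tokens; without the vault and its key, the records carry no usable information — the "worthless in the wrong hands" property the interview describes.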

Published Date : Jun 24 2021


Ariel Assaraf, Coralogix | AWS Startup Showcase: The Next Big Thing in AI, Security, & Life Sciences


 

(upbeat music) >> Hello and welcome to today's session of the AWS Startup Showcase, the next big thing in AI, Security and Life Sciences, featuring Coralogix for the AI track. I'm your host, John Furrier with theCUBE. We're here joined by Ariel Assaraf, CEO of Coralogix. Ariel, great to see you calling in remotely, videoing in from Tel Aviv. Thanks for coming on theCUBE. >> Thank you very much, John. Great to be here. >> So you guys are featured as a hot next big thing startup. And one of the things that you guys do, that we've been covering for many years, is log analytics. From a data perspective, you guys decouple the analytics from the storage. This is a unique thing. Tell us about it. What's the story? >> Yeah. So what we've seen in the market is that, probably because of the great job that a lot of the earlier generation products have done, more and more companies see the value in log data. What used to be like a couple of rows that you add whenever you have something very important to say became a standard way to document all communication between different components: infrastructure, network, monitoring, and the application layer, of course. And what happens is that data grows extremely fast. All data grows fast, but log data grows even faster. What we always say is that, for sure, data grows faster than revenue. So as fast as a company grows, its data is going to outpace that. And so we found ourselves thinking, how can we help companies still get the full coverage they want, without cherry picking data or deciding exactly what they want to monitor and what they're taking a risk with, but still give them the real time analysis that they need to make sure they get the full insight suite for the entire data, wherever it comes from. And that's why we decided to decouple the analytics layer from storage. So instead of ingesting the data, then indexing and storing it, and then analyzing the stored data, we analyze everything, and then we only store what matters. So we go from the insights backwards. That allowed us to reduce the amount of data, reduce the digital exhaust that it creates, and also provide better insights. So the idea is that as this world of data scales, the need for real time streaming analytics is going to increase. >> So what's interesting is we've seen this decoupling of storage and compute be a great success formula at cloud scale, for instance; that's a known best practice. You're taking it a little bit different. I love how you're coming backwards from it — you're working backwards from the insights, almost doing some intelligence on the front end of the data, which probably saves a lot of storage costs. But I want to get specifically back to this real time. How do you do that? And how did you come up with this? What's the vision? How did you guys come up with the idea? What was the magic light bulb that went off for Coralogix? >> Yes, the Coralogix story is very interesting. Actually, it was no light bulb; it was a road of pain for years and years. We started by just, you know, doing the same, maybe faster, a couple more features. And it didn't work out too well. The first few years, the company was not very successful. And we've grown tremendously in the past three years, almost 100X since we've launched this, and it came from a pain.
So once we started scaling, we saw that the side effects of accessing the storage for analytics — the latency it creates, the dependency on schema, the price that it poses on our customers — became unbearable. And then we started thinking, okay, how do we get the same level of insights? Because there's this perception in the world of storage, and now it started to happen in analytics also, that talks about tiers. So you want to get a great experience, you pay a lot; you want to get a less than great experience, you pay less, it's a lower tier. And we decided that we're looking for a way to give the same level of real time analytics and the same level of insights, only without the issue of dependencies, decoupling all the storage schema issues and latency. And we built our real time pipeline; we call it Streama. Streama is the Coralogix real time analysis platform that analyzes everything in real time, also the stateful thing. So stateless analytics in real time is something that's been done in the past, and it always worked well. The issue is, how do you give a stateful insight on data that you analyze in real time without storing it? And I'll explain: how can you tell that a certain issue happened that did not happen in the past three months, if you did not store the past three months? Or how can you tell that behavior is abnormal if you did not store what's normal — if you did not store the state? So we created what we call the state store, which holds the state of the system, the state of data, with a snapshot of that state for the entire history. And then, instead of our state being the storage — so, you know, you asked me, how is this compared to last week? Instead of me going to the storage and comparing last week, I go to the state store, and, you know, like a record bag, I just scroll fast and I find that one piece of state. And I say, okay, this is how it looked last week; compared to this week, it changed in ABC. And once we started doing that, we onboarded more and more services to that model. And our customers came in and said, hey, you're doing everything in real time; we don't need more than that. Yeah, there's a very small portion of data we actually need to store and frequently search — how about you guys fit into our use cases, and not just sell on quota? And we decided to basically allow our customers to choose what is the use case that they have, and route the data through different use cases. And then each log record stops at the relevant stops in our data pipeline, based on the use case. So just like when you walk into the supermarket, you fill a bag, you go out, they weigh it and they say, you know, it's two kilograms, you pay this amount — because different products have different costs and different meaning to you. That same way, exactly, we analyze the data in real time. So we know the importance of data, and we allow you to route it based on your use case and pay a different amount per use case. >> So this is really interesting. So essentially, you guys capture insights and store those — you call them states — and then you don't have to go through the data. So it's like you're eliminating the old problem of, you know, going back to the index and recovering the data to get the insights — did we have that? So anyway, it's a round trip query, if you will, and you guys are saving all that data mining cost and time.
>> We call it zero side effects. That round trip that you described is exactly it: no side effects to an analysis that is done in real time. I don't need to get the latency from the storage, a bit of latency from the database that holds the model, a bit of latency from the cache — everything stays in memory, everything stays in stream. >> And so basically, it's like the definition of insanity, doing the same thing over and over again and expecting a different result. Here, that's kind of what that is: the old model of insight is go query the database and get something back. You're actually doing the real time filtering on the front end, capturing the insights, if you will, storing those and replicating that as a use case. Is that right? >> Exactly. But then, you know, there's still the issue of customers saying, yeah, but I need that data. Some day, I need to really frequently search — I don't know, you know, the unknown unknowns — or some of the data I need for compliance, and I need an immutable record that stays in my compliance bucket forever. So we allowed customers — we have this screen, we call it the TCO optimizer — to define those use cases. And they can always access the data by querying their remote storage from Coralogix, or querying the hot data that is stored with Coralogix. So it's all about use cases. And it's all about how you consume the data, because it doesn't make sense for me to pay the same amount, or give the same amount of attention, to a record that is completely useless — it's just there for the record, or for a compliance audit that may or may not happen in the future — and do the same with the most critical exception in my application log that has immediate business impact. >> What's really good too, is you can actually set some policy up: if you want certain use cases, okay, store that data. So it's not to say you don't want to store it, but you might want to store it for certain use cases. So I can see that. So I got to ask the question: how does this differ from the competition? How do you guys compete? Take us through a use case of a customer. How do you guys go to the customer and just say, hey, we got so much scar tissue from this, we learned the hard way, take it from us? How does it go? Take us through an example. >> So an interesting example is actually a company that is not your typical early adopter, let's call it this way — a smart company, very advanced in technology, but a huge one, one of the largest telecommunications companies in India. And they were actually cherry picking about 100 gigs of data per day, and sending it to one of the legacy providers, which has a great solution that does give value. But they weren't even thinking about sending their entire data set, because of cost, because of scale, because of, you know, just clutter. Whenever you search, you have to sift through millions of records, many of them not that important. And we helped them actually analyze their data, and worked with them to understand — these guys had over a terabyte of data that had incredible insights; it was like a goldmine of insights. It just needed to be prioritized by their use case. And they went from 100 gig with the other legacy solution to a terabyte, at almost the same cost, with more advanced insights, within one week — which, at that scale of an organization, is something that is out of the ordinary; it took them four months to implement the other product.
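As a rough illustration of the use-case routing and TCO optimizer idea described above, here is a hypothetical sketch: every record is analyzed in real time, but each one passes only through the pipeline "stops" its use case requires. The stage names and the routing table are invented for illustration and are not Coralogix's actual pipeline API.

```python
# Hypothetical sketch of use-case-based routing: low-value records never
# incur indexing or hot-storage cost, while critical ones get the full
# treatment. All names here are illustrative assumptions.

ANALYZE, ALERT, INDEX, HOT_STORE, ARCHIVE = (
    "analyze", "alert", "index", "hot_store", "archive")

ROUTES = {
    "critical_app_error": [ANALYZE, ALERT, INDEX, HOT_STORE],  # full treatment
    "frequent_search":    [ANALYZE, INDEX, HOT_STORE],
    "monitoring":         [ANALYZE, ALERT],                    # insights only
    "compliance":         [ANALYZE, ARCHIVE],                  # cheap immutable copy
}

def route(record: dict) -> list:
    # Everything is analyzed in real time; the storage stops vary by use case.
    return ROUTES.get(record.get("use_case"), [ANALYZE])

log = {"use_case": "compliance", "msg": "user 42 exported report"}
for stop in route(log):
    print("record ->", stop)   # analyze, then straight to the archive
```

The point of the design is the cost model: like the supermarket analogy in the interview, each record's "weight" (its route) determines what you pay for it.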
But now, when you go from the insights backwards, you understand your data before you have to store it, you understand the data before you have to analyze it, or before you have to manually sift through it. So if you ask about the difference, it's all about the architecture. We analyze and only then index, instead of indexing and then analyzing. It sounds simple, but of course, when you look at this stateful analytics, it's a lot more complex. >> Take me through your growth story, because, first of all, I'll get back to the secret sauce in a second; I want to get back to how you guys got here. (indistinct) you had this problem? You kind of broke through, you hit the magic formula — talk about the growth. Where's the growth coming from? And what's the real impact? What's the situation relative to the company's growth? >> Yeah, so we had a rough first three years, like I mentioned. And I was not the CEO at the beginning; I'm one of the co-founders, I'm more of the technical guy, and I was the product manager. I became CEO when the company was kind of on the verge of closing, at the end of 2017. The CTO left, the CEO left, the VP of R&D became the CTO, I became the CEO, and we were five people with $200,000 in the bank — and you know that's not a long runway. And we kind of changed attitudes. So first we launched this product, and then we understood that we needed to go bottoms up. You can't go to enterprises and try to sell something that is out of the ordinary, or that changes how they're used to working, or just, you know, sell something, (indistinct) five people with under $1,000 in the bank. So we started going bottoms up, with the earlier adopters. And it's still, until today, you know, the more advanced companies, the more advanced teams — this is how Gartner frames Coralogix, the preferred solution for advanced DevOps and platform teams. So they started adopting Coralogix, and then it grew to the larger organizations, and they were actually pushing it; there are champions within their organizations. And ever since — so until the beginning of 2018, we had raised about $2 million and sales were marginal. Today, we have over 1,500 paying accounts, and we've raised almost $100 million more. >> Wow, what a great pivot. That was a great example of kind of getting the right wave here, the cloud wave. You said, in terms of customers, you had the DevOps kind of (indistinct) initially, and now you said you've expanded out to a lot more traditional enterprise. Can you take me through the customer profile? >> Yeah, so I'd say the core would still be cloud native and (indistinct) companies. These are the typical ones. We have very tight integration with AWS — all the services, all the integrations required; we know how to read and write back to the different services and analysis platforms in AWS. Also for Azure and GCP, but mostly AWS. And then we do have quite a few big enterprise accounts; actually, five of the largest 50 companies in the world use Coralogix today. And it grew from those DevOps and platform evangelists into the level of IT execs and even (indistinct). So today we have our security product that already sells to some of the biggest companies in the world; it's a different profile.
And the idea for us is that, you know, once you solve that issue of too much data, too expensive, not proactive enough, too coupled with the storage, you can actually expand that from observability — logging, metrics — into tracing, and then into security, and maybe even to other fields where cost and productivity are an issue for many companies. >> So let me ask you this question then, Ariel, if you don't mind. If a customer has a need for Coralogix, is it because of the data flood? Or they just got data kind of sprawled all over the place? Or is it that storage costs are going up on S3? What's some of the signaling that you would see that would be telling you, okay, what's the opportunity to come in and either clean house or fix the mess or whatnot? Take us through what you see. What do you see as the trend? >> Yeah. So the typical customer (indistinct) for Coralogix will be someone using one of the legacy solutions and growing very fast. That's the easiest way for us to know. >> What grows fast? The storage, the storage is growing fast? >> The company is growing fast. >> Okay. >> And remember, the data grows faster than revenue; we know that. So if I see a company that grew from, you know, 50 people to 500 in three years — specifically if it's a cloud native or internet company — I know that their data grew not 10X, but 100X. So I know that a company might have started with a legacy solution at, like, you know, $1,000 a month, and they're happy with it. And you know, for $1,000 a month, if you don't have a lot of data, those legacy solutions will do the trick. But now I know that they're going to get asked to pay 50, 60, $70,000 a month. And this is exactly where we kick in. Because now it doesn't fit the economic model, it doesn't fit the unit economics, and it starts damaging the margins of those companies. Because remember, for those internet and cloud companies, these are not the classic costs that you'll see in an enterprise; they're actually damaging your unit economics and the valuation of the business, which is a bigger deal. So now, when I see that type of organization, we come in and say, hey — better coverage, more advanced analytics, easier integration within your organization, we support all the common open source syntaxes and dashboards, you can plug it into your entire environment, and the costs are going to be a quarter of whatever you're paying today. So once they see that — you know, the dev friendliness of the product, the ease of scale, the stability of the product — it makes a lot more sense for them to engage in a POC. Because at the end of the day, if you don't prove value, you know, you can come with a 90% discount and it doesn't do anything if you don't prove the value to them. So it's a great door opener, but from then on, you know, it's a POC like any other. >> Cloud is all about the POC or pilot, as they say. So take me through the product today, and what's next for the product. Take us through the vision of the product and the product strategy. >> Yeah, so today the product allows you to send any log data, metric data, or security information and analyze it a million ways. We have one of the most extensive alerting mechanisms in the market, automatic anomaly detection, data clustering, and all the, you know, real time pipeline things that help companies make their data smarter and more readable — parsing, enriching, getting external sources to enrich the data, and so on, so forth.
Where we're stepping in now is actually to make the final step of decoupling the analytics from storage — what we call the data-less data platform — in which no data will sit or reside within the Coralogix cloud. Everything will be analyzed in real time, stored in a storage of our customers' choice, and then we'll allow our customers to remotely query that with incredible performance. So that'll bring our customers the first ever true SaaS experience for observability. Think about it: no quota plans, no retention limits — you send whatever you want, you pay only for what you send, you retain it however long you want to retain it, and you get all the real time insights much, much faster than any other product that keeps it on hot storage. So that'll be our next step, to really make sure that, you know, we're not reselling cloud storage. Because a lot of the time, when you are dependent on storage — and, you know, we're a cloud company, like I mentioned, you've got to keep your unit economics — what do you do? You sell storage to the customer, you add your markup, and then you charge for it. And this is exactly where we don't want to be. We want to sell the intelligence and the insights and the real time analysis that we know how to do, and let the customers enjoy, you know, the wealth of opportunities and choices their cloud providers offer for storage.
You have the ability to actually store those insights, and refresh them and challenge them and model update them, verify them, either sunset them or add to them or you know, saying that's like, when you start getting more data into your organization, AI and machine learning prove that pattern recognition works. So why not grab those insights? >> And use them as your baseline to know what's important, and not have to start by putting everything in a bucket. >> So we're going to have new categories like insight, first, software (indistinct) >> Go from insights backwards, that'll be my tagline, if I have to, but I'm a terrible marketing (indistinct). >> Yeah, well, I mean, everyone's like cloud, first data, data is data driven, insight driven, what you're basically doing is you're moving into the world of insights driven analytics, really, as a way to kind of bring that forward. So congratulations. Great story. I love the pivot love how you guys entrepreneurially put it all together and had the problem your own problem and brought it out and to the to the rest of the world. And certainly DevOps in the cloud scale wave is just getting bigger and bigger and taking over the enterprise. So great stuff. Real quick while you're here. Give a quick plug for the company. What you guys are up to, stats, vitals, hiring, what's new, give the commercial. >> Yeah, so like mentioned over 1500 being customers growing incredibly in the past 24 months, hiring, almost doubling the company in the next few months. offices in Israel, East Center, West US, and UK and Mumbai. Looking for talented engineers to join the journey and build the next generation of data lists data platforms. >> Ariel Assaraf, CEO of Coralogix. Great to have you on theCUBE and thank you for participating in the AI track for our next big thing in the Startup Showcase. Thanks for coming on. >> Thank you very much John, really enjoyed it. >> Okay, I'm John Furrier with theCUBE. Thank you for watching the AWS Startup Showcase presented by theCUBE. (calm music)

Published Date : Jun 24 2021


Toni Manzano, Aizon | AWS Startup Showcase | The Next Big Thing in AI, Security, & Life Sciences


 

(up-tempo music) >> Welcome to today's session of theCUBE's presentation of the AWS Startup Showcase, The Next Big Thing in AI, Security, and Life Sciences. Today we'll be speaking with Aizon as part of our life sciences track, and I'm pleased to welcome the co-founder as well as the chief science officer of Aizon, Toni Manzano. We'll be discussing how artificial intelligence is driving key processes in pharma manufacturing. Welcome to the show. Thanks so much for being with us today. >> Thank you, Natalie, to you and to your introduction. >> Yeah. Well, as you know, Industry 4.0 is revolutionizing manufacturing across many industries. Let's talk about how it's impacting biotech and pharma, as well as Aizon's contributions to this revolution. >> Well, actually, Pharma 4.0 is totally introducing a new concept of how to manage processes. So nowadays the industry is considering that everything is practically static, that nothing changes, and this is because they don't have the ability to manage the complexity and the variability around the biotech and drug manufacturing processes. Nowadays, with new technologies — cloud computing, IoT, AI — we can get all those data. We can understand the data, and we can interact in real time with processes. This is how things are going on nowadays. >> Fascinating. Well, as you know, COVID-19 really threw a wrench in a lot of activity in the world, our economies, and also people's way of life. How did it impact manufacturing in terms of scale up and scale out? And what are your observations from this year? >> You know, the main problem when you want to do a scale-up process is not only the equipment; it is also the knowledge that you have around your process. When you're doing a vaccine on a smaller scale in your lab, the parameters you're controlling in your lab have to be scaled up when you go from five liters to 2,500 liters. How to manage this difference of scale? Well, AI is helping nowadays to detect and to identify the most relevant factors involved in the process, the critical relationships between the variables, and the final control of the full process, following continued process verification. This is how we can help nowadays, using AI and cloud technologies in order to accelerate and to scale up vaccines like the COVID-19 vaccine. >> And how do you anticipate pharma manufacturing to change in a post-COVID world? >> This is a very good question. Nowadays, we have some assumptions that we are still trying to overcome with human effort. With the new situation, with the pandemic that we are living in, the next evolution is that humans will take care of the good practices and the new knowledge that we have to generate. So AI will manage the repetitive tasks — all the human-conducted repetitive activity that we are doing will be done by AI, and humans will never again do repetitive tasks in this way. They will manage complex problems and supervise AI output. >> So you're driving more efficiencies in the manufacturing process with AI. You recently presented at the United Nations Industrial Development Organization about the challenges brought by COVID-19 and how AI is helping with the equitable distribution of vaccines and therapies. What are some of the ways that companies like Aizon can now help with that kind of response? >> Very good point.
Could you imagine: you're a big company, a top pharma company, you have the intellectual property of a COVID-19 vaccine based on the mRNA principle, and you would like to expand this vaccination — not only to deliver vaccination, but also to manufacture the vaccine. What if you try to manufacture these vaccines in South Africa, or in Asia, in India? So the secret is to transport not only the raw material, not only the equipment, but also the knowledge: how to operate, how to control the full process, from the initial phase till the packaging and the vial filling. So this is how we are contributing. AI is packaging all this knowledge in just AI models. This is the secret. >> Interesting. Well, what are the benefits for pharma manufacturers when considering the implementation of AI and cloud technologies? And how can they progress in their digital transformation by utilizing them? >> One of the benefits is that you are able to manage the variability, the real complexity, in the world. So you cannot create processes to manufacture drugs just considering that the raw material that you're using never changes. You cannot consider that all the equipment works in the same way. You cannot consider that your recipe will work the same way in Brazil as in Singapore. So the complexity and the variability must be understood as part of the process. This is one of the benefits. The second benefit is that when you use cloud technologies, you don't have a big concern about computing licenses, software updates, antivirus, or the scale-up of cloud computing. Everything is done in the cloud. So, well, these are two main benefits. There are more, but these are maybe the two main ones. >> Yeah. Well, that's really interesting, how you highlight that there's a big shift in how you handle this in different parts of the world. So what role does compliance and regulation play here? And of course, we see differences in the way that's handled around the world as well. >> Well, I think this is the first time in the pharma — let me say — experience that the human race has a very strong commitment from the regulatory bodies, you know, to push forward using these kinds of technologies. Actually, for example, the FDA, they are using cloud to manage their own systems. So why not use it in pharma? >> Yeah. Well, how do AWS and Aizon help manufacturers address these kinds of considerations? >> Well, we have a very great partner. AWS, for us, is simplifying our life a lot. We are a very, let me say, different startup company, Aizon, because we have a lot of PhDs in the company. So we are not the classical geeky company with guys developing all day. We have a lot of science inside the company; this is our value. So everything that is provided by Amazon — why would we aim to recreate it again? So we can rely on SageMaker, we can rely on Cognito, we can rely on Lambda, we can rely on S3 to have encrypted data with automatic backup. So AWS is simplifying a lot of our life, and we can dedicate all our knowledge and all our efforts to the things that we know: pharma compliance. >> And how do you anticipate that pharma manufacturing will change further in the 2021 year? >> Well, we are participating not only with business cases. We also participate with the community, because we are leading an international project in order to anticipate these kinds of new breakthroughs.
So, we are working with, let me say, initiatives in the - association; we are collaborating in two different projects in order to apply AI in computer certification, in order to create a more robust process for the mRNA vaccine. We are collaborating with the - university, creating the standards for AI application in GXP. We are collaborating with different initiatives with the pharma community in order to create the foundation to move forward during this year. >> And how do you see the competitive landscape? What do you think Aizon provides compared to its competitors? >> Well, good question. Probably you can find a lot of AI services, platforms, and software that can run in the industrial environment. But I think that it will be very difficult to find a GXP — a full GXP-compliant — platform working on cloud with AI, where the AI is already qualified. I think that no one is doing that nowadays. And one of the demonstrations of that is that we are also writing scientific papers describing how to do it. So you will see that Aizon is the only company doing that nowadays. >> Yeah. And how do you anticipate that pharma manufacturing will change — or, excuse me, how do you see that it is providing a defining contribution to the future of cloud scale? >> Well, there are no limits in the cloud. So as far as you accept that everything is variable and complex, you will need computing power. The only way to manage this complexity is running a lot of computation, and cloud is the only system, let me say, that allows that. Well, the thing is that, you know, pharma will also have to be compliant with the cloud providers. And for that, we created a new layer around the platform that we call qualification as a service. We are creating this layer in order to continuously qualify any kind of cloud platform that wants to work in this environment. This is how we are doing it. >> And in what areas are you looking to improve? How are you constantly trying to develop the product and bring it to the next level? >> Always, we have, you know, the patient in mind. Aizon is a patient-centric company. Everything that we do is to improve processes in order, at the end, to deliver the right medicine at the right time to the right patient. So this is how we are focusing all our efforts, in order to bring this opportunity to everyone around the world. For this reason, for example, we want to work with this project where we are delivering value to create vaccines for COVID-19, for example, everywhere — just packaging the knowledge using AI. This is how we envision and how we are acting. >> Yeah. Well, you mentioned the importance of science and compliance. What do you think are the key themes that are the foundation of your company? >> The first thing is that we enjoy the task that we are doing. This is the first thing. The other thing is that we are learning every day with our customers and from real topics. So we are serving the patients. And everything that we do is enjoying science, enjoying how to achieve new breakthroughs in order to improve life in the factory. We know that at the end, it will be delivered to the final patient. So: enjoying making science and creating breakthroughs; being innovative. >> Right. And do you think, in the sense that we were lucky, in light of COVID, that we've already had these kinds of technologies moving in this direction for some time — that we were somehow able to mitigate the tragedy and the disaster of this situation because of these technologies? >> Sure.
So we are lucky because of this technology, because we are breaking the distance, the physical distance, and we are putting together people — something that was so difficult to do — in all the different aspects. So nowadays we are able to be closer to the patients, to the people, to the customers, thanks to these technologies. Yes. >> So now that we're moving out of — I mean, hopefully out of — this kind of COVID reality, what's next for Aizon? Do you see more collaboration? You know, what's next for the company? >> What's next for the company is to deliver AI models that are able to be encapsulated in the drug manufacturing for vaccines, for example. And that will be delivered with the full process: not only materials, equipment, personnel, and recipes — the AI models will also go together as part of the recipe. >> Right. Well, we'd love to hear more about your partnership with AWS. How did you get involved with them? And why them, and not another partner? >> Well, let me explain to you a secret. Seven years ago, we started with another top cloud provider, but we saw very soon that this other cloud provider was not well aligned with the GXP requirements. For this reason, we met with AWS. We went together to some seminars and conferences with top pharma communities and pharma organizations; we went there to give speeches and talks. We felt that we fit very well together, because AWS has a GXP white paper describing very well how to rely on AWS components, one by one. So for us, this is a very good credential when we go to our customers. Do you know that when customers are acquiring and establishing the Aizon platform in their systems, they are auditing us? They are auditing Aizon. Well, we have to also audit AWS, because this is the normal chain in the pharma supply chain. That means that we need this documentation; we need all this transparency between AWS and our partners. This is the main reason. >> Well, this has been a really fascinating conversation, to hear how AI and cloud are revolutionizing pharma manufacturing at such a critical time for society all over the world. Really appreciate your insights. Toni Manzano, the chief science officer and co-founder of Aizon. I'm your host, Natalie Erlich, for theCUBE's presentation of the AWS Startup Showcase. Thanks very much for watching. (soft upbeat music)
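As a rough illustration of the continued process verification mentioned in this conversation — monitoring critical process parameters against the learned behavior of past batches — here is a minimal sketch. The parameter, the readings, and the simple three-sigma control rule are hypothetical stand-ins for illustration only, not Aizon's qualified models, which would cover many correlated variables at once.

```python
import statistics

# Hypothetical historical readings of one critical process parameter
# (say, bioreactor temperature in C) from batches considered in control.
historical = [37.1, 36.9, 37.0, 37.2, 36.8, 37.1, 37.0, 36.9, 37.3, 37.0]

mean = statistics.mean(historical)
sigma = statistics.stdev(historical)

def cpv_check(reading, k=3.0):
    """Flag a reading outside mean +/- k*sigma control limits: a simplified
    continued-process-verification-style rule over a single variable."""
    return abs(reading - mean) > k * sigma

for value in [37.1, 37.9, 36.2]:
    status = "OUT OF CONTROL" if cpv_check(value) else "ok"
    print(f"reading {value:.1f} -> {status}")
```

The "knowledge packaged in AI models" idea from the interview is essentially this baseline (and far richer models of the relationships between variables) shipped alongside the recipe, so a plant in another country inherits the process understanding, not just the equipment.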

Published Date : Jun 24 2021


Gil Geron, Orca Security | AWS Startup Showcase: The Next Big Thing in AI, Security, & Life Sciences


 

(upbeat electronic music) >> Hello, everyone. Welcome to theCUBE's presentation of the AWS Startup Showcase: The Next Big Thing in AI, Security, and Life Sciences. In this segment, we feature Orca Security as a notable trendsetter within, of course, the security track. I'm your host, Dave Vellante, and today we're joined by Gil Geron, who's the co-founder and Chief Product Officer at Orca Security. And we're going to discuss how to eliminate cloud security blind spots. Orca has a really novel approach to cybersecurity problems, without using agents. So welcome, Gil, to today's session. Thanks for coming on. >> Thank you for having me. >> You're very welcome. So Gil, you're a disruptor in security, and cloud security specifically, and you've created an agentless way of securing cloud assets. You call this side scanning. We're going to get into that, and probe a little bit into the how, and into why agentless is the future of cloud security. But I want to start at the beginning. What were the main gaps that you saw in cloud security that spawned Orca Security? >> I think that the main gaps we saw when we started Orca were pretty similar in nature to gaps that we saw in legacy infrastructures, in more traditional data centers. But when you look at the cloud, at the nature of the cloud, the ephemeral nature, the technical possibilities, and the disruptive way of working with a data center, we saw that the usage of traditional approaches like agents in these environments is lacking. It's actually not working as well as it was in the legacy world, and it's also providing less value. And in addition, we saw that the friction between the security team and the IT, the engineering, the DevOps in the cloud is much worse than it was, and we wanted to find a way for them to work together, to bridge that gap, and to actually allow them to leverage the cloud technology as it was intended, to gain superior security than what was possible in the on-prem world. >> Excellent. Let's talk a little bit more about agentless. I mean, maybe we could talk a little bit about why agentless is so compelling. I mean, it's kind of obvious it's less intrusive, and you've got fewer processes to manage, but how did you create your agentless approach to cloud security? >> Yes, so I think the basis of it all is our mission and what we try to provide. We want to provide seamless security, because we believe it will allow the business to grow faster. It will allow the business to adopt technology faster, to be more dynamic, and to achieve goals faster. And so we've looked at what the problems are, what the issues are that slow you down. And one of them, of course, is the fact that you need to install agents: they cause performance impact, and they are technically segregated from one another, meaning you need to install multiple agents and they need to somehow not interfere with one another. And we saw this friction cause organizations to slow down their move to the cloud or slow down their adoption of technology. In the cloud, it's not only servers, right? You have containers, you have managed services, you have so many different options and opportunities. And so you need a different approach to how to secure that.
And so when we understood that this is the challenge, we decided to attack it using three pillars: one, trying to provide complete security and complete coverage with no friction; two, trying to provide comprehensive security, which means taking a holistic, platform approach and combining the data in order to give you visibility into all of your security assets; and last but not least, of course, context awareness, meaning being able to understand and find the 1% that matters in the environment, so you can actually improve your security posture and improve your security overall. And to do so, you had to have a technique that does not involve agents. And so what we've done is, we've found a way that utilizes the cloud architecture in order to scan the cloud itself. Basically, when you integrate Orca, you are able within minutes to understand, to read, and to view all of the risks. We are leveraging a technique that we are calling side scanning, which uses the API. It uses the infrastructure of the cloud itself to read the block storage device of every compute instance in the environment, and then we can deduce the actual risk of every asset. >> So that's a clever name, side scanning. Tell us a little bit more about that. Maybe you could double-click on how it works. You've mentioned it's looking into block storage, and leveraging the API is very clever, actually quite innovative. But help us understand in more detail how it works and why it's better than traditional tools that we might find in this space. >> Yes, so the way it works is that by reading the block storage device, we are able to actually deduce what is running on your computer, meaning what packages and applications are running. And then by combining the context, meaning understanding what kind of services you have connected to the internet, what the attack surface for these services is, what the business impact would be, and whether there would be any access to PII or to the crown jewels of the organization, you can not only understand the risks, you can also understand the impact, and then understand what the focus should be in terms of the security of the environment. The differentiating factor is that we are doing it using the infrastructure itself: we are not installing any agents, and we are not running anything inside your environment. You do not need to change anything in your architecture, or in the design of how you use the cloud, in order to utilize Orca. Orca works in a pure SaaS way. And so it means that there is no impact, not on cost and not on the performance of your environment, while using Orca. And it reduces any friction that might happen with other parts of the organization when you improve your security in the cloud.
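As a concrete picture of the pattern Gil describes, here is a minimal sketch of an API-driven side scan on AWS with boto3: snapshot the target's volumes through the control plane, share the snapshots with a scanning account, and read the guest's package inventory straight off the disk image. Orca has not published its implementation, so the scanner account ID, the mount path, and the overall flow are illustrative assumptions rather than the company's actual code.

```python
import boto3

SCANNER_ACCOUNT_ID = "111122223333"  # hypothetical scanning account

ec2 = boto3.client("ec2")

def share_instance_snapshots(instance_id):
    """Snapshot every EBS volume on an instance and share it with the
    scanner account; everything happens via APIs, nothing runs on the
    workload itself."""
    instance = ec2.describe_instances(InstanceIds=[instance_id])[
        "Reservations"][0]["Instances"][0]
    snapshot_ids = []
    for mapping in instance.get("BlockDeviceMappings", []):
        snap = ec2.create_snapshot(
            VolumeId=mapping["Ebs"]["VolumeId"],
            Description=f"side-scan of {instance_id}",
        )
        ec2.get_waiter("snapshot_completed").wait(
            SnapshotIds=[snap["SnapshotId"]])
        ec2.modify_snapshot_attribute(
            SnapshotId=snap["SnapshotId"],
            Attribute="createVolumePermission",
            OperationType="add",
            UserIds=[SCANNER_ACCOUNT_ID],
        )
        snapshot_ids.append(snap["SnapshotId"])
    return snapshot_ids

def installed_packages(root="/mnt/sidescan"):
    """Once a volume restored from the snapshot is attached read-only to a
    scanner host, parse a Debian-style package database off the image."""
    packages, name = {}, None
    with open(f"{root}/var/lib/dpkg/status", encoding="utf-8") as fh:
        for line in fh:
            if line.startswith("Package: "):
                name = line.split(": ", 1)[1].strip()
            elif line.startswith("Version: ") and name:
                packages[name] = line.split(": ", 1)[1].strip()
                name = None
    return packages  # inventory to match against a vulnerability feed
```

The point of the pattern is that every step goes through the cloud provider's control plane, which is why there is no agent to install and no performance impact on the workload being assessed.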
>> Yeah, and no process management intrusion. Now, I presume, Gil, that you eat your own cooking, meaning you're using your own product. First of all, is that true? And if so, how has your use of Orca as chief product officer helped you scale Orca as a company? >> So it's a great question. I think that something we understood early on is that there is quite a significant difference between the way you architect your security in the cloud and the way things actually reach production. Meaning there's a gap, like in everything in life, between how you imagine things will be and how they are in real life, in production. And so, even though we have amazing customers that are extremely proficient in security and have thought of a lot of ways to secure their environments, we, of course, are trying to secure our environment as much as possible. We are using Orca because we understand that no one is perfect. We are not perfect. My engineers might make mistakes, like those in every organization. And so we are using Orca because we want to have complete coverage. We want to understand if we are making any mistake. And sometimes the gap between the architecture and the hole in your security could take years to appear, and you need a tool that will constantly monitor your environment. And so that's why we have been using Orca all around, from day one, not to find bugs or to do QA; we're doing it because we need security for our cloud environment that will provide these values. And we've also passed compliance audits like SOC 2 and ISO using Orca, and it expedited those processes and allowed us to complete them extremely fast, because of having all of these guardrails and metrics. >> Yeah, so, okay. So you recognized that you potentially had, and did have, that same problem as your customers. It's helped you scale as a company, obviously, but how has it helped you scale as a company? >> So it helped us scale as a company by increasing the level of trust customers have in Orca. It allowed us to adopt technology faster, meaning we need much less diligence or exploration of how to use a technology, because we have these guardrails. So we can use the richness of the technology that we have in the cloud without the need to stop, to install agents, or to re-architect the way we are using the technology. We simply use the technology that the cloud offers, as it is. And so it allows rapid scalability. >> It allows you to move at the speed of cloud. Now, I'm going to ask you, as a co-founder, you've got to wear many hats: first of all, the co-founder and the leadership component there, and also the chief product officer. You've got to go out and get early customers, but even more importantly, you have to keep those customers, retention. So maybe you can describe how customers have been using Orca. What was the aha moment that you've seen customers react to when you showcase the product? And then how have you been able to keep them as loyal partners? >> So I think that we are very fortunate; we are blessed with our customers. Many of our customers are vocal about what they like about Orca. And something that comes up a lot of times is that this is a solution they have been waiting for. I can't express how many times I go on a call and a customer says, "I must say, I must share: this is a solution I've been looking for." And I think that in that respect, Orca is creating a new standard of what is expected from a security solution, because we are transforming security across the company from an inhibitor to an enabler. You can use the technology. You can use new tools. You can use the cloud as it was intended. And so (coughs) we have customers, and one of these cases is a customer that has a lot of data, and they were all super scared about using S3 buckets. We all know these incidents of S3 buckets being breached, or of people connecting to an S3 bucket and downloading the data.
So they had a policy saying, "S3 buckets should not be used. We do not allow any use of S3 buckets." And obviously you do need to use S3 buckets; it's a powerful technology. And so the engineering team in that customer's environment simply installed a VM, installed an FTP server on it, and set a very easy password for that FTP server. And obviously, two years later, someone also put all of the customer databases on that FTP server, open to the internet, open to everyone. And so I think it was, for him and for us as well, a hard moment. First of all, he had planned that no data would be leaked, but actually what happened was way worse. The data was open to the world over a technology that has existed for a very long time and is probably being scanned by attackers all the time. But after that, he not only allowed them to use S3 buckets, because he knew that now he can monitor, now he can understand that they are using the technology as intended, that they are using it securely. It's not open to everyone; it's open in the right way. And there was no PII on that S3 bucket. And so I think the way he described it is that now, when he comes to a meeting about things that need to be improved, people are waiting for this meeting, because he actually knows more than what they know about the environment. And I see it really so many times: a simple mistake, or something that looks benign, and when you look at the environment in a holistic way, when you are looking at the context, you understand that there is a huge gap that could be breached. And another cool example was a case where a customer allowed access from a third-party service that everyone trusts to the crown jewels of the environment. And he did it in a very traditional way: he allowed a certain IP to be open to that environment. So overall, it sounds like the correct way to go; you allow only a specific IP to access the environment. But what he failed to notice is that everyone in the world can register for free for this third-party service and access the environment from this IP. And so, even though it looks like you have access from a trusted third-party service, when it's a SaaS service it can actually mean that everyone can use it to access the environment. And using Orca, you saw immediately the access, you saw immediately the risk. And I see it time after time: people are simply using Orca to monitor, to guardrail, to make sure that the environment stays safe over time, and to communicate better in the organization, to explain the risk in a very easy way. And I would say the statistics show that within a few weeks, more than 85% of the different alerts and risks are being fixed, and I think it goes to show how effective it is in improving your posture, because people are taking action. >> Those are two great examples. And of course, it's often said that the shared responsibility model is often misunderstood, and those two examples underscore it: thinking, "Oh, I hear all this, I see all this press about S3, but it's up to the customer to secure the endpoint components, et cetera." Configure it properly is what I'm saying. So what an unintended consequence. But Orca plays a role in helping the customer with their portion of that shared responsibility. Obviously, AWS is taking care of its part.
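Both of those incidents come down to reachable misconfigurations. As one illustrative fragment, and not a claim about how Orca's engine actually works, here is how a boto3 script might flag S3 buckets that lack a full public-access block:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def buckets_missing_public_access_block():
    """List buckets whose public-access block is absent or incomplete,
    the kind of exposure described in the story above."""
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            cfg = s3.get_public_access_block(Bucket=name)[
                "PublicAccessBlockConfiguration"]
            if not all(cfg.values()):  # some protection is switched off
                flagged.append(name)
        except ClientError as err:
            if err.response["Error"]["Code"] == \
                    "NoSuchPublicAccessBlockConfiguration":
                flagged.append(name)  # no block configured at all
            else:
                raise
    return flagged

print(buckets_missing_public_access_block())
```

A check like this covers only one account-level setting; the broader point in the conversation is that context, such as what data sits in the bucket and who can reach it, is what turns a finding into a priority.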
Now, as part of this program, we ask a bit of a challenging question of everybody, because look, as a startup, you want to do well, you want to grow a company, you want to have your employees grow and help your customers, and that's great, and grow revenues, et cetera. But we feel like there's more. And so we're going to ask you, because the theme here is all about cloud scale: what is your defining contribution to the future of cloud at scale, Gil? >> So I think that the cloud has brought a revolution to the data center, okay? The way that you are building services, the way that you are allowing technology to be more adaptive, dynamic, ephemeral, accurate. And you see that it is being adopted across all vendors, all types of industries, across the world. I think that Orca is the first company that allows you to use this technology to secure your infrastructure in a way that was not possible in the on-prem world, meaning that when you're using the cloud technology and you're using technologies like Orca, you're actually gaining superior security to what was possible in the pre-cloud world. And I think that, in that respect, Orca is going hand in hand with that evolution, and actually revolutionizing the way you expect to consume security and the way you expect to get value from security solutions, across the world. >> Thank you for that, Gil. And so we're at the end of our time, but we'll give you a chance for a final wrap-up. Bring us home with your summary, please. >> So I think that Orca is building the cloud security solution that actually works, with its innovative agentless approach to cybersecurity: gaining complete coverage, a comprehensive solution, and an understanding of the complete context of the 1% that matters in your security challenges across your data centers in the cloud. We are bridging the gap between the security teams and the business's need to grow, and doing so at the pace of the cloud. I think the approach of being able to install a security solution within minutes and get a complete understanding of your risk goes hand in hand with the way you expect to adopt cloud technology. >> That's great, Gil. Thanks so much for coming on. You guys are doing awesome work. Really appreciate you participating in the program. >> Thank you very much. >> And thank you for watching this AWS Startup Showcase. We're covering the next big thing in AI, Security, and Life Sciences on theCUBE. Keep it right there for more great content. (upbeat music)

Published Date : Jun 24 2021

Rohan D'Souza, Olive | AWS Startup Showcase | The Next Big Thing in AI, Security, & Life Sciences.


 

(upbeat music) (music fades) >> Welcome to today's session of theCUBE's presentation of the AWS Startup Showcase. I'm your host, Natalie Erlich. Today, we're going to feature Olive, in the life sciences track. And of course, this is part of the future of AI, security, and life sciences. Here we're joined by our very special guest, Rohan D'Souza, the Chief Product Officer of Olive. Thank you very much for being with us. Of course, we're going to talk today about building the internet of healthcare. I do appreciate you joining the show. >> Thanks, Natalie. My pleasure to be here, I'm excited. >> Yeah, likewise. Well, tell us about AI and how it's revolutionizing health systems across America. >> Yeah, I mean, we're clearly living at a time of a lot of hype with AI, and there's a tremendous amount of excitement. Unfortunately for us, or, you know, depending on whether you're an optimist or a pessimist, we had to wait for a global pandemic for people to realize that technology is here to really come to the aid of everybody in healthcare, not just on the consumer side, but on the industry side and on the enterprise side of delivering better care. And it's truly an exciting time, but there's a lot of buzz, and we play an important role in trying to define that a little bit better, because you can't go far today without hearing the term AI being used, or misused, in healthcare. >> Definitely. And also I'd love to hear about how Olive is fitting into this, and its contributions to AI in health systems. >> Yeah, so at its core, the industry thinks of us very much as an automation player. We've historically been in the trenches of healthcare, mostly on the provider side of the house, leveraging technology to automate a lot of the high-velocity, low-variability items. Our founding and our DNA are in this idea that we think it's unfair that healthcare relies on humans as routers. We have looked at the problem of technology not talking to each other being solved by using humans. And so we set out to really go into the trenches of healthcare and bring about core automation technology. And you might be sitting there wondering, well, why are we talking about automation under the umbrella of AI? That's because we are challenging the very status quo of silo-based automation, and we're building what we say is the internet of healthcare. And more importantly, we've brought a human, very empathetic approach to automation, and we're leveraging technology by saying that when one Olive learns, all Olives learn, so that we take advantage of the network effect of a single Olive worker in the trenches of healthcare, sharing that knowledge and wisdom both with her human counterparts and with her AI worker counterparts that are showing up to work every single day in some of the most complex health systems in this country. >> Right. Well, when you think about AI and, you know, computer technology, you don't exactly think of, you know, humanizing kind of potential. So how are you seeking to make AI really humanistic, and empathetic, potentially? >> Well, most importantly, the way we're starting with that is that we are treating Olive just like we would any human counterpart. We don't want to think of this as just purely a technology play. Most importantly, healthcare is deeply rooted in this idea of investing in outcomes, and not necessarily investing in core technology, right?
So we have learned that from the early days of doing some really robust, integrated AI-based solutions, but we've humanized it, right? Take, for example: we treat Olive just like any other human worker. She shows up to work, she's onboarded, she has an obligation to her customers and to her human worker counterparts. And we care very deeply about the cost of the false positive that exists in healthcare, right? And we do this in various different ways. Most importantly, we do it in an extremely transparent and interpretable way. By transparent I mean Olive provides deep insights back to her human counterparts in the form of reporting and status reports, and we even have a term internally that we call a sick day. So when Olive calls in sick, we don't just tell our customers Olive's not working today; we tell our customers that Olive is taking a sick day, just like a human worker who might need to stay home and recover. In our case, we just happened to have to rewire a certain portal integration because the portal went through a massive change, and Olive has to take a sick day in order to make that fix, right? And this is, you know, just helping our customers understand, and feel like they can achieve success with, AI-based deployments, and not sort of a robot hanging over them where we're waiting for Skynet to arrive; it's truly humanizing the aspects of AI in healthcare. >> Right. Well, that's really interesting. How would you describe Olive's personality? I mean, could you attribute a personality? >> Yeah, she's unbiased, data-driven, extremely transparent in her approach, and she's empathetic. There are certain days where she's direct, and there are certain ways where she can be quirky in the way she shares stuff. Most importantly, she's incredibly knowledgeable, and we really want to bring the knowledge that she has gained over the years of working in the trenches of healthcare to her customers. >> That sounds really fascinating, and I love hearing about the human side of Olive. Can you tell us how this AI, though, is actually improving efficiencies in healthcare systems right now? >> Yeah, not too many people know that about a third of every single US healthcare dollar is spent on the administrative burden of delivering care. It's really, really unfortunate. In the capitalistic world of the US healthcare system, there is a lot of tail wagging the dog that ends up happening. Most importantly, I don't know the last time you went through a process where you had to go and get an MRI or a CT scan, and your provider told you that they first have to wait for the insurance company to give them permission to perform this particular task. And when you think about that, one, there's, you know, the tail-wagging-the-dog scenario, but two, there's the administrative burden to actually seek the approval for that test that your provider is telling you you need. Right? And what we've done, as humans, or as sort of systems, is we have just put humans in the supply chain of connecting the left side to the right side. So what we're doing is taking advantage of massively distributed cloud computing platforms, I mean, we're fully built on the AWS stack, and we take advantage of things that we can very quickly stand up and spin up.
And we're leveraging core capabilities in our computer vision and our natural language processing to do a lot of the tasks that, unfortunately, we have relegated humans to do. And our goal is: can we allow humans to function at the top of their license, irrespective of what the license is, right? It could be a provider, it could be somebody working in the trenches of revenue cycle management, or it could be somebody in a call center talking to a very anxious patient who just learned that he or she might need to take a test in order to rule out something catastrophic, like a very adverse diagnosis. >> Yeah, really fascinating. I mean, do you think that this is just the tip of the iceberg? How much more potential does AI have for healthcare? >> Yeah, I think we're very much in the early, early days of AI being applied in a production, practical sense. You know, AI has been talked about for many, many years in the trenches of healthcare. It has found its place very much in challenging status quos in research, but it has struggled to find its way when it comes to the practical application of AI. And that's partly because, going back to the point that I raised earlier, the cost of the false positive in healthcare is really high. It can't just be: I bought a pair of shoes online, it recommended that I buy a pair of socks, and I returned the socks because I realized they're really ugly and hideous and I don't want them. In healthcare, you can't do that. Right? In healthcare, you can't tell a patient or somebody else, oops, I really screwed up, I should not have told you that. So what that's meant for us, in the trenches of delivering AI-based applications, is that we've been through a cycle of continuous pilots and proofs of concept. Now, though, with AI starting to take center stage, where a lot of what has been hardened in the research world can be applied practically, to avoid the burnout and the sheer cost that the system is under, we're starting to see a real upward tick of people implementing AI-based solutions, whether for decision-making, administrative tasks, or drug discovery. It is an amazing, amazing time to be at the intersection of the practical application of AI and really, really good healthcare delivery for all of us. >> Yeah, I mean, that's really, really fascinating, especially your point on practicality. Now, how do you foresee AI being able to be more commercial in its appeal? >> I think you have to have a couple of key wins under your belt, number one. Number two, you need the standard sort of outcomes-based publications that are required. And I think we need real champions on the inside of health systems to support the narrative that we as vendors are pushing heavily around an AI-driven, AI-approachable world, and we're starting to see that right now. You know, it took a really, really long time for providers, first here in the United States, but now internationally, to adopt and move away from paper-based records to electronic medical records. You still hear a lot of pain from people complaining about their EMRs, but try to take the EMR away from them for a day or two, and you'll very quickly realize that life without an EMR is extremely hard. AI is starting to get to that point. For us, we always say that Olive needs to pass the Turing test. Right?
So when you clearly get this sort of feeling that I can trust my AI counterpart, my AI worker, to go and perform these tasks, because I realize that as long as it's unbiased, as long as it's data-driven, as long as it's interpretable and something that I can understand, I'm willing to try this out on a routine basis. But we really, really need those champions on the internal side to promote the safe use of these applications. >> Yeah. Well, just another thought here: looking at your website, you really focus on some of the broken systems in healthcare, and how Olive is uniquely prepared to shine a light on that, where others aren't. Can you give us an insight into that? >> Yeah. You know, "shine the light" is a play on the fact that there's a tremendous amount of excitement about technology and AI in healthcare applied to the clinical side of the house. And it's the obvious place that most people would want to invest in, right? Can I bring an AI-based technology to the clinical side of the house? Decision support tools, drug discovery, clinical NLP, et cetera, et cetera. But going back to what I said, 30% of what happens today in healthcare is on the administrative side. And so there's what we call the dark side of healthcare, where it's not the most exciting place to do true innovation, because you're controlled very much by some big players in the house. And that's why we offer this insight: we can shine a light on a place that has typically been very dark in healthcare. It's the mundane aspects of traditional operational and financial performance, which don't get a lot of love from the tech community. >> Well, thank you, Rohan, for this fascinating conversation on how AI is revolutionizing health systems across the country, and also the unique role that Olive is now playing in driving the efficiencies that we really need. Really looking forward to our next conversation with you. That was Rohan D'Souza, the Chief Product Officer of Olive, and I'm Natalie Erlich, your host for the AWS Startup Showcase, on theCUBE. Thank you very much for joining us, and we look forward to you joining us for the next session. (gentle music)

Published Date : Jun 24 2021

Zach Booth, Explorium | AWS Startup Showcase | The Next Big Thing in AI, Security, & Life Sciences.


 

(gentle upbeat music) >> Everyone, welcome to the AWS Startup Showcase presented by theCUBE. I'm John Furrier, host of theCUBE. We are here talking about the next big thing in cloud, featuring Explorium. For the tracks, we've got AI, cybersecurity, and life sciences. Obviously AI is hot, with machine learning powering that. Today we're joined by Zach Booth, director of global partnerships and channels at Explorium. Zach, thank you for joining me today remotely. Soon we'll be in person, but thanks for coming on. We're going to talk about rethinking external data. Thanks for coming on theCUBE. >> Absolutely, thanks so much for having us, John. >> So you guys are a hot startup. Congratulations; we just wrote about it on SiliconANGLE, you've got a new $75 million of fresh funding. So you're part of the Amazon partner network and growing like crazy. You guys have a unique value proposition: looking at external data, and having a platform for advanced analytics and machine learning. Can you take a minute to explain what you guys do? What is this platform? What's the value proposition, and why do you exist? >> Bottom line, we're bringing context to decision-making. The premise of Explorium, and this is consistent with the framework of advanced analytics, is that we're helping customers reach better, more relevant external data to feed into their predictive and analytical models. It's quite a challenge to actually integrate and effectively leverage data that's coming from beyond your organization's walls. It's manual, it's tedious, it's extremely time-consuming, and that's a problem. It's really the problem that Explorium was built to solve. And our philosophy is that it shouldn't take so long. It shouldn't be such an arduous process, but it is. So we built a company, and a technology, that's capable, for any given analytical process, of connecting a customer to relevant sources beyond their organization's walls. And this really impacts decision-making by bringing variety and context into their analytical processes. >> You know, one of the things I see a lot in my interviews with theCUBE, and talking to people in the industry, is that everyone talks a big game about having some machine learning and AI. They're like, "Okay, I got all this cool stuff." But at the end of the day, people are still using spreadsheets. They're wrangling data. And a lot of it's dominated by still fenced-off data warehouses, and you start to see the emergence of companies built on the cloud. I saw the Snowflake IPO; you're seeing a whole new shift of new brands emerging that are doing things differently, right? And because there's such a need to just move out of the archaic spreadsheet and data presentation layers, which are slow, antiquated, outdated. How do you guys solve that problem? You guys are on the other side of that equation; you're on the new wave of analytics. What are you guys solving? How do you make that work? How do you get on that wave? >> So basically, the way Explorium sees the world, and I think most analytical practitioners these days see it in a similar way, is that the key to any analytical problem is having the right data. And the challenge that we've talked about, and that we're really focused on, is helping companies reach that right data. Our focus is on the data part of data science. The science part is the algorithmic side. It's interesting: that was kind of the first frontier of machine learning, as practitioners and experts focused on it, and cloud and compute really enabled that.
The challenge today isn't so much "What's the right model for my problem?" but "What's the right data?" And that's the premise of what we do. Your model's only as strong as the data that it trains on. And going back to that concept of bringing context to decision-making: within that framework, the key is bringing comprehensive, accurate, and highly varied data into my model. But if my model is only being informed by internal data, which is wonderful data, but only internal, then it's missing context. And we're helping companies reach that external variety through a pretty elegant platform that can connect the right data to my analytical process. And this really has implications across several different industries and a multitude of use cases. We're working with companies across consumer packaged goods, insurance, financial services, retail, e-commerce, even software as a service. And the use cases can range from fraud and risk to marketing and lifetime value. Now, why is this such a challenge today with antiquated or analog means? With a spreadsheet or with a rule-based approach, we're pretty limited. It was an effective means of decision-making, of generating and creating actions, but it's highly limited in its ability to change, to be dynamic, to be flexible. With modeling and using data, it's really a huge arsenal that we have at our fingertips; the trick is extracting value from within it. There's obviously latent value within our org, but every day there's more and more data being created outside of our org, and that is a challenge: to go out and get it, to effectively filter, navigate, and connect to it. So we've basically built the tech to help us navigate and query for any given analytical question: find me the right data, rather than starting with "what's the problem I'm looking for, now let me think about the right data," which is kind of akin to going into a library and searching for a specific book, where you have to know which book you're looking for. Instead of that, you say: there's a universe of data out there, and I want to access it, I want to tap into what's right. Can I use a tool that can effectively query all that data, find what's relevant for me, connect it and match it with my own, and distill signals or features from that data to provide more variety into my modeling efforts, yielding a robust decision as an output?
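Zach doesn't detail Explorium's internals here, but the step he's describing, finding which outside signals are actually worth connecting, can be pictured with a simple filter: score every candidate external column against the prediction target and keep the strongest. A minimal sketch under stated assumptions: a pre-joined table of numeric candidate signals, with the file and column names hypothetical.

```python
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

# Hypothetical pre-joined table: the prediction target plus one column
# per candidate external signal (all assumed numeric for this sketch).
table = pd.read_csv("candidate_enrichments.csv")
target = table.pop("outcome")

# Score every candidate column against the target.
scores = mutual_info_classif(table.fillna(0), target, random_state=0)
ranking = pd.Series(scores, index=table.columns).sort_values(ascending=False)

# The handful of outside signals worth keeping for the model.
print(ranking.head(10))
```

A real platform would go much further, resolving entities and engineering features automatically, but the filtering idea, quantifying which external data is relevant before paying the cost of integrating it, is the same.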
>> I love that paradigm of just having that searchable kind of capability. I've got to ask you about one of the big things I've heard people talk about; I want to get your thoughts on this. How do I know if I even have the right data? Is the data addressable? Can I find it? Can it even be queried? How do you solve that problem for customers when they say, "I really want the best analytics, but do I even have the data, or is it the right data?" How do you guys look at that? >> So the way our technology was built, it's quite relevant for a few different profiles of customers. Some of these customers, really the genesis of the company, are those cloud-based, model-driven-since-day-one organizations; they're working with machine learning and they have models in production. They're quite mature, in fact. And the problem they've been facing is, again, that our models are only as strong as the data they're training on, the only data they're training on is internal data, and we're seeing diminishing returns from those decisions. So now suddenly we're looking for outside data, and we're finding that to effectively use outside data, we have to spend a lot of time: 60% of our time is spent thinking of data, going out and getting it, cleaning it, and validating it, and only then can we actually train a model and assess if there's an ROI. That takes months. And if it doesn't push the needle from an ROI standpoint, then it's an enormous opportunity cost, which is very, very painful, and which goes back to their decision-making: is it even worth it if it doesn't push the needle? That's why there had to be a better way. And what we built is relevant for that audience, as well as for companies that are in the midst of their digital transformation: we're data rich, but data science poor. We have lots of data, latent value to extract from within our own data, and at the same time tons of valuable data outside of our org. Instead of waiting 18 to 36 months to transform ourselves, get our infrastructure in place, get our data collection in place, and only then start having models in production based on our own data, you can now do this in tandem. And that's what we're seeing with a lot of our enterprise customers, using their analysts and their data engineers; some of them have a data science group in their innovation team or center of excellence as well. And they're using the platform to inform a lot of their different models across lines of business. >> I love that expression, "data-rich." A lot of people are becoming full of data, too. They have a data problem; they have a lot of it. I want to get your thoughts, and I think that connects to my next question, which is: as people look at the cloud, for instance, and again, all these old methods were internal, internal to the company, but now that you have this idea of cloud, more integration's happening. More people are connecting with APIs. There's more access to potentially more signals, more data. How does a company go to that next level to connect in and acquire the data and make it faster? Because I can almost imagine that the signals that come from that context of merging external data, and that's the topic of this theme, re-imagining external data, are an extremely valuable signaling capability. And so it sounds like you guys make it go faster. So how does it work? Is it the cloud? Take us through that value proposition. >> Well, it's amazing how fast organizations have been moving onto the cloud over the past year during COVID, and the fact that alternative or external data, depending on how you refer to it, has really, really blown up. And it's really exciting. This is coming in the form of data providers and data marketplaces, and more and more organizations are moving from rule-based decision-making to predictive decision-making, and that's exciting. Now, what's interesting about this company, Explorium: we're working with a lot of different types of customers, but our long game has a real high upside. There are more and more companies that are starting to use data and are transformed, or are already in the midst of their transformation. So they need outside data. And the challenge that I described exists for all of them. So how does it really work? Today, if I don't have outside data, I have to think. It's based on hypothesis, and it all starts with that hypothesis, which is already prone to error from the get-go. You and I might be domain experts for a given use case. Let's say we're focusing on fraud.
We might think about a dozen different types of data sources, but going out and getting them, like I said, takes a lot of time, and harmonizing them, cleaning them, and being able to use them takes even more time. And that's just for each one. So if we have to do that across dozens of data sources, it's going to take far too much time, the juice isn't worth the squeeze, and so I'm going to forgo using them. A metaphor that I like to use when I try to describe what Explorium does to my mom is buying your first home. It's a very, very important financial decision. When you're buying a home, you're thinking about all the different inputs in your decision-making. It's not just about the blueprint of the house, how many rooms there are, and the criteria you're looking for; you're also thinking about external variables. You're thinking about the school zone, the construction, the property value, alternative or similar neighborhoods. That's probably your most important financial decision, or one of the largest at least. A machine learning model in production is an extremely important and expensive investment for an organization. Now, the difference is that as a consumer buying a home, we have all this data at our fingertips to find out all of those external inputs. Organizations don't, which seemed kind of crazy to me when I first got into this world. And so they're making decisions with their first-party data only. First-party data is wonderful data. It's the best: it's representative, it's high quality, it's high value for their specific decision-making and use cases. But it lacks context. And there's so much context, in the form of location-based data and business information, that could inform decision-making but isn't being used. It translates to sub-optimal decision-making, let's say. >> Yeah, and I think one of the insights around looking at signal data in context is that by merging it with the first-party data, it creates a huge value window; it gives you observational data, maybe potentially insights into customer behavior. So totally agree, I think that's a huge observation. You guys are definitely on the right side of history here. I want to get into how it plays out for the customer. You mentioned the different industries; obviously data's in every vertical. And vertical specialization with the data has to be very metadata-driven. I mean, metadata in oil and gas is different than in fintech. I mean, there's some overlap, but for the most part you've got to have that acute context for each one. How are you guys working? Take us through an example of someone getting it right, getting the right setup: take us through the use case of how someone onboards Explorium, how they put it to use, and what some of the benefits are. >> So let's break it down into a three-step phase, and let's use that example of fraud from earlier. An organization would basically have past historical data on how many customers were actually fraudulent at the end of the day. So this use case, and it's a core business problem, comes with the intention to reduce that fraud. They would basically provide, going with your description earlier, something similar to an Excel file. This can be pulled from any database out there, and we're working with loads of them, and they would provide what's called training data. This training data is their historical data, and it would have as an output the outcome, the conclusion: was this business fraudulent or not? Yes or no. Binary.
The platform would understand that data itself and train a model with external context in the form of enrichments. These data enrichments at the end of the day are important and relevant, but their purpose is to generate signals. Signals are, to your point, the bottom line: what everyone's trying to achieve, identify, discover, and even engineer, by using the data they have and the data they have yet to integrate with. So the platform would connect to your data, infer and understand the meaning of that data, and, based on this matching of internal plus external context, automate the process of distilling signals, or, as these are referred to in machine learning, features. And these features are really the bread and butter of your modeling efforts. If you can leverage features that come from data outside of your org, and they're quantifiably valuable, which the platform measures, then you're putting yourself in a position to generate an edge in your modeling efforts. Meaning now you might reduce your fraud rate, so your customers get a much better, more compelling offer or service or price point. It impacts your business in a lot of ways. What Explorium is bringing to the table in terms of value is a single access point to a huge universe of external data. It expedites your time to value: rather than data analysts, data engineers, and data scientists spending a significant amount of time on data preparation, they can now spend most of their time on feature or signal engineering. That's the more fun and interesting part, less so the boring part. And they can scale their modeling efforts. So: time to value, access to a huge universe of external context, and scale. >> So I see two things here; just make sure I get this right, 'cause it sounds awesome. So one, the core assets on the engineering side, whether it's the platform engineers or data engineering, are more optimized for getting more signaling, which is more impactful for the context acquisition, looking at contexts that might have a business outcome, versus wrangling and doing mundane, heavy lifting. >> Yeah, so with it... sorry, go ahead. >> And the second one is that you create a democratization for analysts or business people who are used to dealing with spreadsheets, who just want to play with the data and get a feel for it, or experiment, do querying, try to match planning with policy. >> Yeah, so the way I like to communicate this is that Explorium is this one-two punch. It's got a technology layer that provides entity resolution, so matching with external data, which is otherwise a manual endeavor; Explorium has automated that piece. The second is a huge universe of outside data. So this circumvents procurement: you don't have to spend all of these one-off efforts on finding data, organizing it, cleaning it, et cetera. You can use Explorium as your single access point and gateway to external data and match it with your own. So this will accelerate your time to value, and ultimately the number of valuable signals that you can discover and leverage through the platform, and you can feed this into your own pipelines or whatever system or analytical need you have. >> Zach, great stuff. I love talking with you, and I love the hot startup action here, 'cause again, you're on the net new wave here. Like anything new, I was just talking to a colleague here. (indistinct) When you have something new, it's like driving a car for the first time.
You need someone to give you some driving lessons, or to figure out how to operationalize it, or take advantage of the one-two punch, as you pointed out. How do you guys get someone up and running? 'Cause let's just say I'm like, okay, I'm bought into this. So, no-brainer, you've got my attention. I still don't understand: do you provide a marketplace of data? Do I need to get my own data? Do I bring my own data to the party? Do you guys provide relationships with other data providers? How do I get going? How do I drive this car? How do you answer that? >> So first, explorium.ai offers a free trial, and we're a product-focused company. So a practitioner, maybe a data analyst, a data engineer, or a data scientist, would use this platform to enrich their analytical and BI decision-making, or any models that they're working on, either in production or being trained. Now, oftentimes models that are being trained don't actually make it to production because they don't meet a minimum threshold, meaning they're not going to have a positive business outcome if they're deployed. With Explorium you can now bring variety into that and increase the chances that the model being trained will actually be deployed, because it's being fed with the right data: the data that you need, not just the data that you have. How a business would start working with us would typically be with a use case that has high business value. Maybe this is a fraud use case or a risk use case in a B2B or even B2SMB context. It might be a marketing use case: we're talking about LTV modeling, lookalike modeling, lead acquisition and generation for CPGs, and field sales optimization. The platform would explore and understand your data, enrich that data automatically, generate and discover new signals from external data plus your own, and feed this into either a model that you have in-house, or end to end into the platform itself. We provide customer success to help you build out your first model, perhaps, and hold your hand through that process. But typically, after a few months of running and building models, most of our customers have multiple models in production on their own. And that's really exciting, because we're helping organizations move from rule-based decision-making; we're being their bridge to data science.
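Pulling that workflow together, here is a minimal end-to-end sketch of the pattern just described: internal labels, externally enriched features, then a trained fraud model. The file names, the join key, and the pre-joined external table are hypothetical stand-ins for the matching and enrichment the platform automates.

```python
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Internal first-party training data: one row per business, with the
# historical outcome ("was this business fraudulent?") as a binary label.
internal = pd.read_csv("historical_customers.csv")   # hypothetical file
external = pd.read_csv("external_signals.csv")       # hypothetical enrichments

# A left join keeps every labeled business even when enrichment is missing;
# HistGradientBoostingClassifier tolerates the resulting NaNs natively.
data = internal.merge(external, on="business_id", how="left")

X = data.drop(columns=["business_id", "is_fraud"])   # assumed numeric columns
y = data["is_fraud"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = HistGradientBoostingClassifier().fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.3f}")
```

Whether the external columns earn their keep shows up directly in that holdout score, which mirrors the point that the value of outside data is quantifiable.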
>> Awesome. I noticed that in your title you handle global partnerships and channels, which I'm assuming means you guys have a network and ecosystem you're working with. What are some of the partnerships and channel relationships that you have, and that you bring to bear in the marketplace? >> So data and analytics, this space, is very much an ecosystem. Our customers are working across different clouds, with all sorts of vendors and technologies; basically, they have a pretty big stack. We're a part of that stack, and we want to play symbiotically within our customers' stacks so that we can contribute value whether we sit here, there, or in another place. Our partners range from consulting and system integration firms, those that are building out the blueprint for a digital transformation or actually implementing that digital transformation. We contribute value in both of these cases as a technology innovation layer with our product, and a customer would then consume Explorium afterwards, after that transformation is complete, as part of their stack. We're also working with a lot of the different cloud vendors. Our customers are all cloud-based, and data enrichment is becoming more and more relevant alongside some wonderful machine-learning tools, whether AutoML platforms or the data marketplaces that are popping up, which is very exciting. What we're bringing to the table as an edge is accelerating the connection between the data that I think I want as a company and how to actually extract value from that data. Being part of this ecosystem means that we can be working with, and should be working with, a lot of different partners to contribute incremental value to our end customers. >> The final question I want to ask you is: if I'm in a conference room with my team and someone says, "Hey, we should be rethinking our external data," what would I say? How would I pound my fist on the table, or raise my hand, and say, "Hey, I have an idea, we should be thinking this way"? What would be my argument to the team to re-imagine how we deal with external data? >> So it might be a scenario where, rather than pounding your fists on the table, you might be banging your head on the table, because it's such a challenging endeavor today. Companies have to think about: what's the right data for my specific use cases? I need to validate that data: is it relevant? Is it real? Is it representative? Does it have good coverage, good depth, and good quality? Then I need to procure that data, and this is about getting a license for it. I need to integrate that data with my own, which means I need to have some in-house expertise to do so. And then, of course, I need to monitor and maintain that data on an ongoing basis. All of this is a pretty big thing to undertake and undergo, and having a partner to facilitate that external data integration and ongoing refresh and monitoring, and being able to trust that it is all harmonized and high quality, and that I can find the valuable data without having to manually pick and choose and try to discover it myself, is a huge value add, particularly the larger the organization or partner, because there's so much data out there, and there's a lot of noise out there too. And so if I can, through a single partner or access point, tap into that data and quantify what's relevant for my specific problem, then I'm putting myself in a really good position and optimizing the allocation of my very expensive and valuable data analyst and engineering resources. >> Yeah, I think one of the things you mentioned earlier was a huge point, a good callout: it goes beyond first-party data, because even with just first-party data, in an internal view, some of the best, most successful innovators that we've been covering at cloud scale are extending their first-party data to external providers. So they're in the value chains of solutions that share their first-party data with other suppliers. And so that's just, again, more of an extension of the first-party data. You're kind of taking it to a whole 'nother level, where there's another, external set of data beyond it that's even more important. I think this is a fascinating growth area, and I think you guys are onto it. Great stuff. >> Thank you so much, John. >> Well, I really appreciate you coming on, Zach. Final word: give a quick plug for the company. What are you up to, and what's going on? >> What's going on with Explorium? We are growing very fast. We're a very exciting company. I've been here since the very early days, and I can tell you that we have a stellar working environment and a very, very strong, down-to-earth, high-work-ethic culture.
We're growing in the sense of our office in San Mateo, New York, and Tel Aviv are growing rapidly. As you mentioned earlier, we raised our series C so that totals Explorium to raising I think 127 million over the past two years and some change. And whether you want to partner with Explorium, work with us as a customer, or join us as an employee, we welcome that. And I encourage everybody to go to explorium.ai. Check us out, read some of the interesting content there around data science, around the processes, around the business outcomes that a lot of our customers are seeing, as well as joining a free trial. So you can check out the platform and everything that has to offer from machine learning engine to a signal studio, as well as what type of information might be relevant for your specific use case. >> All right Zach, thanks for coming on. Zach Booth, director of global partnerships and channels that explorium.ai. The next big thing in cloud featuring Explorium and a part of our AI track, I'm John Furrier, host of theCUBE. Thanks for watching.
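(To make the enrichment workflow described above concrete: the sketch below trains one model on first-party features alone and one on externally enriched features, and compares them on a held-out set. The file names, join key, and column names are hypothetical placeholders, not Explorium's actual API; only the pandas and scikit-learn calls are standard.)

import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical inputs: first-party records plus a table of external signals.
internal = pd.read_csv("accounts.csv")            # features + "churned" label
external = pd.read_csv("external_signals.csv")    # e.g. firmographics per company

# Enrichment step: join external signals onto internal records by a shared key.
enriched = internal.merge(external, on="company_id", how="left")

def holdout_auc(frame, feature_cols):
    """Train on 70% of the rows, report AUC on the held-out 30%."""
    X = frame[feature_cols].fillna(0)
    y = frame["churned"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = GradientBoostingClassifier().fit(X_tr, y_tr)
    return roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

first_party = ["spend", "tenure_months", "support_tickets"]
with_external = first_party + ["employee_count", "web_traffic_rank"]

print("first-party only :", holdout_auc(enriched, first_party))
print("with enrichment  :", holdout_auc(enriched, with_external))

The point of the comparison is the one Zach makes: variety in the training data, not just volume, is often what moves a model over the threshold that gets it deployed.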

Published Date : Jun 24 2021


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
John | PERSON | 0.99+
Zach Booth | PERSON | 0.99+
Explorium | ORGANIZATION | 0.99+
Zach | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
60% | QUANTITY | 0.99+
$75 million | QUANTITY | 0.99+
John Furrier | PERSON | 0.99+
San Mateo | LOCATION | 0.99+
two things | QUANTITY | 0.99+
Tel Aviv | LOCATION | 0.99+
127 million | QUANTITY | 0.99+
Excel | TITLE | 0.99+
explorium.ai | OTHER | 0.99+
first party | QUANTITY | 0.99+
Today | DATE | 0.99+
first time | QUANTITY | 0.99+
first model | QUANTITY | 0.98+
today | DATE | 0.98+
both | QUANTITY | 0.98+
first home | QUANTITY | 0.98+
one | QUANTITY | 0.98+
first | QUANTITY | 0.98+
three-step | QUANTITY | 0.98+
second | QUANTITY | 0.97+
two punch | QUANTITY | 0.97+
two | QUANTITY | 0.97+
first frontier | QUANTITY | 0.95+
New York | LOCATION | 0.95+
theCUBE | ORGANIZATION | 0.94+
AWS | ORGANIZATION | 0.93+
explorium.ai | ORGANIZATION | 0.91+
each one | QUANTITY | 0.9+
second one | QUANTITY | 0.9+
single partner | QUANTITY | 0.89+
AWS Startup Showcase | EVENT | 0.87+
dozens | QUANTITY | 0.85+
past year | DATE | 0.84+
single access | QUANTITY | 0.84+
First party | QUANTITY | 0.84+
series C | OTHER | 0.79+
COVID | EVENT | 0.74+
past two years | DATE | 0.74+
36 months | QUANTITY | 0.73+
18 | QUANTITY | 0.71+
Startup Showcase | EVENT | 0.7+
SiliconANGLE | ORGANIZATION | 0.55+
tons | QUANTITY | 0.53+
things | QUANTITY | 0.53+
snowflake IPO | EVENT | 0.52+

Dr Eng Lim Goh, High Performance Computing & AI | HPE Discover 2021


 

>> Welcome back to HPE Discover 2021, theCUBE's virtual coverage, continuous coverage of HPE's annual customer event. My name is Dave Vellante, and we're going to dive into the intersection of high performance computing, data and AI with Dr. Eng Lim Goh, who is the Senior Vice President and CTO for AI at Hewlett Packard Enterprise. Dr. Goh, great to see you again. Welcome back to theCUBE. >> Hello Dave, great to talk to you again. >> You might remember last year we talked a lot about swarm intelligence and how AI is evolving. Of course you hosted the Day 2 keynotes here at Discover. You talked about thriving in the age of insights and how to craft a data-centric strategy, and you addressed some of the biggest problems I think organizations face with data: data is plentiful, but insights are harder to come by. And you really dug into some great examples in retail, banking, medicine, healthcare and media. But stepping back a little bit to zoom out on Discover '21, what do you make of the event so far, and some of your big takeaways? >> Well, you started with the insightful question. Data is everywhere, but we lack the insight. That's also the main reason why Antonio on Day 1 focused and talked about the fact that we are now in the age of insight, and how to thrive in this new age. What I then did on the Day 2 keynote following Antonio is to talk about the challenges that we need to overcome in order to thrive in this new age. >> So maybe we could talk a little bit about some of the things that you took away. I'm specifically interested in some of the barriers to achieving insights when, you know, customers are drowning in data. What do you hear from customers? What were your takeaways from some of the ones you talked about today? >> Very pertinent question, Dave. Of the two challenges I spoke about that we need to overcome in order to thrive in this new age, the first one is the current challenge, and that current challenge, as stated, is barriers to insight when we are awash with data. What are those barriers? In the Day 2 keynote I spoke about three main areas that we hear from customers. The first barrier is that with many of our customers, data is siloed. In a big corporation you've got data siloed by sales, finance, engineering, manufacturing, supply chain and so on. And there's a major effort ongoing in many corporations to build a federation layer above all those silos, so that when you build applications above, they can be more intelligent; they can have access to all the different silos of data to get better intelligence, and more intelligent applications get built. So that was the first barrier we spoke about: barriers to insight when we are awash with data. The second barrier we see amongst our customers is that data is raw and dispersed when stored, and it's tough to get value out of it. In that case I used the example of the May 6, 2010 event where the stock market dropped a trillion dollars in tens of minutes.

We all know; those who are financially attuned know about this incident. But this is not the only incident; there are many of them out there. And for that particular May 6 event, it took a long time to get insight: months. For months we had no insight as to what happened or why it happened. And there were many other incidents like this, and the regulators were looking for that one rule that could mitigate many of these incidents. One of our customers decided to take the hard road, to go with the tough data, because data is raw and dispersed. So they went into all the different feeds of financial transaction information, took the tough road, and analyzed that data, which took a long time to assemble. And they discovered that there was quote stuffing: people were sending a lot of trades in and then cancelling them almost immediately, to manipulate the market. And why didn't we see it immediately? Well, the reason is that the processed reports everybody sees had a rule in them that says trades of less than 100 shares don't need to be reported. And so what people did was send a lot of less-than-100-share trades to fly under the radar to do this manipulation. So here is the second barrier: data can be raw and dispersed, and sometimes you just have to take the hard road to get insight. And this is one great example. And then the last barrier has to do with the fact that sometimes, when you start a project to get answers and insight, you realize that all the data is around you, but you don't seem to find the right data to get what you need. Here we have three quick examples of customers. One was a great example, where they were trying to build a machine language translator between two languages. To do that, they needed hundreds of millions of word pairs: one language compared with the corresponding hundreds of millions in the other. Where were they going to get all these word pairs? Someone creative thought of a willing and huge source: it was the United Nations. So sometimes you think you don't have the right data with you, but there might be another source, and a willing one, that could give you that data. The second example: sometimes you may just have to generate that data. Interesting one. We had an autonomous car customer that collects all this data from their cars; massive amounts of data, lots of sensors collecting lots of data. But sometimes they don't have the data they need even after collection. For example, they may have collected data with a car in fine weather, and collected the car driving on the highway in rain and also in snow, but never had the opportunity to collect the car in hail, because that's a rare occurrence. So instead of waiting for a time when the car could drive in hail, they built a simulation from the data the car collected in snow, and simulated hail. So these are some of the examples where we have customers working to overcome barriers. You have barriers associated with the fact that data is siloed: federate it. Barriers associated with data that's tough to get at: they just took the hard road.

And sometimes, thirdly, you just have to be creative to get the right data you need. >> Wow, I tell you, I have about 100 questions based on what you just said. That's a great example, the flash crash. In fact, Michael Lewis wrote about this in his book, The Flash Boys. Essentially it was high-frequency traders trying to front-run the market, sending in small block trades trying to get in on the front end of it. And they chalked it up to a glitch; like you said, for months nobody really knew what it was. So technology got us into this problem. I guess my question is, can technology help us get out of the problem? And that maybe is where AI fits in. >> Yes, yes. In fact, a lot of analytics work went in to go back to the raw data that is highly dispersed from different sources, and assemble it to see if you can find a material trend. You can see lots of trends; if humans look at things, we tend to see patterns in clouds. So sometimes you need to apply statistical analysis and math to be sure that what the model is seeing is real. And that required work. That's one area. The second area is that there are times when you just need to go through that tough approach to find the answer. Now, the issue that comes to mind is that humans put in the rules to decide what goes into a report that everybody sees, in this case before the change in the rules. By the way, after the discovery, the authorities changed the rules, and all trades of any size now have to be reported. But the earlier rule said that trades under 100 shares need not be reported. So sometimes you just have to understand that reports were designed by humans, and for understandable reasons: they probably didn't want to put everything in there, so that people could still read the report in a reasonable amount of time. But we need to understand that rules were put in by humans for the reports we read, and as such, there are times you just need to go back to the raw data. >> I want to ask-- >> Albeit that it's going to be tough. >> Yeah. So I want to ask a question about AI. It's obviously in your title and it's something you know a lot about, and I want to make a statement; you tell me if it's on point or off point. It seems that most of the AI going on in the enterprise is modeling: data science applied to troves of data. But there's also a lot of AI going on in consumer, whether it's fingerprint technology or facial recognition or natural language processing. A two-part question: will the consumer market, as it has so often, inform the enterprise? That's the first part. And then, will there be a shift from modeling, if you will, to more (you mentioned autonomous vehicles) AI inferencing in real time, especially with the edge? Can you help us understand that better? >> Yeah, it's a great question. There are three stages, to simplify (it's probably more sophisticated than that, but let's simplify), to building an AI system that ultimately can make a prediction, or assist you in decision making, to have an outcome. So you start with the data, massive amounts of data, and you have to decide what to feed the machine with.

So you feed the machine with this massive chunk of data, and the machine starts to evolve a model based on all the data it's seeing. It evolves to the point that, using a test set of data that you have kept aside separately, data you know the answer for, you test the model after you've trained it with all that data, to see whether its prediction accuracy is high enough. And once you're satisfied with it, you then deploy the model to make decisions, and that's the inference. So a lot of times, depending on what we're focusing on, we in data science are working hard on assembling the right data to feed the machine with; that's the data preparation and organization work. And then after that you build your models; you have to pick the right models for the decisions and predictions you want to make. You pick the right models and then you start feeding the data in. Sometimes you pick one model and the prediction isn't that robust; it's good, but it's not consistent. What you do then is try another model, so sometimes you just keep trying different models until you get the kind that gives you robust decision making and prediction. After it's tested well and QA'd, you then take that model and deploy it at the edge. And at the edge, you're essentially just looking at new data, applying it to the model you trained, and that model gives you a prediction or decision. So it's these three stages. But more and more, your question reminds me that as the edge becomes more and more powerful, people are asking: can you also do learning at the edge? That's the reason why we spoke about swarm learning the last time; learning at the edge as a swarm, because individually the devices may not have enough power to do so, but as a swarm, they may. >> Is that learning from the edge or learning at the edge? In other words-- >> That's a great question. The quick answer is learning at the edge, and also from the edge, but the main goal is to learn at the edge so that you don't have to move the data that the edge sees back to the cloud or the core to do the learning. That's one of the main reasons why you want to learn at the edge: so that you don't need to send all that data back and assemble it from all the different edge devices at the cloud side to do the learning. With swarm learning, you can keep the data at the edge and learn at that point. >> And then maybe only selectively send; the autonomous vehicle example you gave is great, because maybe they're only persisting data from inclement weather, or when a deer runs across the front, and then they send that smaller data set back, and maybe that's where the modeling is done, but the rest can be done at the edge. It's a new world that's coming. Let me ask you a question: is there a limit to what data should be collected and how it should be collected? >> That's a great question again. Wow, today is full of these insightful questions that actually touch on the second challenge of how to thrive in this new age of insight. The second challenge is our future challenge, right?

What do we do for our future? And in there, the statement we make is that we have to focus on collecting data strategically for the future of our enterprise. Within that I talk about what to collect, when to organize it as you collect, and then where your data will be, going forward, that you are collecting from. So: what, when, and where. For the what, what data to collect, which was the question you asked: it's a question that different industries have to ask themselves, because it will vary. Let me use the autonomous car example. You have this customer collecting massive amounts of data; we're talking about 10 petabytes a day from their fleet of cars. And these are not production autonomous cars; these are training autonomous cars, collecting data so they can train and eventually deploy commercial cars. So this fleet of data-collection cars collects 10 petabytes a day. And when it came to us building a storage system to store all of that data, they realized they can't afford to store all of it. Now here comes the dilemma: after I've spent so much effort building all these cars and sensors and collecting data, I now have to decide what to delete? That's a dilemma. Now, in working with them on this process of trimming down what they collected, I'm constantly reminded of the sixties and seventies. In the sixties and seventies, we called a large part of our DNA junk DNA. Today we realize that a large part of what we called junk has valuable function. They are not genes, but they regulate the function of genes. So what was junk yesterday could be valuable today, and what's junk today could be valuable tomorrow. So there's this tension between deciding you can't afford to store everything you can get your hands on, and on the other hand worrying that you'll ignore the wrong ones. You can see this tension in our customers, and it depends on the industry. In healthcare, they say: I have no choice, I want it all. One very insightful point brought up by one healthcare provider that really touched me was: we don't only care (of course, we care a lot) about the people we are caring for; we also care about the people we are not caring for. How do we find them? And therefore they don't just need to collect the data they have from their patients; they also need to reach out to outside data so that they can figure out who they are not caring for. So they want it all. So I asked them, what do you do about funding if you want it all? They say they have no choice but to figure out a way to fund it, and perhaps monetization of what they have now is the way to fund that. Of course, they also come back to us, rightfully, that we have to then work out a way to help them build that system. So that's healthcare. And if you go to other industries, like banking, they say they can't afford to keep it all, but they are regulated; same as healthcare, they are regulated as to privacy and such. So, many examples: different industries having different needs and different approaches to what they collect.

But there is this constant tension between perhaps deciding not to fund all that you can store, and on the other hand worrying that if you decide not to store some of it, that data may become highly valuable in the future. >> We can make some assumptions about the future, can't we? I mean, we know there's going to be a lot more data than we've ever seen before. We know, notwithstanding supply constraints on things like NAND, that the price of storage is going to continue to decline. We also know, and not a lot of people are really talking about this, that the processing power (everybody says Moore's Law is dead; okay, it's waning) when you combine the CPUs and NPUs and GPUs and accelerators and so forth, actually is increasing. And so when you think about these use cases at the edge, you're going to have much more processing power, cheaper storage, and less expensive processing. And so as an AI practitioner, what can you do with that? >> Yeah, again another insightful question that we touched on in our keynote, and that goes to the where: where will your data be? We have one estimate that says that by next year there will be 55 billion connected devices out there. 55 billion! What's the population of the world? On the order of 10 billion. But this is 55 billion, and most of them can collect data. So what do you do? The amount of data that's going to come in will way exceed our drop in storage costs and our increasing compute power. So what's the answer? Even the drop in price and increase in bandwidth won't keep up; 5G will be overwhelmed, given the 55 billion devices collecting. So the answer must be, at least to mitigate the problem, to leave a lot of the data out there and only send back the pertinent pieces, as you said before. But then, if you did that, how are we going to do machine learning at the core and the cloud side if we don't have all the data? You want rich data to train with; sometimes you want a mix of positive-type data and negative-type data so you can train the machine in a more balanced way. So the answer must be, eventually, as we move forward with this huge number of devices at the edge, to do machine learning at the edge. Today we don't have enough power there; the edge typically is characterized by lower energy capability and therefore lower compute power. But soon, even with lower energy, they can do more, with compute power improving in energy efficiency. So learning at the edge: today we do inference at the edge. We train a model, deploy it, and do inference at the edge. That's what we do today.

But more and more, I believe, given the massive amount of data at the edge, you have to start doing machine learning at the edge. And when a device doesn't have enough power, then you aggregate multiple devices' compute power into a swarm and learn as a swarm. >> Interesting. So now, of course, if I were a fly on the wall in an HPE board meeting, I'd say, okay, HPE is a leading provider of compute; how do you take advantage of that? I know it's the future, but you must be thinking about that and participating in those markets. I know today you have Edgeline and other products, but it seems to me that it's not the general-purpose computing that we've known in the past; it's a new type of specialized computing. How are you thinking about participating in that opportunity for your customers? >> The world will have to have a balance. Today the more common mode is to collect the data from the edge and train at some centralized location, or a number of centralized locations. Going forward, given the proliferation of edge devices, we'll need a balance. We need both: we need capability on the cloud side, and it has to be hybrid, and then we need capability on the edge side. We want to build systems that on one hand are edge-adapted: environmentally adapted, because the edge a lot of times is on the outside; packaging-adapted; and also power-adapted, because many of these devices are battery-powered. So you have to build systems that adapt to that, but at the same time they must not be custom. That's my belief. They must use standard processors and standard operating systems so that they can run a rich set of applications. So yes, that's also the insight behind what Antonio announced in 2018: over the next four years from 2018, $4 billion invested to strengthen our edge portfolio, edge product lines, and edge solutions. >> Dr. Goh, I could go on for hours with you. You're just such a great guest. Let's close: what are you most excited about in the future, certainly of HPE, but the industry in general? >> I think the excitement is the customers: the diversity of customers, and the diversity in the way they have approached their different problems with data strategy. So the excitement is around data strategy. The statement made was so profound. Antonio said we are in the age of insight, powered by data. That's the first line. The line that comes after that is: as such, we are becoming more and more data-centric, with data the currency. Now, the next step is even more profound: we are going as far as saying that data should not be treated as cost anymore, but instead as an investment in a new asset class called data, with value on our balance sheet. This is a step change in thinking that is going to change the way we look at data and the way we value it. So this is the exciting thing, because for me, as a CTO for AI, a machine is only as intelligent as the data you feed it with. Data is the source for the machine to learn to be intelligent. So that's why, when people start to value data and say that it is an investment when we collect it,

it is very positive for AI, because an AI system gets more intelligent when it has huge amounts of data and a diversity of data. So it would be great if the community values data. >> Well, you certainly see it in the valuations of many companies these days. And I think increasingly you see it on the income statement (data products and people monetizing data services), and maybe eventually you'll see it in the balance sheet. Doug Laney, when he was at Gartner Group, wrote a book about this, and a lot of people are thinking about it. That's a big change, isn't it, Dr. Goh? >> Yeah. The question is the process and methods of valuation. But I believe we'll get there. We need to get started, and then we'll get there, I believe. And then AI will benefit greatly from it. >> Oh yeah, no doubt people will better understand how to align some of these technology investments. Dr. Goh, great to see you again. Thanks so much for coming back on theCUBE. It's been a real pleasure. >> Yes: a system is only as smart as the data you feed it with. >> Excellent. We'll leave it there. Thank you for spending some time with us, and keep it right there for more great interviews from HPE Discover '21. This is Dave Vellante for theCUBE, the leader in enterprise tech coverage. We'll be right back.
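(The quote-stuffing analysis Dr. Goh describes, digging through the raw order feed for sub-100-share trades that are cancelled almost immediately, can be sketched roughly as below. The feed schema and the 50-millisecond threshold are illustrative assumptions, not the actual regulatory method.)

import pandas as pd

# Hypothetical raw order feed: one row per order, with placement and
# cancellation timestamps (cancelled_at is NaT if never cancelled).
orders = pd.read_parquet("order_feed.parquet")

# The trades that flew under the reporting radar: fewer than 100 shares.
small = orders[orders["shares"] < 100].copy()
small["lifetime_ms"] = (
    small["cancelled_at"] - small["placed_at"]
).dt.total_seconds() * 1000

# Flag orders cancelled almost immediately after placement.
small["stuffed"] = small["lifetime_ms"] < 50

# Symbols where nearly all small orders are instant cancellations, at volume.
by_symbol = small.groupby("symbol")["stuffed"].agg(["mean", "count"])
suspicious = by_symbol[(by_symbol["mean"] > 0.9) & (by_symbol["count"] > 10_000)]
print(suspicious.sort_values("count", ascending=False))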

Published Date : Jun 17 2021


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Michael Lewis | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Dave | PERSON | 0.99+
2018 | DATE | 0.99+
HP | ORGANIZATION | 0.99+
two languages | QUANTITY | 0.99+
The Flash Boys | TITLE | 0.99+
55 billion | QUANTITY | 0.99+
10 billion | QUANTITY | 0.99+
second challenge | QUANTITY | 0.99+
Hewlett Packard | ORGANIZATION | 0.99+
two challenges | QUANTITY | 0.99+
second area | QUANTITY | 0.99+
one language | QUANTITY | 0.99+
Today | DATE | 0.99+
last year | DATE | 0.99+
Doug Laney | PERSON | 0.99+
tomorrow | DATE | 0.99+
next year | DATE | 0.99+
both | QUANTITY | 0.99+
One | QUANTITY | 0.99+
today | DATE | 0.99+
first line | QUANTITY | 0.99+
first part | QUANTITY | 0.99+
May 6 2010 | DATE | 0.99+
$4 billion | QUANTITY | 0.99+
two part | QUANTITY | 0.99+
Less than 100 shares | QUANTITY | 0.99+
HPE | ORGANIZATION | 0.99+
one model | QUANTITY | 0.98+
one rule | QUANTITY | 0.98+
one area | QUANTITY | 0.98+
second barrier | QUANTITY | 0.98+
60 | QUANTITY | 0.98+
55 billion devices | QUANTITY | 0.98+
Antonio | PERSON | 0.98+
John | PERSON | 0.98+
three stages | QUANTITY | 0.98+
hundreds of millions | QUANTITY | 0.97+
about 100 questions | QUANTITY | 0.97+
Eng Lim Goh | PERSON | 0.97+
HPE | ORGANIZATION | 0.97+
first barrier | QUANTITY | 0.97+
first one | QUANTITY | 0.97+
Three main areas | QUANTITY | 0.97+
yesterday | DATE | 0.96+
tens of minutes | QUANTITY | 0.96+
two insight | QUANTITY | 0.96+
QA | OTHER | 0.95+
2021 | DATE | 0.94+
seventies | QUANTITY | 0.94+
two keynotes | QUANTITY | 0.93+
a day | QUANTITY | 0.93+
first | QUANTITY | 0.92+
HPE Annual Customer Event | EVENT | 0.91+
United Nations | ORGANIZATION | 0.91+
less than 100 shares | QUANTITY | 0.91+
under 100 trades | QUANTITY | 0.9+
under 100 shares | QUANTITY | 0.9+
day one | QUANTITY | 0.88+
about 10 petabytes a day | QUANTITY | 0.88+
three quick examples | QUANTITY | 0.85+
one health care provider | QUANTITY | 0.85+
one estimate | QUANTITY | 0.84+
three main things | QUANTITY | 0.83+
hundreds of millions of word pairs | QUANTITY | 0.82+
Antonio | ORGANIZATION | 0.81+
sixties | QUANTITY | 0.78+
one | QUANTITY | 0.77+
May six | DATE | 0.75+
firstly | QUANTITY | 0.74+
trillion dollars | QUANTITY | 0.73+
second one | QUANTITY | 0.71+
HPE Discover '21 | ORGANIZATION | 0.69+
Dr Eng Lim Goh | PERSON | 0.69+
one of our customers | QUANTITY | 0.66+

Dr Eng Lim Goh, Vice President, CTO, High Performance Computing & AI


 

(upbeat music) >> Welcome back to HPE Discover 2021, theCUBE's virtual coverage, continuous coverage of HPE's annual customer event. My name is Dave Vellante, and we're going to dive into the intersection of high-performance computing, data and AI with Dr. Eng Lim Goh, who's a Senior Vice President and CTO for AI at Hewlett Packard Enterprise. Dr. Goh, great to see you again. Welcome back to theCUBE. >> Hey, hello, Dave. Great to talk to you again. >> You might remember last year we talked a lot about swarm intelligence and how AI is evolving. Of course you hosted the Day 2 keynotes here at Discover, and you talked about thriving in the age of insights and how to craft a data-centric strategy. And you addressed some of the biggest problems I think organizations face with data: data is plentiful, but insights are harder to come by. >> Yeah. >> And you really dug into some great examples in retail, banking, and medicine and healthcare and media. But stepping back a little bit to zoom out on Discover '21, what do you make of the event so far and some of your big takeaways? >> Hmm, well, you started with the insightful question. Data is everywhere, but we lack the insight. That's a main reason why Antonio on Day 1 focused and talked about the fact that we are now in the age of insight, and how to thrive in this new age. What I then did on the Day 2 keynote following Antonio is to talk about the challenges that we need to overcome in order to thrive in this new age. >> So maybe we could talk a little bit about some of the things that you took away. I'm specifically interested in some of the barriers to achieving insights when, you know, customers are drowning in data. What do you hear from customers? What were your takeaways from some of the ones you talked about today? >> Very pertinent question, Dave. You know, of the two challenges I spoke about that we need to overcome in order to thrive in this new age, the first one is the current challenge. And that current challenge, as stated, is barriers to insight when we are awash with data. How do we overcome those barriers? In the Day 2 keynote I spoke about three main areas that we hear from customers. The first barrier is that with many of our customers, data is siloed. You know, like in a big corporation, you've got data siloed by sales, finance, engineering, manufacturing, supply chain and so on. And there's a major effort ongoing in many corporations to build a federation layer above all those silos, so that when you build applications above, they can be more intelligent. They can have access to all the different silos of data to get better intelligence, and more intelligent applications get built. So that was the first barrier we spoke about: barriers to insight when we are awash with data. The second barrier we see amongst our customers is that data is raw and dispersed when stored, and it's tough to get value out of it. In that case I used the example of the May 6, 2010 event where the stock market dropped a trillion dollars in tens of minutes. We all know; those who are financially attuned know about this incident. But this is not the only incident. There are many of them out there.

And for that particular May 6 event, it took a long time to get insight: months. For months we had no insight as to what happened or why it happened. And there were many other incidents like this, and the regulators were looking for that one rule that could mitigate many of these incidents. One of our customers decided to take the hard road, to go with the tough data, because data is raw and dispersed. So they went into all the different feeds of financial transaction information, took the tough road, and analyzed that data, which took a long time to assemble. And they discovered that there was quote stuffing: people were sending a lot of trades in and then canceling them almost immediately, to manipulate the market. And why didn't we see it immediately? Well, the reason is that the processed reports everybody sees had a rule in them that says trades of less than 100 shares don't need to be reported. And so what people did was send a lot of less-than-100-share trades to fly under the radar to do this manipulation. So here is the second barrier: data can be raw and dispersed, and sometimes you just have to take the hard road to get insight. And this is one great example. And then the last barrier has to do with the fact that sometimes, when you start a project to get answers and insight, you realize that all the data is around you, but you don't seem to find the right data to get what you need. Here we have three quick examples of customers. One was a great example, where they were trying to build a machine language translator between two languages. In order to do that, they needed hundreds of millions of word pairs: one language compared with the corresponding hundreds of millions in the other. "Where am I going to get all these word pairs?" Someone creative thought of a willing and huge source: it was the United Nations. You see, so sometimes you think you don't have the right data with you, but there might be another source, and a willing one, that could give you that data. The second one: sometimes you may just have to generate that data. Interesting one. We had an autonomous car customer that collects all this data from their cars; massive amounts of data, lots of sensors, collecting lots of data. But sometimes they don't have the data they need even after collection. For example, they may have collected data with a car in fine weather, and collected the car driving on the highway in rain and also in snow, but never had the opportunity to collect the car in hail, because that's a rare occurrence. So instead of waiting for a time when the car could drive in hail, they built a simulation from the data the car collected in snow, and simulated hail. So these are some of the examples where we have customers working to overcome barriers. You have barriers associated with the fact that data is siloed: federate it. Barriers associated with data that's tough to get at: they just took the hard road. And sometimes, thirdly, you just have to be creative to get the right data you need. >> Wow, I tell you, I have about 100 questions based on what you just said. And that's a great example, the flash crash; in fact, Michael Lewis wrote about this in his book, "Flash Boys," and essentially

it was high-frequency traders trying to front-run the market, sending in small block trades trying to get in on the front end of it. And they chalked it up to a glitch; like you said, for months nobody really knew what it was. So technology got us into this problem. I guess my question is, can technology help us get out of the problem? And that maybe is where AI fits in. >> Yes, yes. In fact, a lot of analytics work went in to go back to the raw data that is highly dispersed from different sources, and assemble it to see if you can find a material trend. You can see lots of trends; if humans look at things, we tend to see patterns in clouds. So sometimes you need to apply statistical analysis, math, to be sure that what the model is seeing is real. And that required work. That's one area. The second area is that there are times when you just need to go through that tough approach to find the answer. Now, the issue that comes to mind is that humans put in the rules to decide what goes into a report that everybody sees, in this case before the change in the rules. By the way, after the discovery, the authorities changed the rules, and all trades of any size have to be reported. But the earlier rule said that trades under 100 shares need not be reported. So sometimes you just have to understand that reports were decided by humans, and for understandable reasons: they probably, for various reasons, didn't want to put everything in there, so that people could still read it in a reasonable amount of time. But we need to understand that rules were put in by humans for the reports we read, and as such, there are times we just need to go back to the raw data. >> I want to ask you-- >> Albeit that it's going to be tough there. >> Yeah, so I want to ask you a question about AI, as obviously it's in your title and it's something you know a lot about, and I'm going to make a statement; you tell me if it's on point or off point. It seems that most of the AI going on in the enterprise is modeling: data science applied to troves of data. But there's also a lot of AI going on in consumer, whether it's fingerprint technology or facial recognition or natural language processing. A two-part question: will the consumer market, as it has so often, inform the enterprise? That's the first part. And then, will there be a shift from modeling, if you will, to more (you mentioned autonomous vehicles) AI inferencing in real time, especially with the Edge? I think you can help us understand that better. >> Yeah, this is a great question. There are three stages, to simplify (it's probably more sophisticated than that, but let's just simplify), to building an AI system that ultimately can make a prediction, or assist you in decision-making, to have an outcome. So you start with the data, massive amounts of data, and you have to decide what to feed the machine with. So you feed the machine with this massive chunk of data, and the machine starts to evolve a model based on all the data it's seeing. It evolves to a point that, using a test set of data that you have separately kept aside, data you know the answer for, you test the model after you've trained it with all that data, to see whether its prediction accuracy is high enough.
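(A minimal sketch of the three stages described above: feed the machine training data, test prediction accuracy on a held-out set whose answers are known, and only then keep the model for inference. It uses a stock scikit-learn dataset; the accuracy threshold and the fallback model are illustrative choices, echoing Dr. Goh's point that you sometimes try another model until the predictions are robust.)

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)

# Stage 1: decide what to feed the machine with; keep a test set aside.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Stage 2: train, then test against data the model has never seen,
# where the answer is known.
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
accuracy = model.score(X_test, y_test)

if accuracy < 0.95:  # not robust enough: try another model
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    accuracy = model.score(X_test, y_test)

# Stage 3: deploy; inference is just applying the trained model to new data.
print("held-out accuracy:", accuracy)
print("prediction for one new sample:", model.predict(X_test[:1]))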
And once you are satisfied with it, you then deploy the model to make the decision and that's the inference. So a lot of times depending on what we are focusing on. We in data science are we working hard on assembling the right data to feed the machine with? That's the data preparation organization work. And then after which you build your models you have to pick the right models for the decisions and prediction you wanted to make. You pick the right models and then you start feeding the data with it. Sometimes you pick one model and a prediction isn't that a robust, it is good, but then it is not consistent. Now what you do is you try another model. So sometimes you just keep trying different models until you get the right kind, yeah, that gives you a good robust decision-making and prediction. Now, after which, if it's tested well, Q8 you will then take that model and deploy it at the Edge, yeah. And then at the Edge is essentially just looking at new data applying it to the model that you have trained and then that model will give you a prediction or a decision. So it is these three stages, yeah. But more and more, your question reminds me that more and more people are thinking as the Edge become more and more powerful, can you also do learning at the Edge? That's the reason why we spoke about swarm learning the last time, learning at the Edge as a swarm. Because maybe individually they may not have enough power to do so, but as a swarm, they may. >> Is that learning from the Edge or learning at the Edge. In other words, is it-- >> Yes. >> Yeah, you don't understand my question, yeah. >> That's a great question. That's a great question. So answer is learning at the Edge, and also from the Edge, but the main goal, the goal is to learn at the Edge so that you don't have to move the data that Edge sees first back to the Cloud or the call to do the learning. Because that would be the reason, one of the main reasons why you want to learn at the Edge. So that you don't need to have to send all that data back and assemble it back from all the different Edge devices assemble it back to the Cloud side to do the learning. With swarm learning, you can learn it and keep the data at the Edge and learn at that point, yeah. >> And then maybe only selectively send the autonomous vehicle example you gave is great 'cause maybe they're, you know, there may be only persisting. They're not persisting data that is an inclement weather, or when a deer runs across the front and then maybe they do that and then they send that smaller data set back and maybe that's where it's modeling done but the rest can be done at the Edge. It's a new world that's coming to, let me ask you a question. Is there a limit to what data should be collected and how it should be collected? >> That's a great question again, yeah, well, today full of these insightful questions that actually touches on the second challenge. How do we, to in order to thrive in this new age of insight. The second challenge is our future challenge. What do we do for our future? And in there is the statement we make is we have to focus on collecting data strategically for the future of our enterprise. And within that, I talk about what to collect, and when to organize it when you collect, and then where will your data be going forward that you are collecting from? So what, when, and where. For the what data, for what data to collect that was the question you asked. It's a question that different industries have to ask themselves because it will vary. 
Let me give you the, you use the autonomous car example. Let me use that and you have this customer collecting massive amounts of data. You know, we talking about 10 petabytes a day from a fleet of their cars and these are not production autonomous cars. These are training autonomous cars, collecting data so they can train and eventually deploy a commercial cars. Also these data collection cars, they collect 10 as a fleet of them collect 10 petabytes a day. And then when it came to us, building a storage system to store all of that data they realize they don't want to afford to store all of it. Now here comes the dilemma. What should I, after I spent so much effort building all this cars and sensors and collecting data, I've now decide what to delete. That's a dilemma. Now in working with them on this process of trimming down what they collected. I'm constantly reminded of the 60s and 70s. To remind myself 60s and 70s, we call a large part of our DNA, junk DNA. Today we realized that a large part of that, what we call junk has function has valuable function. They are not genes but they regulate the function of genes. So what's junk in yesterday could be valuable today, or what's junk today could be valuable tomorrow. So there's this tension going on between you deciding not wanting to afford to store everything that you can get your hands on. But on the other hand, you know you worry, you ignore the wrong ones. You can see this tension in our customers. And then it depends on industry here. In healthcare they say, I have no choice. I want it all, why? One very insightful point brought up by one healthcare provider that really touched me was you know, we are not, we don't only care. Of course we care a lot. We care a lot about the people we are caring for. But we also care for the people we are not caring for. How do we find them? And therefore, they did not just need to collect data that they have with, from their patients they also need to reach out to outside data so that they can figure out who they are not caring for. So they want it all. So I asked them, "So what do you do with funding if you want it all?" They say they have no choice but they'll figure out a way to fund it and perhaps monetization of what they have now is the way to come around and fund that. Of course, they also come back to us, rightfully that you know, we have to then work out a way to to help them build a system. So that healthcare. And if you go to other industries like banking, they say they can afford to keep them all. But they are regulated same like healthcare. They are regulated as to privacy and such like. So many examples, different industries having different needs but different approaches to how, what they collect. But there is this constant tension between you perhaps deciding not wanting to fund all of that, all that you can store. But on the other hand you know, if you kind of don't want to afford it and decide not to store some, maybe those some become highly valuable in the future. You worry. >> Well, we can make some assumptions about the future, can't we? I mean we know there's going to be a lot more data than we've ever seen before, we know that. We know, well not withstanding supply constraints and things like NAND. We know the price of storage is going to continue to decline. We also know and not a lot of people are really talking about this but the processing power, everybody says, Moore's Law is dead. 
Okay, it's waning but the processing power when you combine the CPUs and NPUs, and GPUs and accelerators and so forth, actually is increasing. And so when you think about these use cases at the Edge you're going to have much more processing power. You're going to have cheaper storage and it's going to be less expensive processing. And so as an AI practitioner, what can you do with that? >> Yeah, it's a highly, again another insightful question that we touched on, on our keynote and that goes up to the why, I'll do the where. Where will your data be? We have one estimate that says that by next year, there will be 55 billion connected devices out there. 55 billion. What's the population of the world? Well, off the order of 10 billion, but this thing is 55 billion. And many of them, most of them can collect data. So what do you do? So the amount of data that's going to come in is going to way exceed our drop in storage costs our increasing compute power. So what's the answer? The answer must be knowing that we don't and even a drop in price and increase in bandwidth, it will overwhelm the 5G, it'll will overwhelm 5G, given the amount of 55 billion of them collecting. So the answer must be that there needs to be a balance between you needing to bring all that data from the 55 billion devices of the data back out to a central, as a bunch of central cost because you may not be able to afford to do that. Firstly bandwidth, even with 5G and as the, when you still be too expensive given the number of devices out there. You know given storage costs dropping it'll still be too expensive to try and install them all. So the answer must be to start at least to mitigate the problem to some leave most a lot of the data out there. And only send back the pertinent ones, as you said before. But then if you did that then, how are we going to do machine learning at the core and the Cloud side, if you don't have all the data you want rich data to train with. Sometimes you want to a mix of the positive type data, and the negative type data. So you can train the machine in a more balanced way. So the answer must be you eventually, as we move forward with these huge number of devices are at the Edge to do machine learning at the Edge. Today we don't even have power. The Edge typically is characterized by a lower energy capability and therefore, lower compute power. But soon, you know, even with low energy, they can do more with compute power, improving in energy efficiency. So learning at the Edge today we do inference at the Edge. So we data, model, deploy and you do inference at age. That's what we do today. But more and more, I believe given a massive amount of data at the Edge you have to have to start doing machine learning at the Edge. And if when you don't have enough power then you aggregate multiple devices' compute power into a swarm and learn as a swarm. >> Oh, interesting, so now of course, if I were sitting in a flyer flying the wall on HPE Board meeting I said, "Okay, HPE is a leading provider of compute." How do you take advantage that? I mean, we're going, I know it's future but you must be thinking about that and participating in those markets. I know today you are, you have, you know, Edge line and other products, but there's, it seems to me that it's not the general purpose that we've known in the past. It's a new type of specialized computing. How are you thinking about participating in that opportunity for your customers? >> The wall will have to have a balance. 
Where today the default, well, the more common mode is to collect the data from the Edge and train at some centralized location or number of centralized location. Going forward, given the proliferation of the Edge devices, we'll need a balance, we need both. We need capability at the Cloud side. And it has to be hybrid. And then we need capability on the Edge side. Yeah that we need to build systems that on one hand is Edge-adapted. Meaning they environmentally-adapted because the Edge differently are on it. A lot of times on the outside, they need to be packaging-adapted and also power-adapted. Because typically many of these devices are battery-powered. So you have to build systems that adapts to it. But at the same time, they must not be custom. That's my belief. They must be using standard processes and standard operating system so that they can run a rich set of applications. So yes, that's also the insightful for that. Antonio announced in 2018 for the next four years from 2018, $4 billion invested to strengthen our Edge portfolio our Edge product lines, Edge solutions. >> Dr. Goh, I could go on for hours with you. You're just such a great guest. Let's close. What are you most excited about in the future of certainly HPE, but the industry in general? >> Yeah, I think the excitement is the customers. The diversity of customers and the diversity in the way they have approached their different problems with data strategy. So the excitement is around data strategy. Just like, you know, the statement made for us was so, was profound. And Antonio said we are in the age of insight powered by data. That's the first line. The line that comes after that is as such we are becoming more and more data-centric with data the currency. Now the next step is even more profound. That is, you know, we are going as far as saying that data should not be treated as cost anymore, no. But instead, as an investment in a new asset class called data with value on our balance sheet. This is a step change in thinking that is going to change the way we look at data, the way we value it. So that's a statement. So this is the exciting thing, because for me a CTO of AI, a machine is only as intelligent as the data you feed it with. Data is a source of the machine learning to be intelligent. So that's why when the people start to value data and say that it is an investment when we collect it it is very positive for AI because an AI system gets intelligent, get more intelligence because it has huge amounts of data and a diversity of data. So it'd be great if the community values data. >> Well, are you certainly see it in the valuations of many companies these days? And I think increasingly you see it on the income statement, you know data products and people monetizing data services, and yeah, maybe eventually you'll see it in the balance sheet, I know. Doug Laney when he was at Gartner Group wrote a book about this and a lot of people are thinking about it. That's a big change, isn't it? Dr. Goh. >> Yeah, yeah, yeah. Your question is the process and methods in valuation. But I believe we'll get there. We need to get started and then we'll get there, I believe, yeah. >> Dr. Goh it's always my pleasure. >> And then the AI will benefit greatly from it. >> Oh yeah, no doubt. People will better understand how to align some of these technology investments. Dr. Goh, great to see you again. Thanks so much for coming back in theCube. It's been a real pleasure. >> Yes, a system is only as smart as the data you feed it with. 
>> Dr. Goh, I could go on for hours with you. You're just such a great guest. Let's close. What are you most excited about in the future of, certainly, HPE, but the industry in general? >> Yeah, I think the excitement is the customers: the diversity of customers, and the diversity in the ways they have approached their different problems with data strategy. So the excitement is around data strategy. The statement made for us was profound. Antonio said we are in the age of insight, powered by data. That's the first line. The line that comes after it is: as such, we are becoming more and more data-centric, with data the currency. Now, the next step is even more profound. We are going as far as saying that data should no longer be treated as a cost, but instead as an investment in a new asset class called data, with value on the balance sheet. This is a step change in thinking that is going to change the way we look at data and the way we value it. This is the exciting thing, because for me, as a CTO for AI, a machine is only as intelligent as the data you feed it with. Data is the source of a machine learning system's intelligence. That's why, when people start to value data and say that it is an investment when we collect it, it is very positive for AI, because an AI system gets more intelligent as it gets huge amounts of data and a diversity of data. So it'd be great if the community values data. >> Well, you certainly see it in the valuations of many companies these days. And I think increasingly you see it on the income statement, with data products and people monetizing data services. And maybe eventually you'll see it on the balance sheet. I know Doug Laney, when he was at Gartner Group, wrote a book about this, and a lot of people are thinking about it. That's a big change, isn't it, Dr. Goh? >> Yeah, yeah. The question is the process and methods of valuation. But I believe we'll get there. We need to get started, and then we'll get there, I believe. >> Dr. Goh, it's always my pleasure. >> And then the AI will benefit greatly from it. >> Oh yeah, no doubt. People will better understand how to align some of these technology investments. Dr. Goh, great to see you again. Thanks so much for coming back on theCUBE. It's been a real pleasure. >> Yes, a system is only as smart as the data you feed it with. (both chuckling) >> Well, excellent, we'll leave it there. Thank you for spending some time with us, and keep it right there for more great interviews from HPE Discover '21. This is Dave Vellante for theCUBE, the leader in enterprise tech coverage. We'll be right back. (upbeat music)

Published Date : Jun 10 2021
