Joseph Nelson, Roboflow | CUBE Conversation
(gentle music) >> Hello everyone. Welcome to this CUBE conversation here in Palo Alto, California. I'm John Furrier, host of theCUBE. We got a great remote guest coming in. Joseph Nelson, co-founder and CEO of RoboFlow hot startup in AI, computer vision. Really interesting topic in this wave of AI next gen hitting. Joseph, thanks for coming on this CUBE conversation. >> Thanks for having me. >> Yeah, I love the startup tsunami that's happening here in this wave. RoboFlow, you're in the middle of it. Exciting opportunities, you guys are in the cutting edge. I think computer vision's been talked about more as just as much as the large language models and these foundational models are merging. You're in the middle of it. What's it like right now as a startup and growing in this new wave hitting? >> It's kind of funny, it's, you know, I kind of describe it like sometimes you're in a garden of gnomes. It's like we feel like we've got this giant headstart with hundreds of thousands of people building with computer vision, training their own models, but that's a fraction of what it's going to be in six months, 12 months, 24 months. So, as you described it, a wave is a good way to think about it. And the wave is still building before it gets to its full size. So it's a ton of fun. >> Yeah, I think it's one of the most exciting areas in computer science. I wish I was in my twenties again, because I would be all over this. It's the intersection, there's so many disciplines, right? It's not just tech computer science, it's computer science, it's systems, it's software, it's data. There's so much aperture of things going on around your world. So, I mean, you got to be batting all the students away kind of trying to get hired in there, probably. I can only imagine you're hiring regiment. I'll ask that later, but first talk about what the company is that you're doing. How it's positioned, what's the market you're going after, and what's the origination story? How did you guys get here? How did you just say, hey, want to do this? What was the origination story? What do you do and how did you start the company? >> Yeah, yeah. I'll give you the what we do today and then I'll shift into the origin. RoboFlow builds tools for making the world programmable. Like anything that you see should be read write access if you think about it with a programmer's mind or legible. And computer vision is a technology that enables software to be added to these real world objects that we see. And so any sort of interface, any sort of object, any sort of scene, we can interact with it, we can make it more efficient, we can make it more entertaining by adding the ability for the tools that we use and the software that we write to understand those objects. And at RoboFlow, we've empowered a little over a hundred thousand developers, including those in half the Fortune 100 so far in that mission. Whether that's Walmart understanding the retail in their stores, Cardinal Health understanding the ways that they're helping their patients, or even electric vehicle manufacturers ensuring that they're making the right stuff at the right time. As you mentioned, it's early. Like I think maybe computer vision has touched one, maybe 2% of the whole economy and it'll be like everything in a very short period of time. And so we're focused on enabling that transformation. I think it's it, as far as I think about it, I've been fortunate to start companies before, start, sell these sorts of things. 
This is the last company I ever wanted to start and I think it will be, should we do it right, the world's largest in riding the wave of bringing together the disparate pieces of that technology. >> What was the motivating point of the formation? Was it, you know, you guys were hanging around? Was there some catalyst? What was the moment where it all kind of came together for you? >> You know what's funny is my co-founder, Brad and I, we were making computer vision apps for making board games more fun to play. So in 2017, Apple released ARKit, the augmented reality kit for building augmented reality applications. And Brad and I are both sort of like hacker persona types. We feel like we don't really understand the technology until we build something with it and so we decided that we should make an app that if you point your phone at a Sudoku puzzle, it understands the state of the board and then it kind of magically fills in that experience with all the digits in real time, which totally ruins the game of Sudoku to be clear. But it also just creates this like aha moment of like, oh wow, like the ability for our pocket devices to understand and see the world as good or better than we can is possible. And so, you know, we actually did that as I mentioned in 2017, and the app went viral. It was, you know, top of some subreddits, top of Imgur, Reddit, the hacker community, and Product Hunt really liked it as well. So it actually won Product Hunt AR app of the year, which was the same year that the Tesla Model 3 won the product of the year. So we joked that we share an award with Elon, our shared (indistinct) But frankly, so that was 2017. RoboFlow wasn't incorporated as a business until 2019. And so, you know, when we made Magic Sudoku, I was running a different company at the time, Brad was running a different company at the time, and we kind of just put it out there and were excited by how many people liked it. And we assumed that other curious developers would see this inevitable future of, oh wow, you know. This is much more than just a pedestrian point-your-phone-at-a-board-game app. This is everything can be seen and understood and rewritten in a different way. Things like, you know, maybe your fridge knowing what ingredients you have and suggesting recipes or auto ordering for you, or we were talking about some retail use cases of automated checkout. Like anything can be seen and observed and we presumed that that would kick off a Cambrian explosion of applications. It didn't. So you fast forward to 2019, we said, well we might as well be the guys to start to tackle this sort of problem. And because of our success with board games before, we returned to making more board game solving applications. So we made one that solves Boggle, you know, the four by four word game, and we made one that solves chess, where you point your phone at a chess board and it understands the state of the board and then can make move recommendations. And with each additional board game that we added, we realized that the tooling was really immature. The process of collecting images, knowing which images are actually going to be useful for improving model performance, training those models, deploying those models. And if we really wanted to make the world programmable, developers waiting for us to make an app for their thing of interest is a lot less efficient, less impactful than taking our toolchain and releasing that externally. And so, that's what RoboFlow became.
RoboFlow became the internal tools that we used to make these game-changing applications readily available. And as you know, when you give developers new tools, they create new billion dollar industries, let alone all sorts of fun hobbyist projects along the way. >> I love that story. Curious, inventive, a little radical. Let's break the rules, see how we can push the envelope on the board games. That's how companies get started. It's a great story. I got to ask you, okay, what happens next? Now, okay, you realize this new tooling, but this is like how companies get built. Like they solve their own problem that they had 'cause they realized there's one, but then there has to be a market for it. So you guys actually knew that this was coming around the corner. So okay, you got your hacker mentality, you did that thing, you got the award and now you're like, okay, wow. Were you guys conscious of the wave coming? Was it one of those things where you said, look, if we do this, we solve our own problem, this will be big for everybody? Did you have that moment? Was that in 2019 or was that more of like, it kind of was obvious to you guys? >> Absolutely. I mean Brad puts this pretty effectively where he describes how we lived through the initial internet revolution, but we were kind of too young to really recognize and comprehend what was happening at the time. And then mobile happened and we were working on different companies that were not in the mobile space. And computer vision feels like the wave that we've caught. Like, this is a technology and capability that rewrites how we interact with the world, how everyone will interact with the world. And so we feel we've been kind of lucky this time, right place, right time, where every enterprise will have the ability to improve their operations with computer vision. And so we've been very cognizant of the fact that computer vision is one of those groundbreaking technologies that every company will have as a part of their products and services and offerings, and we can provide the tooling to accelerate that future. >> Yeah, and the developer angle, by the way, I love that because I think, you know, as we've been saying in theCUBE all the time, developers are the new de facto standard bodies because what they adopt is pure, you know, meritocracy. And they pick the best. If it's self-service and it's good and it's got an open source community around it, it's all in. And they'll vote. They'll vote with their code and that is clear. Now I got to ask you, as you look at the market, we were just having this conversation on theCUBE in Barcelona at the recent Mobile World Congress, now called MWC, around 5G versus Wi-Fi. And the debate was specifically computer vision, like facial recognition. We were talking about how the Cleveland Browns were using facial recognition for people coming into the stadium, and it was being used for ships in international ports. So the question was 5G versus Wi-Fi. My question is what infrastructure or what are the areas that need to be in place to make computer vision work? If you have developers building apps, apps got to run on stuff. So how do you sort that out in your mind? What's your reaction to that? >> A lot of the times when we see applications that need to run in real time and on video, they'll actually run at the edge without internet. And so a lot of our users will actually take their models and run them in a fully offline environment.
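To make that kind of offline edge deployment concrete, here is a minimal sketch of what it can look like in practice: a detector stored on the device runs against a local camera feed with no network access, and only the lightweight results are buffered for upload once connectivity returns. This is purely illustrative and assumes a generic ONNX model; the file names, input size, and buffering scheme are stand-ins, not Roboflow's actual deployment code.

```python
# Illustrative sketch only: a generic ONNX detector running fully offline on an edge device.
# "detector.onnx", the 640x640 input size, and the buffering scheme are assumptions.
import json
import time

import cv2                 # pip install opencv-python
import numpy as np
import onnxruntime as ort  # pip install onnxruntime

session = ort.InferenceSession("detector.onnx")   # model lives on the device, no network needed
input_name = session.get_inputs()[0].name

def preprocess(frame, size=640):
    """Resize to the model's expected input and convert to NCHW float32."""
    resized = cv2.resize(frame, (size, size))
    return resized.transpose(2, 0, 1)[np.newaxis].astype(np.float32) / 255.0

camera = cv2.VideoCapture(0)   # local camera feed (drone, Jetson, phone, etc.)
pending = []                   # lightweight results held locally until connectivity returns

while True:
    ok, frame = camera.read()
    if not ok:
        break
    outputs = session.run(None, {input_name: preprocess(frame)})
    # Keep only the result, not the raw video, so it is cheap to relay later.
    pending.append({"timestamp": time.time(), "num_outputs": len(outputs)})
    if len(pending) >= 1000:
        with open("pending_upload.json", "w") as f:   # uploaded when the device is back online
            json.dump(pending, f)
        pending = []
```

The point Joseph makes next follows directly from this split: the inference happens entirely on-device, and only the summarized results need a connection.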
Now to act on that information, you'll often need to have internet signal at some point 'cause you'll need to know how many people were in the stadium or what shipping crates are in my port at this point in time. You'll need to relay that information somewhere else, which will require connectivity. But actually using the model and creating the insights at the edge does not require internet. I mean we have users that deploy models on underwater submarines just as much as in outer space actually. And those are not very friendly environments to internet, let alone 5G. And so what you do is you use an edge device, like an Nvidia Jetson is common, mobile devices are common. Intel has some strong edge devices, the Movidius family of chips for example. And you use that compute that runs completely offline in real time to process those signals. Now again, what you do with those signals may require connectivity and that becomes a question of the problem you're solving, of how soon you need to relay that information to another place. >> So, that's an architectural issue on the infrastructure. If you're a tactical edge war fighter for instance, you might want to have highly available, maybe high-availability systems. I mean, these are words that mean something. You got storage, but it's not at the edge in real time. But you can trickle it back and pull it down. That's management. So that's more of a business by business decision or environment, right? >> That's right, that's right. Yeah. So I mean we can talk through some specifics. So for example, RoboFlow actually powers the broadcaster that does the tennis ball tracking at Wimbledon. That runs completely at the edge in real time, and, you know, technically to track the tennis ball and point the camera, you actually don't need internet. Now they do have internet of course to do the broadcasting and relay the signal and feeds and these sorts of things. And so that's a case where you have both edge deployment running the model and high availability to act on that model. We have other instances where customers will run their models on drones and the drone will go and do a flight and it'll say, you know, this many residential homes are in this given area, or this many cargo containers are in this given shipping yard. Or maybe we saw these environmental considerations of soil erosion along this riverbank. The model in that case can run on the drone during flight without internet, but then you only need internet once the drone lands and you're going to act on that information, because for example, if you're doing like a study of soil erosion, you don't need to be real time. You just need to be able to process and make use of that information once the drone finishes its flight. >> Well I can imagine a zillion use cases. I heard of a use case in an interview at a company that does computer vision to help people see if anyone's jumping the fence at their company. Like, they know what a body looks like climbing a fence and they can spot it. Pretty easy use case compared to probably some of the other things, but this is the horizontal use cases, there's so many use cases. So how do you guys talk to the marketplace when you say, hey, we have generative AI for computer vision? You might know language models, but that's a completely different animal because vision's like the world, right? So you got a lot more to do. What's the difference? How do you explain that to customers? What can I build and what's their reaction?
>> Because we're such a developer centric company, developers are usually creative and show you the ways that they want to take advantage of new technologies. I mean, we've had people use things for identifying conveyor belt debris, doing gas leak detection, measuring the size of fish, airplane maintenance. We even had someone that like a hobby use case where they did like a specific sushi identifier. I dunno if you know this, but there's a specific type of whitefish that if you grew up in the western hemisphere and you eat it in the eastern hemisphere, you get very sick. And so there was someone that made an app that tells you if you happen to have that fish in the sushi that you're eating. But security camera analysis, transportation flows, plant disease detection, really, you know, smarter cities. We have people that are doing curb management identifying, and a lot of these use cases, the fantastic thing about building tools for developers is they're a creative bunch and they have these ideas that if you and I sat down for 15 minutes and said, let's guess every way computer vision can be used, we would need weeks to list all the example use cases. >> We'd miss everything. >> And we'd miss. And so having the community show us the ways that they're using computer vision is impactful. Now that said, there are of course commercial industries that have discovered the value and been able to be out of the gate. And that's where we have the Fortune 100 customers, like we do. Like the retail customers in the Walmart sector, healthcare providers like Medtronic, or vehicle manufacturers like Rivian who all have very difficult either supply chain, quality assurance, in stock, out of stock, anti-theft protection considerations that require successfully making sense of the real world. >> Let me ask you a question. This is maybe a little bit in the weeds, but it's more developer focused. What are some of the developer profiles that you're seeing right now in terms of low-hanging fruit applications? And can you talk about the academic impact? Because I imagine if I was in school right now, I'd be all over it. Are you seeing Master's thesis' being worked on with some of your stuff? Is the uptake in both areas of younger pre-graduates? And then inside the workforce, What are some of the devs like? Can you share just either what their makeup is, what they work on, give a little insight into the devs you're working with. >> Leading developers that want to be on state-of-the-art technology build with RoboFlow because they know they can use the best in class open source. They know that they can get the most out of their data. They know that they can deploy extremely quickly. That's true among students as you mentioned, just as much as as industries. So we welcome students and I mean, we have research grants that will regularly support for people to publish. I mean we actually have a channel inside our internal slack where every day, more student publications that cite building with RoboFlow pop up. And so, that helps inspire some of the use cases. Now what's interesting is that the use case is relatively, you know, useful or applicable for the business or the student. In other words, if a student does a thesis on how to do, we'll say like shingle damage detection from satellite imagery and they're just doing that as a master's thesis, in fact most insurance businesses would be interested in that sort of application. 
So, that's kind of how we see uptake and adoption both among researchers who want to be on the cutting edge and publish, both with RoboFlow and making use of open source tools in tandem with the tool that we provide, just as much as industry. And you know, I'm a big believer in the philosophy that kind of like what the hackers are doing nights and weekends, the Fortune 500 are doing in pretty short order, and we're experiencing that transition. Computer vision used to be, you know, kind of like a PhD, multi-year investment endeavor. And now with some of the tooling that we're working on, open source technologies, and the compute that's available, these science fiction ideas are possible in an afternoon. And so you have this idea of maybe doing asset management or the aerial observation of your shingles or things like this. You have a few hundred images and you can de-risk whether that's possible for your business today. So there's pretty broad-based adoption among both researchers that want to be on the state of the art, as much as companies that want to reduce the time to value. >> You know, Joseph, you guys and your partner have got a great front row seat, ground floor, present at the creation of this wave here. I'm seeing a pattern emerging from all my conversations on theCUBE with founders that are successful, like yourselves, that there's two kinds of real things going on. You got the enterprises grabbing the products and retrofitting into their legacy and rebuilding their business. And then you have startups coming out of the woodwork. Young, seeing greenfield, or picking a specific niche or focus and making that the signature lever to move the market.
I mean I remember seeing the Twilio IPO, Uber being like a full 20% of their revenue, right? And so there's this very common pattern where you have the ability to find some of those upstarts that you make bets on, like the next Ubers of the world, the smaller companies that continue to get developed with the product and then the enterprise whom allows you to really fund the commercial success of the business, and validate the size of the opportunity in market that's being creative. >> It's interesting, there's so many things happening there. It's like, in a way it's a new category, but it's not a new category. It becomes a new category because of the capabilities, right? So, it's really interesting, 'cause that's what you're talking about is a category, creating. >> I think developer tools. So people often talk about B to B and B to C businesses. I think developer tools are in some ways a third way. I mean ultimately they're B to B, you're selling to other businesses and that's where your revenue's coming from. However, you look kind of like a B to C company in the ways that you measure product adoption and kind of go to market. In other words, you know, we're often tracking the leading indicators of commercial success in the form of usage, adoption, retention. Really consumer app, traditionally based metrics of how to know you're building the right stuff, and that's what product led growth companies do. And then you ultimately have commercial traction in a B to B way. And I think that that actually kind of looks like a third thing, right? Like you can do these sort of funny zany marketing examples that you might see historically from consumer businesses, but yet you ultimately make your money from the enterprise who has these de-risked high value problems you can solve for them. And I selfishly think that that's the best of both worlds because I don't have to be like Evan Spiegel, guessing the next consumer trend or maybe creating the next consumer trend and catching lightning in a bottle over and over again on the consumer side. But I still get to have fun in our marketing and make sort of fun, like we're launching the world's largest game of rock paper scissors being played with computer vision, right? Like that's sort of like a fun thing you can do, but then you can concurrently have the commercial validation and customers telling you the things that they need to be built for them next to solve commercial pain points for them. So I really do think that you're right by calling this a new category and it really is the best of both worlds. >> It's a great call out, it's a great call out. In fact, I always juggle with the VC. I'm like, it's so easy. Your job is so easy to pick the winners. What are you talking about its so easy? I go, just watch what the developers jump on. And it's not about who started, it could be someone in the dorm room to the boardroom person. You don't know because that B to C, the C, it's B to D you know? You know it's developer 'cause that's a human right? That's a consumer of the tool which influences the business that never was there before. So I think this direct business model evolution, whether it's media going direct or going direct to the developers rather than going to a gatekeeper, this is the reality. >> That's right. >> Well I got to ask you while we got some time left to describe, I want to get into this topic of multi-modality, okay? And can you describe what that means in computer vision? 
And what's the state of the growth of that portion of this piece? >> Multimodality refers to using multiple traditionally siloed problem types, meaning text, image, video, audio. So you could treat an audio problem as only processing audio signal. That is not multimodal, but you could use the audio signal at the same time as a video feed. Now you're talking about multimodality. In computer vision, multimodality is predominantly happening with images and text. And one of the biggest releases in this space, it's actually two years old now, was CLIP, Contrastive Language-Image Pre-training, which took 400 million image-text pairs, and basically, instead of previously when you do classification, where you basically map every single image to a single class, right? Like here's a bunch of images of chairs, here's a bunch of images of dogs. What CLIP did, you can think about it like, is use the Instagram caption for the image as the class for the image. So it's not one single thing. And by training on understanding the corpora, you basically see which words, which concepts are associated with which pixels. And this opens up the aperture for the types of problems and generalizability of models. So what does this mean? This means that you can get to value more quickly from an existing trained model, or at least validate that what you want to tackle with computer vision, you can get there more quickly. It also opens up, I mean, CLIP has been the bedrock of some of the generative image techniques that have come to bear, just as much as some of the LLMs. And increasingly we're going to see more and more of multimodality being a theme, simply because at its core, you're including more context into what you're trying to understand about the world. I mean, in its most basic sense, you could ask yourself, if I have an image, can I know more about that image with just the pixels? Or if I have the image and the sound of when that image was captured, or I had someone describe what they see in that image when the image was captured, which one's going to be able to get you more signal? And so multimodality helps expand the ability for us to understand signal processing. >> Awesome. And can you just real quick, define CLIP for the folks that don't know what that means? >> Yeah. CLIP is a model architecture, it's an acronym for Contrastive Language-Image Pre-training, and like, you know, model architectures that have come before it, the name captures it, almost like, models are kind of like brands. So I guess it's a brand of a model where you've done these 400 million image-text pairs to match up which visual concepts are associated with which text concepts. And there have been new releases of CLIP, just at bigger sizes, with bigger encodings, longer strings of text, or larger image windows. But it's been a really exciting advancement that OpenAI released in January 2021. >> All right, well great stuff. We got a couple minutes left. Just I want to get into more of a company-specific question around culture. All startups have, you know, some sort of cultural vibe. You know, Intel has Moore's law, doubles every whatever, six months. What's your culture like at RoboFlow? I mean, if you had to describe that culture, obviously love the hacking story, you and your partner with the games going number one on Product Hunt next to Elon and Tesla and then hey, we should start a company two years later. That's kind of like a curious, inventing, building, hard-charging, but laid back vibe. That's my take.
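For readers who want to see what the CLIP-style classification Joseph describes looks like in code, here is a minimal sketch using the openly released checkpoint through the Hugging Face Transformers library. The image file and candidate captions are made up for illustration; this is not Roboflow's product code, just the general technique of scoring an image against free-text descriptions instead of a fixed class list.

```python
# Minimal CLIP zero-shot sketch: score one image against free-text captions.
# The image path and captions are hypothetical examples.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("warehouse_frame.jpg")
captions = [
    "a forklift moving a pallet in a warehouse",
    "an empty warehouse aisle",
    "a person climbing a fence",
]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-to-caption similarity; softmax turns it into scores.
scores = outputs.logits_per_image.softmax(dim=1).squeeze()
for caption, score in zip(captions, scores.tolist()):
    print(f"{score:.2f}  {caption}")
```

Swapping in a different caption list changes what the model looks for, with no retraining, which is the "opens up the aperture" point above.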
How would you describe the culture? >> I think that you're right. The culture that we have is one of shipping, making things. So every week each team shares what they did for our customers on a weekly basis. And we have such a strong emphasis on being better week over week that those sorts of things compound. So one big emphasis in our culture is getting things done, shipping, doing things for our customers. The second is we're an incredibly transparent place to work. For example, how we think about giving decisions, where we're progressing against our goals, what problems are biggest and most important for the company is all open information for those that are inside the company to know and progress against. The third thing that I'd use to describe our culture is one that thrives with autonomy. So RoboFlow has a number of individuals who have founded companies before, some of which have sold their businesses for a hundred million plus upon exit. And the way that we've been able to attract talent like that is because the problems that we're tackling are so immense, yet individuals are able to charge at it with the way that they think is best. And this is what pairs well with transparency. If you have a strong sense of what the company's goals are, how we're progressing against it, and you have this ownership mentality of what can I do to change or drive progress against that given outcome, then you create a really healthy pairing of, okay cool, here's where the company's progressing. Here's where things are going really well, here's the places that we most need to improve and work on. And if you're inside that company as someone who has a preponderance to be a self-starter and even a history of building entire functions or companies yourself, then you're going to be a place where you can really thrive. You have the inputs of the things where we need to work on to progress the company's goals. And you have the background of someone that is just necessarily a fast moving and ambitious type of individual. So I think the best way to describe it is a transparent place with autonomy and an emphasis on getting things done. >> Getting shit done as they say. Getting stuff done. Great stuff. Hey, final question. Put a plug out there for the company. What are you going to hire? What's your pipeline look like for people? What jobs are open? I'm sure you got hiring all around. Give a quick plug for the company what you're looking for. >> I appreciate you asking. Basically you're either building the product or helping customers be successful with the product. So in the building product category, we have platform engineering roles, machine learning engineering roles, and we're solving some of the hardest and most impactful problems of bringing such a groundbreaking technology to the masses. And so it's a great place to be where you can kind of be your own user as an engineer. And then if you're enabling people to be successful with the products, I mean you're working in a place where there's already such a strong community around it and you can help shape, foster, cultivate, activate, and drive commercial success in that community. So those are roles that tend themselves to being those that build the product for developer advocacy, those that are account executives that are enabling our customers to realize commercial success, and even hybrid roles like we call it field engineering, where you are a technical resource to drive success within customer accounts. 
And so all this is listed on roboflow.com/careers. And one thing that I actually kind of want to mention John that's kind of novel about the thing that's working at RoboFlow. So there's been a lot of discussion around remote companies and there's been a lot of discussion around in-person companies and do you need to be in the office? And one thing that we've kind of recognized is you can actually chart a third way. You can create a third way which we call satellite, which basically means people can work from where they most like to work and there's clusters of people, regular onsite's. And at RoboFlow everyone gets, for example, $2,500 a year that they can use to spend on visiting coworkers. And so what's sort of organically happened is team numbers have started to pull together these resources and rent out like, lavish Airbnbs for like a week and then everyone kind of like descends in and works together for a week and makes and creates things. And we call this lighthouses because you know, a lighthouse kind of brings ships into harbor and we have an emphasis on shipping. >> Yeah, quality people that are creative and doers and builders. You give 'em some cash and let the self-governing begin, you know? And like, creativity goes through the roof. It's a great story. I think that sums up the culture right there, Joseph. Thanks for sharing that and thanks for this great conversation. I really appreciate it and it's very inspiring. Thanks for coming on. >> Yeah, thanks for having me, John. >> Joseph Nelson, co-founder and CEO of RoboFlow. Hot company, great culture in the right place in a hot area, computer vision. This is going to explode in value. The edge is exploding. More use cases, more development, and developers are driving the change. Check out RoboFlow. This is theCUBE. I'm John Furrier, your host. Thanks for watching. (gentle music)
Adam Wenchel, Arthur.ai | CUBE Conversation
(bright upbeat music) >> Hello and welcome to this Cube Conversation. I'm John Furrier, host of theCUBE. We've got a great conversation featuring Arthur AI. I'm your host. I'm excited to have Adam Wenchel who's the Co-Founder and CEO. Thanks for joining us today, appreciate it. >> Yeah, thanks for having me on, John, looking forward to the conversation. >> I got to say, it's been an exciting world in AI or artificial intelligence. Just an explosion of interest kind of in the mainstream with the language models, which people don't really get, but they're seeing the benefits of some of the hype around OpenAI. Which kind of wakes everyone up to, "Oh, I get it now." And then of course the pessimism comes in, all the skeptics are out there. But this breakthrough in generative AI field is just awesome, it's really a shift, it's a wave. We've been calling it probably the biggest inflection point, then the others combined of what this can do from a surge standpoint, applications. I mean, all aspects of what we used to know is the computing industry, software industry, hardware, is completely going to get turbo. So we're totally obviously bullish on this thing. So, this is really interesting. So my first question is, I got to ask you, what's you guys taking? 'Cause you've been doing this, you're in it, and now all of a sudden you're at the beach where the big waves are. What's the explosion of interest is there? What are you seeing right now? >> Yeah, I mean, it's amazing, so for starters, I've been in AI for over 20 years and just seeing this amount of excitement and the growth, and like you said, the inflection point we've hit in the last six months has just been amazing. And, you know, what we're seeing is like people are getting applications into production using LLMs. I mean, really all this excitement just started a few months ago, with ChatGPT and other breakthroughs and the amount of activity and the amount of new systems that we're seeing hitting production already so soon after that is just unlike anything we've ever seen. So it's pretty awesome. And, you know, these language models are just, they could be applied in so many different business contexts and that it's just the amount of value that's being created is again, like unprecedented compared to anything. >> Adam, you know, you've been in this for a while, so it's an interesting point you're bringing up, and this is a good point. I was talking with my friend John Markoff, former New York Times journalist and he was talking about, there's been a lot of work been done on ethics. So there's been, it's not like it's new. It's like been, there's a lot of stuff that's been baking over many, many years and, you know, decades. So now everyone wakes up in the season, so I think that is a key point I want to get into some of your observations. But before we get into it, I want you to explain for the folks watching, just so we can kind of get a definition on the record. What's an LLM, what's a foundational model and what's generative ai? Can you just quickly explain the three things there? >> Yeah, absolutely. So an LLM or a large language model, it's just a large, they would imply a large language model that's been trained on a huge amount of data typically pulled from the internet. And it's a general purpose language model that can be built on top for all sorts of different things, that includes traditional NLP tasks like document classification and sentiment understanding. 
But the thing that's gotten people really excited is it's used for generative tasks. So, you know, asking it to summarize documents or asking it to answer questions. And these aren't new techniques, they've been around for a while, but what's changed is just this new class of models that's based on new architectures. They're just so much more capable that they've gone from sort of science projects to something that's actually incredibly useful in the real world. And there's a number of companies that are making them accessible to everyone so that you can build on top of them. So that's the other big thing is, this kind of access to these models that can power generative tasks has been democratized in the last few months and it's just opening up all these new possibilities. And then the third one you mentioned foundation models is sort of a broader term for the category that includes LLMs, but it's not just language models that are included. So we've actually seen this for a while in the computer vision world. So people have been building on top of computer vision models, pre-trained computer vision models for a while for image classification, object detection, that's something we've had customers doing for three or four years already. And so, you know, like you said, there are antecedents to like, everything that's happened, it's not entirely new, but it does feel like a step change. >> Yeah, I did ask ChatGPT to give me a riveting introduction to you and it gave me an interesting read. If we have time, I'll read it. It's kind of, it's fun, you get a kick out of it. "Ladies and gentlemen, today we're a privileged "to have Adam Wenchel, Founder of Arthur who's going to talk "about the exciting world of artificial intelligence." And then it goes on with some really riveting sentences. So if we have time, I'll share that, it's kind of funny. It was good. >> Okay. >> So anyway, this is what people see and this is why I think it's exciting 'cause I think people are going to start refactoring what they do. And I've been saying this on theCUBE now for about a couple months is that, you know, there's a scene in "Moneyball" where Billy Beane sits down with the Red Sox owner and the Red Sox owner says, "If people aren't rebuilding their teams on your model, "they're going to be dinosaurs." And it reminds me of what's happening right now. And I think everyone that I talk to in the business sphere is looking at this and they're connecting the dots and just saying, if we don't rebuild our business with this new wave, they're going to be out of business because there's so much efficiency, there's so much automation, not like DevOps automation, but like the generative tasks that will free up the intellect of people. Like just the simple things like do an intro or do this for me, write some code, write a countermeasure to a hack. I mean, this is kind of what people are doing. And you mentioned computer vision, again, another huge field where 5G things are coming on, it's going to accelerate. What do you say to people when they kind of are leaning towards that, I need to rethink my business? >> Yeah, it's 100% accurate and what's been amazing to watch the last few months is the speed at which, and the urgency that companies like Microsoft and Google or others are actually racing to, to do that rethinking of their business. And you know, those teams, those companies which are large and haven't always been the fastest moving companies are working around the clock. 
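As a small, hedged illustration of the generative tasks Adam describes, the sketch below uses openly available pre-trained models through the Hugging Face pipeline API for summarization and question answering. These smaller models stand in for the much larger hosted LLMs being discussed; the document text is a placeholder, and the point is simply that you build on a model someone else has already trained rather than starting from scratch.

```python
# Illustrative only: pre-trained models standing in for the larger hosted LLMs discussed here.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

document = (
    "Placeholder for any long internal document, support ticket, or report. "
    "In practice this would be much longer text pulled from a company's own systems."
)

summary = summarizer(document, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])

answer = qa(question="What kind of text might be summarized?", context=document)
print(answer["answer"], answer["score"])
```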
And the pace at which they're rolling out LLMs across their suite of products is just phenomenal to watch. And it's not just the big, the large tech companies as well, I mean, we're seeing the number of startups, like we get, every week a couple of new startups get in touch with us for help with their LLMs and you know, there's just a huge amount of venture capital flowing into it right now because everyone realizes the opportunities for transforming like legal and healthcare and content creation in all these different areas is just wide open. And so there's a massive gold rush going on right now, which is amazing. >> And the cloud scale, obviously horizontal scalability of the cloud brings us to another level. We've been seeing data infrastructure since the Hadoop days where big data was coined. Now you're seeing this kind of take fruit, now you have vertical specialization where data shines, large language models all of a set up perfectly for kind of this piece. And you know, as you mentioned, you've been doing it for a long time. Let's take a step back and I want to get into how you started the company, what drove you to start it? Because you know, as an entrepreneur you're probably saw this opportunity before other people like, "Hey, this is finally it, it's here." Can you share the origination story of what you guys came up with, how you started it, what was the motivation and take us through that origination story. >> Yeah, absolutely. So as I mentioned, I've been doing AI for many years. I started my career at DARPA, but it wasn't really until 2015, 2016, my previous company was acquired by Capital One. Then I started working there and shortly after I joined, I was asked to start their AI team and scale it up. And for the first time I was actually doing it, had production models that we were working with, that was at scale, right? And so there was hundreds of millions of dollars of business revenue and certainly a big group of customers who were impacted by the way these models acted. And so it got me hyper-aware of these issues of when you get models into production, it, you know. So I think people who are earlier in the AI maturity look at that as a finish line, but it's really just the beginning and there's this constant drive to make them better, make sure they're not degrading, make sure you can explain what they're doing, if they're impacting people, making sure they're not biased. And so at that time, there really weren't any tools to exist to do this, there wasn't open source, there wasn't anything. And so after a few years there, I really started talking to other people in the industry and there was a really clear theme that this needed to be addressed. And so, I joined with my Co-Founder John Dickerson, who was on the faculty in University of Maryland and he'd been doing a lot of research in these areas. And so we ended up joining up together and starting Arthur. >> Awesome. Well, let's get into what you guys do. Can you explain the value proposition? What are people using you for now? Where's the action? What's the customers look like? What do prospects look like? Obviously you mentioned production, this has been the theme. It's not like people woke up one day and said, "Hey, I'm going to put stuff into production." This has kind of been happening. There's been companies that have been doing this at scale and then yet there's a whole follower model coming on mainstream enterprise and businesses. So there's kind of the early adopters are there now in production. 
What do you guys do? I mean, 'cause I think about just driving the car off the lot is not enough, you got to manage operations. I mean, that's a big thing. So what do you guys do? Talk about the value proposition and how you guys make money? >> Yeah, so what we do is, listen, it starts when you go to validate ahead of deploying these models in production, right? So you want to make sure that if you're going to be upgrading a model, if you're going to be replacing one that's currently in production, that you've proven that it's going to perform well, that it's going to perform ethically and that you can explain what it's doing. And then when you launch it into production, traditionally data scientists would spend 25, 30% of their time just manually checking in on their model day-to-day, babysitting as we call it, just to make sure that the data hasn't drifted, the model performance hasn't degraded, that a programmer didn't make a change in an upstream data system. You know, there's all sorts of reasons why the world changes and that can have a real adverse effect on these models. And so what we do is bring the same kind of automation that you have for other kinds of, let's say infrastructure monitoring, application monitoring, we bring that to your AI systems. And that way if there ever is an issue, it's not like weeks or months till you find it, and you find it before it has an effect on your P&L and your balance sheet, which too often, before they had tools like Arthur, was the way they were detected. >> You know, I was talking to Swami at Amazon, who I've known for a long time, 13 years, and been on theCUBE multiple times, and you know, I watched Amazon try to pick up steam with SageMaker about six years ago and so much has happened since then. And he and I were talking about this wave, and I kind of brought up this analogy to how when cloud started, it was, hey, I don't need a data center. 'Cause when I did my startup at that time, one of my startups at that time, my choice was put a box in the colo, get all the configuration done before I could write one line of code. So the cloud became the benefit for that and you can stand up stuff quickly and then it grew from there. Here it's kind of the same dynamic, you don't want to have to provision a large language model or do all this heavy lifting. So now you're seeing companies coming out there saying, you can get started faster, there's like a new way to get it going. So it's kind of like the same vibe of limiting that heavy lifting. >> Absolutely. >> How do you look at that? Because this seems to be a wave that's going to be coming in, and how do you guys help companies who are going to move quickly and start developing? >> Yeah, so I think in this kind of gold rush mentality, the race to get these models into production, we're starting to see more sort of examples and evidence that there are a lot of risks that go along with it. Either your model says things, your system says things that are just wrong, you know, whether it's hallucination or just making things up, there's lots of examples. If you go on Twitter and the news, you can read about those, as well as sort of times when there could be toxic content coming out of things like that. And so there's a lot of risks there that you need to think about and be thoughtful about when you're deploying these systems. But you know, you need to balance that with the business imperative of getting these things into production and really transforming your business.
And so that's where we help people, we say go ahead, put them in production, but just make sure you have the right guardrails in place so that you can do it in a smart way that's going to reflect well on you and your company. >> Let's frame the challenge for the companies now. Obviously there's the people who are doing large scale production, and then you have companies maybe like as small as us who have large linguistic databases or transcripts for example, right? So what are customers doing and why are they deploying AI right now? And is it a speed game, is it a cost game? Why have some companies been able to deploy AI at much faster rates than others? And what's a best practice to onboard new customers? >> Yeah, absolutely. So I mean, we're seeing across a bunch of different verticals, there are leaders who have really kind of started to solve this puzzle about getting AI models into production quickly and being able to iterate on them quickly. And I think those are the ones that realize that imperative that you mentioned earlier about how transformational this technology is. And you know, a lot of times, even like the CEOs or the boards are very personally kind of driving this sense of urgency around it. And so, you know, that creates a lot of movement, right? And so those companies have put in place really smart infrastructure and rails so that data scientists aren't encumbered by having to like hunt down data and get access to it. They're not encumbered by having to stand up new platforms every time they want to deploy an AI system, but that stuff is already in place. There's a really nice ecosystem of products out there, including Arthur, that you can tap into. Compared to five or six years ago when I was building at a top 10 US bank, at that point you really had to build almost everything yourself and that's not the case now. And so it's really nice to have things like, you know, you mentioned AWS SageMaker and a whole host of other tools that can really accelerate things. >> What's your customer profile? Is it someone who already has a team or can people who are learning just dial into the service? What's the persona? What's the pitch, if you will, how do you align with that customer value proposition? Do people have to be built out with a team and in play, or is it pre-production, or can you start with people who are just getting going? >> Yeah, people do start using it pre-production for validation, but I think a lot of our customers do have a team going and they're either close to putting something into production or about to. It's everything from large enterprises that are really sort of complicated, with dozens of models running all over doing all sorts of use cases, to tech startups that are very focused on a single problem, but that's like the lifeblood of the company and so they need to guarantee that it works well. And you know, we make it really easy to get started, especially if you're using one of the common model development platforms, you can just kind of turnkey get going and make sure that you have a nice feedback loop. So then when your models are out there, it's pointing out areas where it's performing well, areas where it's performing less well, giving you that feedback so that you can make improvements, whether it's in training data or featurization work or algorithm selection.
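To ground that feedback-loop idea, here is a rough sketch of one common way such an automated check can be implemented: comparing the distribution of a feature in production against the data the model was trained on and alerting when they diverge. This is a generic illustration using a two-sample Kolmogorov-Smirnov test, not Arthur's product or API; the data, threshold, and alert action are all assumptions.

```python
# Generic drift-check sketch, not Arthur's API: compare training vs. production distributions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Stand-ins for a real feature: what the model was trained on vs. what production sees now.
training_values = rng.normal(loc=0.0, scale=1.0, size=5_000)
production_values = rng.normal(loc=0.4, scale=1.2, size=5_000)   # shifted: the world changed

statistic, p_value = ks_2samp(training_values, production_values)

ALERT_THRESHOLD = 0.01   # arbitrary; a real system would tune this per feature
if p_value < ALERT_THRESHOLD:
    # In practice this would fire an alert for someone to work, rather than print.
    print(f"ALERT: input drift detected (KS statistic={statistic:.3f}, p={p_value:.1e})")
else:
    print("No significant drift detected")
```

A production system would run checks like this continuously across many features and models, which is the "babysitting" Adam says teams used to do by hand.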
There's a number of, you know, depending on the symptoms, there's a number of things you can do to increase performance over time and we help guide people on that journey. >> So Adam, I have to ask, since you have such a great customer base and they're smart and they got teams and you're on the front end, I mean, early adopters is kind of an overused word, but they're killing it. They're putting stuff in the production's, not like it's a test, it's not like it's early. So as the next wave comes of fast followers, how do you see that coming online? What's your vision for that? How do you see companies that are like just waking up out of the frozen, you know, freeze of like old IT to like, okay, they got cloud, but they're not yet there. What do you see in the market? I see you're in the front end now with the top people really nailing AI and working hard. What's the- >> Yeah, I think a lot of these tools are becoming, or every year they get easier, more accessible, easier to use. And so, you know, even for that kind of like, as the market broadens, it takes less and less of a lift to put these systems in place. And the thing is, every business is unique, they have their own kind of data and so you can use these foundation models which have just been trained on generic data. They're a great starting point, a great accelerant, but then, in most cases you're either going to want to create a model or fine tune a model using data that's really kind of comes from your particular customers, the people you serve and so that it really reflects that and takes that into account. And so I do think that these, like the size of that market is expanding and its broadening as these tools just become easier to use and also the knowledge about how to build these systems becomes more widespread. >> Talk about your customer base you have now, what's the makeup, what size are they? Give a taste a little bit of a customer base you got there, what's they look like? I'll say Capital One, we know very well while you were at there, they were large scale, lot of data from fraud detection to all kinds of cool stuff. What do your customers now look like? >> Yeah, so we have a variety, but I would say one area we're really strong, we have several of the top 10 US banks, that's not surprising, that's a strength for us, but we also have Fortune 100 customers in healthcare, in manufacturing, in retail, in semiconductor and electronics. So what we find is like in any sort of these major verticals, there's typically, you know, one, two, three kind of companies that are really leading the charge and are the ones that, you know, in our opinion, those are the ones that for the next multiple decades are going to be the leaders, the ones that really kind of lead the charge on this AI transformation. And so we're very fortunate to be working with some of those. And then we have a number of startups as well who we love working with just because they're really pushing the boundaries technologically and so they provide great feedback and make sure that we're continuing to innovate and staying abreast of everything that's going on. >> You know, these early markups, even when the hyperscalers were coming online, they had to build everything themselves. That's the new, they're like the alphas out there building it. This is going to be a big wave again as that fast follower comes in. 
And so when you look at the scale, what advice would you give folks out there right now who want to tee it up and what's your secret sauce that will help them get there? >> Yeah, I think that the secret to teeing it up is just dive in and start, like the, I think these are, there's not really a secret. I think it's amazing how accessible these are. I mean, there's all sorts of ways to access LLMs, either via API access or downloadable in some cases. And so, you know, go ahead and get started. And then our secret sauce really is the way that we provide that performance analysis of what's going on, right? So we can tell you in a very actionable way, like, hey, here's where your model is doing good things, here's where it's doing bad things. Here's something you want to take a look at, here's some potential remedies for it. We can help guide you through that. And that way when you're putting it out there, A, you're avoiding a lot of the common pitfalls that people see and B, you're able to really kind of make it better in a much faster way with that tight feedback loop. >> It's interesting, we've been kind of riffing on this supercloud idea because it was just a different name than multicloud and you see apps like Snowflake built on top of AWS without even spending any CapEx, you just ride that cloud wave. This next AI, super AI wave is coming. I don't want to call it AIOps because I think there's a different distinction. If you, MLOps and AIOps seem a little bit old, almost a few years back, how do you view that? Because everyone is like, "Is this AIOps?" And like, "No, not kind of, but not really." How would you, you know, when someone says, just shoots from the hip, "Hey Adam, aren't you doing AIOps?" Do you say, yes we are, do you say, yes, but we do it differently, because it doesn't seem like it's the same old AIOps. What's your- >> Yeah, it's a good question. AIOps has been a term that was co-opted for other things, and MLOps also, people have used it for different meanings. So I like the term just AI infrastructure, I think it kind of like describes it really well and succinctly. >> But you guys are doing the ops. I mean that's the kind of ironic thing, it's like the next level, it's like NextGen ops, but it's not, you don't want to be put in that bucket. >> Yeah, no, it's a very operationally focused platform that we have, I mean, it fires alerts, people can action off them. If you're familiar with like the way people run security operations centers or network operations centers, we do that for data science, right? So think of it as a DSOC, a Data Science Operations Center, where all your models, you might have hundreds of models running across your organization, you may have five, but as problems are detected, alerts can be fired and you can actually work the case, make sure they're resolved, escalate them as necessary. And so there is a very strong operational aspect to it, you're right. >> You know, one of the things I think is interesting is, is that, if you don't mind commenting on it, is that the aspect of scale is huge and it feels like that was made up and now you have scale and production. What's your reaction to that when people say, how does scale impact this? >> Yeah, scale is huge for some of, you know, I think, I think look, the highest leverage business areas to apply these to are generally going to be the ones at the biggest scale, right? And I think that's one of the advantages we have.
Several of us come from enterprise backgrounds and we're used to doing things enterprise grade at scale and so, you know, we're seeing more and more companies, I think they started out deploying AI and sort of, you know, important but not necessarily like the crown jewel area of their business, but now they're deploying AI right in the heart of things and yeah, the scale that some of our companies are operating at is pretty impressive. >> John: Well, super exciting, great to have you on and congratulations. I got a final question for you, just random. What are you most excited about right now? Because I mean, you got to be pretty pumped right now with the way the world is going and again, I think this is just the beginning. What's your personal view? How do you feel right now? >> Yeah, the thing I'm really excited about for the next couple years now, you touched on it a little bit earlier, but is a sort of convergence of AI and AI systems with sort of turning into AI native businesses. And so, as you sort of do more, get good further along this transformation curve with AI, it turns out that like the better the performance of your AI systems, the better the performance of your business. Because these models are really starting to underpin all these key areas that cumulatively drive your P&L. And so one of the things that we work a lot with our customers is to do is just understand, you know, take these really esoteric data science notions and performance and tie them to all their business KPIs so that way you really are, it's kind of like the operating system for running your AI native business. And we're starting to see more and more companies get farther along that maturity curve and starting to think that way, which is really exciting. >> I love the AI native. I haven't heard any startup yet say AI first, although we kind of use the term, but I guarantee that's going to come in all the pitch decks, we're an AI first company, it's going to be great run. Adam, congratulations on your success to you and the team. Hey, if we do a few more interviews, we'll get the linguistics down. We can have bots just interact with you directly and ask you, have an interview directly. >> That sounds good, I'm going to go hang out on the beach, right? So, sounds good. >> Thanks for coming on, really appreciate the conversation. Super exciting, really important area and you guys doing great work. Thanks for coming on. >> Adam: Yeah, thanks John. >> Again, this is Cube Conversation. I'm John Furrier here in Palo Alto, AI going next gen. This is legit, this is going to a whole nother level that's going to open up huge opportunities for startups, that's going to use opportunities for investors and the value to the users and the experience will come in, in ways I think no one will ever see. So keep an eye out for more coverage on siliconangle.com and theCUBE.net, thanks for watching. (bright upbeat music)
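Wenchel's "Data Science Operations Center" idea, where checks run against models in production and fire alerts that someone can work like a case, boils down to a simple loop: compare recent performance against a baseline window and page someone when it degrades. The sketch below is a minimal, hypothetical illustration of that loop in plain Python; it is not Arthur's SDK, and the model name, metric, and threshold are assumptions made up for the example.

```python
# Hypothetical sketch of a scheduled model-performance check, the kind of
# thing a "Data Science Operations Center" might run for each deployed model.
# Not Arthur's API; metric, threshold, and alert hook are placeholders.

from dataclasses import dataclass

@dataclass
class Window:
    y_true: list  # observed outcomes for the window
    y_pred: list  # model predictions for the same rows

def accuracy(window: Window) -> float:
    correct = sum(1 for t, p in zip(window.y_true, window.y_pred) if t == p)
    return correct / max(len(window.y_true), 1)

def check_model(name: str, baseline: Window, recent: Window,
                max_drop: float = 0.05) -> None:
    """Fire an alert if recent accuracy falls more than `max_drop` below baseline."""
    base_acc, recent_acc = accuracy(baseline), accuracy(recent)
    if base_acc - recent_acc > max_drop:
        # In a real system this would open a case or page the on-call team.
        print(f"ALERT [{name}]: accuracy fell {base_acc:.2%} -> {recent_acc:.2%}")
    else:
        print(f"OK    [{name}]: accuracy {recent_acc:.2%}")

if __name__ == "__main__":
    baseline = Window(y_true=[1, 0, 1, 1, 0, 1], y_pred=[1, 0, 1, 1, 0, 1])
    recent = Window(y_true=[1, 0, 1, 1, 0, 1], y_pred=[0, 0, 1, 0, 0, 1])
    check_model("fraud-scoring-v3", baseline, recent)
```

In a real deployment the same check would run for every model on a schedule, with the alert feeding a case queue that can be worked, resolved, and escalated, which is the operational workflow described above.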
CUBE Analysis of Day 1 of MWC Barcelona 2023 | MWC Barcelona 2023
>> Announcer: theCUBE's live coverage is made possible by funding from Dell Technologies creating technologies that drive human progress. (upbeat music) >> Hey everyone, welcome back to theCube's first day of coverage of MWC 23 from Barcelona, Spain. Lisa Martin here with Dave Vellante and Dave Nicholson. I'm literally in between two Daves. We've had a great first day of coverage of the event. There's been lots of conversations, Dave, on disaggregation, on the change of mobility. I want to be able to get your perspectives from both of you on what you saw on the show floor, what you saw and heard from our guests today. So we'll start with you, Dave V. What were some of the things that were our takeaways from day one for you? >> Well, the big takeaway is the event itself. On day one, you get a feel for what this show is like. Now that we're back, face-to-face kind of pretty much full face-to-face. A lot of excitement here. 2000 plus exhibitors, I mean, planes, trains, automobiles, VR, AI, servers, software, I mean everything. I mean, everybody is here. So it's a really comprehensive show. It's not just about mobile. That's why they changed the name from Mobile World Congress. I think the other thing is from the keynotes this morning, I mean, you heard, there's a lot of, you know, action around the telcos and the transformation, but in a lot of ways they're sort of protecting their existing past from the future. And so they have to be careful about how fast they move. But at the same time if they don't move fast, they're going to get disrupted. We heard some complaints, essentially, you know, veiled complaints that the over the top guys aren't paying their fair share and Telco should be able to charge them more. We heard the chairman of Ericsson talk about how we can't let the OTTs do that again. We're going to charge directly for access through APIs to our network, to our data. We heard from Chris Lewis. Yeah. They've only got, or maybe it was San Ji Choha, how they've only got eight APIs. So, you know the developers are the ones who are going to actually build out the innovation at the edge. The telcos are going to provide the connectivity and the infrastructure companies like Dell as well. But it's really to me all about the developers. And that's where the action's going to be. And it's going to be interesting to see how the developers respond to, you know, the gun to the head. If you want access, you're going to have to pay for it. Now maybe there's so much money to be made that they'll go for it, but I feel like there's maybe a different model. And I think some of the emerging telcos are going to say, you know what, here developers, here's a platform, have at it. We're not going to charge you for all the data until you succeed. Then we're going to figure out a monetization model. >> Right. A lot of opportunity for the developer. That skillset is certainly one that's in demand here. And certainly the transformation of the telecom industry is, there's a lot of conundrums that I was hearing going on today, kind of chicken and egg scenarios. But Dave, you had a chance to walk around the show floor. We were here interviewing all day. What were some of the things that you saw that really stuck out to you? >> I think I was struck by how much attention was being paid to private 5G networks. You sort of read between the lines and it appears as though people kind of accept that the big incumbent telecom players are going to be slower to move. 
And this idea of things like open RAN where you're leveraging open protocols in a stack to deliver more agility and more value. So it sort of goes back to the generalized IT discussion of moving to cloud for agility. It appears as though a lot of players realize that the wild wild west, the real opportunity, is in the private sphere. So it's really interesting to see how that works, how 5G implemented into an environment with wifi how that actually works. It's really interesting. >> So it's, obviously when you talk to companies like Dell, I haven't hit HPE yet. I'm going to go over there and check out their booth. They got an analyst thing going on but it's really early days for them. I mean, they started in this business by taking an X86 box, putting a name on it, you know, that sounded like it was edged, throwing it over, you know, the wall. That's sort of how they all started in this business. And now they're, you know, but they knew they had to form partnerships. They had to build purpose-built systems. Now with 16 G out, you're seeing that. And so it's still really early days, talking about O RAN, open RAN, the open RAN alliance. You know, it's just, I mean, not even, the game hasn't even barely started yet but we heard from Dish today. They're trying to roll out a massive 5G network. Rakuten is really focused on sort of open RAN that's more reliable, you know, or as reliable as the existing networks but not as nearly as huge a scale as Dish. So it's going to take a decade for this to evolve. >> Which is surprising to the average consumer to hear that. Because as far as we know 5G has been around for a long time. We've been talking about 5G, implementing 5G, you sort of assume it's ubiquitous but the reality is it is just the beginning. >> Yeah. And you know, it's got a fake 5G too, right? I mean you see it on your phone and you're like, what's the difference here? And it's, you know, just, >> Dave N.: What does it really mean? >> Right. And so I think your point about private is interesting, the conversation Dave that we had earlier, I had throughout, hey I don't think it's a replacement for wifi. And you said, "well, why not?" I guess it comes down to economics. I mean if you can get the private network priced close enough then you're right. Why wouldn't it replace wifi? Now you got wifi six coming in. So that's a, you know, and WiFi's flexible, it's cheap, it's good for homes, good for offices, but these private networks are going to be like kickass, right? They're going to be designed to run whatever, warehouses and robots, and energy drilling facilities. And so, you know the economics I don't think are there today but maybe they can be at volume. >> Maybe at some point you sort of think of today's science experiment becoming the enterprise-grade solution in the future. I had a chance to have some conversations with folks around the show. And I think, and what I was surprised by was I was reminded, frankly, I wasn't surprised. I was reminded that when we start talking about 5G, we're talking about spectrum that is managed by government entities. Of course all broadcast, all spectrum, is managed in one way or another. But in particular, you can't simply put a SIM in every device now because there are a lot of regulatory hurdles that have to take place. So typically what these things look like today is 5G backhaul to the network, communication from that box to wifi. That's a huge improvement already. So yeah, my question about whether, you know, why not put a SIM in everything? 
Maybe eventually, but I think, but there are other things that I was not aware of that are standing in the way. >> Your point about spectrum's an interesting one though because private networks, you're going to be able to leverage that spectrum in different ways, and tune it essentially, use different parts of the spectrum, make it programmable so that you can apply it to that specific use case, right? So it's going to be a lot more flexible, you know, because I presume the needs spectrum needs of a hospital are going to be different than, you know, an agribusiness are going to be different than a drilling, you know, unit, offshore drilling unit. And so the ability to have the flexibility to use the spectrum in different ways and apply it to that use case, I think is going to be powerful. But I suspect it's going to be expensive initially. I think the other thing we talked about is public policy and regulation, and it's San Ji Choha brought up the point, is telcos have been highly regulated. They don't just do something and ask for permission, you know, they have to work within the confines of that regulated environment. And there's a lot of these greenfield companies and private networks that don't necessarily have to follow those rules. So that's a potential disruptive force. So at the same time, the telcos are spending what'd we hear, a billion, a trillion and a half over the next seven years? Building out 5G networks. So they got to figure out, you know how to get a payback on that. They'll get it I think on connectivity, 'cause they have a monopoly but they want more. They're greedy. They see the over, they see the Netflixes of the world and the Googles and the Amazons mopping up services and they want a piece of that action but they've never really been good at it. >> Well, I've got a question for both of you. I mean, what do you think the odds are that by the time the Shangri La of fully deployed 5G happens that we have so much data going through it that effectively it feels exactly the same as 3G? What are the odds? >> That's a good point. Well, the thing that gets me about 5G is there's so much of it on, if I go to the consumer side when we're all consumers in our daily lives so much of it's marketing hype. And, you know all the messaging about that, when it's really early innings yet they're talking about 6G. What does actual fully deployed 5G look like? What is that going to enable a hospital to achieve or an oil refinery out in the middle of the ocean? That's something that interests me is what's next for that? Are we going to hear that at this event? >> I mean, walking around, you see a fair amount of discussion of, you know, the internet of things. Edge devices, the increase in connectivity. And again, what I was surprised by was that there's very little talk about a sim card in every one of those devices at this point. It's like, no, no, no, we got wifi to handle all that but aggregating it back into a central network that's leveraging 5G. That's really interesting. That's really interesting. >> I think you, the odds of your, to go back to your question, I think the odds are even money, that by the time it's all built out there's going to be so much data and so much new capability it's going to work similarly at similar speeds as we see in the networks today. You're just going to be able to do so many more things. You know, and your video's going to look better, the graphics are going to look better. But I think over the course of history, this is what's happening. 
I mean, even when you go back to dial up, if you were in an AOL chat room in 1996, it was, you know, yeah it took a while. You're like, (screeches) (Lisa laughs) the modem and everything else, but once you were in there- >> Once you're there, 2400 baud. >> It was basically real time. And so you could talk to your friends and, you know, little chat room but that's all you could do. You know, if you wanted to watch a video, forget it, right? And then, you know, early days of streaming video, stop, start, stop, start, you know, look at Amazon Prime when it first started, Prime Video was not that great. It's sort of catching up to Netflix. But, so I think your point, that question is really prescient because more data, more capability, more apps means same speed. >> Well, you know, you've used the phrase over the top. And so just just so we're clear so we're talking about the same thing. Typically we're talking about, you've got, you have network providers. Outside of that, you know, Netflix, internet connection, I don't need Comcast, right? Perfect example. Well, what about the over the top that's coming from direct satellite communications with devices. There are times when I don't have a signal on my, happens to be an Apple iPhone, when I get a little SOS satellite logo because I can communicate under very limited circumstances now directly to the satellite for very limited text messaging purposes. Here at the show, I think it might be a Motorola device. It's a dongle that allows any mobile device to leverage direct satellite communication. Again, for texting back to the 2,400 baud modem, you know, days, 1200 even, 300 even, go back far enough. What's that going to look like? Is that too far in the future to think that eventually it's all going to be over the top? It's all going to be handset to satellite and we don't need these RANs anymore. It's all going to be satellite networks. >> Dave V.: I think you're going to see- >> Little too science fiction-y? (laughs) >> No, I, no, I think it's a good question and I think you're going to see fragments. I think you're going to see fragmentation of private networks. I think you're going to see fragmentation of satellites. I think you're going to see legacy incumbents kind of hanging on, you know, the cable companies. I think that's coming. I think by 2030 it'll, the picture will be much more clear. The question is, and I think it's come down to the innovation on top, which platform is going to be the most developer friendly? Right, and you know, I've not heard anything from the big carriers that they're going to be developer friendly. I've heard "we have proprietary data that we're going to charge access for and developers are going to have to pay for that." But I haven't heard them saying "Developers, developers, developers!" You know, Steve Bomber running around, like bend over backwards for developers, they're asking the developers to bend over. And so if a network can, let's say the satellite network is more developer friendly, you know, you're going to see more innovation there potentially. You know, or if a dish network says, "You know what? We're going after developers, we're going after innovation. We're not going to gouge them for all this network data. Rather we're going to make the platform open or maybe we're going to do an app store-like model where we take a piece of the action after they succeed." You know, take it out of the backend, like a Silicon Valley VC as opposed to an East Coast VC. 
They're not going to get you in the front end. (Lisa laughs) >> Well, you can see the sort of disruptive forces at play between open RAN and the legacy, call it proprietary stack, right? But what is the, you know, if that's sort of a horizontal disruptive model, what's the vertically disruptive model? Is it private networks coming in? Is it a private 5G network that comes in that says, "We're starting from the ground up, everything is containerized. We're going to go find people at KubeCon who are, who understand how to orchestrate with Kubernetes and use containers in microservices, and we're going to have this little 5G network that's going to deliver capabilities that you can't get from the big boys." Is there a way to monetize that? Is there a way for them to be disrupted, be disruptive, or are these private 5G networks that everybody's talking about just relegated to industrial use cases where you're just squeezing better economics out of wireless communication amongst all your devices in your factory? >> That's an interesting question. I mean, there are a lot of those smart factory industrial use cases. I mean, it's basically industry 4.0 use cases. But yeah, I don't count the cloud guys out. You know, everybody says, "oh, the narrative is, well, the latency of the cloud." Well, not if the cloud is at the edge. If you take a local zone and put storage, compute, and data right next to each other and the cloud model with the cloud APIs, and then you got an asynchronous, you know, connection back. I think that's a reasonable model. I think the cloud guys figured out developers, right? Pretty well. Certainly Microsoft and, and Amazon and Google, they know developers. I don't see any reason why they can't bring their model to the edge. So, and that's really disruptive to the legacy telco guys, you know? So they have to be careful. >> One step closer to my dream of eliminating the word "cloud" from IT lexicon. (Lisa laughs) I contend that it has always been IT, and it will always be IT. And this whole idea of cloud, what is cloud? If AWS, for example, is delivering hardware to the edge where it needs to be, is that cloud? Do we go back to the idea that cloud is an operational model and not a question of physical location? I hope we get to that point. >> Well, what's Apex and GreenLake? Apex is, you know, Dell's as a service. GreenLake is- >> HPE. >> HPE's as a service. That's outposts. >> Dave N.: Right. >> Yeah. >> That's their outpost. >> Yeah. >> Well AWS's position used to be, you know, to use them as a proxy for hyperscale cloud. We'll just, we'll grow in a very straight trajectory forever on the back of net new stuff. Forget about the old stuff. As James T. Kirk said of the Klingons, "let them die." (Lisa laughs) As far as the cloud providers were concerned just, yeah, let, let that old stuff go away. Well then they found out, there came a point in time where they realized there's a lot of friction and stickiness associated with that. So they had to deal with the reality of hybridity, if that's the word, the hybrid nature of things. So what are they doing? They're pushing stuff out to the edge, so... >> With the same operating model. >> With the same operating model. >> Similar. I mean, it's limited, right? >> So you see- >> You can't run a lot of database on outpost, you can run RES- >> You see this clash of Titans where some may have written off traditional IT infrastructure vendors, might have been written off as part of the past. 
Whereas hyperscale cloud providers represent the future. It seems here at this show they're coming head to head and competing evenly. >> And this is where I think a company like Dell or HPE or Cisco has some advantages in that they're not going to compete with the telcos, but the hyperscalers will. >> Lisa: Right. >> Right. You know, and they're already, Google's, how much undersea cable does Google own? A lot. Probably more than anybody. >> Well, we heard from Google and Microsoft this morning in the keynote. It'd be interesting to see if we hear from AWS and then over the next couple of days. But guys, clearly there is, this is a great wrap of day one. And the crazy thing is this is only day one. We've got three more days of coverage, more news, more information to break down and unpack on theCUBE. Look forward to doing that with you guys over the next three days. Thank you for sharing what you saw on the show floor, what you heard from our guests today as we had about 10 interviews. Appreciate your insights and your perspectives and can't wait for tomorrow. >> Right on. >> All right. For Dave Vellante and Dave Nicholson, I'm Lisa Martin. You're watching theCUBE's day one wrap from MWC 23. We'll see you tomorrow. (relaxing music)
Robert Nishihara, Anyscale | CUBE Conversation
(upbeat instrumental) >> Hello and welcome to this CUBE conversation. I'm John Furrier, host of theCUBE, here in Palo Alto, California. Got a great conversation with Robert Nishihara who's the co-founder and CEO of Anyscale. Robert, great to have you on this CUBE conversation. It's great to see you. We did your first Ray Summit a couple years ago and congratulations on your venture. Great to have you on. >> Thank you. Thanks for inviting me. >> So you're first time CEO out of Berkeley in Data. You got the Databricks is coming out of there. You got a bunch of activity coming from Berkeley. It's like a, it really is kind of like where a lot of innovations going on data. Anyscale has been one of those startups that has risen out of that scene. Right? You look at the success of what the Data lakes are now. Now you've got the generative AI. This has been a really interesting innovation market. This new wave is coming. Tell us what's going on with Anyscale right now, as you guys are gearing up and getting some growth. What's happening with the company? >> Yeah, well one of the most exciting things that's been happening in computing recently, is the rise of AI and the excitement about AI, and the potential for AI to really transform every industry. Now of course, one of the of the biggest challenges to actually making that happen is that doing AI, that AI is incredibly computationally intensive, right? To actually succeed with AI to actually get value out of AI. You're typically not just running it on your laptop, you're often running it and scaling it across thousands of machines, or hundreds of machines or GPUs, and to, so organizations and companies and businesses that do AI often end up building a large infrastructure team to manage the distributed systems, the computing to actually scale these applications. And that's a, that's a, a huge software engineering lift, right? And so, one of the goals for Anyscale is really to make that easy. To get to the point where, developers and teams and companies can succeed with AI. Can build these scalable AI applications, without really you know, without a huge investment in infrastructure with a lot of, without a lot of expertise in infrastructure, where really all they need to know is how to program on their laptop, how to program in Python. And if you have that, then that's really all you need to succeed with AI. So that's what we've been focused on. We're building Ray, which is an open source project that's been starting to get adopted by tons of companies, to actually train these models, to deploy these models, to do inference with these models, you know, to ingest and pre-process their data. And our goals, you know, here with the company are really to make Ray successful. To grow the Ray community, and then to build a great product around it and simplify the development and deployment, and productionization of machine learning for, for all these businesses. >> It's a great trend. Everyone wants developer productivity seeing that, clearly right now. And plus, developers are voting literally on what standards become. As you look at how the market is open source driven, a lot of that I love the model, love the Ray project love the, love the Anyscale value proposition. How big are you guys now, and how is that value proposition of Ray and Anyscale and foundational models coming together? 
Because it seems like you guys are in a perfect storm situation where you guys could get a real tailwind and draft off the the mega trend that everyone's getting excited. The new toy is ChatGPT. So you got to look at that and say, hey, I mean, come on, you guys did all the heavy lifting. >> Absolutely. >> You know how many people you are, and what's the what's the proposition for you guys these days? >> You know our company's about a hundred people, that a bit larger than that. Ray's been going really quickly. It's been, you know, companies using, like OpenAI uses Ray to train their models, like ChatGPT. Companies like Uber run all their deep learning you know, and classical machine learning on top of Ray. Companies like Shopify, Spotify, Netflix, Cruise, Lyft, Instacart, you know, Bike Dance. A lot of these companies are investing heavily in Ray for their machine learning infrastructure. And I think it's gotten to the point where, if you're one of these, you know type of businesses, and you're looking to revamp your machine learning infrastructure. If you're looking to enable new capabilities, you know make your teams more productive, increase, speed up the experimentation cycle, you know make it more performance, like build, you know, run applications that are more scalable, run them faster, run them in a more cost efficient way. All of these types of companies are at least evaluating Ray and Ray is an increasingly common choice there. I think if they're not using Ray, if many of these companies that end up not using Ray, they often end up building their own infrastructure. So Ray has been, the growth there has been incredibly exciting over the, you know we had our first in-person Ray Summit just back in August, and planning the next one for, for coming September. And so when you asked about the value proposition, I think there's there's really two main things, when people choose to go with Ray and Anyscale. One reason is about moving faster, right? It's about developer productivity, it's about speeding up the experimentation cycle, easily getting their models in production. You know, we hear many companies say that they, you know they, once they prototype a model, once they develop a model, it's another eight weeks, or 12 weeks to actually get that model in production. And that's a reason they talk to us. We hear companies say that, you know they've been training their models and, and doing inference on a single machine, and they've been sort of scaling vertically, like using bigger and bigger machines. But they, you know, you can only do that for so long, and at some point you need to go beyond a single machine and that's when they start talking to us. Right? So one of the main value propositions is around moving faster. I think probably the phrase I hear the most is, companies saying that they don't want their machine learning people to have to spend all their time configuring infrastructure. All this is about productivity. >> Yeah. >> The other. >> It's the big brains in the company. That are being used to do remedial tasks that should be automated right? I mean that's. >> Yeah, and I mean, it's hard stuff, right? It's also not these people's area of expertise, and or where they're adding the most value. So all of this is around developer productivity, moving faster, getting to market faster. The other big value prop and the reason people choose Ray and choose Anyscale, is around just providing superior infrastructure. This is really, can we scale more? 
You know, can we run it faster, right? Can we run it in a more cost effective way? We hear people saying that they're not getting good GPU utilization with the existing tools they're using, or they can't scale beyond a certain point, or you know they don't have a way to efficiently use spot instances to save costs, right? Or their clusters, you know can't auto scale up and down fast enough, right? These are all the kinds of things that Ray and Anyscale, where Ray and Anyscale add value and solve these kinds of problems. >> You know, you bring up great points. Auto scaling concept, early days, it was easy getting more compute. Now it's complicated. They're built into more integrated apps in the cloud. And you mentioned those companies that you're working with, that's impressive. Those are like the big hardcore, I call them hardcore. They have a good technical teams. And as the wave starts to move from these companies that were hyper scaling up all the time, the mainstream are just developers, right? So you need an interface in, so I see the dots connecting with you guys and I want to get your reaction. Is that how you see it? That you got the alphas out there kind of kicking butt, building their own stuff, alpha developers and infrastructure. But mainstream just wants programmability. They want that heavy lifting taken care of for them. Is that kind of how you guys see it? I mean, take us through that. Because to get crossover to be democratized, the automation's got to be there. And for developer productivity to be in, it's got to be coding and programmability. >> That's right. Ultimately for AI to really be successful, and really you know, transform every industry in the way we think it has the potential to. It has to be easier to use, right? And that is, and being easier to use, there's many dimensions to that. But an important one is that as a developer to do AI, you shouldn't have to be an expert in distributed systems. You shouldn't have to be an expert in infrastructure. If you do have to be, that's going to really limit the number of people who can do this, right? And I think there are so many, all of the companies we talk to, they don't want to be in the business of building and managing infrastructure. It's not that they can't do it. But it's going to slow them down, right? They want to allocate their time and their energy toward building their product, right? To building a better product, getting their product to market faster. And if we can take the infrastructure work off of the critical path for them, that's going to speed them up, it's going to simplify their lives. And I think that is critical for really enabling all of these companies to succeed with AI. >> Talk about the customers you guys are talking to right now, and how that translates over. Because I think you hit a good thread there. Data infrastructure is critical. Managed services are coming online, open sources continuing to grow. You have these people building their own, and then if they abandon it or don't scale it properly, there's kind of consequences. 'Cause it's a system you mentioned, it's a distributed system architecture. It's not as easy as standing up a monolithic app these days. So when you guys go to the marketplace and talk to customers, put the customers in buckets. So you got the ones that are kind of leaning in, that are pretty peaked, probably working with you now, open source. And then what's the customer profile look like as you go mainstream? 
Are they looking to manage service, looking for more architectural system, architecture approach? What's the, Anyscale progression? How do you engage with your customers? What are they telling you? >> Yeah, so many of these companies, yes, they're looking for managed infrastructure 'cause they want to move faster, right? Now the kind of these profiles of these different customers, they're three main workloads that companies run on Anyscale, run with Ray. It's training related workloads, and it is serving and deployment related workloads, like actually deploying your models, and it's batch processing, batch inference related workloads. Like imagine you want to do computer vision on tons and tons of, of images or videos, or you want to do natural language processing on millions of documents or audio, or speech or things like that, right? So the, I would say the, there's a pretty large variety of use cases, but the most common you know, we see tons of people working with computer vision data, you know, computer vision problems, natural language processing problems. And it's across many different industries. We work with companies doing drug discovery, companies doing you know, gaming or e-commerce, right? Companies doing robotics or agriculture. So there's a huge variety of the types of industries that can benefit from AI, and can really get a lot of value out of AI. And, but the, but the problems are the same problems that they all want to solve. It's like how do you make your team move faster, you know succeed with AI, be more productive, speed up the experimentation, and also how do you do this in a more performant way, in a faster, cheaper, in a more cost efficient, more scalable way. >> It's almost like the cloud game is coming back to AI and these foundational models, because I was just on a podcast, we recorded our weekly podcast, and I was just riffing with Dave Vellante, my co-host on this, were like, hey, in the early days of Amazon, if you want to build an app, you just, you have to build a data center, and then you go to now you go to the cloud, cloud's easier, pay a little money, penny's on the dollar, you get your app up and running. Cloud computing is born. With foundation models in generative AI. The old model was hard, heavy lifting, expensive, build out, before you get to do anything, as you mentioned time. So I got to think that you're pretty much in a good position with this foundational model trend in generative AI because I just looked at the foundation map, foundation models, map of the ecosystem. You're starting to see layers of, you got the tooling, you got platform, you got cloud. It's filling out really quickly. So why is Anyscale important to this new trend? How do you talk to people when they ask you, you know what does ChatGPT mean for Anyscale? And how does the financial foundational model growth, fit into your plan? >> Well, foundational models are hugely important for the industry broadly. Because you're going to have these really powerful models that are trained that you know, have been trained on tremendous amounts of data. tremendous amounts of computes, and that are useful out of the box, right? That people can start to use, and query, and get value out of, without necessarily training these huge models themselves. Now Ray fits in and Anyscale fit in, in a number of places. First of all, they're useful for creating these foundation models. Companies like OpenAI, you know, use Ray for this purpose. Companies like Cohere use Ray for these purposes. 
You know, IBM. If you look at, there's of course also open source versions like GPTJ, you know, created using Ray. So a lot of these large language models, large foundation models benefit from training on top of Ray. And, but of course for every company training and creating these huge foundation models, you're going to have many more that are fine tuning these models with their own data. That are deploying and serving these models for their own applications, that are building other application and business logic around these models. And that's where Ray also really shines, because Ray you know, is, can provide common infrastructure for all of these workloads. The training, the fine tuning, the serving, the data ingest and pre-processing, right? The hyper parameter tuning, the and and so on. And so where the reason Ray and Anyscale are important here, is that, again, foundation models are large, foundation models are compute intensive, doing you know, using both creating and using these foundation models requires tremendous amounts of compute. And there there's a big infrastructure lift to make that happen. So either you are using Ray and Anyscale to do this, or you are building the infrastructure and managing the infrastructure yourself. Which you can do, but it's, it's hard. >> Good luck with that. I always say good luck with that. I mean, I think if you really need to do, build that hardened foundation, you got to go all the way. And I think this, this idea of composability is interesting. How is Ray working with OpenAI for instance? Take, take us through that. Because I think you're going to see a lot of people talking about, okay I got trained models, but I'm going to have not one, I'm going to have many. There's big debate that OpenAI is going to be the mother of all LLMs, but now, but really people are also saying that to be many more, either purpose-built or specific. The fusion and these things come together there's like a blending of data, and that seems to be a value proposition. How does Ray help these guys get their models up? Can you take, take us through what Ray's doing for say OpenAI and others, and how do you see the models interacting with each other? >> Yeah, great question. So where, where OpenAI uses Ray right now, is for the training workloads. Training both to create ChatGPT and models like that. There's both a supervised learning component, where you're pre-training this model on doing supervised pre-training with example data. There's also a reinforcement learning component, where you are fine-tuning the model and continuing to train the model, but based on human feedback, based on input from humans saying that, you know this response to this question is better than this other response to this question, right? And so Ray provides the infrastructure for scaling the training across many, many GPUs, many many machines, and really running that in an efficient you know, performance fault tolerant way, right? And so, you know, open, this is not the first version of OpenAI's infrastructure, right? They've gone through iterations where they did start with building the infrastructure themselves. They were using tools like MPI. But at some point, you know, given the complexity, given the scale of what they're trying to do, you hit a wall with MPI and that's going to happen with a lot of other companies in this space. And at that point you don't have many other options other than to use Ray or to build your own infrastructure. >> That's awesome. 
And then your vision on this data interaction, because the old days monolithic models were very rigid. You couldn't really interface with them. But we're kind of seeing this future of data fusion, data interaction, data blending at large scale. What's your vision? How do you, what's your vision of where this goes? Because if this goes the way people think. You can have this data chemistry kind of thing going on where people are integrating all kinds of data with each other at large scale. So you need infrastructure, intelligence, reasoning, a lot of code. Is this something that you see? What's your vision in all this? Take us through. >> AI is going to be used everywhere right? It's, we see this as a technology that's going to be ubiquitous, and is going to transform every business. I mean, imagine you make a product, maybe you were making a tool like Photoshop or, or whatever the, you know, tool is. The way that people are going to use your tool, is not by investing, you know, hundreds of hours into learning all of the different, you know specific buttons they need to press and workflows they need to go through it. They're going to talk to it, right? They're going to say, ask it to do the thing they want it to do right? And it's going to do it. And if it, if it doesn't know what it's want, what it's, what's being asked of it. It's going to ask clarifying questions, right? And then you're going to clarify, and you're going to have a conversation. And this is going to make many many many kinds of tools and technology and products easier to use, and lower the barrier to entry. And so, and this, you know, many companies fit into this category of trying to build products that, and trying to make them easier to use, this is just one kind of way it can, one kind of way that AI will will be used. But I think it's, it's something that's pretty ubiquitous. >> Yeah. It'll be efficient, it'll be efficiency up and down the stack, and will change the productivity equation completely. You just highlighted one, I don't want to fill out forms, just stand up my environment for me. And then start coding away. Okay well this is great stuff. Final word for the folks out there watching, obviously new kind of skill set for hiring. You guys got engineers, give a plug for the company, for Anyscale. What are you looking for? What are you guys working on? Give a, take the last minute to put a plug in for the company. >> Yeah well if you're interested in AI and if you think AI is really going to be transformative, and really be useful for all these different industries. We are trying to provide the infrastructure to enable that to happen, right? So I think there's the potential here, to really solve an important problem, to get to the point where developers don't need to think about infrastructure, don't need to think about distributed systems. All they think about is their application logic, and what they want their application to do. And I think if we can achieve that, you know we can be the foundation or the platform that enables all of these other companies to succeed with AI. So that's where we're going. I think something like this has to happen if AI is going to achieve its potential, we're looking for, we're hiring across the board, you know, great engineers, on the go-to-market side, product managers, you know people who want to really, you know, make this happen. >> Awesome well congratulations. I know you got some good funding behind you. You're in a good spot. I think this is happening. 
I think generative AI and foundation models is going to be the next big inflection point, as big as the pc inter-networking, internet and smartphones. This is a whole nother application framework, a whole nother set of things. So this is the ground floor. Robert, you're, you and your team are right there. Well done. >> Thank you so much. >> All right. Thanks for coming on this CUBE conversation. I'm John Furrier with theCUBE. Breaking down a conversation around AI and scaling up in this new next major inflection point. This next wave is foundational models, generative AI. And thanks to ChatGPT, the whole world's now knowing about it. So it really is changing the game and Anyscale is right there, one of the hot startups, that is in good position to ride this next wave. Thanks for watching. (upbeat instrumental)
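For readers unfamiliar with Ray, the pattern Nishihara describes, taking ordinary Python and scaling it across many cores or machines without hand-building distributed systems, looks roughly like the sketch below. It uses Ray's public task API (ray.init, @ray.remote, ray.get); the toy scoring workload is an assumption for illustration only and is not Anyscale's hosted product or OpenAI's training code.

```python
# Minimal sketch of Ray's task parallelism: ordinary Python functions become
# distributed tasks with a decorator. Requires `pip install ray`.
import ray

# On a laptop this starts a local cluster; on a managed platform it attaches to one.
ray.init()

@ray.remote
def score_batch(batch_id: int) -> int:
    # Placeholder for real work: preprocessing, inference, a training step, etc.
    return sum(i * i for i in range(batch_id * 1000)) % 97

# Launch the tasks in parallel across whatever cores or nodes the cluster has...
futures = [score_batch.remote(i) for i in range(8)]

# ...and gather the results once they are ready.
results = ray.get(futures)
print(results)
```

The same decorator pattern extends to stateful actors and to higher-level libraries for training, tuning, serving, and batch inference, which is what lets one system act as common infrastructure across the workloads mentioned in the conversation.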
Paola Peraza Calderon & Viraj Parekh, Astronomer | Cube Conversation
(soft electronic music) >> Hey everyone, welcome to this CUBE conversation as part of the AWS Startup Showcase, season three, episode one, featuring Astronomer. I'm your host, Lisa Martin. I'm in the CUBE's Palo Alto Studios, and today excited to be joined by a couple of guests, a couple of co-founders from Astronomer. Viraj Parekh is with us, as is Paola Peraza-Calderon. Thanks guys so much for joining us. Excited to dig into Astronomer. >> Thank you so much for having us. >> Yeah, thanks for having us. >> Yeah, and we're going to be talking about the role of data orchestration. Paola, let's go ahead and start with you. Give the audience that understanding, that context about Astronomer and what it is that you guys do. >> Mm-hmm. Yeah, absolutely. So, Astronomer is a, you know, we're a technology and software company for modern data orchestration, as you said, and we're the driving force behind Apache Airflow. The Open Source Workflow Management tool that's since been adopted by thousands and thousands of users, and we'll dig into this a little bit more. But, by data orchestration, we mean data pipeline, so generally speaking, getting data from one place to another, transforming it, running it on a schedule, and overall just building a central system that tangibly connects your entire ecosystem of data services, right. So what, that's Redshift, Snowflake, DVT, et cetera. And so tangibly, we build, we at Astronomer here build products powered by Apache Airflow for data teams and for data practitioners, so that they don't have to. So, we sell to data engineers, data scientists, data admins, and we really spend our time doing three things. So, the first is that we build Astro, our flagship cloud service that we'll talk more on. But here, we're really building experiences that make it easier for data practitioners to author, run, and scale their data pipeline footprint on the cloud. And then, we also contribute to Apache Airflow as an open source project and community. So, we cultivate the community of humans, and we also put out open source developer tools that actually make it easier for individual data practitioners to be productive in their day-to-day jobs, whether or not they actually use our product and and pay us money or not. And then of course, we also have professional services and education and all of these things around our commercial products that enable folks to use our products and use Airflow as effectively as possible. So yeah, super, super happy with everything we've done and hopefully that gives you an idea of where we're starting. >> Awesome, so when you're talking with those, Paola, those data engineers, those data scientists, how do you define data orchestration and what does it mean to them? >> Yeah, yeah, it's a good question. So, you know, if you Google data orchestration you're going to get something about an automated process for organizing silo data and making it accessible for processing and analysis. But, to your question, what does that actually mean, you know? So, if you look at it from a customer's perspective, we can share a little bit about how we at Astronomer actually do data orchestration ourselves and the problems that it solves for us. So, as many other companies out in the world do, we at Astronomer need to monitor how our own customers use our products, right? 
And so, we have a weekly meeting, for example, that goes through a dashboard in a dashboarding tool called Sigma where we see the number of monthly customers and how they're engaging with our product. But, to actually do that, you know, we have to use data from our application database, for example, that has behavioral data on what they're actually doing in our product. We also have data from third party API tools, like Salesforce and HubSpot, and other ways in which we actually engage with our customers and their behavior. And so, our data team internally at Astronomer uses a bunch of tools to transform and use that data, right? So, we use Fivetran, for example, to ingest. We use Snowflake as our data warehouse. We use other tools for data transformations. And even if we at Astronomer don't do this, you can imagine a data team also using tools like Monte Carlo for data quality, or Hightouch for reverse ETL, or things like that. And, I think the point here is that data teams, you know, that are building data-driven organizations have a plethora of tooling to both ingest the right data and come up with the right interfaces to transform and actually interact with that data. And so, that movement and sort of synchronization of data across your ecosystem is exactly what data orchestration is responsible for. Historically, I think, and Raj will talk more about this, historically, schedulers like cron and Oozie or Control-M have taken a role here, but we think that Apache Airflow has sort of risen over the past few years as the de facto industry standard for writing data pipelines that do tasks, that do data jobs that interact with that ecosystem of tools in your organization. And so, beyond that sort of data pipeline unit, I think where we see it is that data orchestration is not only writing those data pipelines that move your data, but it's also all the things around it, right, so, CI/CD tooling and secrets management, et cetera. So, a long-winded answer here, but I think that's how we talk about it here at Astronomer and how we're building our products. >> Excellent. Great context, Paola. Thank you. Viraj, let's bring you into the conversation. Every company these days has to be a data company, right? They've got to be a software company- >> Mm-hmm. >> whether it's my bank or my grocery store. So, how are companies actually doing data orchestration today, Viraj? >> Yeah, it's a great question. So, I think one thing to think about is like, on one hand, you know, data orchestration is kind of a new category that we're helping define, but on the other hand, it's something that companies have been doing forever, right? You need to get data moving to use it, you know. You've got to get it all in place, aggregate it, clean it, et cetera. So, when you look at what companies out there are doing, right. Sometimes, if you're a more kind of born in the cloud company, as we say, you'll adopt all these cloud native tooling things your cloud provider gives you. If you're a bank or another sort of institution like that, you know, you're probably juggling an even wider variety of tools. You're thinking about a cloud migration. You might have things like cron running in one place, Oozie running somewhere else, Informatica running somewhere else, while you're also trying to move all your workloads to the cloud. So, there's quite a large spectrum of what the current state is for companies.
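The kind of pipeline Paola describes, ingest on a schedule, transform in the warehouse, and keep a dashboard fresh, maps directly onto an Airflow DAG. The sketch below is a minimal illustration using Airflow's Python API; the task names, schedule, and callables are made-up stand-ins rather than Astronomer's actual internal pipeline, and `schedule=` assumes Airflow 2.4+ (older versions use `schedule_interval=`).

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Placeholder callables: in practice these would trigger a Fivetran sync,
# run SQL/dbt-style models in Snowflake, and refresh the downstream dashboard.
def ingest_app_db(): ...
def ingest_crm(): ...
def transform_in_warehouse(): ...
def refresh_dashboard(): ...

with DAG(
    dag_id="customer_usage_metrics",
    start_date=datetime(2023, 1, 1),
    schedule="@daily",   # run on a schedule, one of the defining traits of orchestration
    catchup=False,
) as dag:
    app_db = PythonOperator(task_id="ingest_app_db", python_callable=ingest_app_db)
    crm = PythonOperator(task_id="ingest_crm", python_callable=ingest_crm)
    transform = PythonOperator(task_id="transform", python_callable=transform_in_warehouse)
    dashboard = PythonOperator(task_id="refresh_dashboard", python_callable=refresh_dashboard)

    # Both ingestion tasks must finish before the transformation; the dashboard refresh comes last.
    [app_db, crm] >> transform >> dashboard
```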
And then, kind of like Paola was saying, Apache Airflow started in 2014, and it was actually started by Airbnb, and they put out this blog post that was like, "Hey, here's how we use Apache Airflow to orchestrate our data across all our sources." And really since then, right, it's almost been a decade since then, Airflow emerged as the open source standard, and there's companies of all sorts using it. And, it's really used to tie all these tools together, especially as that number of tools increases, companies move to hybrid cloud, hybrid multi-cloud strategies, and so on and so forth. But you know, what we found is that if you go to any company, especially a larger one and you say like, "Hey, how are you doing data orchestration?" They'll probably say something like, "Well, I have five data teams, so I have eight different ways I do data orchestration." Right. This idea of data orchestration's been there but the right way to do it, kind of all the abstractions you need, the way your teams need to work together, and so on and so forth, hasn't really emerged just yet, right? It's such a quick moving space that companies have to combine what they were doing before with what their new business initiatives are today. So, you know, what we really believe here at Astronomer is Airflow is the core of how you solve data orchestration for any sort of use case, but it's not everything. You know, it needs a little more. And, that's really where our commercial product, Astro, comes in, where we've built not only the most tried and tested Airflow experience out there. We do employ a majority of the Airflow core committers, right? So, we're kind of really deep in the project. We've also built the right things around developer tooling, observability, and reliability for customers to really rely on Astro as the heart of the way they do data orchestration, and kind of think of it as the foundational layer that helps tie together all the different tools, practices and teams large companies have today. >> That foundational layer is absolutely critical. You've both mentioned open source software. Paola, I want to go back to you, and just give the audience an understanding of how open source really plays into Astronomer's mission as a company, and into the technologies like Astro. >> Mm-hmm. Yeah, absolutely. I mean, we, so we at Astronomer started using Airflow and actually building our products because Airflow is open source and we were our own customers at the beginning of our company journey. And, I think the open source community is at the core of everything we do. You know, without that open source community and culture, I think, you know, we have less of a business, and so, we're super invested in continuing to cultivate and grow that. And, I think there's a couple sort of concrete ways in which we do this that personally make me really excited to do my own job. You know, for one, we do things like we organize meetups and we sponsor the Airflow Summit, and there's these sort of baseline community efforts that I think are really important and that remind you, hey, they're just humans trying to do their jobs and learn and use both our technology and things that are out there and contribute to it. So, making it easier to contribute to Airflow, for example, is another one of our efforts. As Viraj mentioned, we also employ, you know, engineers internally who are on our team whose full-time job is to make the open source project better.
Again, regardless of whether or not you're a customer of ours, we want to make sure that we continue to cultivate the Airflow project in and of itself. And, we're also building developer tooling that might not be a part of the Apache open source project, but is still open source. So, we have repositories in our own sort of GitHub organization, for example, with tools that individual data practitioners, again, customers or not, can use to be more productive in their day-to-day jobs with Airflow, writing DAGs for the most common use cases out there. The last thing I'll say is how important I think we've found it to build sort of educational resources and documentation and best practices. Airflow can be complex. It's been around for a long time. There's a lot of really, really rich feature sets. And so, how do we enable folks to actually use those? And that comes in, you know, things like webinars, and best practices, and courses and curriculum that are free and accessible and open to the community, which are just some of the ways in which I think we're continuing to invest in that open source community over the next year and beyond. >> That's awesome. It sounds like open source is really core, not only to the mission, but really to the heart of the organization. Viraj, I want to go back to you and really try to understand how does Astronomer fit into the wider modern data stack and ecosystem? Like what does that look like for customers? >> Yeah, yeah. So, both in the open source and with our commercial customers, right? Folks everywhere are trying to tie together a huge variety of tools in order to start making sense of their data. And you know, I kind of think of it almost like as like a pyramid, right? At the base level, you need things like data reliability, data, sorry, data freshness, data availability, and so on and so forth, right? You just need your data to be there. (coughs) I'm sorry. You just need your data to be there, and you need to make it predictable when it's going to be there. You need to make sure it's kind of correct at the highest level, some quality checks, and so on and so forth. And oftentimes, that kind of takes the shape of ELT or ETL use cases, right? Taking data from somewhere and moving it somewhere else, usually into some sort of analytics destination. And, that's really what businesses can do to just power the core parts of getting insights into how their business is going, right? How much revenue did I have? What's in my pipeline in Salesforce, and so on and so forth. Once that kind of base foundation is there and people can get the data they need, how they need it, it really opens up a lot for what customers can do. You know, I think one of the trendier things out there right now is MLOps, and how do companies actually put machine learning into production? Well, when you think about it you kind of have to squint at it, right? Like, machine learning pipelines are really just any other data pipeline. They just have a certain set of needs that might not be applicable to ELT pipelines. And, when you kind of have a common layer to tie together all the ways data can move through your organization, that's really what we're trying to make it so companies can do. And, that happens in financial services where, you know, we have some customers who take app data coming from their mobile apps, and actually run it through their fraud detection services to make sure that all the activity is not fraudulent.
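Viraj's point that a machine learning pipeline is "really just any other data pipeline" can be made concrete by reusing the same DAG structure with ML steps. Again, this is an illustrative sketch with placeholder task bodies, not a production MLOps setup.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Placeholder steps for a recurring retraining pipeline.
def build_features(): ...      # pull and prepare training data
def train_model(): ...         # fit the model on the fresh features
def evaluate_model(): ...      # score it against a holdout set
def deploy_if_better(): ...    # promote the new model only if metrics improve

with DAG(
    dag_id="fraud_model_retraining",
    start_date=datetime(2023, 1, 1),
    schedule="@weekly",
    catchup=False,
) as dag:
    features = PythonOperator(task_id="build_features", python_callable=build_features)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    evaluate = PythonOperator(task_id="evaluate_model", python_callable=evaluate_model)
    deploy = PythonOperator(task_id="deploy_if_better", python_callable=deploy_if_better)

    features >> train >> evaluate >> deploy  # same orchestration primitives as an ELT job
```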
We have customers that will run sports betting models on our platform where they'll take data from a bunch of public APIs around different sporting events that are happening, transform all of that in a way their data scientists can build models with it, and then actually bet on sports based on that output. You know, one of my favorite use cases I like to talk about that we saw in the open source is there was one company whose business was to deliver blood transfusions via drone into remote parts of the world. And, it was really cool because they took all this data from all sorts of places, right? Kind of orchestrated all the aggregation and cleaning and analysis that had to happen via Airflow, and the end product would be a drone being shot out into a real remote part of the world to actually give somebody blood who needed it there. Because it turns out for certain parts of the world, the easiest way to deliver blood to them is via drone and not via some other, some other thing. So, these kind of, all the things people do with the modern data stack is absolutely incredible, right? Like you were saying, every company's trying to be a data-driven company. What really energizes me is knowing that like, for all those best, super great tools out there that power a business, we get to be the connective tissue, or the, almost like the electricity that kind of ropes them all together and makes it so people can actually do what they need to do. >> Right. Phenomenal use cases that you just described, Raj. I mean, just the variety alone of what you guys are able to do and impact is so cool. So Paola, when you're with those data engineers, those data scientists, and customer conversations, what's your pitch? Why use Astro? >> Mm-hmm. Yeah, yeah, it's a good question. And honestly, to piggyback off of Viraj, there's so many. I think what keeps me so energized is how mission critical both our product and data orchestration is, and those use cases really are incredible and we work with customers of all shapes and sizes. But, to answer your question, right, so why use Astro? Why use our commercial products? There's so many people using open source, why pay for something more than that? So, you know, the baseline for our business really is that Airflow has grown exponentially over the last five years, and like we said has become an industry standard, so we're confident there's a huge opportunity for us as a company and as a team. But, we also strongly believe that being great at running Airflow, you know, doesn't make you a successful company at what you do. What makes you a successful company at what you do is building great products and solving problems and solving pain points of your own customers, right? And, that differentiating value isn't being amazing at running Airflow. That should be our job. And so, we want to abstract those customers from needing to do things like manage the Kubernetes infrastructure that you need to run Airflow, and then hiring someone full-time to go do that. Which can be hard, but again doesn't add differentiating value to your team, or to your product, or to your customers. So, it's for folks who want to get away from managing that infrastructure as a base layer. Folks who are looking for differentiating features that make their team more productive and allow them to spend less time tweaking Airflow configurations and more time working with the data that they're getting from their business. And for help staying up to date with Airflow releases.
There's a ton of them; we've actually been pretty quick to come out with new Airflow features and releases, and actually just keeping up with that feature set and working strategically with a partner to help you make the most out of those feature sets is a key part of it. And, really it's, especially if you're an organization who currently is committed to using Airflow, you likely have a lot of Airflow environments across your organization. And, being able to see those Airflow environments in a single place and being able to enable your data practitioners to create Airflow environments with a click of a button, and then use, for example, our command line to develop your Airflow DAGs locally and push them up to our product, and use all of the sort of testing and monitoring and observability that we have on top of our product, is such a key piece. It sounds so simple, especially if you use Airflow, but really those things are, you know, baseline value props that we have for the customers that continue to be excited to work with us. And of course, I think we can go beyond that and there's, we have ambitions to add a whole bunch of features and expand into different types of personas. >> Right? >> But really our main value prop is for companies who are committed to Airflow and want to abstract themselves and make use of some of the differentiating features that we now have at Astronomer. >> Got it. Awesome. >> Thank you. One thing, one thing I'll add to that, Paola, and I think you did a good job of saying it: because every company's trying to be a data company, companies are at different parts of their journey along that, right? And we want to meet customers where they are, and take them through it to where they want to go. So, on one end you have folks who are like, "Hey, we're just building a data team here. We have a new initiative. We heard about Airflow. How do you help us out?" On the farther end, you know, we have some customers that have been using Airflow for five plus years and they're like, "Hey, this is awesome. We have 10 more teams we want to bring on. How can you help with this? How can we do more stuff in the open source with you? How can we tell our story together?" And, it's all about kind of taking this vast community of data users everywhere, seeing where they're at, and saying like, "Hey, Astro and Airflow can take you to the next place that you want to go." >> Which is incredibly- >> Mm-hmm. >> and you bring up a great point, Viraj, that every company is somewhere in a different place on that journey. And it's, and it's complex. But it sounds to me like a lot of what you're doing is really stripping away a lot of the complexity, really enabling folks to use their data as quickly as possible, so that it's relevant and they can serve up, you know, the right products and services to whoever wants what. Really incredibly important. We're almost out of time, but I'd love to get both of your perspectives on what's next for Astronomer. You've given us a great overview of what the company's doing, the value in it for customers. Paola, from your lens as one of the co-founders, what's next? >> Yeah, I mean, I think we'll continue to cultivate that open source community. I think we'll continue to build products that are open sourced as part of our ecosystem. I also think that we'll continue to build products that actually make Airflow, and getting started with Airflow, more accessible.
So, sort of lowering that barrier to entry to our products, whether that's price wise or infrastructure requirement wise. I think making it easier for folks to get started and get their hands on our product is super important for us this year. And really it's about, I think, you know, for us, it's really about focused execution this year and all of the sort of core principles that we've been talking about. And continuing to invest in all of the things around our product that again, enable teams to use Airflow more effectively and efficiently. >> And that efficiency piece is, everybody needs that. Last question, Viraj, for you. What do you see in terms of the next year for Astronomer and for your role? >> Yeah, you know, I think Paola did a really good job of laying it out. So it's, it's really hard to disagree with her on anything, right? I think executing is definitely the most important thing. My own personal bias on that is I think more than ever it's important to really galvanize the community around Airflow. So, we're going to be focusing on that a lot. We want to make it easier for our users to get our product into their hands, be that open source users or commercial users. And last, but certainly not least, is we're also really excited about data lineage and this other open source project in our umbrella called OpenLineage, to make it so that there's a standard way for users to get lineage out of different systems that they use. When we think about what's in store for data lineage and needing to audit the way automated decisions are being made, you know, I think that's just such an important thing that companies are really just starting with, and I don't think there's a solution that's emerged that kind of ties it all together. So, we think that as we kind of grow the role of Airflow, right, we can also make it so that we're helping customers solve their lineage problems all in Astro, which is kind of the best of both worlds for us. >> Awesome. I can definitely feel and hear the enthusiasm and the passion that you both bring to Astronomer, to your customers, to your team. I love it. We could keep talking more and more, so you're going to have to come back. (laughing) Viraj, Paola, thank you so much for joining me today on this showcase conversation. We really appreciate your insights and all the context that you provided about Astronomer. >> Thank you so much for having us. >> My pleasure. For my guests, I'm Lisa Martin. You're watching this Cube conversation. (soft electronic music)
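The OpenLineage project Viraj mentions standardizes the lineage metadata that tools emit as they move data. The snippet below sketches roughly the shape of such a run event as a plain Python dict; the job and dataset names are invented, and the field layout is recalled from the spec rather than quoted from it, so treat it as illustrative only.

```python
import json
from datetime import datetime, timezone
from uuid import uuid4

# Roughly the shape of an OpenLineage run event (names here are made up).
lineage_event = {
    "eventType": "COMPLETE",
    "eventTime": datetime.now(timezone.utc).isoformat(),
    "producer": "https://example.com/my-orchestrator",
    "run": {"runId": str(uuid4())},
    "job": {"namespace": "analytics", "name": "customer_usage_metrics"},
    "inputs": [{"namespace": "snowflake://acme", "name": "raw.app_events"}],
    "outputs": [{"namespace": "snowflake://acme", "name": "marts.weekly_usage"}],
}

# A lineage backend (any OpenLineage-compatible collector) would receive this as JSON.
print(json.dumps(lineage_event, indent=2))
```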
Luis Ceze, OctoML | Cube Conversation
(gentle music) >> Hello, everyone. Welcome to this Cube Conversation. I'm John Furrier, host of theCUBE here, in our Palo Alto Studios. We're featuring OctoML. I'm with the CEO, Luis Ceze. Chief Executive Officer, Co-founder of OctoML. I'm John Furrier of theCUBE. Thanks for joining us today. Luis, great to see you. Last time we spoke was at "re:MARS" Amazon's event. Kind of a joint event between (indistinct) and Amazon, kind of put a lot together. Great to see you. >> Great to see you again, John. I really have good memories of that interview. You know, that was definitely a great time. Great to chat with you again. >> The world of ML and AI, machine learning and AI is really hot. Everyone's talking about it. It's really great to see that advance. So I'm looking forward to this conversation but before we get started, introduce who you are and OctoML. >> Sure. I'm Luis Ceze, Co-founder and CEO at OctoML. I'm also professor of Computer Science at University of Washington. You know, OctoML grew out of our efforts on the Apache TVM project, which is a compiler and runtime system that enables folks to run machine learning models on a broad set of hardware at the Edge and in the Cloud very efficiently. You know, we grew that project and grew that community, definitely saw there was a pain point there. And then we built OctoML; OctoML is about three and a half years old now. And the mission of the company is to enable customers to deploy models very efficiently in the Cloud. And make them, you know, run. Do it quickly, run fast, and run at a low cost, which is something that's especially timely right now. >> I like to point out also for the folks, 'cause they should know, that you're also a professor in the Computer Science department at University of Washington. A great program there. This is really an inflection point with AI machine learning. The computer science industry has been waiting for decades to advance AI with all this new cloud computing, all the hardware and silicon advancements, GPUs. This is the perfect storm. And you know, in computer science now we're seeing an acceleration. Can you share your view, and you're obviously a professor in that department but also, an entrepreneur. This is a great time for computer science. Explain why. >> Absolutely, yeah, no. Just like the confluence of, you know, advances in what, you know, computers can do as devices to compute information. Plus, you know, advances in AI that enable applications that, you know, we thought were highly futuristic and now it's just right there today. You know, AI that can generate photo realistic images from descriptions, you know, can write text that's pretty good. Can help augment, you know, human creativity in a really meaningful way. So you see the confluence of capabilities and the creativity of humankind into new applications is just extremely exciting, both from a researcher point of view as well as an entrepreneur point of view, right. >> What should people know about these large language models we're seeing with ChatGPT and how Google has got a lot of work going on in that area. There's been a lot of work recently. What's different now about these models, and why are they so popular and effective now? What's the difference between now, and say five years ago, that makes it more- >> Oh, yeah. It's a huge inflection on their capabilities, I always say like emergent behavior, right? So as these models got more complex and our ability to train and deploy them, you know, got to this point...
You know, they really crossed a threshold into doing things that are truly surprising, right? In terms of generating, you know, explanations for things, generating text, summarizing text, expanding text. And you know, exhibiting what to some may look like reasoning. They're not quite reasoning fundamentally. They're generating text that looks like they're reasoning, but they do it so well, that it feels like it was done by a human, right. So I would say that the biggest change is that, you know, now they can actually do things that are extremely useful for businesses and people's lives today. And that wasn't the case five years ago. So that's in the model capabilities, and that is being paired with huge advances in computing that enable this to be, you know, actually see a line of sight to being deployed at scale, right. And that's where we come in, by the way, but yeah. >> Yeah, I want to get into that. And also, you know, the fusion of data, integrating data sets at scale. Another one we're seeing a lot of happening now. It's not just some, you know, siloed, pre-built data modeling. It's a lot of agility and a lot of new integration capabilities of data. How is that impacting the dynamics? >> Yeah, absolutely. So I'll say that the ability to either take the data that exists and train a model to do something useful with it, and more interestingly I would say, using baseline foundational models and with a little bit of data, turning them into something that can do a specialized task really, really well, has created this really fast proliferation of really impactful applications, right? >> If every company now is looking at this trend and I'm seeing a lot... And I think every company will rebuild their business with machine learning. If they're not already doing it. And the folks that aren't will probably be dinosaurs, will be out of business. This is a real business transformation moment where machine learning and AI, as it goes mainstream. I think it's just the beginning. This is where you guys come in, and you guys are poised for handling this frenzy to change business with machine learning models. How do you guys help customers as they look at this, you know, transition to get, you know, concept to production with machine learning? >> Great. Great questions, yeah, so I would say that it's fair to say there's a bunch of models out there that can do useful things right out of the box, right? So and also, the ability to create models improved quite a bit. So the challenge now shifted to customers, you know. Everyone is looking to incorporate AI into their applications. So what we do for them is, first of all, how do you do that quickly, without needing highly specialized, difficult to find engineering? And very importantly, how do you do that at a cost that's accessible, right? So all of these fantastic models that we just talked about, they use an amount of computing that's just astronomical compared to anything else we've done in the past. It means the costs that come with it are also very, very high. So it's important to enable customers to, you know, incorporate AI into their applications, into their use cases in a way that they can do with the people that they have, and the costs that they can afford, such that they can have, you know, the maximum impact they can possibly have. And finally, you know, helping them deal with hardware availability, as you know, even though we made a lot of progress in making computing cheaper and cheaper.
Even to this day, you know, you can never get enough. And getting an allocation, getting the right hardware to run these incredibly hungry models is hard. And we help customers deal with, you know, hardware availability as well. >> Yeah, for the folks watching as a... If you search YouTube, there's an interview we did last year at "re:MARS," I mentioned that earlier, just a great interview. You talked about this hardware independence, this traction. I want to get into that, because if you look at all the foundation models that are out there right now, that are getting traction, you're seeing two trends. You're seeing proprietary and open source. And obviously, open source always wins in my opinion, but, you know, there's this iPhone moment and Android moment that one of your investors, John Torrey from Madrona, talked about, this iPhone versus Android moment, you know, one's proprietary hardware and they're very specialized, high performance, and then open source. This is an important distinction and you guys are hardware independent. What's the... Explain what all this means. >> Yeah. Great set of questions. First of all, yeah. So, you know, OpenAI, of course, they create ChatGPT and they offer an API to run these models that does amazing things. But customers have to be able to go and send their data over to OpenAI, right? So, and run the model there and get the outputs. Now, there's open source models that can do amazing things as well, right? So typically open source models don't lag behind, you know, these proprietary closed models by more than say, you know, six months or so, let's say. And it means that enabling customers to take the models that they want and deploy them under their control is something that's very valuable, because one, you don't have to expose your data externally. Two, you can customize the model even more to the things that you wanted to do. And then three, you can run on an infrastructure that can be much more cost effective than having to, you know, pay somebody else's, you know, cost and markup, right? So where we help them is essentially enabling customers to take machine learning models, say an open source model, and automate the process of putting them into production, optimize them to run with the right performance, and more importantly, give them the independence to run where they need to run, where they can run best, right?
Like, you know, customers, users would have an idea what they want to do. You know, from that you can actually determine what kind of machine learning models would solve the problem that, you know, fits that use case. But then, that's when the hard thing begins, right? So when you find a model, identify the model that can do the thing that you wanted to do, you need to turn that into a thing that you can deploy. So how do you go from a machine learning model that does the thing that you need to do, to a container with the right executor, the artifact you can actually go and deploy, right? So we've seen customers doing that on their own, right? And it's quite a bit of work, and that's why we are excited about the automation that we can offer and then turn that into a turnkey problem, right? So a turnkey process. >> Luis, talk about the use cases. If you don't mind, going to double down on the previous answer. You got existing services, and then there's new AI applications, AI for applications. What are the use cases with existing stuff, and the new applications that are being built? >> Yeah, I mean, existing use cases are, for example, how do you do very smart search and auto completion, you know, when you are editing documents, for example. Very, very smart search of documents, summarization of text, expanding bullets into prose in a way that, you know, you don't have to spend as much human time. Just some of the existing applications, right? So some of the new ones are like truly AI native ways of producing content. Like there's a company that, you know, we share investors with and love what they're doing, called RunwayML, for example. It's sort of like an AI first way of editing and creating visual content, right? So you could say you have a video, you could say make this video look like it's night as opposed to day, or remove that dog in the corner. You can do that in a way that you couldn't do otherwise. So there's like definitely AI native use cases. And yet not only that, in life sciences, you know, there's quite a bit of advances on AI-based, you know, therapies and diagnostics processes that are designed using automated processes. And this is something that I feel like, we were just scratching the surface there. There's huge opportunities there, right? >> Talk about the inference and AI and production kind of angle here, because cost is a huge concern when you look at... And there's hardware and that flexibility there. So I can see how that could help, but is there a cost freight train that can get out of control here if you don't deploy properly? Talk about the scale problem around cost in AI. >> Yeah, absolutely. So, you know, very quickly. One thing that people tend to think about with cost is training. You know, training has really high dollar amounts, so it tends to over-index on that. But what you have to think about is that for every model that's actually useful, you're going to train it once, and then run it a large number of times in inference. That means that over the lifetime of a model, the vast majority of the compute cycles and the cost are going to go to inference. And that's what we address, right? So, and to give you some idea, if you're talking about using a large language model today, you know, you can say it's going to cost a couple of cents per, you know, 2,000 words of output. If you have a million users active a day, you know, if you're lucky and you have that, this cost can actually balloon very quickly to millions of dollars a month, just in inferencing costs.
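A rough back-of-the-envelope version of the inference economics Luis is describing. The per-request price and user count are the figures he quotes; the requests-per-user rate is an added assumption, so the exact totals are illustrative.

```python
# Back-of-the-envelope inference cost, using the figures quoted above.
cost_per_request = 0.02        # "a couple of cents" per ~2,000 words of output
active_users_per_day = 1_000_000
requests_per_user_per_day = 5  # assumption for illustration

daily_cost = cost_per_request * active_users_per_day * requests_per_user_per_day
monthly_cost = daily_cost * 30

print(f"daily inference cost:   ${daily_cost:,.0f}")    # ~$100,000 per day
print(f"monthly inference cost: ${monthly_cost:,.0f}")  # ~$3,000,000 per month
# This is how a per-request cost measured in cents balloons into
# millions of dollars a month once usage scales.
```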
You know, assuming, you know, that you actually have access to the infrastructure to run it, right? So it means that if you don't pay attention to these inference costs, that's definitely going to be a surprise, and it affects the economics of the product this is embedded in, right? So this is something that, you know, there's quite a bit of attention being put right now on how you do search with large language models, and if you don't pay attention to the economics, you know, you can have a surprise. You have to change the business model there. >> Yeah. I think that's important to call out, because you don't want it to be a runaway cost structure where you architected it wrong and then next thing you know, you got to unwind that. I mean, it's more than technical debt, it's actually real debt, it's real money. So, talk about some of the dynamics with the customers. How are they architecting this? How do they get ahead of that problem? What do you guys do specifically to solve that? >> Yeah, I mean, well, we help customers. So, first of all, it's to be hyper aware, you know, understanding what's going to be the cost for them deploying the models into production and showing them the possibilities of how you can deploy the model with different cost structures, right? So that's where, you know, the ability to have hardware independence is so important, because once you have hardware independence, after you optimize models, obviously, you have a new, you know, dimension of freedom to choose, you know, what is the right throughput per dollar for you. And then where, and what are the options? And once you make that decision, you want to automate the process of putting it into production. So the way we help customers is showing very clearly in their use case, you know, how they can deploy their models in a much more cost-effective way. You know, one of the cases... There's a case study that we put out recently, showing a 4x reduction in deployment costs, right? So this is by doing a mix of optimization and choosing the right hardware. >> How do you address the concern that someone might say, Luis, "Hey, you know, I don't want to degrade performance and latency, and I don't want the user experience to suffer." What's the answer there? >> Two things. So first of all, all of the manipulations that we do in the model are to turn the model into efficient code without changing the behavior of the model. We wouldn't degrade the experience of the user by having the model be wrong more often. And we don't change that at all. The model behaves the way it was validated for. And then the second thing is, you know, user experience with respect to latency, it's all about a maximum... Like, you could say, I want a model to run at 50 milliseconds or less. If it's much faster than that, you're not going to notice the difference. But if it's slower, you're going to notice a difference. So the key here is, how do you find a set of options to deploy, such that you are not overshooting performance in a way that's going to lead to costs that have no additional benefits. And this provides a huge, a very significant margin of choices, set of choices that you can optimize for cost without degrading customer experience, right. End user experience. >> Yeah, and I also point out the large language models like the ChatGPTs of the world that are coming out. Dave Vellante and I were talking on this Breaking Analysis about this being, like, over 10X more computationally intensive on capabilities.
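Luis's "throughput per dollar" framing lends itself to a simple comparison. The hardware options, hourly prices, and throughputs below are invented for illustration (not benchmarks); the point is the arithmetic of cost per million inferences, which is what ultimately drives the kind of 4x difference he mentions.

```python
# Compare deployment options by cost per million inferences.
# All prices and throughputs are made-up example numbers, not measurements.
options = {
    "gpu_instance":        {"usd_per_hour": 4.00, "inferences_per_sec": 400},
    "cpu_instance":        {"usd_per_hour": 0.80, "inferences_per_sec": 60},
    "cpu_optimized_model": {"usd_per_hour": 0.80, "inferences_per_sec": 250},
}

for name, opt in options.items():
    per_hour = opt["inferences_per_sec"] * 3600
    cost_per_million = opt["usd_per_hour"] / per_hour * 1_000_000
    print(f"{name:22s} ${cost_per_million:5.2f} per million inferences")
# With these toy numbers the optimized model on commodity CPUs comes out
# roughly 3-4x cheaper per inference than the other two options.
```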
So this hardware independence is a huge thing. So, and also supply chain, some people can't get servers by the way, so, or hardware these days. >> Or even more interestingly, right? So they do not grow on trees, John. Like GPUs are not the kind of stuff where you plant an orchard until you have a bunch and then you can increase it. No, these things, you know, take a while. So, and you can't increase it overnight. So being able to live with those cycles that are available to you is not just important for cost, but also important for people to scale and serve more users at, you know, at whatever pace that they come, right? >> You know, it's really great to talk to you, and congratulations on OctoML. Looking forward to the startup showcase, we'll be featuring you guys there. But I want to get your personal opinion as someone in the industry and also, someone who's been in the computer science area for your career. You know, computer science has always been great, and there's more people enrolling in computer science, more diversity than ever before, but there's also more computer science related fields. How is this opening up computer science and where's AI going with the computers, with the science? Can you share your vision on, you know, the aperture, or the landscape of CompSci, or CS students, and opportunities? >> Yeah, no, absolutely. I think it's fair to say that computing has been embedded in pretty much every aspect of human life these days, right? So for everything. And AI has been a counterpart, it's been an integral component of computer science for a while. And the advances that happened in the last 10, 15 years in AI have shown, you know, new applications and have, I think, re-energized how people see what computers can do. And you, you know, there is this picture in our department that shows computer science at the center, called the flower picture, and then all the different petals like life sciences, social sciences, and then, you know, mechanical engineering, all these other things. And I feel like you could put AI at that center as well; you see AI, you know, touching all these applications. AI in healthcare, diagnostics. AI in discovery in the sciences, right? So, but then also AI doing things that, you know, the humans wouldn't have to do anymore. They can do better things with their brains, right? So it's permeating every single aspect of human life from intellectual endeavor to day-to-day work, right? >> Yeah. And I think ChatGPT and OpenAI have really kind of created a mainstream view that everyone sees value in it. Like you could be in the data center, you could be in bio, you could be in healthcare. I mean, every industry sees value. So this brings up what I can call the horizontally scalable use case. And so this opens up the conversation, what's going to change from this? Because if you go horizontally scalable, which is a cloud concept as you know, that's going to create a lot of opportunities and some shifting of how you think about architecture around data, for instance. What's your opinion on what this will do to change the inflection of the role of architecting platforms and the role of data specifically? >> Yeah, so good question. There is a lot in there, by the way. I should have added to the previous question that you can use AI to do better AI as well, which is what we do, and other folks are doing as well.
And so the point I wanted to make here is that it's pretty clear that you have a cloud-focused component with an edge-focused counterpart. Like you have AI models, but both in the Cloud and in the Edge, right? So the ability of being able to run your AI model where it runs best also has a data advantage to it, say, from a privacy point of view. You could inherently say, "Hey, I want to run something, you know, locally, strictly locally, such that I don't expose the data to an infrastructure." And you know that the data never leaves you, right? Never leaves the device. Now you can imagine things that are already starting to happen, like you do some forms of training and model customization in the model architecture itself and the system architecture, such that you do this as close to the user as possible. And there's something called federated learning that has been around for some time and now is finally happening: how do you get data from a bunch of places, you do, you know, some common learning, and then you send a model to the Edges, and they get refined for the final use in a way that you get the advantage of aggregating data but you don't get the disadvantage of privacy issues and so on. >> It's super exciting. >> And some of the considerations, yeah. >> It's a super exciting area around data infrastructure, data science, computer science. Luis, congratulations on your success at OctoML. You're in the middle of it. And the best thing about it is businesses are looking at this and really reinventing themselves, and if a business isn't thinking about restructuring their business around AI, they probably will be out of business. So this is a great time to be in the field. So thank you for sharing your insights here in theCUBE. >> Great. Thank you very much, John. Always a pleasure talking to you. Always have a lot of fun. And we both speak really fast, I can tell, you know, so. (both laughing) >> I know. We'll have the transcript available, we'll integrate it into our CubeGPT model that we have, Luis. >> That's right. >> Great. >> Great. >> Great to talk to you, thank you, John. Thanks, man, bye. >> Hey, this is theCUBE. I'm John Furrier, here in Palo Alto, Cube Conversation. Thanks for watching. (gentle music)
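Luis's description of federated learning (learn from data held in many places, share only model updates, then push a refined model back to the edge) can be sketched in a few lines. The following is a toy federated-averaging loop on synthetic data, purely illustrative and unrelated to OctoML's products.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # the relationship hidden in every device's local data

def local_update(global_w, n=200, lr=0.1, steps=20):
    # Raw data stays on the "device"; only the updated weights are returned.
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    w = global_w.copy()
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / n   # gradient of mean squared error
        w -= lr * grad
    return w

global_w = np.zeros(2)
for round_num in range(3):
    client_weights = [local_update(global_w) for _ in range(5)]  # 5 devices per round
    global_w = np.mean(client_weights, axis=0)                   # server averages the updates
    print(round_num, np.round(global_w, 3))
# The averaged model converges toward true_w without any raw data leaving a device.
```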
Brian Stevens, Neural Magic | Cube Conversation
>> John: Hello and welcome to this cube conversation here in Palo Alto, California. I'm John Furrier, host of theCUBE. We got a great conversation on making machine learning easier and more affordable in an era where everybody wants more machine learning and AI. We're featuring Neural Magic, whose CEO is also a Cube alumni, Brian Stevens. Great to see you, Brian. Thanks for coming on this cube conversation. Talk about machine learning. >> Brian: Hey John, happy to be here again. >> John: What a buzz that's going on right now. Machine learning, one of the hottest topics, AI front and center, kind of going mainstream. We're seeing the success of the, of the kind of NextGen capabilities in the enterprise and in apps. It's a really exciting time. So perfect timing. Great, great to have this conversation. Let's start with taking a minute to explain what you guys are doing over there at Neural Magic. I know there's some history there, neural networks, MIT. But the, the convergence of what's going on, this big wave hitting, it's an exciting time for you guys. Take a minute to explain the company and your mission. >> Brian: Sure, sure, sure. So, as you said, the company's Neural Magic and spun out of MIT four plus years ago, along with some people and, and some intellectual property. And you summarized it better than I can 'cause you said, we're just trying to make, you know, AI that much easier. And so, but like another level of specificity around it is this. You know, in the world you have a lot of like data scientists really focusing on making AI work for whatever their use case is. And then the next phase of that, then they're looking at optimizing the models that they built. And then it's not good enough just to work on models. You got to put 'em into production. So, what we do is we make it easier to optimize the models that have been developed and trained and then trying to make it super simple when it comes time to deploying those in production and managing them. >> John: You know, we've seen this movie before with the cloud. You start to see abstractions come out. Data science, we saw, was like the, the secret art of being like a data scientist; now there's democratization of data. You're kind of seeing a similar wave with machine learning models, foundational models some call it, developers are getting involved. Model complexity's still there, but, but it's getting easier. There's almost like the democratization happening. You got complexity, you got deployment, it's challenges, cost, you got developers involved. So it's like how do you grow it? How do you get more horsepower? And then how do you make developers productive, right? So like, this seems to be the thread. So, so where, where do you see this going? Because there's going to be a massive demand for, I want to do more with my machine learning. But what's the data source? What's the formatting? This kind of a stack is developing, what, what are you guys doing to address this? Can you take us through and demystify this, this wave that's hitting, that everyone's seeing? >> Brian: Yeah. Now like you said, like, you know, the democratization of all of it. And that brings me all the way back to like the roots of open source, right? When you think about like, like back in the day you had to build your own tech stack yourself. A lot of people probably don't remember that. And then you went, you're building, you're always starting on a body of code or a module that was out there with open source.
And I think that's what I equate to where AI has gotten to, with what you were talking about, the foundational models that didn't really exist years ago. So you really were like putting the layers of your models together and the formulas, and it was a lot of heavy lifting. And so there was so much time spent on development, with far too few success cases, you know, to get into production to solve like a business or a technical need. But what's happening is, as these models are becoming foundational, it means people don't have to start from scratch. They're actually able to, you know, the avant-garde now is to start with an existing model that almost does what you want, and then apply your data set to it. So it's, you know, it's really the industry moving forward. And then, you know, the best thing about it is open source plays a new dimension, but this time, you know, in the, in the realm of AI. And so to us though, like, you know, I've been like, I spent a career focusing on, I think, not just the technical side, but the consumption of the technology and how it's still way too hard for somebody to actually like, operationalize technology that all those vendors throw at them. So I've always been like empathetic to the user around, like, you know, what their job is once you give them great technology. And so it's still too difficult even with the foundational models, because what happens is there's really this impedance mismatch between the development of the model and then where, where the model has to live and run and be deployed, and the life cycle of the model, if you will. And so what we've done in our research is we've developed techniques to introduce what's known as sparsity into a machine learning model that's already been developed and trained. And what that sparsity does is that it unlocks things by making that model so much smaller. So in many cases we can make a model 90 to 95% smaller, even smaller than that in research. And, and so by doing that, we do that in a way that preserves all the accuracy of the foundational model, as you talked about. So now all of a sudden you get this much smaller model that's just as accurate. And then the even more exciting part about it is we developed a software-based engine called DeepSparse. And what that, what the inference runtime does is it takes that now sparsified model and it runs it, but because you sparsified it, it only needs a fraction of the compute that it, that it would've needed otherwise. So what we've done is make these models much faster, much smaller, and then by pairing that with an inference runtime, you now can actually deploy that model anywhere you want on commodity hardware, right? So x86 in the cloud, x86 in the data center, Arm at the edge. It's like this massive unlock that happens because you get the, the state-of-the-art models, but you get 'em, you know, on the IT assets and the commodity infrastructure that is where all the applications are running today. >> John: I want to get into the inference piece and the DeepSparse you mentioned, but I first have to ask, you mentioned open source. Dave and I with some fellow Cube alumni were having a chat about, you know, the iPhone and Android moment where you got proprietary versus open source. You got a similar thing happening with some of these machine learning models where there's a lot of proprietary things happening and the open source movement is growing. So is there a balance there? Are they all trying to do the same thing?
Is it more like a chip, you know, silicon's involved, all kinds of things going on that are really fascinating from a science standpoint. What's your, what's your reaction to that? >> Brian: I think it's like anything that, you know, the way we talk about AI you'd think it had been around for decades, but the reality is it's been some of the deep learning models. When we first, when we first started taking models that the Brain team was working on at Google and building APIs around them on Google Cloud, where the first cloud to even have AI services was 2015, 2016. So when you think about it, it's really been what, 6 years since like this thing is even getting liftoff. So I think with that, everybody's throwing everything at it. You know, there's tons of funded hardware thrown at specialty training or inference, new companies. There's legacy companies that are getting into like AI now, whether it's a, you know, a CPU company that's now building specialized ASICs for training. There's new tech stacks, proprietary software, and there's a ton of as-a-service offerings. So it really is, you know, what's gone from nascent 8 years ago to the wild, wild west out there. So there's a, there's a little bit of everything right now and I think that makes sense because at the early part of any industry it really becomes really specialized. And that's the, you know, showing my age of like, you know, the early part of the two thousands, you know, Red Hat. People weren't running x86 in the enterprise back then and they thought it was a toy, and they certainly weren't running open source. But you really, and it made sense that they weren't, because it didn't deliver what they needed to at that time. So they needed specialty stacks, they needed expensive, they needed expensive hardware that did what an Oracle database needed to do. They needed proprietary software. But what happens is that commoditizes through both hardware and through open source, and the same thing's really just starting with AI. >> John: Yeah. And I think that's a great point to call that out, because in any industry timing's everything, right? I mean I remember back in the 80s, late 80s and 90s, AI, you know, stuff was going on and it just wasn't, there wasn't enough horsepower, there wasn't enough tech. >> Brian: Yep. >> John: You mentioned some of the processing. So AI is this industry that has all these experts who have been scratching that itch for decades. And now with cloud and custom silicon, the tech fundamentals at the lower end of the stack, if you will, on the performance side are significantly more performant. It's there, you got more capabilities. >> Brian: Yeah. >> John: Now you're kicking into more software, faster software. So it just seems like we're at a tipping point where finally it's here, like that AI moment or machine learning, and now data is, is involved. So this is where organizations I see really jumping in with the CEO mandate. Hey team, make ML work for us. Go figure it out. It's got to be an advantage for us. >> Brian: Yeah. >> John: So now they go, okay boss, we will. So what, what do they do? What steps does an enterprise take to get machine learning into their organizations? 'Cause you know, it's coming down from the boards, you know, how does this work for rob? >> Brian: Yeah. Like the, you know, the, what we're seeing is it's like anything, like it's, whether that was open source adoption or whether that was cloud adoption, it always starts usually with one person.
And increasingly it is the CEO, who realizes they're getting further behind the competition because they're not leaning in, you know, faster. But typically it really comes down to a really strong practitioner that's inside the organization, right? One that realizes the number one goal isn't doing more and just training more models and necessarily being proprietary about it. It's really around understanding the art of the possible, something that's grounded in what deep learning can do today and what business outcomes you can deliver if you employ it. And then there are well-proven paths through that. It's just that, because of where it's been, it's not that industrialized today. It's very much, you know, ML project by ML project, very snowflakey, right? And that was kind of the early days of open source as well. And so we're just starting to get to the point where it's getting easier, it's getting more industrialized, there are fewer steps, there's less burden on developers, there's less burden on the deployment side. And we're trying to bring that whole last mile by saying, you know what? Deploying deep learning and AI models should be as easy as it is to deploy your application, right? You shouldn't have to take an extra step to deploy an AI model. It shouldn't require new hardware, it shouldn't require a new process, a new DevOps model. It should be as simple as what you're already doing. >> John: What is the best practice for companies to effectively bring an acceptable level of machine learning and performance into their organizations? >> Brian: Yeah, I think the number one start, like what you hinted at before, is they have to know the use case. And in most cases, you're going to find across every industry, you know, that that problem's been tackled by some company, right? And the best practices around fine-tuning the models already exist. So fine tuning that existing model, that foundational model, on your unique dataset. You know, if you are in medical instruments, it's not good enough to identify that it's a medical instrument in the picture. You've got to know what type of medical instrument. So there's always a fine tuning step. And so we've created open source tools that make it easy for you to do two things at once. You can fine tune that existing foundational model, whether that's in the language space or whether that's in the vision space, you can fine tune that on your dataset. And at the same time you get an optimized model that comes out the other end. So you get kind of both things. We're freeing you from worrying about the complexity of that transfer learning, if you will. And we're freeing you from worrying about, well, where am I going to deploy the model? Where does it need to be? Does it need to be on a device, an edge, a data center, a cloud edge? What kind of hardware is it? Is there enough hardware there? We're liberating you from all of that. Because what you can count on is there'll always be commodity capability, commodity CPUs, where you want to deploy, in abundance, cause that's where your application is. And so all of a sudden we're just freeing you of that whole step.
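For a sense of what that fine-tuning step typically looks like in practice, here is a generic transfer learning sketch using PyTorch and torchvision. It is not Neural Magic's own tooling (their open source flow layers sparsification on top, which is not shown here); the data directory and the four-class medical-instrument setup are placeholder assumptions.

```python
# A generic transfer learning sketch: start from a pretrained ResNet-50 and
# fine tune only a new classification head on your own labeled images.
# The "data/train" path and the 4 classes are placeholders for illustration.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)  # pretrained backbone
for p in model.parameters():
    p.requires_grad = False                        # freeze the existing layers
model.fc = nn.Linear(model.fc.in_features, 4)      # e.g. 4 types of medical instrument
model = model.to(device)

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_ds = datasets.ImageFolder("data/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):                             # a short fine-tuning run
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```

The optimized, sparsified variant described in the conversation would come out of a similar loop, just with a pruning recipe applied during or after training.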
>> John: Okay. Let's get into DeepSparse, because you mentioned that earlier. What inspired the creation of DeepSparse, and how does it differ from other solutions in the market that are out there? >> Brian: Sure. So where is it unique? It starts with two things. One is, what the industry's pretty good at on the optimization side is this thing called quantization, which turns, you know, big numbers into small numbers, lower precision. So a 32 bit representation of an AI weight into an 8 bit one. And they're good at cutting out layers, which also takes away accuracy. What we've figured out is to take the industry techniques for those that are best practice, but combine them with unstructured sparsity. So by reducing that model by 90 to 95% in size, that's great, because it's made it smaller. But we've taken it further: the DeepSparse engine, when you deploy it, looks at that model and says, because it's so much smaller, I no longer have to run the part of the model that's been essentially sparsified away. So what that's done is it's meant that you no longer need a supercomputer to run models, because there's not nearly as much math and processing as there was before the model was optimized. So now what happens is every CPU platform out there has an enormous amount of compute, because we've sparsified the rest of it away. So you can pick your laptop and you have enough compute to run state-of-the-art models. And you need a software engine to do that, cause it ignores the parts of the model it doesn't need to run, which is what specialized hardware can't do. The second part is it's then turned into a memory efficiency problem. So it's really around just getting the models loaded into the cache of the computer and keeping them there, never having to go back out to memory. So our techniques are both: we reduce the model size, then we only run the part of the model that matters, and then we keep it all in cache. And what that does is it gets us to these low, low latencies, faster, and we're able to increase, you know, the CPU processing by an order of magnitude.
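To picture the "32 bit into 8 bit" part, here is a deliberately naive sketch of symmetric per-tensor quantization. It is purely illustrative; production engines typically use per-channel scales, calibration data, or quantization-aware training rather than this one-line rounding.

```python
# Illustrative only: quantize float32 weights to int8 and measure what it costs.
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal((1024, 1024)).astype(np.float32)   # pretend these are weights

scale = np.abs(w).max() / 127.0                  # map the largest weight to +/-127
w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_back = w_int8.astype(np.float32) * scale       # dequantize for comparison

print(f"float32 size: {w.nbytes / 1e6:.1f} MB")
print(f"int8 size:    {w_int8.nbytes / 1e6:.1f} MB (4x smaller)")
print(f"mean abs error after round trip: {np.abs(w - w_back).mean():.5f}")
```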
>> John: Yeah. That low latency is key. And you've got developers, you know, coding super fast. We'll get to the developer angle in a second. I want to just follow up on this motivation behind DeepSparse, because, you know, as we were talking earlier before we came on camera about the old days, I mean, not too long ago, virtualization and VMware abstracted away the OS from the hardware, right? And server virtualization changed the game. >> Brian: Yeah. >> John: And that basically invented cloud computing as we know it today. So we see that abstraction. >> Brian: Yeah. >> John: There seems to be a motivation behind abstracting the machine learning models away from the hardware. And that seems to be bringing advantages to the AI growth. Can you elaborate? Is that true? What's your comment? >> Brian: It's true. I think it's true for us. I don't think the industry's there yet, honestly. Cause I think the industry is still of that mindset that if it took these expensive GPUs to train my model, then I want to run my model on those same expensive GPUs. Because there's often not a separation between the people that are developing AI and the people that have to manage and deploy it where you need it. So the reality is that that's everything we're after. Like, do we decrease the cost? Yes. Do we make the models smaller? Yes. Do we make them faster? Yes. But I think the most amazing power is that we've turned AI into a Docker-based microservice. And so, who in the industry wants to deploy their apps the old way, on an OS without virtualization, without Docker, without Kubernetes, without microservices, without a service mesh, without serverless? You want all those tools for your apps. By converting AI models so they can be run inside a Docker container, with no apologies around latency and performance cause it's faster, you get the best of that whole world that you just talked about, which is what we're calling software delivered AI. So now the AI lives in the same world. Organizations that have gone through that digital cloud transformation with their app infrastructure, AI fits into that world. >> John: And this is where the abstraction concepts matter. When you have these inflection points, the convergence of compute, data, and machine learning that powers AI, it really becomes a developer opportunity. Because now applications and businesses, when they actually go through the digital transformation, their businesses are completely transformed. There is no IT. Developers are the application. They are the company, right? So AI will be part of whatever business or app is out there. So there is an application developer angle here. Brian, can you explain >> Brian: Oh completely. >> John: how they're going to use this? Because you mentioned Docker containers, microservices, I mean this really is an insane flipping of the script for developers. >> Brian: Yeah. >> John: So what's that look like? >> Brian: Well, it's because AI's kind of, I mean, again, it's come so fast. So you figure there's my app team and here's my AI team, right? And they're in different places, and the AI team is dragging in specialized infrastructure in support of that as well. And that's not how app developers think. They've run on fungible infrastructure that's abstracted and virtualized forever, right? And so what we've done is, in addition to fitting into that world that they like, we've also made it simple for them, so they don't have to be a machine learning engineer to be able to experiment with these foundational models and transfer learn 'em. We've done that. So they can do that in a couple of commands, and it has a simple API that they can either link into their application directly as a library to make inference calls, or they can stand it up as a standalone, you know, scale up, scale out inference server. They get two choices. But it really fits into that world of the modern developer, whether they're just using Python or C or otherwise. We made it just simple. So as opposed to go learn something else, they kind of don't have to. In a way, though, it's almost made it hard, because people expect, when we talk to 'em for the first time, the old way. Like, how do you look like a piece of hardware? Are you compatible with my existing hardware that runs ML? Like, no, we're not. Because you don't need that stack anymore. All you need is a library call to make your prediction, and that's it. That's it. >> John: Well, I mean, we were joking on Twitter the other day with someone saying, is AI a pet or cattle? Right? Because they love their AI bots right now. So I'd say pet there. But you look at a lot of, there's going to be a lot of AI.
So on a more serious note, you mentioned microservices. Will DeepSparse have an API for developers? And what does that look like? What do I do? >> Brian: Yeah. >> John: Tell me, as a developer, what does the roadmap look like? What's the >> Brian: Yeah, it really can go in both modes. It can go in a standalone server mode where it handles, you know, a REST API, and it can scale out with Kubernetes as the workload comes up and scale back. And, like, try to make hardware do that. Hardware may scale back, but it's just sitting there dormant, you know. So with this, it scales the same way your application needs to. And then for a developer, they basically just pip install deepsparse, you know, one command to do the install, and then they do two calls, really. The first call is a library call that the app makes to create the model. And the model's really already trained, but it's called a model create call. And the second call they do is a call to do a prediction. And it's as simple as that. So AI's as simple as using any other library that the developers are already using, which sounds hard to fathom, because it is just so simplified. >> John: Software delivered AI. Okay, that's a cool thing. I believe in it personally. I think that's the way to go. I think there's going to be plenty of hardware options if you look at the advances of cloud players that have got more silicon coming out. Yeah. More GPUs. I mean, there are more instances, I mean, everything's out there right now. So the question is how does that evolve in your mind? Because that seems to be key. You have open source projects emerging. What path does this take? Is there a parallel mental model that you see, Brian, that is similar? You mentioned open source earlier. Is it more like a VMware virtualization thing or is it more of a cloud thing? Is it going to evolve in a trajectory that looks similar to what we might've seen in the past? >> Brian: Yeah. You know, when I got involved with the company, I thought about it and was reasoning about it, like we all do when we want to join something full-time. I thought about it and said, where will the industry eventually get to, right, to fully realize the value of deep learning and what's plausible as it evolves? And to me, I know it's the old adage of, you know, software eats hardware, cloud eats software. But it truly was, you know, we can solve these problems in software. There's nothing special that's happening at the hardware layer in the processing of AI. The reality is that it's just early in the industry. So the view that we had was, this is eventually the best place the industry will be: the liberation of being able to run AI anywhere. Like, you're really not democratizing if you only democratize the model. If you can't run the model anywhere you want, because these models are getting bigger and bigger with these large language models, then you're kind of not democratizing, if you've got to go and, like, buy a cluster to run this thing on. So the democratization comes if all of a sudden that model can be consumed anywhere, on demand, without planning, without provisioning, wherever your infrastructure is. And so I think that, with or without Neural Magic, that's where the industry will go and will get to. I think we're the leaders in getting it there, because we're more advanced on these techniques.
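For reference, the install-then-two-calls flow just described looks roughly like the sketch below with the open source deepsparse package. The task name and the model stub are assumptions for illustration, and the exact API surface can differ between releases, so treat this as a sketch rather than canonical usage.

```python
# A minimal sketch of the two-call developer flow described above, assuming the
# deepsparse Python package (pip install deepsparse). The task name and the
# model stub are illustrative assumptions; check the current docs for specifics.
from deepsparse import Pipeline

# Call 1: the "model create" call - compile a sparsified model for the local CPU.
clf = Pipeline.create(
    task="sentiment_analysis",
    model_path="zoo:some/sparsified/text-classification/stub",  # placeholder stub
)

# Call 2: the prediction call - plain Python in, plain Python out, no GPU needed.
print(clf(["Inference on commodity CPUs was faster than I expected."]))
```

The standalone server mode mentioned above is the other option: the same model served over HTTP, so the application never imports the library at all.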
>> John: Yeah. And your background too. You've seen OpenStack, pre-cloud, you saw open source grow and it's still exponentially growing. And so you have a similar dynamic with machine learning models growing. And they're also segmenting into almost an ML stack, or foundational models as we talk about. So you're starting to see the formation of tooling, inference. So a lot of components coming. It's almost a stack; it literally is like an operating system problem space, you know? How do you run things, how do you link things, how do you bring things together? Is that what's going on here? Is this like a data modeling operating environment, kind of a Red Hat type thing going on? >> Brian: Yeah. Yeah. I thought about that too. And I think there is the role of, like, distribution, because the industrialization of this isn't happening fast enough. Like, every customer, every user does it in their own kind of way. Everyone's a little bit of a snowflake, and I think that's okay. There are definitely plenty of companies that want to come in and say, well, this is the way it's going to be, and we'll industrialize it as long as you do it our way. The reality is technology doesn't get industrialized by one company just saying, do it our way. And so that's why we've taken the approach through open source, by saying, hey, you haven't really industrialized it if you say, we made it simple, but you always have to run AI here. Right? You only really industrialize it if you break it down into components that are simple to use, and they work integrated in the stack the way you want them to. And so to me, that first principle was getting things into microservices and Docker containers that could be run on VMware, on OpenShift, on the cloud, at the edge. And so that's the real part that we're making happen. The other part, and I do agree, I think it's going to quickly move into less about the model, less about the training of the model and the transfer learning, you know, the dataset of the model. We're taking away the complexity of optimization, liberating deployment to be anywhere. And I think the last mile, John, is going to be around the ML ops around that. Because it's easy to think, now that we've turned it into a software problem, of software as kind of a point release, but that's not the reality, right? It's a life cycle. And so I think ML very much brings in the question of what is the lifecycle of that deployment. And, you know, you get into more interesting conversations, to be honest, once you've deployed in a Docker container, around model drift and accuracy, and the dataset changes and the users change, and how do you, from an ML perspective, send that signal back for retraining. And that's where I think a lot more of the innovation is going to start to move. >> John: Yeah. And software also, the software problem, the software opportunity as well, is developer focused. And if you look at the cloud native landscape now, similar stacks developing, a lot of components, a lot of things to stitch together, a lot of things that are automating under the hood, a lot of developer productivity conversations. I think this is going to go down that same road.
I want to get your thoughts, because developers will set the pace. And this is something that's clear in this next wave: developer productivity. They're the de facto standards bodies. They will decide what: microservices, check; API, check. Now, the skill gap is going to be a problem because it's relatively new. So model sprawl, model sizes, proprietary versus open. There has to be a way to kind of crunch that down into, like, a DevOps thing, just make it work, get the developer out of the muck. So what's your view? Are we early days like that? Or what's the young kid in college studying CS or whatever degree who comes into this with both feet, what are they doing? >> Brian: I'll probably give the non-popular answer to that, a little bit, which is it's happening so fast that it's going to get kind of boring fast. Meaning, yeah, you could go to school and go to MIT, right? Sorry. And you could go all the way through to becoming a model architect, inventing the next model, right, and the layers, and combining 'em, et cetera, et cetera, and the operators, and building a model that's bigger than the last one and trains faster, right? And there will be those people, right, that are actually building the engines. The same way, you know, I grew up as an infrastructure software developer. There are not a lot of companies that hire those anymore, because they're all sitting inside of three big clouds. Yeah. Right? So you'd better be a good app developer. But I think what you're going to see is, before, you had to be everything. If you were going to use infrastructure, you had to know how to build infrastructure. And I think the same thing is quickly exiting ML: the idea that to be able to use ML in your company, you'd better be great at every aspect of ML, including every intricacy inside of the model and every operation it's doing. That's quickly changing. Like, you're going to start with a starting point. You know, in the future you're not going to be cracking open these GPT models, you're going to just be pulling them off the shelf, fine tuning 'em, and go. You don't have to invent it. You don't have to understand it. And I think that's going to be a pivot point, you know, in the industry, between, you know, what's the future? What does the future of a data scientist, ML engineer, researcher look like? >> John: I think that's, the outcome's going to be determined. I mean, you mentioned, you know, doing it yourself, what an SRE is for Google, where the server scale is huge. So yeah, it might have to get boring at the beginning, you get obsolete quickly, but that means it's progressing. The scale becomes huge. And that's where I think it's going to be interesting, when we see that scale. >> Brian: Yep. Yeah, I think that's right. I think that's right. And what I've always said, again, to my ML team, is that I want every developer, even as a non-ML engineer, to be as adept at taking advantage of ML, right? It's got to be that simple. And I think it's getting there. I really do. >> John: Well, Brian, great to have you on theCUBE here on this CUBE conversation, as part of the startup showcase that's coming up. You're going to be featured, or your company will be featured, on the upcoming AWS startup showcase on making machine learning easier and more affordable as more machine learning models come in. You guys have got DeepSparse and some great technology.
We're going to dig into that next time. I'll give you the final word right now. What do you see for the company? What are you guys looking for? Give a plug for the company right now. >> Brian: Oh, give a plug that I haven't already doubled in as the plug. >> John: You're hiring engineers, I assume from MIT and other places. >> Brian: Yep. I think like the, the biggest thing is like, like we're on the developer side. We're here to make this easy. The majority of inference today is, is on CPUs already, believe it or not, as much as kind of, we like to talk about hardware and specialized hardware. The majority is already on CPUs. We're basically bringing 95% cost savings to CPUs through this acceleration. So, but we're trying to do it in a way that makes it community first. So I think the, the shout out would be come find the Neural Magic community and engage with us and you'll find, you know, a thousand other like-minded people in Slack that are willing to help you as well as our engineers. And, and let's, let's go take on some successful AI deployments. >> John: Exciting times. This is, I think one of the pivotal moments, NextGen data, machine learning, and now starting to see AI not be that chat bot, just, you know, customer support or some basic natural language processing thing. You're starting to see real innovation. Brian Stevens, CEO of Neural Magic, bringing the magic here. Thanks for the time. Great conversation. >> Brian: Thanks John. >> John: Thanks for joining me. >> Brian: Cheers. Thank you. >> John: Okay. I'm John Furrier, host of theCUBE here in Palo Alto, California for this cube conversation with Brian Stevens. Thanks for watching.
CUBE Insights Day 1 | CloudNativeSecurityCon 23
(upbeat music) >> Hey, everyone. Welcome back to theCUBE's day one coverage of Cloud Native SecurityCon 2023. This has been a great conversation that we've been able to be a part of today. Lisa Martin with John Furrier and Dave Vellante. Dave and John, I want to get your take on the conversations that we had today, starting with the keynote that we were able to see. What are your thoughts? We talked a lot about technology. We also talked a lot about people and culture. John, starting with you, what's the story here with this inaugural event? >> Well, first of all, there's two major threads. One is the breakout of a new event from CloudNativeCon/KubeCon, which is a very successful community and events that they do international and in North America. And that's not stopping. So that's going to be continuing to go great. This event is a breakout with an extreme focus on security and all things security around that ecosystem. And with extensions into the Linux Foundation. We heard Brian Behlendorf was on there from the Linux Foundation. So he was involved in Hyperledger. So not just Cloud Native, all things containers, Kubernetes, all things Linux Foundation as an open source. So, little bit more of a focus. So I like that piece of it. The other big thread on this story is what Dave and Yves were talking about on our panel we had earlier, which was the business model of security is real and that is absolutely happening. It's impacting business today. So you got this, let's build as fast as possible, let's retool, let's replatform, refactor and then the reality of the business imperative. To me, those are the two big high-order bits that are going on and that's the reality of this current situation. >> Dave, what are your top takeaways from today's day one inaugural coverage? >> Yeah, I would add a third leg of the stool to what John said and that's what we were talking about several times today about the security is a do-over. The Pat Gelsinger quote, from what was that, John, 2011, 2012? And that's right around the time that the cloud was hitting this steep part of the S-curve and do-over really has meant in looking back, leveraging cloud native tooling, and cloud native technologies, which are different than traditional security approaches because it has to take into account the unique characteristics of the cloud whether that's dynamic resource allocation, unlimited resources, microservices, containers. And while that has helped solve some problems it also brings new challenges. All these cloud native tools, securing this decentralized infrastructure that people are dealing with and really trying to relearn the security culture. And that's kind of where we are today. >> I think the other thing too that I had Dave is that was we get other guests on with a diverse opinion around foundational models with AI and machine learning. You're going to see a lot more things come in to accelerate the scale and automation piece of it. It is one thing that CloudNativeCon and KubeCon has shown us what the growth of cloud computing is is that containers Kubernetes and these new services are powering scale. And scale you're going to need to have automation and machine learning and AI will be a big part of that. So you start to see the new formation of stacks emerging. So foundational stacks is the machine learning and data apps are coming out. It's going to start to see more apps coming. 
So I think there's going to be so many new applications and services are going to emerge, and if you don't get your act together on the infrastructure side those apps will not be fully baked. >> And obviously that's a huge risk. Sorry, Dave, go ahead. >> No, that's okay. So there has to be hardware somewhere. You can't get away with no hardware. But increasingly the security architecture like everything else is, is software-defined and makes it a lot more flexible. And to the extent that practitioners and organizations can consolidate this myriad of tools that they have, that means they're going to have less trouble learning new skills, they're going to be able to spend more time focused and become more proficient on the tooling that is being applied. And you're seeing the same thing on the vendor side. You're seeing some of these large vendors, Palo Alto, certainly CrowdStrike and fundamental to their strategy is to pick off more and more and more of these areas in security and begin to consolidate them. And right now, that's a big theme amongst organizations. We know from the survey data that consolidating redundant vendors is the number one cost saving priority today. Along with, at a distant second, optimizing cloud costs, but consolidating redundant vendors there's nowhere where that's more prominent than in security. >> Dave, talk a little bit about that, you mentioned the practitioners and obviously this event bottoms up focused on the practitioners. It seems like they're really in the driver's seat now. With this being the inaugural Cloud Native SecurityCon, first time it's been pulled out of an elevated out of KubeCon as a focus, do you think this is about time that the practitioners are in the driver's seat? >> Well, they're certainly, I mean, we hear about all the tech layoffs. You're not laying off your top security pros and if you are, they're getting picked up very quickly. So I think from that standpoint, anybody who has deep security expertise is in the driver's seat. The problem is that driver's seat is pretty hairy and you got to have the stomach for it. I mean, these are technical heroes, if you will, on the front lines, literally saving the world from criminals and nation-states. And so yes, I think Lisa they have been in the driver's seat for a while, but it it takes a unique person to drive at those speeds. >> I mean, the thing too is that the cloud native world that we are living in comes from cloud computing. And if you look at this, what is a practitioner? There's multiple stakeholders that are being impacted and are vulnerable in the security front at many levels. You have application developers, you got IT market, you got security, infrastructure, and network and whatever. So all that old to new is happening. So if you look at IT, that market is massive. That's still not transformed yet to cloud. So you have companies out there literally fully exposed to ransomware. IT teams that are having practices that are antiquated and outdated. So security patching, I mean the blocking and tackling of the old securities, it's hard to even support that old environment. So in this transition from IT to cloud is changing everything. And so practitioners are impacted from the devs and the ones that get there faster and adopt the ways to make their business better, whether you call it modern technology and architectures, will be alive and hopefully thriving. So that's the challenge. 
And I think this security focus hits at the heart of the reality of business, because like I said, they're under threat. >> I wanted to pick up, too, on Brian Behlendorf. He did a forward-looking take on what could become the next problem that we really haven't addressed. He talked about generative AI automating spearphishing, and he flat out said the (indistinct) is not fixed. And so identity access management, again, a lot of different tooling. There's Microsoft, there's Okta, there are dozens of companies with different identity platforms that practitioners have to deal with. And then what he called free riders. So these are folks that go into the repos, the open source repos, and they find vulnerabilities that developers aren't hopping on quickly. It's like, you remember Patch Tuesday. We still have Patch Tuesday. That meant Hacker Wednesday. It's kind of the same theme there, going into these repos and finding areas where the practitioners, the developers, aren't responding quickly enough. They just don't necessarily have the resources. And then regulations, public policy being out of alignment with what's really needed, saying, "Oh, you can't ship that fix outside of Germany." Or, I'm just making this up, but outside of this region because of a law. And you could, as a developer, be personally liable for it. So again, while these practitioners are in the driver's seat, it's a hairy place to be. >> Dave, we didn't get the word supercloud in much on this event, did we? >> Well, I'm glad you brought that up, because I think security is the single biggest challenge for supercloud, securing the supercloud with all the diversity of tooling across clouds. And I think you brought something up in the first supercloud event, John. You said, "Look, ultimately the cloud, the hyperscalers, have to lean in. They are going to be the enablers of supercloud. They already are from an infrastructure standpoint, but they can solve this problem by working together." And I think there needs to be more industry collaboration. >> And I think the point there is that with security the trend will be, in my opinion, you'll see security being reborn in the cloud, around zero trust architecture, and moving from an on-premise paradigm to fully cloud native. And you're seeing that on the network side, Dave, where people are going to each cloud and building stacks inside the clouds, hyperscaler clouds, that are completely compatible end-to-end with on-premises. Not trying to force the cloud to work with on-prem. They're completely refactoring as cloud native first. And again, that's developer first, that's data first, that's security first. So to me that's the tell sign. When you see that, that's good. >> And Lisa, I think the cultural conversation that you've brought into these discussions is super important, because I've said many times, bad user behavior is going to trump good security every time. So that idea that the entire organization is responsible for security, you hear that all the time. Well, what does that mean? It doesn't mean I have to be a security expert, it just means I have to be smart. How many people actually use a VPN? >> So I think one of the things that I'm seeing with the cultural change is face-to-face problem solving is one, having remote teams is another. The skillset need is big. And I think the culture of having these teams, Dave mentioned something about intramural sports, having the best people on the teams, putting the captain's jersey on the security folks, is going to happen.
I think you're going to see a lot more of that going on, because there are so many areas to work on. You're going to start to see security embedded in all processes. >> Well, it needs to be, and that level of shared responsibility is not trivial. That's across the organization. But it also begs the question of the people problem. People are one of the biggest challenges with respect to security. Everyone has to be on board with this. It has to be coming from the top down, but also the bottom up at the same time. It's challenging to coordinate. >> Well, the training thing I think is going to solve itself in good time. And I think in the fullness of time, if I had to predict, you're going to see managed services being a big driver on the front end, and then, as companies realize where their IP will be, you'll see those managed services either become a core competency of their business or something they still leverage. So I'm a big believer in managed services. You're seeing it with Kubernetes, for instance, a lot of managed services. You'll start to see more: get the ball going, get that rolling, then build. So Dave mentioned bottoms up, middle out, that's how transformation happens. So I think managed services will win from here, but ultimately the business model stuff is so critical. >> I'm glad you brought up managed services, and I want to add to that managed security service providers, because I saw a stat last year: 50% of organizations in the US don't even have a security operations team. So managed security service providers, MSSPs, are going to fill the gap, especially for small and midsize companies, and for those larger companies that just need to augment and complement their existing staff. And so those practitioners that we've been talking about, those really hardcore pros, they're going to go into these companies. Some large companies, the big four, all have them. Smaller companies like Arctic Wolf are going to, I think, really play a key role in this decade. >> I want to get your opinion, Dave, on what you're hoping to see from this event, as we've talked about the first inaugural standalone big focus here on security. Obviously, it's a huge challenge. What are you hoping for from this event to get a groundswell from the community? What are you hoping to hear and see as we wrap up day one and go into day two? >> I always say events like this are about educating and inspiring to action. And the practitioners that are at this event, I used to say they're the technical heroes. So we know there's going to be another Log4j or another SolarWinds. It's coming. And my hope is that when that happens, it's not an if, it's a when, that the industry, these practitioners, are able to respond in a way that's safe and fast and agile, and they're able to keep us protected, number one, and number two, that they can actually figure out what happened, and the long tail of still trying to clean it up is compressed. That's my hope, or maybe it's a dream. >> I think day two tomorrow you're going to hear more supply chain security. You're going to start to see them focus on sessions that target areas within the CNCF KubeCon + CloudNativeCon world that need support, around containers, clusters, around Kubernetes clusters. You're going to start to see them laser focus on cleaning up the house, if you will, if you can call it cleaning up, or fixing what needs to get fixed and solving what needs to get solved on the cloud native front. That's going to be urgent.
And again, supply chain software, as Dave mentioned, and free riders too, just using open source. So I think you'll see open source continue to grow, but there'll be an emphasis on verification and certification. And Docker has done a great job with that. You've seen what they've done with their business model, over hundreds of millions of dollars in revenue from a pivot they made a few years earlier, because they verify. So I think we're going to be in this verification, blue-check-mark era of code and software. Super important: the bill of materials. They call them SBOMs, software bills of materials. People want to know what's in their software, and that's going to be, again, another opportunity for machine learning and other things. So I'm optimistic that this is going to be a good focus. >> Good. I like that. I think that's one of the things thematically that we've heard today: optimism about what this community can generate, to today's point. The next Log4j is coming. We know it's not if, it's when, and all organizations need to be ready, to Dave's point, to act quickly, with agility, to dial it down and not become the next headline. Nobody wants to be that. Guys, it's been fun working with you on this day one event. Looking forward to day two. Lisa Martin, for Dave Vellante and John Furrier. You're watching theCUBE's day one coverage of Cloud Native SecurityCon '23. We'll see you tomorrow. (upbeat music)
Oracle Aspires to be the Netflix of AI | Cube Conversation
(gentle music playing) >> For centuries, we've been captivated by the concept of machines doing the job of humans. And over the past decade or so, we've really focused on AI and the possibility of intelligent machines that can perform cognitive tasks. Now in the past few years, with the popularity of machine learning models ranging from the recent ChatGPT to BERT, we're starting to see how AI is changing the way we interact with the world. How is AI transforming the way we do business? And what does the future hold for us there? At theCUBE, we've covered Oracle's AI and ML strategy for years, which has really been used to drive automation into Oracle's autonomous database. We've talked a lot about MySQL HeatWave in-database machine learning, and AI pushed into Oracle's business apps. Oracle tends to lead in AI, but not by competing as a direct AI player per se, rather by embedding AI and machine learning into its portfolio to enhance its existing products, and bring new services and offerings to the market. Now, last October at CloudWorld in Las Vegas, Oracle partnered with Nvidia, which is the go-to AI silicon provider for vendors. And they announced an investment, a pretty significant investment, to deploy tens of thousands more Nvidia GPUs to OCI, the Oracle Cloud Infrastructure, and build out Oracle's infrastructure for enterprise scale AI. Now, Oracle CEO Safra Catz said something to the effect of: this alliance is going to help customers across industries, from healthcare, manufacturing, telecoms, and financial services, to overcome the multitude of challenges they face. Presumably she was talking about just driving more automation and more productivity. Now, to learn more about Oracle's plans for AI, we'd like to welcome in Elad Ziklik, who's the vice president of AI services at Oracle. Elad, great to see you. Welcome to the show. >> Thank you. Thanks for having me. >> You're very welcome. So first let's talk about Oracle's path to AI. I mean, it's the hottest topic going. For years you've been incorporating machine learning into your products and services, you know, could you tell us what you've been working on, how you got here? >> So great question. So as you mentioned, I think most of the original foray into AI was embedding AI and using AI to make our applications and databases better. So inside MySQL HeatWave, inside our autonomous database, we've been driving AI, and of course in all our SaaS apps. So Fusion, our large enterprise business suite for HR applications and CRM and ERP and whatnot, has AI built inside it. Most recently, NetSuite, our small and medium business SaaS suite, started using AI for things like automated invoice processing and whatnot. And most recently, over the last, I would say, two years, we've started exposing and bringing these capabilities into the broader OCI, Oracle Cloud Infrastructure. So developers, and ISVs and customers, can start using our AI capabilities to make their apps better and their experiences and business workflows better, and not just consume these as embedded inside Oracle. And this recent partnership that you mentioned with Nvidia is another step in bringing the best AI infrastructure capabilities into this platform, so you can actually build any type of machine learning workflow or AI model that you want on Oracle Cloud. >> So when I look at the market, I see companies out there like DataRobot or C3 AI, there's maybe a half dozen that sort of pop up on my radar anyway.
And my premise has always been that most customers, they don't want to become AI experts. They want to buy applications and have AI embedded, or they want AI to manage their infrastructure. So my question to you is, how does Oracle help its OCI customers support their business with AI? >> So it's a great question. I think what most customers want is business AI. They want AI that works for the business. They want AI that works for the enterprise. I call it the last mile of AI. And they want this thing to work. The majority of them don't want to hire large and expensive data science teams to go and build everything from scratch. They just want the business problem solved by applying AI to it. My best analogy is Lego. So if you think of Lego, Lego has these millions of Lego blocks that you can use to build anything that you want. But the majority of people, like me or like my kids, they want the Lego Death Star kit or the Lego Eiffel Tower thing. They want a thing that just works, and it's very easy to use. And it's still Lego blocks, you still need to put some things together, but it just works for the scenario that you're looking for. So that's our focus. Our focus is making it easy for customers to apply AI where they need to, in the right business context. Whether it's embedding it inside the business applications, like adding forecasting capabilities to your supply chain management or financial planning software, whether it's adding chat bots into the line of business applications, integrating these things into your analytics dashboards, even all the way to, we have a new platform piece we call ML applications that allows you to take a machine learning model and scale it for the thousands of tenants that you would have. 'Cause this is a big problem for most of the ML use cases. It's very easy to build something for a proof of concept or a pilot or a demo. But then if you need to take this and deploy it across your thousands of customers or your thousands of regions or facilities, then it becomes messy. So this is where we spend our time, making it easy to take these things into production in the context of your business application or your business use case that you're interested in right now.
And the key is around managing the right expectations of what this thing is capable of doing. Like, I have a story from I think five, six years ago when technology was much inferior than it is today. Well, one of the telco providers I was working with wanted to roll a chat bot that does realtime translation. So it was for a support center for of the call centers. And what they wanted do is, Hey, we have English speaking employees, whatever, 24/7, if somebody's calling, and the native tongue is different like Hebrew in my case, or Chinese or whatnot, then we'll give them a chat bot that they will interact with and will translate this on the fly and everything would work. And when they rolled it out, the feedback from customers was horrendous. Customers said, the technology sucks. It's not good. I hate it, I hate your company, I hate your support. And what they've done is they've changed the narrative. Instead of, you go to a support center, and you assume you're going to talk to a human, and instead you get a crappy chat bot, they're like, Hey, if you want to talk to a Hebrew speaking person, there's a four hour wait, please leave your phone and we'll call you back. Or you can try a new amazing Hebrew speaking AI powered bot and it may help your use case. Do you want to try it out? And some people said, yeah, let's try it out. Plus one to try it out. And the feedback, even though it was the exact same technology was amazing. People were like, oh my God, this is so innovative, this is great. Even though it was the exact same experience that they hated a few weeks earlier on. So I think the key lesson that I picked from this experience is it's all about setting the right expectations, and working around the right use case. If you are replacing a human, the level is different than if you are just helping or augmenting something that otherwise would take a lot of time. And I think this is the focus that we are doing, picking up the tasks that people want to accomplish or that enterprise want to accomplish for the customers, for the employees. And using chat bots to make those specific ones better rather than, hey, this is going to replace all humans everywhere, and just be better than that. >> Yeah, I mean, to the point you mentioned expense reports. I'm in a Twitter thread and one guy says, my favorite part of business travel is filling out expense reports. It's an hour of excitement to figure out which receipts won't scan. We can all relate to that. It's just the worst. When you think about companies that are building custom AI driven apps, what can they do on OCI? What are the best options for them? Do they need to hire an army of machine intelligence experts and AI specialists? Help us understand your point of view there. >> So over the last, I would say the two or three years we've developed a full suite of machine learning and AI services for, I would say probably much every use case that you would expect right now from applying natural language processing to understanding customer support tickets or social media, or whatnot to computer vision platforms or computer vision services that can understand and detect objects, and count objects on shelves or detect cracks in the pipe or defecting parts, all the way to speech services. It can actually transcribe human speech. And most recently we've launched a new document AI service. 
That can actually look at unstructured documents like receipts or invoices or government IDs, or even proprietary documents, loan applications, student application forms, patient intake forms and whatnot, and completely automate them using AI. So if you want to do one of the things that are, I would say, common bread and butter for any industry, whether it's financial services or healthcare or manufacturing, we have a suite of services that any developer can go and use, easily customized with their own data. You don't need to be an expert in deep learning or large language models. You could just use our AutoML capabilities and build your own version of the models. Just go ahead and use them. And if you do have proprietary, complex scenarios that you need to build custom from scratch, we actually have the most cost effective platform for that. So we have OCI Data Science, as well as built-in machine learning platforms inside the databases, inside the Oracle database and MySQL HeatWave, that allow data scientists, Python-wielding people that actually like to build and tweak and control and improve, to have everything that they need to go and build machine learning models from scratch, deploy them, and monitor and manage them at scale in a production environment. And most of it is brand new. So we did not have these technologies four or five years ago, and we've started building them, and they're now at enterprise scale over the last couple of years. >> So what are some of the state-of-the-art tools that AI specialists and data scientists need if they're going to go out and develop these new models? >> So I think it's on three layers. I think there's an infrastructure layer where the Nvidias of the world come into play. For some of these things, you want massively efficient, massively scaled infrastructure in place. So we are the most cost effective and performant large scale GPU training environment today. We're going to be first to onboard the new Nvidia H100s. These are the new super powerful GPUs for large language model training. So we have that covered for you in case you need it, 'cause you want to build these ginormous things. You need a data science platform, a platform where you can open a Python notebook, and just use all these fancy open source frameworks and create the models that you want, and then click on a button and deploy it. And it infinitely scales wherever you need it. And in many cases you just need what I call the applied AI services. You need the Lego sets, the Lego Death Star, the Lego Eiffel Tower. So we have a suite of these sets for typical scenarios, whether it's cognitive services of, like, again, understanding images or documents, all the way to solving particular business problems. So an anomaly detection service, a demand forecasting service, those will be the equivalent of these Lego sets. So if this is the business problem that you're looking to solve, we have services out there where we can bring your data, call an API, train a model, get the model and use it in your production environment. So wherever you want to play, all the way into embedding this thing inside these applications, obviously, we have the tools for you to go and engage, from infrastructure to SaaS at the top, and everything in the middle.
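The "bring your data, call an API, train a model" pattern generally reduces to a plain HTTP call from the application's point of view. The sketch below is hypothetical: the endpoint URL, auth header, and payload shape are placeholders rather than actual OCI API details, which live in the OCI SDKs and REST documentation for each service.

```python
# A hypothetical sketch of calling a hosted applied-AI service on a document.
# The URL, credential, and response shape below are placeholders, not real
# OCI endpoints; in practice you would use the OCI SDK or the documented REST API.
import base64
import requests

ENDPOINT = "https://example.invalid/ai/document/analyze"   # placeholder URL
API_KEY = "YOUR_KEY_HERE"                                   # placeholder credential

with open("invoice.pdf", "rb") as f:                        # your own document
    payload = {
        "document": base64.b64encode(f.read()).decode("ascii"),
        "features": ["KEY_VALUE_EXTRACTION", "TABLE_EXTRACTION"],
    }

resp = requests.post(ENDPOINT, json=payload,
                     headers={"Authorization": f"Bearer {API_KEY}"}, timeout=60)
resp.raise_for_status()
for field in resp.json().get("fields", []):                 # illustrative shape
    print(field.get("name"), "->", field.get("value"))
```

The same request-and-response pattern applies whether the Lego-set service is document understanding, vision, speech, or anomaly detection; only the payload changes.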
>> So when you think about the data pipeline, and the data life cycle, and the specialized roles that came out of kind of the (indistinct) era, if you will, I want to focus on two: developers and data scientists. So the developers, they hate dealing with infrastructure, and they've got to deal with infrastructure. Now they're being asked to secure the infrastructure; they just want to write code. And the data scientists, they're spending all their time trying to figure out, okay, what's the data quality? And they're wrangling data, and they don't spend enough time doing what they want to do. So there's been a lack of collaboration. Have you seen that change? Are these approaches allowing collaboration between data scientists and developers on a single platform? Can you talk about that a little bit? >> Yeah, that is a great question. One of the biggest sets of scars that I have on my back from building these platforms in other companies is exactly that. Every persona had a set of tools, and these tools didn't talk to each other, and the handoff was painful. And most of the machine learning things evaporate or die on the floor because of this problem. It's very rare that they are unsuccessful because the algorithm wasn't good enough. In most cases it's: somebody builds something, and then you can't take it to production, you can't integrate it into your business application. You can't take the data out, train, create an endpoint and integrate it back; it's too painful. So the way we are approaching this is focused on this problem exactly. We have a single set of tools, so that if you publish a model as a data scientist, then developers, and even business analysts that are sitting inside of a business application, would be able to consume it. We have a single model store, a single feature store, a single management experience across the various personas that need to play in this. And we spend a lot of time building, and borrowing a word that the Cerner folks used, and I really liked it, building insight highways, to make it easier to bring these insights into where you need them inside applications, both inside our applications, inside our SaaS applications, but also inside custom third party and even first party applications. And this is where a lot of our focus goes, just because we have dealt with so much pain doing this inside our own SaaS that we now have built the tools, and we're making them available for others, to make this process of building a machine learning outcome driven insight in your app easier. And it's not just the model development, and it's not just the deployment, it's the entire journey of taking the data, building the model, training it, deploying it, looking at the real data that comes from the app, and creating this feedback loop in a more efficient way. And that's our focus area. Exactly this problem. >> Well, thank you for that. So, last week we had our Supercloud 2 event, and I had Juan Loaiza on, and he spent a lot of time talking about how open Oracle is in its philosophy, and I got a lot of feedback. They were like, Oracle, open? I don't really think so. But the truth is, if you think about the Oracle database, it never met a hardware platform that it didn't like. So in that sense it's open. So, but my point is, a big part of machine learning and AI is driven by open source tools, frameworks. What's your open source strategy? What do you support from an open source standpoint? >> So I'm a strong believer that you don't actually know, nobody knows, where the next leapfrog or the next industry shifting innovation in AI is going to come from.
If you looked six months ago, nobody foresaw DALL-E, the magical text-to-image generation and the explosion it brought into art and design type of experiences. If you look six weeks ago, I don't think anybody foresaw ChatGPT, and what it can do for a whole bunch of industries. So to me, assuming that a customer or partner or developer would want to lock themselves into only the tools that a specific vendor can produce is ridiculous. 'Cause nobody knows; if anybody claims that they know where the innovation is going to come from in a year or two, let alone in five or 10, they're just wrong or lying. So our strategy for Oracle is, I call this the Netflix of AI. So if you think about Netflix, they produced a bunch of high quality shows on their own. A few years ago it was House of Cards. Last month my wife and I binge watched Ginny & Georgia. But they also curated a lot of shows that they found around the world and brought them to their customers. So it started with things like Seinfeld or Friends, and most recently it was Squid Game, and there's a famous Israeli TV series called Fauda that Netflix bought in, and they bought it as is and they gave it the Netflix value. So you have captioning and you have the ability to speed up the movie and you have it inside your app, and you can download it and watch it offline and everything, but nobody at Netflix was involved in the production of those first seasons. Now if these things hit and they're great, then the third season or the fourth season will get the full Netflix production value, high value budget, high value location shooting or whatever. But you as a customer, you don't care whether the producer and director and screenplay writer is a Netflix employee or is somebody else's employee. It is fulfilled by Netflix. I believe that we will become, or we are looking to become, the Netflix of AI. We are building a bunch of AI in a bunch of places where we think it's important and we have some competitive advantage, like healthcare with the Cerner partnership or whatnot. But I want to bring the best AI software and hardware to OCI and do a fulfillment-by-Oracle on that. So you'll get the Oracle security and identity and single bill and everything you'd expect from a company like Oracle. But we don't have to be building the data science, and the models, for everything. So this means both open source, we recently announced a partnership with Anaconda, the leading provider of Python distribution in the data science ecosystem, where we are doing a joint strategic partnership of bringing all the goodness to Oracle customers, as well as being in the process of doing the same with Nvidia, and all of their software libraries, not just the core ones but also stuff like Triton, and also healthcare specific stuff, as well as other leading AI ISVs that we are in the process of partnering with to get their stuff into OCI and into Oracle, so that you can truly consume the best AI hardware and the best AI software in the world on Oracle. 'Cause that is what I believe our customers would want: the ability to choose from any open source engine, and honestly from any ISV type of solution that is AI powered, and they want to use it in their experiences. >> So you mentioned ChatGPT, I want to talk about some of the innovations that are coming. As an AI expert, you see ChatGPT, on the one hand, I'm sure you weren't surprised. On the other hand, maybe the reaction in the market, and the hype, is somewhat surprising.
You know, they say that we tend to over-hype things in the early stages and under-hype them long term; you kind of used the internet as an example. What's your take on that premise? >> So, I think that this type of technology is going to be an inflection point in how software is being developed. I truly believe this. I think this is an internet-style moment, and the way software interfaces, software applications are being developed will dramatically change over the next year, two or three, because of this type of technology. I think there will be industries that will be shifted. I think education is a good example. I saw this thing open on my son's laptop. So I think education is going to be transformed. The design industry, like images or whatever, it's already been transformed. But I think that for mass adoption, like beyond the hype, beyond the peak of inflated expectations, if I'm using Gartner terminology, I think certain things need to go and happen. One is, this thing needs to become more reliable. So right now it is a complete black box that sometimes produces magic, and sometimes produces just nonsense. And it needs to have better explainability and better lineage to, how did you get to this answer? 'Cause I think enterprises are going to really care about the things that they surface with their customers or use internally. So I think that is one thing that's going to come out. And the other thing that's going to come out is, I think there are going to come industry-specific large language models, or industry-specific ChatGPTs. Something like how OpenAI did Copilot for writing code. I think we will start seeing these types of apps solving for specific business problems, understanding contracts, understanding healthcare, writing doctor's notes on behalf of doctors so they don't have to spend time manually recording and analyzing conversations. And I think that would become the sweet spot of this thing. There will be companies, whether it's OpenAI or Microsoft or Google or hopefully Oracle, that will use this type of technology to solve for specific, very high value business needs. And I think this will change how interfaces happen. So going back to your expense report, the world of, I'm going to go into an app, and I'm going to click on seven buttons in order to get some job done, like this world is gone. Like, I'm going to say, hey, please do this and that, and I expect an answer to come out. I've seen a recent demo about marketing and sales. So a customer sends an email that is interested in something and then a ChatGPT-powered thing just produces the answer. I think this is how the world is going to evolve. Like yes, there's a ton of hype, yes, it looks like magic, and right now it is magic, but it's not yet productive for most enterprise scenarios. But in the next 6, 12, 24 months, this will start getting more dependable, and it's going to change how these industries are being managed. Like I think it's an internet-level revolution. That's my take. >> It's very interesting. And it's going to change the way in which we interact. Instead of accessing the data center through APIs, we're going to access it through natural language processing, and that opens up technology to a huge audience. Last question, it's a two part question. The first part is what you guys are working on for the future, but the second part of the question is, we've got data scientists and developers in our audience. They love the new shiny toy.
So give us a little glimpse of what you're working on in the future, and what would you say to them to persuade them to check out Oracle's AI services? >> Yep. So I think there's two main things that we're doing. One is around healthcare. With a recent acquisition, we are spending a significant effort around revolutionizing healthcare with AI. Of course many scenarios, from patient care using computer vision and cameras, through automating and making better insurance claims, to research and pharma. We are making the best models from leading organizations, and internal ones, available for hospitals and researchers and insurance providers everywhere. And we truly are looking to become the leader in AI for healthcare. So I think that's a huge focus area. And the second part is, again, going back to the enterprise AI angle. Like, if you have a business problem that you want to apply AI to solve, we want to be your platform. Like, you could use others if you want to build everything complicated and whatnot, we have a platform for that as well. But like, if you want to apply AI to solve a business problem, we want to be your platform. We want to be, again, the Netflix of AI kind of a thing, where we are the place for the greatest AI innovations, accessible to any developer, any business analyst, any user, any data scientist on Oracle Cloud. And we're making a significant effort on these two fronts, as well as developing a lot of the missing pieces and building blocks that we see are needed in this space to make truly, like, a great experience for developers and data scientists. And what would I recommend? Get started, try it out. We actually have a shameless sales plug here: we have a free tier for all of our AI services, so it typically costs you nothing. I would highly recommend to just go and try these things out. Go play with it. If you are a Python-wielding developer and you want to try a little bit of AutoML, go down that path. If you're not even there and you're just like, hey, I have these customer feedback things and I want to try out whether I can understand them and apply AI and visualize and do some cool stuff, we have services for that. My recommendation is, and I think ChatGPT got us here, 'cause I see people that have nothing to do with AI, and can't even spell AI, going and trying it out. I think this is the time. Go play with these things, go play with these technologies and find what AI can do to you or for you. And I think Oracle is a great place to start playing with these things. >> Elad, thank you. Appreciate you sharing your vision of making Oracle the Netflix of AI. Love that, and really appreciate your time. >> Awesome. Thank you. Thank you for having me. >> Okay. Thanks for watching this Cube conversation. This is Dave Vellante. We'll see you next time. (gentle music playing)
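To make the applied-AI-services workflow Elad described a bit more concrete — bring your data, call an API, train a model, then use it in production — here is a minimal Python sketch. It is illustrative only: the endpoint URL, paths, and payload fields are hypothetical placeholders, not real OCI SDK calls or REST routes, and authentication is elided.

```python
# Hypothetical sketch of the applied-AI workflow described above:
# register data, kick off training, then call the trained model.
# Endpoint paths and field names are placeholders, not a real OCI API.
import requests

BASE = "https://example-applied-ai-service.invalid/v1"   # placeholder endpoint
HEADERS = {"Authorization": "Bearer <token>"}            # assume some auth token

# 1. Register the training data (e.g., historical sensor readings or invoices).
dataset = requests.post(
    f"{BASE}/datasets",
    headers=HEADERS,
    json={"name": "sensor-readings", "source": "object-storage://bucket/readings.csv"},
).json()

# 2. Train a model on that dataset; the service picks the algorithm (AutoML-style).
model = requests.post(
    f"{BASE}/models",
    headers=HEADERS,
    json={"dataset_id": dataset["id"], "task": "anomaly-detection"},
).json()

# 3. Call the trained model from the production application.
result = requests.post(
    f"{BASE}/models/{model['id']}/detect",
    headers=HEADERS,
    json={"rows": [{"timestamp": "2023-01-01T00:00:00Z", "value": 42.7}]},
).json()
print(result)
```

Swap the placeholder endpoint for whichever applied AI service (anomaly detection, document understanding, forecasting) actually fits the business problem; the shape of the workflow is the point, not the specific routes.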
Dev Ittycheria, MongoDB | Cube Conversation: Partner Exclusive
>> Hi, I'm John Furrier with theCUBE. We're here for a special exclusive conversation with Dev Ittycheria, the CEO of MongoDB. Well established, leading platform. It's been around for, I mean, over a decade, so it continues to become the platform of choice for high performance data. This modern data stack that's emerging is a big part of the story here at re:Invent 2022, on top of an already performant cloud with, you know, chips and silicon, specialized instances; the world's gonna be getting faster, smaller, higher performance, lower cost, specialized. Dev, thanks for taking the time with me today. >> John, it's great to be here. Thank you for having me. >> Do you see yourself as an ISV, or do you just go with that, because that's kind of a nomenclature? >> When, when I think of the term ISV, I think of the notion of someone building an end solution for a customer to get something done. What we're building is essentially a developer data platform, and we have thousands of ISVs who build software applications on our platform. So how could we be an ISV? Because by definition, you know, we enable people to do so many different things, and you know, they can be the, you know, the largest companies of the world trying to transform their business, or startups who are trying to disrupt either existing industries or create new ones. And so that's, and, and that's how our customers view MongoDB, and, and the whole Atlas platform basically enables them to do some amazing things. The reason for that is, you know, we believe that what we are enabling developers to do is be able to reduce the friction and the work required to build modern applications, through the document model, which is really intuitive to the way developers think and code, and through the distributed nature of the platform. So, you know, things like sharding, no other company on the planet offers the capabilities we do to enable people to build the most highly performant and scalable applications. And also what we do is enable people to, you know, run different types of workloads on our platform. So we have obviously transactional, we have search, we have time series, we enable people to do things like sophisticated device synchronization from the edge to the back end. We do graph, we do real time analytics. So being able to consolidate all that with developers on one elegant, unified platform really makes, you know, it attractive for developers to build on MongoDB. >> You know, you guys are a featured partner of AWS, and I would speculate, I don't know if you can comment on this, but I would imagine that you probably produce a lot of revenue for Amazon, because you really can't turn off EC2 when you do database work. So, you know, you kind of crank it all the time. You guys are a top partner. How long have you guys been a partner with AWS? What's the relationship? >> The relationship's been strong. Actually, Amazon spoke at one of our first user conferences in 2013, and since then we've been working together. We've been at re:Invent since essentially 2015. And we've been a premier partner, an Emerald sponsor, for the last, you know, I think four or five years. And so we're very committed to the relationship, and I think we have a lot of things in common. We care a lot about customers, and for us our customers are developers; we care a lot about removing friction from their day-to-day work, so they can move fast in order to seize new opportunities and respond to new threats.
And so consequently, I think the partnership, obviously by nature of our common objectives, has really come together. >> Talk about the journey of Mongo. I mean, you look back at the history, you go back to the old LAMP stack days, right? So you know, that early developer traction is really well known. And I remember, over the years, the conversations: Mongo doesn't scale. I mean, every year we heard something along those lines, 'cause it just kept scaling. I heard the same thing with AWS back in the 2013 timeframe: oh, it's just not really ready for prime time, it's for hobbyists, not so much builders, maybe a startup cloud. But that developer traction has translated. Can you take us through the journey of Mongo to where it is now, and kind of look back and take us through what's the state of the art now? >> Right. So just for those in your audience who don't know too much about MongoDB, I'll just, you know, start with the background. The company was founded by developers. It was basically the CTO and some key developers from DoubleClick who really saw the challenges and the limitations of the relational database architecture, because they were trying to serve billions of ads per day and they constantly needed to work around the constraints of a relational database. And so they essentially decided, why don't we just build a database that we'd want to use? And that was the catalyst to starting MongoDB. The first thing they focused on was, rather than having a tabular data structure, they focused on a document data structure. Why documents? Because it's much more natural and intuitive to work with data in documents, in terms of you can set parent-child relationships, and how you just think about the relationships in data is much more natural in a document than trying to connect data in, you know, hundreds of different tables. And so that enabled developers to just move so much faster. The second thing they focused on was building a truly distributed architecture, not kind of some adjunct, you know, architecture that maybe made the existing architecture a little bit more scalable. They really took, from the ground up, a truly distributed architecture. So you can do native replication, you can do sharding, and you can do it on a global basis. And so that was the, the other profound, you know, thing that they did. And then since then, what we've also done is, you know, the document model is truly a superset of other models. So we enabled other capabilities like search, you can do joins, so you can do very transaction-intensive use cases on MongoDB; we're fully ACID compliant, so you have the highest forms of data guarantees. You can do very sophisticated things like time series, you can do device synchronization, you can do real time analytics, because we can carve off read-only nodes to be able to read and query data in real time, rather than having to offload that data into a data warehouse. And so that enables developers to just build a wide variety of applications on MongoDB, and they get one unified developer interface. It's highly elegant and seamless. And so essentially the cost and tax of managing multiple point tools goes away.
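As an aside for readers who want to see what the document model, sharding, and read-only analytics nodes he just described look like in practice, here is a small illustrative sketch using the PyMongo driver. It is not from the interview; the connection string, database, and collection names are placeholders, and the sharding commands assume you are connected to a sharded cluster (Atlas or self-managed) through mongos.

```python
# Illustrative sketch (not from the interview) of the ideas described above,
# using PyMongo against a hypothetical cluster.
from pymongo import MongoClient, ReadPreference

client = MongoClient("mongodb+srv://cluster.example.net")  # placeholder connection string
db = client["shop"]

# Document model: the order and its line items live together in one document,
# instead of being spread across parent/child tables that need joins.
db.orders.insert_one({
    "customer_id": 1042,
    "placed_at": "2022-11-28T09:15:00Z",
    "items": [
        {"sku": "A-100", "qty": 2, "price": 19.99},
        {"sku": "B-205", "qty": 1, "price": 5.49},
    ],
})

# Distributed architecture: sharding is a first-class admin command; here the
# orders are distributed across shards by a hashed customer_id (requires a
# sharded cluster reached through mongos).
client.admin.command("enableSharding", "shop")
client.admin.command("shardCollection", "shop.orders", key={"customer_id": "hashed"})

# "Carving off" read-only nodes for real-time analytics: route analytical
# queries to secondaries so they don't compete with transactional traffic.
analytics = client.get_database("shop", read_preference=ReadPreference.SECONDARY_PREFERRED)
pipeline = [
    {"$unwind": "$items"},
    {"$group": {"_id": "$items.sku", "sold": {"$sum": "$items.qty"}}},
]
for row in analytics.orders.aggregate(pipeline):
    print(row)
```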
>> You know, the cloud adoption really is putting a lot of pressure on these systems, and you're seeing companies in the ecosystem and AWS stepping up. You guys are doing a great job, but we're seeing a lot more acceleration around it, and staying on premise for certain use cases, yet you've got the cloud as well growing for workloads, and you get this hybrid steady state as an operational mode. That's kind of the classic cloud adoption track record. You guys are an example of multiple iterations in cloud. You're doing a lot more; we're starting to see this tipping point with others, and customers coming kind of on that same pattern: building platforms on top of AWS, on top of the primitives, more horsepower, higher level services, industry specific capabilities with data. I mean, this is a new kind of cloud, kind of a next generation; you know, next gen, you've got the classic high performance infrastructure, it's getting better and better, but now you've got this new application platform, you know, reminds me of the old ASP, you know, if you will. I mean, so are you seeing customers doing things differently? Can you share your, your reaction to this role of, you know, this new kind of SaaS platform that just isn't an application, it's, it's more, it's deeper than that. What's going on here? We call it super cloud, but... >> Like what? Yeah, so essentially what, you know, a lot of our customers are doing, and by the way we have over 37,000 customers of all shapes and sizes, from the largest companies in the world to cutting edge startups, who are building applications on MongoDB. Why do they choose MongoDB?
Because essentially it's, you know, the fastest way to innovate, and the reason it's the fastest way to innovate is because they can work with data so much more easily than working with data on other types of architecture. So the document model is profoundly a breakthrough way to work with data, to make it very, very easy. So customers are essentially building these modern applications, you know, applications built on microservices, event-driven architectures, you know, addressing sophisticated use cases like time series too, and then ultimately now they're getting into machine learning. We have a bunch of companies building machine learning applications on top of MongoDB. And the reason they're doing that is because, one, they get the benefits of being able to, you know, build and work with data so much more easily than on any other platform, and it's highly scalable and performant in a way that no other platform is. So literally they can run their, you know, workloads both locally, in one, you know, availability zone, or they can basically be, you know, anywhere in the world. And we also offer multicloud capabilities, which I can get into later. >> Let's talk about the performance side. I know I was speaking with some Amazon folks; every year it's the same story. They're really working on the physics, they're getting the chips, they wanna squeeze as much energy out of that. I've never met a developer that said they wanna run their workload on a slower platform or slower hardware. You know, said no developer, right? No one wants to do that. >> Correct. >> So you guys have a lot of experience tuning with Graviton instances, we're seeing a lot more AWS EC2 instances, we're seeing a lot more kind of integrated end-to-end stories. Data is now security, it's tied into data stacks, kind of a modern data hybrid stack. A lot going on around the hardware performance specialization, the role of data, kind of a modern data stack emerging. What, what's your thoughts on that? >> Yeah, I, I think if you had asked me, you know, when the cloud started going in vogue, like, you know, the later part of the last decade, and told me, you know, sitting here 12, 15 years later, would we be talking about, you know, chip processing speeds? I'd probably have thought, nah, we would've moved on by then. But what's really clear is that customers, to your point, customers care about performance, they care about price performance, right? So with AWS's investments in Graviton, we have actually deployed a significant portion of our Atlas fleet on Amazon to now run on Graviton. You know, they've built other chipsets like Trainium and Inferentia for, like, you know, training models and running inferences. They're doing things like Nitro. And so what that really speaks to is that the cloud providers are focusing on the price performance of their, as you call it, their primitives, and their infrastructure, and that infrastructure layer is still very, very important.
So they have to really compete to make their platform the most performant, the most price competitive in the marketplace. Which gives us a great platform to build on to enable developers to build these incredibly highly performant applications that customers are now demand. >>I think that's a really great point. I mean, you know, it's so funny Dave, because you know, I remember those, we don't talk speeds and feeds anymore. We're not talking about boxes. I mean that's old kind of school thinking because it was a data center mentality, speeds and feeds and that was super important. But we're kind of coming back to that in the cloud now in distributed architecture, as you put your platforms out there for developers, you have to run fast. You gotta, you can't give the developer subpar or any kind of performance that's, they'll, they'll go somewhere else. I mean that's the reality of what developers, no one, again, no one says I wanna go on the slower platform unless it's some sort of policy based on price or some sort of thing. But, but for the most part it's gotta run fast. So you got the tail of two clouds going on here, you got Amazon classic ias, keep making it faster under the hood. >>And then you got the new abstraction layers of the higher level services. That's where you guys are bridging this new, new generational shift where it's like, hey, you know what? I can go, I can run a headless application, I can run a SAS app that's refactored with data. So you've seen a lot more innovation with developers, you know, running stuff in, in the C I C D pipeline that was once it, and you're seeing security and data operations kind of emerging as a structural change of how companies are, are are transforming on the business side. What's your reaction to that business transformation and the role of the developer? >>Right, so I mean I have to obviously give amazing kudos to the, you know, to AWS and the Amazon team for what they've built. Obviously they're the ones who kind of created the cloud industry and they continue to push the innovation in the space. I mean today they have over 300 services and you know, obviously, you know, no star today is building anything not on the cloud because they have so many building blocks to start with. But what we though have found from our talking to our customers is that in some ways there is still, you know, the onus is on the customer to figure out which building block to use to be able to stitch together the applications and solutions they wanna build. And what we have done is taken essentially an opinionated point of view and said we will enable you to do that. >>You know, using one data model. You know, Amazon today offers I think 17 or 18 different types of databases. We don't think like, you know, having a tool for every job makes sense because over time the tax and cost of learning, managing and supporting those different applications just don't make a lot of sense or just become cost prohibitive. And so we think offering one data model, one, you know, elegant user experience, you know, one way to address the broadest set of of use cases is that we think is a better way. But clearly customers have choice. They can use Amazon's primitives and those second layer services as you as you described, or they can use us. Unfortunately we've seen a lot of customers come to us with our approach and so does Amazon. 
And I have to give obviously again kudos and Amazon is very customer obsessed and so we have a great relationship with them, both technically in terms of the product integrations we do as well as working with 'em in the field, you know, on joint customer opportunities. >>Speaking of, while you mentioned that, I wanna just ask you, how is that marketplace relationship going with aws? Some of the partners are really seeing great economic and joint selling or them selling your, your stuff. So there's a real revenue pop there in that religion. Can you comment on that? >>So we had been working the partner in the marketplace for many years now, more from a field point of view where customers could leverage their existing commitments to AWS and leverage essentially, you know, using Atlas and applying in an atlas towards their commits. There was also some sales incentives for people in the field to basically work together so that, you know, everyone won should we collectively win a customer? What we recently announced is as pay as you Go initiative, where literally a customer on the Amazon marketplace can basically turn up, you know, an Alice instance with no commitment. So it's so easy. So we're just pushing the envelope to just reduce the friction for people to use Atlas on aws. And it's working really very well. The uptake has been been very strong and and we feel like we're just getting started because we're so excited about the results we're >>Seeing. You know, one of the things that's kind of not core in the keynote theme, but I think it's underlying message is clear in the industry, is the developer productivity. You said making things easy is a big deal, self-service, getting in and trying, these are what developer friendly tools are like and platform. So I have to ask you, cuz this comes up a lot in our kind of business conversation, is, is if you take digital transformation concept to its completion, assuming now you know, as a thought exercise, you completely transform a company with technology that's, that is the business transformation outcome. Take it to completion. What does that look like? I mean, if you go there you'd say, okay, the company is the app, the company is the data, it's not a department serving the business, it's the business. And so I think this is kind of what we're seeing as the next big mountain climb, which is companies that do transform there, they are technology companies, they're not a department like it. So I think a lot of companies are kind of saying, wait a minute, why would we have a department? It should be the company. What's your your your view on this because this >>Yeah, so I I've had the for good fortune of being able to talk to thousand customers all over the world. And you know, one thing John, they never tell me, they never tell me that they're innovating too quickly. In fact, they always tell me the reverse. They tell me all the obstacles and impediments they have to be able to be able to be able to move fast. So one of the reasons they gravitate to MongoDB is just the speed that they wish they can build applications to, to your point, developer productivity. And by definition, developer productivity is a proxy for innovation. The faster you can make your developers, you know, move, the faster they can push out code, the faster they can iterate and build new solutions or add more capabilities on the existing applications, the faster you can innovate either to, again, seize new opportunities or to respond to new threats in your business. 
>>And so that resonates with every C level executive. And to your point, the developers not some side hustle that they kind of think about once in a while. It's core to the business. So developers have amassed enormous amount of power and influence. You know, their, their, their engineering teams are front and center in terms of how they think about building capabilities and and building their business. And that's also obviously enabled, you know, to your point, every software company, every company's not becoming a software company because it all starts with softwares, software enables, defines or creates almost every company's value proposition. >>You know, it makes me smile because I love operating systems as one of my hobbies in college was, you know, systems programming and I remember those network kind of like the operating systems, the cloud. So, you know, everything's got specialized capabilities and that's a big theme here at Reinvent. If you look at the announcements Monday night with Peter DeSantis, you got, you got new instances, new chips. So this whole engine kind of specialized component is like an engine. You got a core and you got other subsystems. This is gonna be an integral part of how companies architect their platform or you know, Adam calls it the landing zone or whatever they wanna call it. But you gotta start seeing a new architectural thinking for companies. What's your, can you share your experience on how companies should look at this opportunity as a plethora of more goodness on the hardware? On hardware, but like chips and instances? Cause now you can mix and match. You've got, you've got, you got everything you need to kind of not roll your own but like really build foundational high performance capabilities. >>Yeah, so I I, so I think this is where I think Amazon is really enabling all companies, including, you know, companies like Mon db, you know, push the envelope and innovation. So for example, you know, the, the next big hurdle for us, I think we've seen two big platform shifts over the last 15 years of platform shifts, you know, to mobile and the platform shift to cloud. I believe the next big platform shift is going from dumb apps to smart apps, which you're building in, you know, machine learning and you know, AI and just very sophisticated automation. And when you start automating human decision making, rather than, you know, looking at a dashboard and saying, okay, I see the data now, now I have to do this. You can automate that into your applications and make your applications leveraging real time data become that much more smart. And that ultimately then becomes a developer challenge. And so we feel really good about our position in taking advantage of those next big trends and software leveraging the price performance curves that, you know, Amazon continues to push in terms of their hardware performance, networking performance, you know, you know, price, performance and storage to build those next generation of modern applications. >>Okay, so let me get this straight. You have next generation intelligent smart apps and you have AI generative solutions coming out around the corner. This is like pretty good position for Mongo to be in with data. I mean, this is what you do, you're in that exactly of the action. What's it like? I mean, you must be like trying to shake the world and wake up. The world's starting to wake up now through this. So what's, what's it like? >>Well, I mean we're really excited and bullish about the future. 
We think that we're well positioned because we know as to your point, you know, we have amassed amazing amount of developer mindshare. We are the most popular modern data platform out there in the world. There's developers in almost every corner of the planet using us to do something. And to your point, leveraging data and these advances in machine learning ai. And we think the more AI becomes democratized, not, you know, done by a bunch of data scientists sitting in some corner office, but essentially enabling developers to have the tools to build these very, very sophisticated, smart applications will, you know, will position as well. So that's, you know, obviously gonna be a focus for us over the, frankly, I think this is gonna be like a 10 year, 10 15 year run and we're just getting started in this whole >>Area. I think you guys are really well positioned. I think that's a great point. And Adam mentioned to me and, and Mike interviewed, he said on stage talk about it, the role of a data analyst kind of goes away. Everyone's a data analyst, right? You'll still see specialization on, on core data engineering, which is kind of like an SRE role for data. So data ops and data as code is a big deal making data applications. So again, exciting times and you guys are well positioned. If you had to bumper sticker the event this week here at Reinvent, what would you, how would you categorize this this point in time? I mean, Adam's great leader, he is gonna help educate customers how to use technology to, for business advantage and transformation. You know, Andy did a great job making technology great and innovative and setting the table, Adam's gotta bring it to the enterprises and businesses. So it's gonna be an interesting point in time we're in now. What, how would you categorize this year's reinvent, >>Right? I think the, the, the tech world is pivoting towards what I'd call rationalization or cost optimization. I think people obviously in, you know, the last 10 years have, you know, it's all about speed, speed, speed. And I think people still value speed, but they wanna do it at some sort of predictable cost model. And I think you're gonna see a lot more focus around cost and cost optimization. That's where we think having one platform is by definition of vendor consolidation way for people to cut costs so that they can basically, you know, still move fast but don't have to incur the tax of using a whole bunch of different point tools. And so we think we're well positioned. So the bumper sticker I think about is essentially, you know, do more for less with MongoDB. >>Yeah. And the developers on the front lines. Great stuff. You guys are great partner, a top partner at AWS and great reflection on, on where you guys been, but really where you are now and great opportunity. David Didier, thank you so much for spending the time and it's been great following Mongo and the continued rise of, of developers of the on the front lines really driving the business and that, and they are, I know, driving the business, so, and I think they're gonna continue Smart apps, intelligent apps, ai, generative apps are coming. I mean this is real. >>Thanks John. It's great speaking with >>You. Yeah, thanks. Thanks so much. Okay.
Ali Ghodsi, Databricks | Cube Conversation Partner Exclusive
(outro music) >> Hey, I'm John Furrier, here with an exclusive interview with Ali Ghodsi, who's the CEO of Databricks. Ali, great to see you. Preview for reinvent. We're going to launch this story, exclusive Databricks material on the notes, after the keynotes prior to the keynotes and after the keynotes that reinvent. So great to see you. You know, you've been a partner of AWS for a very, very long time. I think five years ago, I think I first interviewed you, you were one of the first to publicly declare that this was a place to build a company on and not just post an application, but refactor capabilities to create, essentially a platform in the cloud, on the cloud. Not just an ISV; Independent Software Vendor, kind of an old term, we're talking about real platform like capability to change the game. Can you talk about your experience as an AWS partner? >> Yeah, look, so we started in 2013. I swiped my personal credit card on AWS and some of my co-founders did the same. And we started building. And we were excited because we just thought this is a much better way to launch a company because you can just much faster get time to market and launch your thing and you can get the end users much quicker access to the thing you're building. So we didn't really talk to anyone at AWS, we just swiped a credit card. And eventually they told us, "Hey, do you want to buy extra support?" "You're asking a lot of advanced questions from us." "Maybe you want to buy our advanced support." And we said, no, no, no, no. We're very advanced ourselves, we know what we're doing. We're not going to buy any advanced support. So, you know, we just built this, you know, startup from nothing on AWS without even talking to anyone there. So at some point, I think around 2017, they suddenly saw this company with maybe a hundred million ARR pop up on their radar and it's driving massive amounts of compute, massive amounts of data. And it took a little bit in the beginning just us to get to know each other because as I said, it's like we were not on their radar and we weren't really looking, we were just doing our thing. And then over the years the partnership has deepened and deepened and deepened and then with, you know, Andy (indistinct) really leaning into the partnership, he mentioned us at Reinvent. And then we sort of figured out a way to really integrate the two service, the Databricks platform with AWS . And today it's an amazing partnership. You know, we directly connected with the general managers for the services. We're connected at the CEO level, you know, the sellers get compensated for pushing Databricks, we're, we have multiple offerings on their marketplace. We have a native offering on AWS. You know, we're prominently always sort of marketed and you know, we're aligned also vision wise in what we're trying to do. So yeah, we've come a very, very long way. >> Do you consider yourself a SaaS app or an ISV or do you see yourself more of a platform company because you have customers. How would you categorize your category as a company? >> Well, it's a data platform, right? And actually the, the strategy of the Databricks is take what's otherwise five, six services in the industry or five, six different startups, but do them as part of one data platform that's integrated. So in one word, the strategy of data bricks is "unification." We call it the data lake house. But really the idea behind the data lake house is that of unification, or in more words it's, "The whole is greater than the sum of its parts." 
So you could actually go and buy five, six services out there or actually use five, six services from the cloud vendors, stitch it together and it kind of resembles Databricks. Our power is in doing those integrated, together in a way in which it's really, really easy and simple to use for end users. So yeah, we're a data platform. I wouldn't, you know, ISV that's a old term, you know, Independent Software Vendor. You know, I think, you know, we have actually a whole slew of ISVs on top of Databricks, that integrate with our platform. And you know, in our marketplace as well as in our partner connect, we host those ISVs that then, you know, work on top of the data that we have in the Databricks, data lake house. >> You know, I think one of the things your journey has been great to document and watch from the beginning. I got to give you guys credit over there and props, congratulations. But I think you're the poster child as a company to what we see enterprises doing now. So go back in time when you guys swiped a credit card, you didn't need attending technical support because you guys had brains, you were refactoring, rethinking. It wasn't just banging out software, you had, you were doing some complex things. It wasn't like it was just write some software hosted on server. It was really a lot more. And as a result your business worth billions of dollars. I think 38 billion or something like that, big numbers, big numbers of great revenue growth as well, billions in revenue. You have customers, you have an ecosystem, you have data applications on top of Databricks. So in a way you're a cloud on top of the cloud. So is there a cloud on top of the cloud? So you have ISVs, Amazon has ISVs. Can you take us through what this means and at this point in history, because this seems to be an advanced version of benefits of platforming and refactoring, leveraging say AWS. >> Yeah, so look, when we started, there was really only one game in town. It was AWS. So it was one cloud. And the strategy of the company then was, well Amazon had this beautiful set of services that they're building bottom up, they have storage, compute, networking, and then they have databases and so on. But it's a lot of services. So let us not directly compete with AWS and try to take out one of their services. Let's not do that because frankly we can't. We were not of that size. They had the scale, they had the size and they were the only cloud vendor in town. So our strategy instead was, let's do something else. Let's not compete directly with say, a particular service they're building, let's take a different strategy. What if we had a unified holistic data platform, where it's just one integrated service end to end. So think of it as Microsoft office, which contains PowerPoint, and Word, and Excel and even Access, if you want to use it. What if we build that and AWS has this really amazing knack for releasing things, you know services, lots of them, every reinvent. And they're sort of a DevOps person's dream and you can stitch these together and you know you have to be technical. How do we elevate that and make it simpler and integrate it? That was our original strategy and it resonated with a segment of the market. And the reason it worked with AWS so that we wouldn't butt heads with AWS was because we weren't a direct replacement for this service or for that service, we were taking a different approach. 
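To picture the unified "lakehouse" idea Ali describes — one integrated platform instead of five or six stitched-together services — here is a small illustrative sketch in Python. It is not from the interview; it only assumes a Spark session named `spark`, as you would have in a Databricks notebook, and a hypothetical `events` table (on Databricks the table would be Delta by default).

```python
# Illustrative sketch (not from the interview) of the "one integrated platform"
# idea: the same table serves SQL analytics and Python/DataFrame work in one session.
from pyspark.sql import SparkSession

# On Databricks `spark` already exists; locally this creates a session.
spark = SparkSession.builder.getOrCreate()

# Land some raw events as a managed table.
events = spark.createDataFrame(
    [("u1", "click", 3), ("u2", "purchase", 1), ("u1", "purchase", 2)],
    ["user_id", "action", "count"],
)
events.write.mode("overwrite").saveAsTable("events")

# The analyst's view of the same data: plain SQL, no separate warehouse copy.
spark.sql("""
    SELECT action, SUM(count) AS total
    FROM events
    GROUP BY action
""").show()

# The data scientist's view of the same data: the DataFrame API, ready to feed
# feature engineering or model training without moving anything.
features = (spark.read.table("events")
                 .groupBy("user_id")
                 .pivot("action")
                 .sum("count")
                 .na.fill(0))
features.show()
```

The point of the sketch is only that the analyst's SQL and the data scientist's DataFrame work hit the same governed table, which is the "whole is greater than the sum of its parts" argument made above.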
And AWS, because credit goes to them, they're so customer obsessed, they would actually do what's right for the customer. So if the customer said, we want this unified thing, their sellers would actually say, okay, so then you should use Databricks. So they truly are customer obsessed in that way. And I really mean it, John. Things have changed over the years. They're not the only cloud anymore. You know, Azure is real, GCP is real, there's also Alibaba. And now over 70% of our customers are on more than one cloud. So now what we hear from them is, not only do we want a simplified, unified thing, but we want it also to work across the clouds. Because those of them that are seriously considering multiple clouds, they don't want to use a service on cloud one and then use a similar service on cloud two, but it's a little bit different, and now they have to do twice the work to make it work. You know, John, it's hard enough as it is; like, this data stuff and analytics, it's not a walk in the park, you know, where you hire an administrator in the back office that clicks a button and, just like that, now you're a data-driven, digitally transformed company. It's hard. If you now have to do it again on the second cloud with a different set of services, and then again on a third cloud with a different set of services, that's very, very costly. So the strategy then has changed to, how do we take that unified, simple approach and make it also the same and standardized across the clouds, but then also integrate it as far down as we can on each of the clouds, so that you're not giving up any of the benefits that the particular cloud has. >> Yeah, I think one of the things that we see, and I want to get your reaction to this, is this rise of the super cloud, as we call it. I think you were involved in the Sky paper; I saw your position paper came out after we had introduced Super Cloud, which is great. Congratulations to the Berkeley team, wearing the hat here. But you guys are, I think, a driver of this, because you're creating the need for these things. You're saying, okay, we went on one cloud with AWS and you didn't hide that. And now you're publicly saying there's other clouds too, increased TAM for your business. And customers have multiple clouds in their infrastructure for the best of breed that they have. Okay, get that. But there's still a challenge around the innovation, growth that's still around the corner. We still have a supply chain problem, we still have skill gaps. You know, you guys at Databricks are unique, as are these other big examples of super clouds that are developing. Enterprises don't have the Databricks kind of talent. They need, they need turnkey solutions. So Adam and the team at Amazon are promoting, you know, more solution oriented approaches higher up on the stack. You're starting to see kind of like, I won't say templates, but you know, almost like application specific, headless, low code, no code capability to accelerate clients who are wanting to write code for the modern era. Right, so this kind of, and then now you, as you guys pointed out with these common services, you're pushing the envelope. So you're saying, hey, I need to compete, I don't want to go to my customers and have them have to have a staff for this cloud and this cloud and this cloud, because they don't have the staff. Or if they do, they're very unique. So what's your reaction? Because this kind of shows your leadership as a partner of AWS and the clouds, but also highlights I think what's coming.
But you share your reaction. >> Yeah, look, first of all, you know, I wish I could take credit for this, but I can't, because it's really the customers that have decided to go on multiple clouds. You know, it's not Databricks that, you know, pushed this, or some other vendor, you know, Snowflake or someone, who pushed this, and now enterprises listened to us and they picked two clouds. That's not how it happened. The enterprises picked two clouds or three clouds themselves, and we can get into why, but they did that. So this largely just happened in the market. We as data platforms responded to what they're then saying, which is, they're saying, "I don't want to redo this again on the other cloud." So I think the writing is on the wall. I think it's super obvious what's going to happen next. They will say, "Any service I'm using, it better work exactly the same on all the clouds." You know, that's what's going to happen. So in the next five years, every enterprise will say, "I'm going to use the service, but you better make sure that this service works equally well on all of the clouds." And obviously the multicloud vendors like us are there to do that. But I actually think that what you're going to see happening is that you're going to see the cloud vendors changing the existing services that they have to make them work on the other clouds. That's what's going to happen, I think. >> Yeah, and I think I would add that, first of all, I agree with you. I think that's going to be a forcing function. Because I think you're driving it. You guys are, in a way, one actor in driving this, because you're on the front end of this, and there are others, and there will be people following. But I think to me, if I'm a cloud vendor, I've got to differentiate. If I'm Adam Selipsky, I've got to say, "Hey, I've got to differentiate." So I don't want to get stuck in the middle, so to speak. Am I just going to innovate on the hardware, AKA infrastructure, or am I going to innovate at the higher level services? So what we're talking about here is the tale of two clouds within Amazon, for instance. So do I innovate on the silicon and get low level into the physics and squeeze performance out of the hardware and infrastructure? Or do I focus on ease of use at the top of the stack for the developers? So again, there's a channel of two clouds here. So I've got to ask you, how do they differentiate? Number one. And number two, I never heard a developer ever say, "I want to run my app or workload on the slower cloud." So I mean, you know, back when we had PCs you wanted to go, "I want the fastest processor." So again, you can have common level services, but where is that performance differentiation with the cloud? What do the clouds do, in your opinion? >> Yeah, look, I think it's pretty clear. I think that it's, this is, you know, no surprise. Probably 70% or so of the revenue is in the lower infrastructure layers: compute, storage, networking. And they have to win that. They have to be competitive there. As you said, you can say, oh, you know, I guess my CPUs are slower than the other cloud, but who cares, I have amazing other services which only work on my cloud, by the way, right? That's not going to be a winning recipe. So I think all three are laser focused on, we're going to have specialized hardware and the nuts and bolts of the infrastructure, we can do it better than the other clouds, for sure. And you can see lots of innovation happening there, right?
The Graviton chips, you know, we see huge price performance benefits in those chips. I mean it's real, right? It's basically a 20, 30% free lunch. You know, why wouldn't you, why wouldn't you go for it there? There's no downside. You know, there's no gotcha, no catch. But we see Azure doing the same thing now, they're also building their own chips, and we know that Google builds specialized machine learning chips, TPUs, Tensor Processing Units. So they're all focused on that. I don't think they can give that up, or focus on higher levels, if they had to pick bets. And I think actually in the next few years, most of us have to be more deliberate and calculated in the picks we do. I think in the last five years, most of us have said, "We'll do all of it." You know. >> Well you made a good bet with Spark, you know. Hadoop was a pretty obvious trend, everyone was on that bandwagon, and you guys picked a big bet with Spark. Look what happened with you guys. So again, I love this betting kind of concept, because as the world matures, growth slows down and shifts, and that next wave of value coming in, AKA customers, they're going to integrate with a new ecosystem. A new kind of partner network for AWS and the other clouds. But with AWS, they're going to need to nurture the next Databricks. They're going to need to still provide that SaaS, ISV like experience for, you know, a basic software hosting or some application. But I got to get your thoughts on this idea of multiple clouds, because if I'm a developer, the old days was, old days, within our decade, full stack developer- >> It was two years ago, yeah (John laughing) >> This is a decade ago, full stack, and then the cloud came in, you kind of had the half stack and then you would do some things. It seems like the clouds are trying to say, we want to be the full stack, or not. Or is it still going to be, you know, I'm an application, like a PC and a Mac, I'm going to write the same application for both hardware. I mean what's your take on this? Are they trying to do full stack and you see them more like- >> Absolutely. I mean look, of course they're going, they have, I mean they have over 300, I think Amazon has over 300 services, right? That's not just compute, storage, networking, it's the whole stack, right? But my key point is, I think they have to nail the core infrastructure, storage, compute, networking, because the three clouds that are there competing, they're formidable companies with formidable balance sheets, and it doesn't look like any of them is going to throw in the towel and say, we give up. So I think it's going to intensify. And given that they have 70% of revenue on that infrastructure layer, I think, if they have to pick their bets, I think they'll focus it on that infrastructure layer. I think the layer above, where they're also placing bets, they're doing that, the full stack, right? But there I think the demand will be, can you make that work on the other clouds? And therein lies an innovator's dilemma, because if I make it work on the other clouds, then I'm foregoing that 70% revenue of the infrastructure. I'm not getting it. The other cloud vendor is going to get it. So should I do that or not? Second, is the other cloud vendor going to be welcoming of me making my service work on their cloud if I am a competing cloud, right? And what kind of terms of service are they giving me? And am I going to really invest in doing that? 
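The Graviton point a little earlier in this exchange is a quantitative claim, so here is a rough back-of-the-envelope sketch of what a 20-30% price-performance gain does to a compute bill. The hourly prices and performance ratio below are illustrative assumptions, not figures from the conversation and not current AWS list prices or benchmarks.

```python
# Rough illustration of the "20-30% free lunch" price-performance argument.
# All prices and performance ratios here are hypothetical placeholders,
# not actual AWS list prices or benchmark results.

def effective_monthly_cost(hourly_price, relative_performance, hours=730):
    """Monthly cost normalized by how much work the instance gets done.

    relative_performance is throughput relative to a baseline of 1.0,
    so 1.05 means the instance finishes 5% more work per hour.
    """
    return (hourly_price * hours) / relative_performance

# Hypothetical x86 baseline vs. a Graviton (arm64) equivalent.
x86 = effective_monthly_cost(hourly_price=0.192, relative_performance=1.00)
graviton = effective_monthly_cost(hourly_price=0.154, relative_performance=1.05)

savings = (x86 - graviton) / x86
print(f"x86 baseline: ${x86:,.2f} per unit of work per month")
print(f"graviton:     ${graviton:,.2f} per unit of work per month")
print(f"effective price-performance gain: {savings:.0%}")
```

With placeholder numbers in that range the script lands at roughly a 24% effective saving, which is why the "no downside" framing works: the same workload simply costs a fifth to a third less to run.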
And I think right now we, you know, most, the vast, vast, vast majority of the services only work on the one cloud that, you know, they're built on. It doesn't work on others, but this will shift. >> Yeah, I think the innovator's dilemma is also a very good point. And I'd also add, it's an integrator's dilemma too, because now you talk about integration across services. So I believe that the super cloud movement's going to happen before Sky. And I think that's explained by what you guys did and what other companies are doing by representing advanced, what I call platform engineering: refactoring an existing market really fast. Time to value and, I mean capital, market cap is going to come really fast. I think there's going to be an opportunity for those to emerge that's going to set the table for global multicloud ultimately in the future. So I think you're going to start to see the same pattern of what you guys did: get in, leverage the hell out of it, use it, not in the way just to host, but to refactor and take down territory of markets. So number one, and then ultimately you get into, okay, I want to run some SLA across services, then there's a little bit more complication. I think that's where you guys put that beautiful paper out on Sky Computing. Okay, that makes sense. Now if you go to today's market, okay, I'm betting on Amazon because they're the best; this is the best cloud win scenario, not the most robust cloud. So if I'm a developer, I want the best. How do you look at their bet when it comes to data? Because now they've got machine learning, Swami's got a big keynote on Wednesday, I'm expecting to see a lot of AI and machine learning. I'm expecting to hear an end to end data story. This is what you do, so as a major partner, how do you view the moves Amazon's making and the bets they're making with data and machine learning and AI? >> First I want to take my hat off to AWS for being customer obsessed. So I know that if a customer wants Databricks, I know that AWS and their sellers will actually help us get that customer to deploy Databricks. Now which of the services is the customer going to pick? Are they going to pick ours, or the end to end that Swami is going to present on stage? Right? So that's the question we're getting. But I wanted to start by just saying, they're customer obsessed. So I think they're going to do the right thing for the customer, and I see the evidence of it again and again and again. So kudos to them. They're amazing at this actually. Ultimately our bet is, customers want this to be simple, integrated, okay? So yes, there are hundreds of services that AWS gives you that together give you the end to end experience, and they're very customizable. But if you want just something simply integrated that also works across the clouds, then I think there's a special place for Databricks. And I think the lake house approach that we have, which is completely integrated, we integrate data lakes with data warehouses, integrate workflows with machine learning, with real time processing, all of these in one platform. I think there's going to be tailwinds, because I think the most important thing that's going to happen in the next few years is that every customer is going to now be obsessed, given the recession and the environment we're in, with: how do I cut my costs? How do I cut my costs? 
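As a rough sketch of the lakehouse pattern Ali describes, one copy of data on the lake serving both warehouse-style SQL and machine learning feature prep, here is a minimal PySpark example. It assumes a Spark session that already has Delta Lake support configured (for instance via the delta-spark package); the table name, columns, and values are made up for illustration and are not from the conversation or from Databricks' product.

```python
# Minimal lakehouse-style sketch: the same table on the data lake backs both
# SQL analytics and ML feature preparation, with no separate warehouse copy.
# Assumes a SparkSession with Delta Lake already configured (e.g. delta-spark);
# the schema and values below are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("lakehouse-sketch").getOrCreate()

# Land raw events once, as an ACID Delta table on the lake.
events = spark.createDataFrame(
    [("acct-1", "login", 1.0), ("acct-1", "purchase", 42.0), ("acct-2", "login", 1.0)],
    ["account_id", "event_type", "amount"],
)
events.write.format("delta").mode("overwrite").saveAsTable("events")

# Warehouse-style SQL over that table...
spark.sql("""
    SELECT account_id, SUM(amount) AS revenue
    FROM events
    WHERE event_type = 'purchase'
    GROUP BY account_id
""").show()

# ...and ML feature prep reading the very same data, no copy into a separate warehouse.
features = (
    spark.table("events")
    .groupBy("account_id")
    .agg(F.count("*").alias("event_count"), F.sum("amount").alias("total_amount"))
)
features.show()
```

Both halves of this sketch run on one engine over one copy of the data, which is the mechanism behind the consolidation and cost-cutting argument that follows.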
And we learned this from the customers: they're adopting the lake house because they're thinking, instead of using five vendors or three vendors, I can simplify it down to one with you, and I can cut my cost. So I think that's going to be one of the main drivers of why people bet on the lake house, because it helps them lower their TCO, Total Cost of Ownership. And it's as simple as that. Like, I have three things right now. If I can get the same job done of those three with one, I'd rather do that. And by the way, if it's three or four across two clouds, and I can just use one and it just works across two clouds, I'm going to do that. Because my boss is telling me I need to cut my budget. >> (indistinct) (John laughing) >> Yeah, and I'd rather not do layoffs, and they're asking me to do more. How can I get smaller budgets, not lay people off, and do more? I have to cut, I have to optimize. What's happened in the last five, six years is there's been a huge sprawl of services and startups, you know, you know most of them, all these startups, all of them, all the activity, all the VC investments, well, those companies sold their software, right? Even if a startup didn't make it big, you know, they still sold their software to some vendors. So the ecosystem is now full of lots and lots and lots and lots of different software. And right now people are looking, how do I consolidate, how do I simplify, how do I cut my costs? >> And you guys have a great solution. You're also an arms dealer and an innovator. So I have to ask this question, because you're a professor of the industry as well as at Berkeley, you've seen a lot of the historical innovations. If you look at the moment we're in right now with the recession, okay, we had COVID, okay, it changed how people work, you know, people working at home, provisioning VLANs, all that (indistinct) infrastructure, okay, yeah, technology and the cloud helped. But we're in a recession. This is the first recession where Amazon and the other clouds, mainly Amazon Web Services, are a major piece in the economic puzzle. They were never around before; even in 2008 they were too small. They're now a major economic enabler, player, they're serving startups, enterprises, they have super clouds like you guys. They're a force, and their customers are cutting back, but they can also get faster. So agility is now part of the equation in the economic recovery. And I want to get your thoughts because you just brought that up. Customers can actually use the cloud and Databricks to actually get out of the recession, because no one's going to say, stop making profit, or make more profit. So yeah, cut costs, be more efficient, but agility's also like, let's drive more revenue. So in this digital transformation, if you take this to its conclusion, every company transforms; their company is the app. So their revenue is tied directly to their technology deployment. What's your reaction and comment to that? Because this is a new historical moment where cloud and scale and data actually could be configured in a way to actually change the nature of a business in such a short time. And with the recession looming, no one's got time to wait. >> Yeah, absolutely. Look, the secular tailwind in the market is that, you know, 10 years ago it was software is eating the world; now it's AI is going to eat all of software. So more and more we're going to have, wherever you have software, which is everywhere now because it's eaten the world, it's going to be eaten up by AI and data. 
You know, AI doesn't exist without data, so they're synonymous. You can't do machine learning if you don't have data. So yeah, you're going to see that everywhere, and that automation will help people simplify things and cut down the costs and automate more things. And in the cloud you can also do that by changing your CAPEX to OPEX. So instead of, I invest, you know, 10 million into a data center that I buy, and I'm going to have headcount to manage the software, why don't we change this to OPEX? And then they are going to optimize it. They want to lower the TCO, because, okay, it's in the cloud, but I do want the costs to be much lower than what they were in the previous years. The last five years, nobody cared. Who cares, you know, what it costs. Now there's a brave new world out there. Now it's like, no, it has to be efficient. So I think they're going to optimize it. And I think this lake house approach, which is an integration of the lakes and the warehouse, allows you to rationalize the two and simplify them. It allows you to basically rationalize away the data warehouse. So I think much faster we're going to see the, why do I need the data warehouse? If I can get the same thing done with the lake house for a fraction of the cost, that's what's going to happen. I think there's going to be focus on that simplification. But I agree with you. Ultimately everyone knows, everybody's a software company. Every company out there is a software company, and in the next 10 years, all of them are also going to be AI companies. So that is going to continue. >> (indistinct), dev's going to stop. And right sizing right now is a key economic forcing function. Final question for you, and I really appreciate you taking the time. This year at re:Invent, what's the bumper sticker in your mind around what's the most important industry dynamic, power dynamic, ecosystem dynamic that people should pay attention to as we move from the brave new world of, okay, I see cloud, cloud operations, I need to really make it structurally change my business. How do I, what's the most important story? What's the bumper sticker in your mind for re:Invent? >> Bumper sticker? Lake house 24. (John laughing) >> That's data (indistinct) bumper sticker. What's the- >> (indistinct) in the market. No, no, no, no. You know, it's, AWS talks about, you know, all of their services becoming a lake house, because they want the center of gravity to be S3, their lake. And they want all the services to directly work on that, so that's a lake house. We see Microsoft with Synapse, the modern, you know, the modern intelligent data platform. Same thing there. We're going to see the same thing; we're already seeing it on GCP with BigLake and so on. So I actually think it's the how do I reduce my costs, and the lake house integrates those two. So that's one of the main ways you can rationalize and simplify. You get in the lake house, which, the name itself is a (indistinct) of two things, right? Lake house: "lake" gives you the AI, "house" gives you the database, the data warehouse. So you get your AI and you get your data warehousing in one place at the lower cost. So for me, the bumper sticker is lake house, you know, 24. >> All right. Awesome, Ali, well thanks for the exclusive interview. Appreciate it and great to see you. Congratulations on your success, and I know you guys are going to be fine. >> Awesome. Thank you, John. It's always a pleasure. >> Always great to chat with you again. >> Likewise. >> You guys are a great team. 
We're big fans of what you guys have done. We think you're an example of what we call "super cloud." Which is getting the hype up and again your paper speaks to some of the innovation, which I agree with by the way. I think that that approach of not forcing standards is really smart. And I think that's absolutely correct, that having the market still innovate is going to be key. standards with- >> Yeah, I love it. We're big fans too, you know, you're doing awesome work. We'd love to continue the partnership. >> So, great, great Ali, thanks. >> Take care (outro music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Amazon | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
Ali Ghodsi | PERSON | 0.99+ |
Adam | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
2013 | DATE | 0.99+ |
ORGANIZATION | 0.99+ | |
Alibaba | ORGANIZATION | 0.99+ |
2008 | DATE | 0.99+ |
five vendors | QUANTITY | 0.99+ |
Adam Saleski | PERSON | 0.99+ |
five | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
Ali | PERSON | 0.99+ |
Databricks | ORGANIZATION | 0.99+ |
three vendors | QUANTITY | 0.99+ |
70% | QUANTITY | 0.99+ |
Wednesday | DATE | 0.99+ |
Excel | TITLE | 0.99+ |
38 billion | QUANTITY | 0.99+ |
four | QUANTITY | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
Word | TITLE | 0.99+ |
three | QUANTITY | 0.99+ |
two clouds | QUANTITY | 0.99+ |
Andy | PERSON | 0.99+ |
three clouds | QUANTITY | 0.99+ |
10 million | QUANTITY | 0.99+ |
PowerPoint | TITLE | 0.99+ |
one | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
twice | QUANTITY | 0.99+ |
Second | QUANTITY | 0.99+ |
over 300 services | QUANTITY | 0.99+ |
one game | QUANTITY | 0.99+ |
second cloud | QUANTITY | 0.99+ |
Snowflake | ORGANIZATION | 0.99+ |
Sky | ORGANIZATION | 0.99+ |
one word | QUANTITY | 0.99+ |
OPEX | ORGANIZATION | 0.99+ |
two things | QUANTITY | 0.98+ |
two years ago | DATE | 0.98+ |
Access | TITLE | 0.98+ |
over 300 | QUANTITY | 0.98+ |
six years | QUANTITY | 0.98+ |
over 70% | QUANTITY | 0.98+ |
five years ago | DATE | 0.98+ |
Satyen Sangani, Alation | Cube Conversation
(upbeat electronic music) >> As we've previously reported on theCUBE, Alation was an early pioneer in the data, data governance, and data management space, which is now rapidly evolving with the help of AI and machine learning, and to what's often referred to as data intelligence. Many companies, you know, they didn't make it through the last era of data. They failed to find the right product market fit or scale beyond their close circle of friends, or some ran out of money or got acquired. Alation is a company who did make it through, and has continued to attract investor support, even in a difficult market where tech IPOs have virtually dried up. Back with me on theCUBE is Satyen Sangani, who's the CEO and co-founder of Alation. Satyen, good to see you again. Thanks for coming on. >> Great to see you, Dave. It's always nice to be on theCUBE. >> Hey, so remind our audience why you started Alation 10 years ago, you and your co-founders, and what you're all about today. >> Alation's vision is to empower a curious and rational world, which sounds like a really, I think, presumptuous thing to say. But I think it's something that we really need, right? If you think about how people make decisions, often it's still with bias or ideology, and we think a lot of that happens because people are intimidated by data, or often don't know how to use it, or don't know how to think scientifically. And we, at the core, started Alation because we wanted to demystify data for people. We wanted to help people find the data they needed and allow them to use it and to understand it better. And all of those core consumption values around information were what led us to start the company, because we felt like the world of data could be a little easier to use and manage. >> Your founding premise was correct. I mean, just getting the technology to work was so hard, and as you well know, it takes seven to 10 years to actually start a company and get traction, let alone hit escape velocity. So as I said in the open, you continue to attract new investors. What's the funding news? Please share with us. >> So we're announcing that we raised 123 million from a cohort of investors led by Thoma Bravo, Sanabil Investments, and Costanoa. Databricks Ventures is a participant in that round, along with many of our other existing investors, which would also include Salesforce amongst others. And so, super excited to get the round done in this interesting market. We were able to do that because of the business performance, and it was an up round, and all of that's great and gives our employees and our customers the fuel they need to get the product that they want. >> So why the E Round? Explain that. >> So, we've been accelerating growth over the last five quarters since our Series D. We've basically increased our growth rate to almost double since the time we raised our last round. And from our perspective, the data intelligence market, which is the market that we think we have the opportunity to continue to be the leading platform in, is growing super fast. And when faced with the decision of decelerating growth in the face of what might be, what could be a challenging macroeconomic environment, and accelerating when we're seeing customers increase the size of their commitments, more new customers sign on than ever, our growth rates increasing. We and the board basically chose to take the latter approach and we sort of said, "Look, this is amazing time in this category. This is an amazing time in this company. 
It's time to invest and it's time to be aggressive when a lot of other folks are fearful, and a lot of other folks aren't seeing the traction that we're seeing in our business. >> Why do you think you're seeing that traction? I mean, we always talk about digital transformation, which was a buzzword before the pandemic, but now it's become a mandate. Is that why? Is it just more data related? Explain that if you could. >> I think there's this potentially, you know, somewhat confusing thing about data. There's a, maybe it's a dirty secret of data, which is there's the sense that if you have a lot of data, and you're using data really well, and you're producing a ton of data, that you might be good at managing it. And the reality of it is that as you have more people using data and as you produce more data, it just becomes more and more confusing because more and more people are trying to access the same information to answer different questions, and more workloads are produced, and more applications are produced. And so the idea of getting more data actually means that it's really hard to manage and it becomes harder to manage at scale. And so, what we're seeing is that with the advent of platforms like AWS, like Snowflake, like Databricks, and certainly with all of the different on-premise applications that are getting born every single day, we're just seeing that data is becoming really much more confusing, but being able to navigate it is so much more important because it's the lifeblood for any business to build differentiation and satisfy their customers. >> Yeah, so last time we talked, we talked about the volume and velocity bromide from the last decade, but we talked about value and how hard it is to get value. So that's really the issue is the need and desire for more organizations to get more value out of that data is actually a stronger tailwind than the headwinds that you're seeing in the macroeconomic environment. >> Right. Because I think in good times you need data in order to be able to capitalize off all the opportunities that you've got, but in bad times you've got to make hard choices. And when you need to make hard choices, how do you do that? Well, you've got to figure out what the right decisions are, and the best way to do that is to have a lot of data and a lot of people who understand that data to be able to capitalize on it and make better insights and better decisions. And so, you don't see that just, by the way, theoretically. In the last quarter, we've seen three companies that have had cost reductions and force reductions where they are increasing at the same time their investment with Alation. And it's because they need the insight in order to be able to navigate these challenging times. >> Well, congratulations on the up round. That's awesome. I got to ask you, what was it like doing a raise in this environment? I mean, sellers are in control in the public markets. Late stage SaaS companies, that had to be challenging. How did you go about this? What were the investor conversations like? >> It certainly was a challenging fundraise. And I would say even though our business is doing way better and we were able to attract evaluation that would put us in the top quartile of public companies were we trading as a public company, which we aspire to do at some point, it was challenging because there was a whole slew of investors who were basically sitting on their hands. 
I had one investor conversation where an investor said to me, "Look, we think you're a great business, but we have companies that are able to give us 2.5 liquidation preference, and that gives us 70%, 75% of our return day one. So we're just going to go do those companies that may have been previously overvalued, but are willing to give us these terms because they want to keep their face valuation." Other investors said, "Look, we'd really rather that you ran a lower growth plan but with a potentially lower burn plan. But we think the upside is really something that you can capitalize on." From our perspective, we were pretty clear about the plan that we wanted to run and didn't want to necessarily totally accommodate to the fashion of the current market. We've always run a historically efficient business. The company has not burned as much as many of the data peers that we've seen to grow to get to our scale, but our general view was, look, we've got a really clear plan. The board, and the company, and the management team know exactly what we'd like to do. We've got customers that know exactly what they want from us, so we really just have to go execute. And the luck is that we found investors who were willing to do that. Many investors, and we picked one in Thoma Bravo that we felt could be the best partner for the coming phase of the company. >> So I love that because you see the opportunity, you've had a very efficient business. You're punching above your weight in terms of your use of capital. So you don't want to veer off. You know your business better than anybody. You don't want to veer off that plan. The board's very supportive. I could see you, you hear it all the time, we're going to dial down the growth, dial up the EBIT, and that's what markets want today. So congratulations on sticking to your beliefs and your vision. How do you plan to use the funds? >> We are planning to invest in sales and marketing globally. So we've expanded in Asia-Pacific over the most recent year, and also in (indistinct) and we plan to continue to do that. We're going to continue to expand in public sector with fed. And so, you would see us basically just increase our presence globally in all of the markets that you might expect. In particular, you're going to see us lean in heavily to many of the partners Databricks invested alongside this particular round. But you would have seen previously that Snowflake was a fabulous, and has been a fabulous partner of ours, and we are going to continue to invest alongside these leading data platforms. What you would also expect to see from us, though, is a lot of investment in R&D. This is a really nascent category. It's a really, really hard space. People would call it a crowded market because there are a lot of players. I think from our perspective, our aspirations to be the leading data intelligence platform, platform being a really key word there because it's not like we can do it all ourselves. We have a lot of different use cases in data intelligence, things like data quality and data observability, things like data privacy and data access control. And we have some really great partners that we walk alongside in order to make the end customer successful. I think a lot of folks in this market think, "Oh, we can just be master of all. Sort of jack of all trades, master of none." That is not our strategy. 
Our strategy is to really focus on getting all our customers super successful, really focused on engagement and adoption, because the really hard thing with these platforms is to get people to use them, and that is not a problem Alation has had historically. >> You know, it's really interesting, Satyen, you talk about, I mean, Thoma Bravo, obviously, very savvy investors, deep pockets, they've been making some moves. Certainly we've seen that in cyber security and data. So you got some quasi patient capital there. But the interesting thing to me is that the previous Snowflake investment last year and now Databricks, a lot of people think of them as sort of battling it out, but my view is it's not a zero sum game, meaning, yes, there's overlap, but they're filling a lot of gaps in the marketplace, and I think there's room, there's so much opportunity, and there's such a large tam, that partnering with both is a really, really smart idea. I'll give you the last word. Going forward, what can we expect from Elation? >> Well, I think that's absolutely true, and I think that the biggest boogeyman with all of this is that people don't use data. And so, our ability to partner together is really just a function of making customers successful and continuing to do that. And if we can do that, all companies will grow. We ended up ultimately partnering with Databricks and deepening our partnership, really, 'cause we had one already, primarily because of the fact that we have over a hundred customers that are jointly using the products today. And so, it certainly made sense for us to continue to make that experience better 'cause customers are demanding it. From my perspective, we just have this massive opportunity. We have the ability and the insight to run a really efficient, very, very high growth business at scale. And we have this tremendous ability to get so many more companies and people to use data much more efficiently and much better. Which broadly is, I think, a way in which we can impact the world in a really positive way. And so that's a once in a lifetime opportunity for me and for the team. And we're just going to get after it. >> Well, it's been fun watching Alation over the years. I remember mid last decade talking about this thing called data lakes and how they became data swamps, and you were helping clean that up. And now, the next 10 years, and data's not going to be like the last, you know, simplifying things and and really democratizing data is the big theme. Satyen, thanks for making time to come back on theCUBE, and congratulations on the raise. >> Thank you, Dave. It's always great to see you. >> And thank you for watching this conversation with the CEO in theCUBE, your leader in enterprise and emerging tech coverage. (gentle electronic music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Alation | ORGANIZATION | 0.99+ |
Satyen | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
seven | QUANTITY | 0.99+ |
70% | QUANTITY | 0.99+ |
75% | QUANTITY | 0.99+ |
Databricks | ORGANIZATION | 0.99+ |
Sanabil Investments | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
Satyen Sangani | PERSON | 0.99+ |
Databricks Ventures | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
10 years ago | DATE | 0.99+ |
Costanoa | ORGANIZATION | 0.99+ |
123 million | QUANTITY | 0.99+ |
last quarter | DATE | 0.99+ |
three companies | QUANTITY | 0.98+ |
Snowflake | ORGANIZATION | 0.98+ |
10 years | QUANTITY | 0.98+ |
mid last decade | DATE | 0.98+ |
over a hundred customers | QUANTITY | 0.98+ |
one | QUANTITY | 0.97+ |
today | DATE | 0.97+ |
one investor | QUANTITY | 0.96+ |
AWS | ORGANIZATION | 0.94+ |
pandemic | EVENT | 0.93+ |
Thoma Bravo | ORGANIZATION | 0.91+ |
fed | ORGANIZATION | 0.9+ |
single day | QUANTITY | 0.87+ |
last decade | DATE | 0.87+ |
Series D. | OTHER | 0.87+ |
next 10 years | DATE | 0.85+ |
Alation | PERSON | 0.8+ |
Elation | ORGANIZATION | 0.8+ |
Asia-Pacific | LOCATION | 0.79+ |
double | QUANTITY | 0.78+ |
last five quarters | DATE | 0.76+ |
2.5 liquidation | QUANTITY | 0.75+ |
theCUBE | ORGANIZATION | 0.74+ |
Salesforce | ORGANIZATION | 0.73+ |
recent year | DATE | 0.72+ |
Thoma Bravo | PERSON | 0.69+ |
Snowflake | TITLE | 0.66+ |
t | DATE | 0.65+ |
Cube | ORGANIZATION | 0.53+ |
more | QUANTITY | 0.5+ |
data | QUANTITY | 0.49+ |
Amit Eyal Govrin, Kubiya.ai | Cube Conversation
(upbeat music) >> Hello everyone, welcome to this special Cube conversation here in Palo Alto, California. I'm John Furrier, host of theCUBE in theCUBE Studios. We've got a special video here. We love when we have startups that are launching. It's an exclusive video of a hot startup that's launching. Got great reviews so far. You know, word on the street is, they got something different and unique. We're going to' dig into it. Amit Govrin who's the CEO and co-founder of Kubiya, which stands for Cube in Hebrew, and they're headquartered in Bay Area and in Tel Aviv. Amit, congratulations on the startup launch and thanks for coming in and talk to us in theCUBE >> Thank you, John, very nice to be here. >> So, first of all, a little, 'cause we love the Cube, 'cause theCUBE's kind of an open brand. We've never seen the Cube in Hebrew, so is that true? Kubiya is? >> Kubiya literally means cube. You know, clearly there's some additional meanings that we can discuss. Obviously we're also launching a KubCon, so there's a dual meaning to this event. >> KubCon, not to be confused with CubeCon. Which is an event we might have someday and compete. No, I'm only kidding, good stuff. I want to get into the startup because I'm intrigued by your story. One, you know, conversational AI's been around, been a category. We've seen chat bots be all the rage and you know, I kind of don't mind chat bots on some sites. I can interact with some, you know, form based knowledge graph, whatever, knowledge database and get basic stuff self served. So I can see that, but it never really scaled or took off. And now with Cloud Native kind of going to the next level, we're starting to see a lot more open source and a lot more automation, in what I call AI as code or you know, AI as a service, machine learning, developer focused action. I think you guys might have an answer there. So if you don't mind, could you take a minute to explain what you guys are doing, what's different about Kubiya, what's happening? >> Certainly. So thank you for that. Kubiya is what we would consider the first, or one of the first, advanced virtual assitants with a domain specific expertise in DevOps. So, we respect all of the DevOps concepts, GitOps, workflow automation, of those categories you've mentioned, but also the added value of the conversational AI. That's really one of the few elements that we can really bring to the table to extract what we call intent based operations. And we can get into what that means in a little bit. I'll save that maybe for the next question. >> So the market you're going after is kind of, it's, I love to hear starters when they, they don't have a Gartner Magic quadrant, they can fit nicely, it means they're onto something. What is the market you're going after? Because you're seeing a lot of developers driving a lot of the key successes in DevOps. DevOps has evolved to the point where, and DevSecOps, where developers are driving the change. And so having something that's developer focused is key. Are you guys targeting the developers, IT buyers, cloud architects? Who are you looking to serve with this new opportunity? >> So essentially self-service in the world of DevOps, the end user typically would be a developer, but not only, and obviously the operators, those are the folks that we're actually looking to help augment a lot of their efforts, a lot of the toil that they're experiencing in a day to day. So there's subcategories within that. 
We can talk about the different internal developer tools, or platforms, shared services platforms, service catalogs are tangential categories that this kind of comes on. But on top of that, we're adding the element of conversational AI. Which, as I mentioned, that's really the "got you". >> I think you're starting to see a lot of autonomous stuff going on, autonomous pen testing. There's a company out there doing I've seen autonomous AI. Automation is a big theme of it. And I got to ask, are you guys on the business side purely in the cloud? Are you born in the cloud, is it a cloud service? What's the product choice there? It's a service, right? >> Software is a service. We have the classic, Multi-Tenancy SAAS, but we also have a hybrid SAAS solution, which allows our customers to run workflows using remote runners, essentially hosted at their own location. >> So primary cloud, but you're agnostic on where they could consume, how they want to' consume the product. >> Technology agnostic. >> Okay, so that's cool. So let's get into the problem you're solving. So take me through, this will drive a lot of value here, when you guys did the company, what problems did you hone in on and what are you guys seeing as the core problem that you solve? >> So we, this is a unique, I don't know how unique, but this is a interesting proposition because I come from the business side, so call it the top down. I've been in enterprise sales, I've been in a CRO, VP sales hat. My co-founder comes from the bottom up, right? He ran DevOps teams and SRE teams in his previous company. That's actually what he did. So, we met each other halfway, essentially with me seeing a lot of these problems of self-service not being so self-service after all, platforms hitting walls with adoption. And he actually created his own self-service platform, within his last company, to address his own personal pains. So we essentially kind of met with both perspectives. >> So you're absolutely hardcore on self-service. >> We're enabling self-service. >> And that basically is what everybody wants. I mean, the developers want self-service. I mean, that's kind of like, you know, that's the nirvana. So take us through what you guys are offering, give us an example of use cases and who's buying your product, why, and take us through that whole piece. >> Do you mind if I take a step back and say why we believe self-service has somewhat failed or not gotten off. >> Yeah, absolutely. >> So look, this is essentially how we're looking at it. All the analysts and the industry insiders are talking about self-service platforms as being what's going to' remove the dependency of the operator in the loop the entire time, right? Because the operator, that scarce resource, it's hard to hire, hard to train, hard to retain those folks, Developers are obviously dependent on them for productivity. So the operators in this case could be a DevOps, could be a SecOps, it could be a platform engineer. It comes in different flavors. But the common denominator, somebody needs an access request, provisioning a new environment, you name it, right? They go to somebody, that person is operator. The operator typically has a few things on their plate. It's not just attending and babysitting platforms, but it's also innovating, spinning up, and scaling services. So they see this typically as kind of, we don't really want to be here, we're going to' go and do this because we're on call. We have to take it on a chin, if you may, for this. 
>> It's their child, they got to' do it. >> Right, but it's KTLOs, right, keep the lights on, this is maintenance of a platform. It's not what they're born and bred to do, which is innovate. That's essentially what we're seeing, we're seeing that a lot of these platforms, once they finally hit the point of maturity, they're rolled out to the team. People come to serve themselves in platform, and low and behold, it's not as self-service as it may seem. >> We've seen that certainly with Kubernetes adoption being, I won't say slow, it's been fast, but it's been good. But I think this is kind of the promise of what SRE was supposed to be. You know, do it once and then babysit in the sense of it's working and automated. Nothing's broken yet. Don't call me unless you need something, I see that. So the question, you're trying to make it easier then, you're trying to free up the talent. >> Talent to operate and have essentially a human, like in the loop, essentially augment that person and give the end users all of the answers they require, as if they're talking to a person. >> I mean it's basically, you're taking the virtual assistant concept, or chat bot, to a level of expertise where there's intelligence, jargon, experience into the workflows that's known. Not just talking to chat bot, get a support number to rebook a hotel room. >> We're converting operational workflows into conversations. >> Give me an example, take me through an example. >> Sure, let's take a simple example. I mean, not everyone provisions EC2's with two days (indistinct). But let's say you want to go and provision new EC2 instances, okay? If you wanted to do it, you could go and talk to the assistant and say, "I want to spin up a new server". If it was a human in the loop, they would ask you the following questions: what type of environment? what are we attributing this to? what type of instance? security groups, machine images, you name it. So, these are the questions that typically somebody needs to be armed with before they can go and provision themselves, serve themselves. Now the problem is users don't always have these questions. So imagine the following scenario. Somebody comes in, they're in Jira ticket queue, they finally, their turn is up and the next question they don't have the answer to. So now they have to go and tap on a friend, or they have to go essentially and get that answer. By the time they get back, they lost their turn in queue. And then that happens again. So, they lose a context, they lose essentially the momentum. And a simple access request, or a simple provision request, can easily become a couple days of ping pong back and forth. This won't happen with the virtual assistant. >> You know, I think, you know, and you mentioned chat bots, but also RPA is out there, you've seen a lot of that growth. One of the hard things, and you brought this up, I want to get your reaction to, is contextualizing the workflow. It might not be apparent, but the answer might be there, it disrupts the entire experience at that point. RPA and chat bots don't have that contextualization. Is that what you guys do differently? Is that the unique flavor here? Is that difference between current chat bots and RPA? >> The way we see it, I alluded to the intent based operations. Let me give a tangible experience. Even not from our own world, this will be easy. It's a bidirectional feedback loop 'cause that's actually what feeds the context and the intent. We all know Waze, right, in the world of navigation. 
They didn't bring navigation systems to the world. What they did is they took the concept of navigation systems that are typically satellite guided and said it's not just enough to drive down the 280, which typically have no traffic, right, and to come across traffic and say, oh, why didn't my satellite pick that up? So they said, have the end users, the end nodes, feed that direction back, that feedback, right. There has to be a bidirectional feedback loop that the end nodes help educate the system, make the system be better, more customized. And that's essentially what we're allowing the end users. So the maintenance of the system isn't entirely in the hands of the operators, right? 'Cause that's the part that they dread. And the maintenance of the system is democratized across all the users that they can teach the system, give input to the system, hone in the system in order to make it more of the DNA of the organization. >> You and I were talking before you came on this camera interview, you said playfully that the Siri for DevOps, which kind of implies, hey infrastructure, do something for me. You know, we all know Siri, so we get that. So that kind of illustrates kind of where the direction is. Explain why you say that, what does that mean? Is that like a NorthStar vision that you guys are approaching? You want to' have a state where everything's automated in it's conversational deployments, that kind of thing. And take us through why that Siri for DevOps is. >> I think it helps anchor people to what a virtual assistant is. Because when you hear virtual assistant, that can mean any one of various connotations. So the Siri is actually a conversational assistant, but it's not necessarily a virtual assistant. So what we're saying is we're anchoring people to that thought and saying, we're actually allowing it to be operational, turning complex operations into simple conversations. >> I mean basically they take the automate with voice Google search or a query, what's the score of the game? And, it also, and talking to the guy who invented Siri, I actually interviewed on theCUBE, it's a learning system. It actually learns as it gets more usage, it learns. How do you guys see that evolving in DevOps? There's a lot of jargon in DevOps, a lot of configurations, a lot of different use cases, a lot of new technologies. What's the secret sauce behind what you guys do? Is it the conversational AI, is it the machine learning, is it the data, is it the model? Take us through the secret sauce. >> In fact, it's all the above. And I don't think we're bringing any one element to the table that hasn't been explored before, hasn't been done. It's a recipe, right? You give two people the same ingredients, they can have complete different results in terms of what they come out with. We, because of our domain expertise in DevOps, because of our familiarity with developer workflows with operators, we know how to give a very well suited recipe. Five course meal, hopefully with Michelin stars as part of that. So a few things, maybe a few of the secret sauce element, conversational AI, the ability to essentially go and extract the intent of the user, so that if we're missing context, the system is smart enough to go and to get that feedback and to essentially feed itself into that model. >> Someone might say, hey, you know, conversational AI, that was yesterday's trend, it never happened. It was kind of weak, chat bots were lame. 
What's different now and with you guys, and the market, that makes a redo or a second shot at this, a second bite at the apple, as they say. What do you guys see? 'Cause you know, I would argue that it's, you know, it's still early, real early. >> Certainly. >> How do you guys view that? How would you handle that objection? >> It's a fair question. I wasn't around the first time around to tell you what didn't work. I'm not afraid to share that the feedback that we're getting is phenomenal. People understand that we're actually customizing the workflows, the intent based operations to really help hone in on the dark spots. We call it last mile, you know, bottlenecks. And that's really where we're helping. We're helping in a way tribalize internal knowledge that typically hasn't been documented because it's painful enough to where people care about it but not painful enough to where you're going to' go and sit down an entire day and document it. And that's essentially what the virtual assistant can do. It can go and get into those crevices and help document, and operationalize all of those toils. And into workflows. >> Yeah, I mean some will call it grunt work, or low level work. And I think the automation is interesting. I think we're seeing this in a lot of these high scale situations where the talented hard to hire person is hired to do, say, things that were hard to do, but now harder things are coming around the corner. So, you know, serverless is great and all this is good, but it doesn't make the complexity go away. As these inflection points continue to drive more scale, the complexity kind of grows, but at the same time so is the ability to abstract away the complexity. So you're starting to see the smart, hired guns move to higher, bigger problems. And the automation seems to take the low level kind of like capabilities or the toil, or the grunt work, or the low level tasks that, you know, you don't want a high salaried person doing. Or I mean it's not so much that they don't want to' do it, they'll take one for the team, as you said, or take it on the chin, but there's other things to work on. >> I want to add one more thing, 'cause this goes into essentially what you just said. Think about it's not the virtual system, what it gives you is not just the intent and that's one element of it, is the ability to carry your operations with you to the place where you're not breaking your workflows, you're actually comfortable operating. So the virtual assistant lives inside of a command line interface, it lives inside of chat like Slack, and Teams, and Mattermost, and so forth. It also lives within a low-code editor. So we're not forcing anyone to use uncomfortable language or operations if they're not comfortable with. It's almost like Siri, it travels in your mobile phone, it's on your laptop, it's with you everywhere. >> It makes total sense. And the reason why I like this, and I want to' get your reaction on this because we've done a lot of interviews with DevOps, we've met at every CubeCon since it started, and Kubernetes kind of highlights the value of the containers at the orchestration level. But what's really going on is the DevOps developers, and the CICD pipeline, with infrastructure's code, they're basically have a infrastructure configuration at their disposal all the time. And all the ops challenges have been around that, the repetitive mundane tasks that most people do. There's like six or seven main use cases in DevOps. So the guardrails just need to be set. 
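To make the EC2 walkthrough from earlier in this conversation concrete, here is a rough Python sketch of what an intent-based, guardrailed provisioning flow can look like: the assistant collects the parameters an operator would normally ask for, enforces an allow-list, and only then calls the cloud API. The prompts, allow-list, AMI ID, and security group below are hypothetical placeholders, and this illustrates the general pattern rather than Kubiya's actual implementation.

```python
# Sketch of a guardrailed, conversational "spin up a server" workflow.
# Not Kubiya's implementation; the guardrails, AMI, and security group IDs
# are placeholders for illustration only.
import boto3

GUARDRAILS = {
    "instance_type": {"t3.small", "t3.medium", "m6g.large"},  # operator-approved types
    "environment": {"dev", "staging"},                        # prod stays gated
}

REQUIRED_SLOTS = ["environment", "instance_type", "cost_center"]

def collect_intent(answers):
    """Ask only for the slots that are still missing, like a human in the loop would."""
    for slot in REQUIRED_SLOTS:
        if slot not in answers:
            answers[slot] = input(f"What {slot.replace('_', ' ')} should this use? ").strip()
    return answers

def provision(answers):
    # Enforce the operator's guardrails before touching the cloud API.
    for slot, allowed in GUARDRAILS.items():
        if answers[slot] not in allowed:
            raise ValueError(f"{answers[slot]!r} is not an approved {slot} for self-service")

    ec2 = boto3.client("ec2")
    return ec2.run_instances(
        ImageId="ami-0123456789abcdef0",            # placeholder AMI
        InstanceType=answers["instance_type"],
        MinCount=1,
        MaxCount=1,
        SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder security group
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [
                {"Key": "environment", "Value": answers["environment"]},
                {"Key": "cost_center", "Value": answers["cost_center"]},
            ],
        }],
    )

if __name__ == "__main__":
    request = collect_intent({"environment": "dev"})  # the user already said "dev"
    print(provision(request))
```

The guardrails dictionary is where the operator keeps control, only the approved machine types and environments are offered for self-service, while the slot-filling loop is what turns the multi-day Jira ping-pong described earlier into one short conversation.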
So it sounds like you guys are going down the road of saying, hey here's the use cases you can bounce around these use cases all day long. And just keep doing your jobs cause they're bolting on infrastructure to every application. >> There's one more element to this that we haven't really touched on. It's not just workflows and use cases, but it's also knowledge, right? Tribal knowledge, like you asked me for an example. You can type or talk to the assistant and ask, "How much am I spending on AWS, on US East 1, on so and so customer environment last week?", and it will know how to give you that information. >> Can I ask, should I buy a reserve instances or not? Can I ask that question? 'Cause there's always good trade offs between buying the reserve instances. I mean that's kind of the thing that. >> This is where our ecosystem actually comes in handy because we're not necessarily going to' go down every single domain and try to be the experts in here. We can tap into the partnerships, API, we have full extensibility in API and the software development kit that goes into. >> It's interesting, opinionated and declarative are buzzwords in developer language. So you started to get into this editorial thing. So I can bring up an example. Hey cube, implement the best service mesh. What answer does it give you? 'Cause there's different choices. >> Well this is actually where the operator, there's clearly guard rails. Like you can go and say, I want to' spin up a machine, and it will give you all of the machines on AWS. Doesn't mean you have to get the X one, that's good for a SAP environment. You could go and have guardrails in place where only the ones that are relevant to your team, ones that have resources and budgetary, you know, guidelines can be. So, the operator still has all the control. >> It was kind of tongue in cheek around the editorialized, but actually the answer seems to be as you're saying, whatever the customer decided their service mesh is. So I think this is where it gets into as an assistant to architecting and operating, that seems to be the real value. >> Now code snippets is a different story because that goes on to the web, that goes onto stock overflow, and that's actually one of the things. So inside the CLI, you could actually go and ask for code snippets and we could actually go and populate that, it's a smart CLI. So that's actually one of the things that are an added value of that. >> I was saying to a friend and we were talking about open source and how when I grew up, there was no open source. If you're a developer now, I mean there's so much code, it's not so much coding anymore as it is connecting and integrating. >> Certainly. >> And writing glue layers, if you will. I mean there's still code, but it's not, you don't have to build it from scratch. There's so much code out there. This low-code notion of a smart system is interesting 'cause it's very matrix like. It can build its own code. >> Yes, but I'm also a little wary with low-code and no code. I think part of the problem is we're so constantly focused on categories and categorizing ourselves, and different categories take on a life of their own. So low-code no code is not necessarily, even though we have the low-code editor, we're not necessarily considering ourselves low-code. >> Serverless, no code, low-code. I was so thrown on a term the other day, architecture-less. As a joke, no we don't need architecture. >> There's a use case around that by the way, yeah, we do. 
Show me my AWS architecture and it will build the architecture diagram for you. >> Again, serverless architecture, this is all part of infrastructure as code. At the end of the day, the developer has infrastructure as code. Again, how they deploy it is the neuron. That's what we've been striving for. >> But infrastructure as code, you can destroy it, you know, with Terraform you can go and create one. It's not necessarily going to operate it for you. That's kind of where this comes in on top of that. So it's really complementary to infrastructure as code. >> So final question, before we get into the origination story: data and security are two hot areas we're seeing fill the IT gap that has moved into the developer role. IT is essentially provisioned by developers now, but the ops side shifted to large scale SRE-like environments. Security and data are critical. What's your opinion on those two things? >> I agree. Do you want me to give you the normal "data has gravity" answer? >> So you agree that IT has kind of moved into the developer realm, but the new IT is data ops and security ops, basically. >> A hundred percent, and the lines are so blurred, like who's what in today's world. I mean, I can tell you, I have customers who call themselves five different roles in the same day. So, you know, at the end of the day I call 'em operators, 'cause I don't want to offend anybody, because that's just the way it is. >> Architectural-less, we're going to come back to that. Well, I know we're going to see you at KubeCon. >> Yes. >> We should catch up there and talk more. I'm looking forward to seeing how you guys get the feedback from the marketplace. It should be interesting to hear. The curious question I have for you is, what was the origination story? Why did you guys come together? Was it a shared problem? Was it a big market opportunity? Was it an itch you guys were scratching? Did you feel like you needed to come together and start this company? What was the real vision behind the origination? Take a minute to explain the story. >> No, absolutely. So I've been living in Palo Alto for the last couple years. Previously, also a founder. So, you know, from my perspective, I always saw myself getting back in the game. I spent a few years at AWS essentially managing partnerships for tier one DevOps partners, you know, all of the known players, some of them public, some of them not. And really the itch was there, right. I saw what everyone's doing. I started seeing consistency in the pains that I was hearing back, in terms of what hasn't been solved. So I already had an opinion on where I wanted to go. And when I was visiting Israel with the family, I was introduced by a mutual friend to Shaked, Shaked Askayo, my co-founder and CTO. Amazing guy, unbelievable technologist, probably one of the most, you know, impressive folks I've had a chance to work with. And he actually solved a very similar problem, you know, in his own way, in a previous company, BlueVine, a FinTech company where he was head of SRE, having to, essentially, oversee 200 developers with a very small team. The ratio was incongruent to what the SRE guidelines would tell you. >> That's more than a 10x developer ratio. >> Oh, absolutely. Sure enough. And just imagine it's four different time zones. He finishes the day shift and you already have the US team coming in, asking questions. He said, this is kind of a, >> Got to clone himself, basically. >> Well, yes.
He essentially said to me, I had no day, I had no life, but I had Corona, I had COVID, which meant I could work from home. And I essentially programmed myself in the form of a bot. Essentially, when people came to him, he said, "Don't talk to me, talk to the bot." Now that was a different generation. >> Just a trivial example, but the idea was to automate the same queries all the time. There's an answer for that, go here. And that's the benefit of it. >> Yes, so he was able to see how easy it was to solve, I mean, how effective it was, solving 70% of the toil in his organization. Scaling his team, he froze the headcount and the developer team kept on going. So that meant that he was doing something right. >> When you have a problem and you need to solve it, the creativity comes out of the woodwork, you know; necessity is the mother of invention. So final question for you: what's next? Got the launch. What do you guys hope to do over the next six months to a year? Hiring? Put a plug in for the company. What are you guys looking to do? Take a minute to share the future vision and get a plug in. >> A hundred percent. So, Kubiya, as you can imagine, announcing ourselves at KubeCon, so in a couple weeks. Opening the gates towards the public beta and GA in the next couple months. Essentially working with dozens of customers, Aston Martin and others. We have quite a few, our website's full of quotes, you can go ahead. But effectively we're looking to go and bring the next generation of operators, who value their time, who value, essentially, the tribal knowledge that travels between organizations and that could be essentially shared. >> How many customers do you guys have in your pre-launch? >> It's above a dozen. Without saying more, because we're actually looking to onboard 10 more next week, so that's just an understatement. It changes from day to day. >> What's the number one thing people are saying about you? >> You got that right. I know, I'm trying to be a little bit more, you know. >> It's okay, you can be cocky, startups are good. But I mean, they're obviously using the product and you're getting good feedback. Saving time, are they saying this is a dream product? Got it right, what are some of the things? >> I think anybody who doesn't feel the pain won't know, but the folks who are in the trenches, or feeling the pain, or experiencing this toil, who know what this means, they said, "You're doing this different, you're doing this right. You architected it right. You know exactly what the developer workflows are," you know, where all the areas, you know, where all the skeletons are hidden within that. And you're attending to that. So we're happy about that. >> Everybody wants to clone themselves, again, the tribal knowledge. I think this is a great example of where we see the world going. Make things autonomous, operationally automated, for the use cases you know are rock solid. Why wouldn't you just deploy? >> Exactly, and we have a very generous free tier. People can, you know, there's a plugin, you can sign up for free until the end of the year. We have a generous free tier. Yeah, a free forever tier, as well. So we're looking for people to try us out and to give us feedback. >> I think the self-service, I think the point is, we've talked about it on theCUBE at our events, everyone says the same thing. Every developer wants self-service, period. Full stop, done.
>> What they don't say is they need somebody to help them babysit, to make sure they're doing it right. >> The old dashboard, green, yellow, red. >> I know it's an analogy that's not related, but have you been to Whole Foods? Have you gone through their self-service line? That's the beauty of it, right? Having someone in the loop helping you out the whole time. You don't get confused; if something's not working, someone's helping you out. That's what people want. They want a human in the loop, or a human-like presence in the loop. We're giving that next best thing. >> It's really the ratio, it's scale. It's a force multiplier, for sure. Amit, thanks for coming on, congratulations. >> Thank you so much. >> See you at KubeCon. Thanks for coming in, sharing the story. >> KubiyaCon. >> CubeCon. Cube in Hebrew, Kubiya. Founder, co-founder and CEO here, sharing the story and the launch. Conversational AI for DevOps, the Siri of DevOps, really kind of changing the game, bringing efficiency, solving a lot of the pain points of large scale infrastructure. This is theCUBE, CUBE conversation, I'm John Furrier, thanks for watching. (upbeat electronic music)
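The story in the conversation above, an overloaded SRE "programming himself" as a bot so repetitive questions stop reaching a human, is easy to sketch in miniature. The snippet below is purely illustrative and is not Kubiya's implementation: the runbook entries and keywords are hypothetical, and a real conversational AI product would use intent recognition and live integrations rather than a keyword lookup.

```python
# Illustrative sketch only: a tiny rule-based "ops bot" in the spirit of the
# "programmed myself in the form of a bot" story above. All entries here are
# hypothetical; this is a keyword matcher, not a conversational AI system.

RUNBOOK = {
    "deploy": "Use the standard pipeline: open a PR, wait for CI, then run the release job.",
    "vpn": "VPN issues: re-authenticate with SSO, then restart the client.",
    "access": "Request access through the self-service portal; dev-resource approvals are automatic.",
}

def answer(question: str) -> str:
    """Return a canned runbook answer for a repetitive question, or escalate."""
    q = question.lower()
    for keyword, reply in RUNBOOK.items():
        if keyword in q:
            return reply
    return "No runbook entry found; escalating to the on-call engineer."

if __name__ == "__main__":
    print(answer("How do I deploy my service to staging?"))
    print(answer("Why can't I reach the database?"))
```

Even this crude version shows why the approach scales: canned answers absorb the repeated questions, and only genuinely new problems reach a person, which is the "70% of the toil" effect described above.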
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
70% | QUANTITY | 0.99+ |
Siri | TITLE | 0.99+ |
six | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Amit | PERSON | 0.99+ |
Tel Aviv | LOCATION | 0.99+ |
Amit Govrin | PERSON | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Amit Eyal Govrin | PERSON | 0.99+ |
two days | QUANTITY | 0.99+ |
10 | QUANTITY | 0.99+ |
200 developers | QUANTITY | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
Bay Area | LOCATION | 0.99+ |
two people | QUANTITY | 0.99+ |
Israel | LOCATION | 0.99+ |
Aston Martin | ORGANIZATION | 0.99+ |
last week | DATE | 0.99+ |
Whole Foods | ORGANIZATION | 0.99+ |
two things | QUANTITY | 0.99+ |
next week | DATE | 0.99+ |
first | QUANTITY | 0.99+ |
Kubiya | ORGANIZATION | 0.99+ |
SRE | ORGANIZATION | 0.99+ |
KubeCon | EVENT | 0.99+ |
BlueVine | ORGANIZATION | 0.99+ |
EC2 | TITLE | 0.99+ |
DevOps | TITLE | 0.98+ |
five different roles | QUANTITY | 0.98+ |
Five course | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
Kubiya | PERSON | 0.98+ |
first time | QUANTITY | 0.97+ |
KubiyaCon | EVENT | 0.97+ |
second shot | QUANTITY | 0.96+ |
yesterday | DATE | 0.96+ |
hundred percent | QUANTITY | 0.96+ |
one element | QUANTITY | 0.96+ |
KubCon | EVENT | 0.96+ |
one more element | QUANTITY | 0.96+ |
second bite | QUANTITY | 0.95+ |
both perspectives | QUANTITY | 0.95+ |
Gartner | ORGANIZATION | 0.95+ |
ORGANIZATION | 0.95+ | |
Hebrew | OTHER | 0.94+ |
NorthStar | ORGANIZATION | 0.94+ |
Shaked Askayo | PERSON | 0.94+ |
Cube | ORGANIZATION | 0.93+ |
Shaked | PERSON | 0.93+ |
theCUBE Studios | ORGANIZATION | 0.93+ |
dozens of customers | QUANTITY | 0.93+ |
Corona | ORGANIZATION | 0.92+ |
DevSecOps | TITLE | 0.92+ |
theCUBE | ORGANIZATION | 0.92+ |
above a dozen | QUANTITY | 0.91+ |
One | QUANTITY | 0.9+ |
more than 10 x | QUANTITY | 0.9+ |
Siri for DevOps | TITLE | 0.9+ |
cube | PERSON | 0.9+ |
US East 1 | LOCATION | 0.89+ |
280 | QUANTITY | 0.89+ |
CubeCon | EVENT | 0.88+ |
two hot areas | QUANTITY | 0.87+ |
today | DATE | 0.87+ |
seven main use cases | QUANTITY | 0.84+ |
US | LOCATION | 0.84+ |
Michelin | TITLE | 0.83+ |
a year | QUANTITY | 0.83+ |
Matt McIlwain, Madrona | Cube Conversation, September 2022
>>Hi, welcome to this cube conversation here in Palo Alto, California. I'm John Furrier, host of theCUBE, here at our headquarters on the west coast in Palo Alto, California. Got a great news guest here. Matt McIlwain, managing director of Madrona Venture Group, is here with me on the big news of Madrona raising their record 690 million in funds and partnering with their innovative founders. Matt, thanks for coming on and talking about the news, and congratulations on the dry powder. >>Well, hey, thanks so much, John. Appreciate you having me on the show. >>Well, great news here, a big validation. We're in a new market. Everyone's talking about the new normal, we're talking about a recession, inflation, but yet we've been reporting that this is kind of the first generation where cloud hyperscale economic scale and technical benefits have kind of hit any kind of economic downturn. If you go back to 2008, our last downturn, the cloud really hadn't hit that tipping point. Now the innovation, as we've been reporting with our startup showcases and looking at the results from the hyperscalers, this funding news is kind of validation that the tech society intersection is working. You guys just got to the news: 430 million in the Madrona Fund nine and, I think, 260 million in Acceleration Fund three, which means you're gonna stay with your roots with seed, early stage, and then have some rocket fuel for kind of the accelerated expansion growth side of it. Not like late stage growth, but like scaling growth. This is kind of the news. Is that right? >>That's right. You know, we've had a long time strategy over 25 years here in Seattle of being early, early stage. You know, it's like our friends at Amazon like to say, well, we're there at day one and we wanna help build companies for the long run. For over 25 years, we've been doing that in Seattle. And I think one of the things we've realized, I mean, these funds are the largest funds ever raised by a Seattle based venture capital firm, and that's notable in and of itself. But we think the reason is because Seattle has continued to innovate in areas like consumer internet, software, cloud, of course, we're the cloud capital of the world, and increasingly the applications of machine learning. And so with all that combination, we believe there's a ton more companies to be built here in the Pacific Northwest and in Seattle in particular. And then through our acceleration fund, where we're investing in companies anywhere in the country, in fact anywhere in the world, those are the kinds of companies that want to have the Seattle point of view. They don't understand how to work with Amazon and AWS. They don't understand how to work with Microsoft, and we have some unique relationships in those places and we think we can help them succeed in doing that. >>You know, it's notable that you guys in particular have been very close with Jeff Bezos, Andy Jassy, and the success of AWS as well as Microsoft. So, you know, Seattle has become cloud city. Everyone kind of knows that from a cloud perspective, obviously Microsoft's roots have been there for a long, long time. You go back, I mean, August Capital, early days, funding Microsoft. You remember those days, not to date myself, but you know, Microsoft kind of established itself up there, and Amazon there as well. Now you got Google here, you got Facebook in the valley. You guys are now also coming down.
This funding comes on the heels of you appointing a new managing director here in Palo Alto. This is now the migration of Madrona coming into the valley. Is that right? Is that what we're seeing? >>Well, I think what we're trying to do is bring the things that we know uniquely from Seattle and the companies here down to Silicon Valley. We've got a terrific partner in Karan Mehandru. He's somebody that we have worked together with over the years, co-investing in companies, so we knew him really well. It was a bit opportunistic for us, but what we're hearing over and over again is a lot of these companies based in the valley, based in other parts of the country, they don't really know how to best work with the Microsofts and Amazons or understand the services that they offer. And, you know, we have that capability. We have those relationships. We wanna bring that to bear in helping build great companies. >>What is your expectation on the Silicon Valley presence here? You can kind of give a hint here, kind of a gateway to Seattle, but you got a lot of developers here. We just reported this morning that Meta just open sourced PyTorch to the Linux Foundation, again, a very material kind of trend we are seeing; open source now, there's no debate anymore, has become the software industry. There's no more issue around that. This is real. >>I think that's right. I mean, you know, once Satya became CEO of Microsoft and they started embracing open source, you know, that was gonna be the last big tech holdout. We think open source is very interesting in terms of what it can produce and create in terms of next generation innovation. It's great to see companies like Facebook, like Uber and others that have had a long track record of open source capabilities. But what we're also seeing is you need to build businesses around that, that a lot of enterprises don't wanna buy just the open source and stitch it together themselves. They want somebody to do it with them. And whether that's the way that, you know, companies like MongoDB have built that out over time, or, you know, Elastic, or, you know, companies like OctoML in our portfolio, or even the big cloud, you know, hyperscalers, they are increasingly embracing open source and building finished services, managed services on top of it. So that's a big wave that we've been investing in for a number of years now and are highly confident is gonna continue. >>You know, I've been a big fan of the Pacific Northwest for a while. You know, love going up there and talking to the folks at Microsoft and Amazon and AWS, but there's been a big trend in venture capital where a lot of the later stage folks, including private equity, have come in. You've seen Tiger Global, even Tiger Global alumni, the Tiger Cubs they call them, you know, they're coming down and playing in the early stage, and the results haven't been that good. You guys have had a track record in your success. Again, a hundred percent of your institutional investors have re-upped with you on this two fund strategy of close to 700 million. What's the formula? Why aren't they winning? What is it, they don't have the ecosystem? Is it they're spraying and praying without a lot of discipline? What's the dynamic between the folks like Madrona, the NEAs of the world, and Sequoia, who kind of do it right, come in, and get it done in the right way at the early stage, versus, I'd just say, the private equity folks? >>You know, I think that early stage venture is a local business. It is a geographically proximate business when you're helping incredible founders try to really dial in that early founder market fit. This is before you even get to product market fit. And so the team building that goes on, the talking to potential customers, the iterating on business strategy, this is a roll up your sleeves kind of thing. It's not a financial transaction. And so what you're trying to do is have a presence and an understanding, a prepared mind of one of the big themes and the kinds of founders that, with our encouragement and our help, can go build lasting companies. Now, when you get to a later stage, you know, you get to that growth stage, it is generally more of a financial, you know, kind of engineering sort of proposition. And there's some folks that are great at that. What we do is we support these companies all the way through. We reserve enough capital to be with them at the seed stage, the series B stage, the, you know, crossover round before you go public, all of those sorts of things. And we love partnering with some of these other people, but there's a lot of heavy lifting at the early, early stages of a business. And it's not, I think, a model that everybody's architected to do well. >>You know, trust becomes a big factor in all this. When you talk about it like that, I hear you speaking, it makes me think of like trusted advisor meets money, not so much telling people what to do. You guys have had a good track record in being value add, not value subtract. And sometimes that value subtract is getting in the way of the entrepreneur by, you know, running certain meetings, driving board meetings and driving the agenda. Do you see that trend where people try too hard and that's a forcing function on the entrepreneur? We're living in a world now where everyone's talking to each other, you know, there's no more Glassdoor, everyone's on Twitter, right? So you can see someone trying to control the supply chain of talent by term sheet, overvaluing them.
You guys have a different strategy. You guys have a network. I've noticed that Madrona has attracted high end talent coming out of Microsoft, out of AWS, seasoned, senior talent. I won't say, you know, senior citizens, but you know, people who have done things, scaled up businesses, as well as attracting young talent. Can you share with our audience that dynamic of the seasoned veterans, the systems thinkers, the ones who have been there, done that, built software, built teams, to the new young entrepreneurs coming in? What's the dynamic like? How do you guys look at those networks? How do you nurture them? Could you share your strategy on how you're gonna pull all this together going forward? >>You know, we think a lot about building the innovation ecosystem. A phrase around here that you hear a lot is the bigger pie theory. How do we build the bigger pie? If we're focusing on building the bigger pie, there'll be plenty of that pie for Madrona and Madrona companies. And that mindset says, okay, how are we gonna invest in the innovation ecosystem? And then actually, to use a term, you know, one of our founders who unfortunately passed away this year, Tom Alberg, he had just written a book called Flywheels. And I think it embodies this mindset that we have of how do you create that flywheel within a community? And of course, interestingly enough, I think Tom both learned and contributed to that. He was on the board of Amazon for almost 20 years, helping build some of the flywheels at Amazon.
So that's what we carry forward. And we know that there's a lot of value in experiential learning. And so we've been fortunate to have some folks, you know, that have worked at some of those kind of iconic companies join us and find that they really love this company building journey. We've also got some terrific younger folks that have, you know, some very fresh perspectives and a lot of creativity. And they're bringing that together with our team overall. And you know, what we really are trying to do at the end of the day is find incredible founders who wanna build something lasting and significant, and provide our time, our best ideas, our perspective, and of course our capital, to help them be successful. >>I love the ecosystem play. I think that's a human capital game too. I like the way you guys are thinking about that. I do wanna get your reaction, 'cause I know you're close to Amazon and Microsoft, but mainly Jeff Bezos as well. You mentioned your partner who passed away was on the board. A lot of great props and tributes online. I saw that. I didn't know him at all, so I really can't comment, but I did notice that Bezos and Jassy in particular were complimentary. And recently I just saw Bezos comment on Twitter about, you know, The Lord of the Rings. They're putting out the series and he says, you gotta have a team that's kinda like rebels, I'm paraphrasing, 'cause these folks have never done a show like this before. So they're getting good props and reviews in this new world order where entrepreneurs gotta do things different. What's the one thing that you think entrepreneurs need to do different to make this next startup journey different and successful? Because the world is different. There's not a lot of press to relate to. Andy Jassy, even on stage last week in LA, was kind of, he's not really revealing, he's on his talking points message. The press aren't out there in big numbers anymore. And you got a lot of different go-to-market strategies, omnichannel, social, different ways to communicate to customers. So product market fit becomes big. So how do you see this new flywheel emerging for those entrepreneurs who have to go out there, roll up their sleeves and make it happen? And what kind of resources do you think they need to be successful? What are you guys advocating? >>Well, you know, what's really interesting about that question is I've heard Jeff say many times that when people ask him what's got to be different, he reminds them to think about what's not gonna change. And he usually starts to then talk about things like price, convenience, and selection. A customer's never gonna want a higher price, less convenience, smaller selection. And so when you build on some of those principles of what's not gonna change, it's easier for you to understand what could be changing as it relates to the differences. One of the biggest differences, I don't think any of us have fully figured out yet, is what does it mean to be productive in a hybrid work mode? We happen to believe that it's still gonna have a kernel of people that are geographically close, that are part of the founding and building in the early stages of a company.
And it's an "and" equation: they're going to also have people that are distributed around the country, perhaps around the world, that are some of the best talent that they attract to their team. The other thing that I think, coming back to what remains the same, is being hyper focused on a certain customer and a certain problem that you're passionate about solving. And that's really what we look for when we look for this founder market fit. And it can be a lot of different things, from the next generation water bottle to a better way to handle deep learning models and get 'em deployed in the cloud. If you've got that passion and you've got some inkling of the skill of how to build a better solution, that's never gonna go away. That's gonna be enduring. But exactly how you do that as a team in a hybrid world, I think that's gonna be different. >>Yeah. One thing that's not changing is that your investor makeup's not changing. A hundred percent of your existing institutional investors have signed back on with you guys and you're oversubscribed, lot of demand. What is your flywheel success formula? Why is Madrona so successful? Can you share some feedback from your investors? What are they saying? Why are they re-upping? Share some inside baseball or anecdotal praise. >>Well, I think it's very kind of you to frame it that way. I mean, you know, for investors it does come back to performance. You know, these are university endowments and foundations that have a responsibility to generate great returns. And we understand that and we're very aligned with that. I think, to be specific, in the last couple years they appreciated that we were also not holding onto our stocks forever, that we actually made some thoughtful decisions to sell some shares of companies like Smartsheet and Snowflake and Accolade and others, and actually distribute capital back to them when things were looking really, really good. But I think the other thing that's very important here is that we've created a flywheel with our core strategy being Seattle based and then going out from there to try to find the best founders, build great companies with them, roll up our sleeves in a productive way and help them for the long term, which now leads to multiple generations of people, you know, at those companies and beyond, that we wanna partner with and back again. And so you create this flywheel by having success with people and doing it in a respectful and, as you said earlier, a trusted way. >>What's the message for the Silicon Valley crowd? Obviously Bay Area, Silicon Valley, Palo Alto office in the center of it. Obviously you got the hybrid workforce, hybrid venture model developing. What are the goals? What's the message for Silicon Valley? >>Well, our message for folks in Silicon Valley is the same as it's always been. We're excited to partner with them, largely up here again, 'cause this is still our home base, but there'll be a, you know, select number of opportunities where we'll get a chance to partner together down in Silicon Valley. And we think we bring something different with that deep understanding of cloud computing, that deep understanding of applied machine learning, and of course some of our unique relationships up here that can be additive to what they've already done. And some of them are just great partners and have, you know, helped build some really incredible companies over the years. >>Matt, I really appreciate you taking the time for this interview, given the big news. I guess the question on everyone's mind, certainly the entrepreneur's mind, is how do I get some of that cash you have and put it to work for my opportunity. What's the investment thesis? Take a minute to put the plug in for the firm. What are you looking to invest in? What's the thesis? What kind of entrepreneurs are you looking for? I know Fund nine is seed to A and B, and the second one is beyond B for growth. What's the pitch? >>Yeah. Well, you can think of us as, you know, any stage from pre-seed to Series C. You know, we'll make a new investment in companies in all of those stages. You know, I think that the core pitch, you know, to us is your passion for the problem that you're trying to get solved, and we're of course very excited about that. And you know, at the end of the day, if you want somebody that has a distinct point of view on the market, that is based up here, and can roll up their sleeves and work alongside you, we're the ones that are more than happy to do that. Proven track record of doing that for 25 plus years. And there's so much innovation ahead. There's so many opportunities to disrupt, to pioneer, and we're excited to be a part of working with great founders to do that. >>Well, great stuff. We'll see you at AWS re:Invent coming up shortly, and your annual get together. You always have your crew down there and team engaging with some of the cloud players as well. And looking forward to seeing how the Palo Alto team expands out. And Matt, thanks for coming on theCUBE. Appreciate your time. >>Thanks very much, John. Appreciate you having me. Look forward to seeing you at re:Invent. >>Okay, Matt McIlwain here with Madrona Venture Group, the managing director. Madrona raises 690 million for Fund nine and, again, the big Acceleration Fund three. A lot of dry powder. Again, entrepreneurship in technology is scaling. It's not going down. It's continuing to accelerate into this next generation super cloud, multi-cloud, hybrid cloud world steady state. This is theCUBE's coverage. I'm John Furrier with SiliconANGLE and host of theCUBE. Thanks for watching.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Microsoft | ORGANIZATION | 0.99+ |
Jeff | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
Matt McIlwain | PERSON | 0.99+ |
Andy Jassey | PERSON | 0.99+ |
Matt | PERSON | 0.99+ |
Madrona | ORGANIZATION | 0.99+ |
Seattle | LOCATION | 0.99+ |
Tom Aber | PERSON | 0.99+ |
Tom | PERSON | 0.99+ |
Matt McGill | PERSON | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Amazons | ORGANIZATION | 0.99+ |
September 2022 | DATE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
2008 | DATE | 0.99+ |
Jeff Bezos | PERSON | 0.99+ |
Bezos | PERSON | 0.99+ |
LA | LOCATION | 0.99+ |
Uber | ORGANIZATION | 0.99+ |
Silicon valley | LOCATION | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
Pacific Northwest | LOCATION | 0.99+ |
ORGANIZATION | 0.99+ | |
690 million | QUANTITY | 0.99+ |
Jeff Bayo | PERSON | 0.99+ |
25 plus years | QUANTITY | 0.99+ |
second | QUANTITY | 0.99+ |
Andrew | PERSON | 0.99+ |
ABUS | ORGANIZATION | 0.99+ |
first generation | QUANTITY | 0.99+ |
last week | DATE | 0.99+ |
Andy Jesse | PERSON | 0.99+ |
60 million | QUANTITY | 0.99+ |
Satya | PERSON | 0.99+ |
430 million | QUANTITY | 0.98+ |
Karama Hend | PERSON | 0.98+ |
John fur | PERSON | 0.98+ |
ORGANIZATION | 0.98+ | |
over 25 years | QUANTITY | 0.98+ |
Three | QUANTITY | 0.98+ |
almost 20 years | QUANTITY | 0.98+ |
Tron | TITLE | 0.98+ |
nine | QUANTITY | 0.97+ |
both | QUANTITY | 0.97+ |
hundred percent | QUANTITY | 0.97+ |
this year | DATE | 0.97+ |
the Lord of the rings | TITLE | 0.96+ |
Cubs | ORGANIZATION | 0.95+ |
ORGANIZATION | 0.95+ | |
One | QUANTITY | 0.95+ |
close to 700 million | QUANTITY | 0.94+ |
Ann Potten & Cole Humphreys, HPE | CUBE Conversation
>>Hi, everyone. Welcome to this program sponsored by HPE. I'm your host, Lisa Martin. We're here talking about being confident and trusting your server security with HPE. I have two guests here with me to talk about this important topic. Cole Humphreys joins us, global server security product manager at HPE, and Ann Potten, trusted supply chain program lead at HPE. Guys, it's great to have you on the program. Welcome. >>Hi, thanks. Thank you. It's nice to be here. >>Ann, let's talk about really what's going on out there. Some of the trends, some of the threats; there's so much change going on. What is HPE seeing? >>Yes, good question. Thank you. Yeah. You know, cybersecurity threats are increasing everywhere and it's causing disruption to businesses and governments alike worldwide. You know, the global pandemic has caused limited employee availability, and this has led to material shortages, and these things open the door perhaps even wider for more counterfeit parts and products to enter the market. And these are challenges for consumers everywhere. In addition to this, we're seeing the geopolitical environment has changed. We're seeing, you know, rogue nation states using cybersecurity warfare tactics to immobilize an entity's ability to operate, and perhaps even use their tactics for revenue generation; the Russian invasion of Ukraine is one example. But businesses are also under attack. You know, for example, we saw SolarWinds' software supply chain attacked two years ago, which unfortunately went unnoticed for several months, and then this was followed by the Colonial Pipeline attack and numerous others. You know, it just seems like it's almost a daily occurrence that we hear of a cyber attack on the evening news. And in fact, it's estimated that cyber crime costs will reach over 10 and a half trillion dollars by 2025 and will be even more profitable than the global trade of all major illegal drugs combined. This is crazy. You know, the macro environment in which companies operate has changed over the years. And you know, all of these things together, and coming from multiple directions, present a cybersecurity challenge for an organization and in particular its supply chain. And this is why HPE is taking proactive steps to mitigate supply chain risk, so that we can provide our customers with the most secure products and services. >>So Cole, let's bring you into the conversation. Ann did a great job of summarizing the major threats that are going on in the tumultuous landscape. Talk to us, Cole, about the security gap. What is it? What is HPE seeing and why are organizations in this situation? >>Hi, thanks, Lisa. You know, what we're seeing is, as this threat landscape increases to, you know, disrupt or attempt to disrupt our customers and our partners and ourselves, it's kind of a double edge, if you will, because you're seeing the increase in attacks, but what you're not seeing is an equal growth of the skills and the experiences required to address the scale. So it really puts the pressure on companies, because you have a skill gap, a talent gap, if you will. There are, for example, projected to be three and a half million cyber roles open in the next few years, right? So all this scale is growing and people are just trying to keep up, but the gap is growing in just literally the people to stop the bad actors from attacking the data. And to complicate matters, you're also seeing a dynamic change in the who and the how of the attacks that are happening, right? The classic attacks that you've seen, you know, in all the history books, those are not the standard plays anymore. You'll have, you know, nation states going after commercial entities, and, you know, criminal syndicates; Ann alluded to that. There's more money in it than the international drug trade, so you can imagine the amount of criminal interest in getting this money. So you put all that together, and the increase of attacks, it just is really pressing down. I mean, the reports we're reading, over half of everyone, obviously the most critical infrastructure cares, but even just mainstream computing requirements, need to have their data protected: help me protect my workloads, and they don't have the people in house, right? So that's where partnership is needed, right? And that's where we believe, you know, our approach with our partner ecosystem, it's not HPE delivering everything ourselves, but all of us in this together, is really what we believe is the only way we're gonna be able to get this done. >>So Cole, let's double click on that. HPE and its partner ecosystem can provide expertise that companies in every industry are lacking. You're delivering a 360 degree approach to security at HPE. Talk about what that 360 degree approach encompasses. >>Thank you. It is an approach, right? Because I feel that security is a thread that will go through the entire construct of a technical solution, right? There isn't an, oh, if you just buy this one server with this one feature, you don't have to worry about anything else. It's really everywhere, at least the way we believe it, it's everywhere. And in a 360 degree approach, the way we like to frame it is, it's beginning with our supply chain, right? We take a lot of pride in the designs, you know, the really smart engineering teams that design our technology, our awesome world class global operations team, working in concert to deliver some of these technologies into the market. That is a huge, you know, great capability, but also a huge risk to customers, 'cause that is the most vulnerable place: if you inject some sort of malware or tampering at that point, you know, the rest of the story really becomes moot, because you've already been defeated, right? And then you move into, you've physically deployed that through our global operations, now you're in an operating environment. That's where automation becomes key, right? We have software innovations in, you know, our iLO product for management inside those single servers, and we have really cool new GreenLake for Compute Ops Management services out there that give customers more control back and more information to deal with this scaling problem. And then lastly, as you begin to wrap up, you know, the natural life cycle and you need to move to new platforms and new technologies, right, we think about the exit of that life cycle and how do we make sure we dispose of the data and move those products into a secondary life cycle, so that we can move back into this kind of circular 360 degree approach. We don't wanna leave our customers hanging anywhere in this entire journey. >>That 360 degree approach is so critical, especially given, as we've talked about already in this segment, the changes, the dynamics in the environment. And as Cole said, this 360 degree approach that HPE is delivering begins in the manufacturing supply chain, which seems like the first line of defense against cyber attackers. Talk to us about why that's important. And where did the impetus come from? Was that COVID? Was that customer demand? >>Yep. Yep. Yeah. The supply chain is critical, thank you. So in 2018, we could see all of these cybersecurity issues starting to emerge and predicted that this would be a significant challenge for our industry. So we formed a strategic initiative called the Trusted Supply Chain Program, designed to mitigate cybersecurity risk in the supply chain, really starting with the product life cycle, starting at the product design phase and moving through sourcing and manufacturing, how we deliver products to our customers, and ultimately a product's end of life that Cole mentioned. So in doing this, we're able to provide our customers with the most secure products and services, whether they're buying their servers for their data center or using our own GreenLake services. So just to give you some examples: something that is foundational to our trusted supply chain program is that we've built a very robust cybersecurity supply chain risk management program, which includes assessing our risk at all our factories and our suppliers. We're also looking at strengthening our software supply chain by developing mechanisms to identify software vulnerabilities and hardening our own software build environments. To protect against counterfeit parts, which I mentioned in the beginning, from entering our supply chain, we've recently started a blockchain program so that we can identify component provenance and trace parts back to their original manufacturers. Our security efforts, you know, continue even after product manufacturing. We offer three different levels of secure delivery services for our customers, including, you know, a dedicated truck and driver, or perhaps even an exclusive use vehicle. We can tailor our delivery services to whatever the customer needs. And then when a product is at its end of life, products are either recycled or disposed of using our approved vendors. Our servers are also equipped with the one-button secure erase that erases every byte of data, including firmware data. And talking about products, we've taken additional steps to provide additional security features for our products. Number one, we can provide platform certificates that allow the user to cryptographically verify that their server hasn't been tampered with from the time it left the manufacturing facility to the time that it arrives at the customer's facility. In addition to that, we've launched a dedicated line of trusted supply chain servers with additional security features, including secure configuration lock and chassis intrusion detection, and these are assembled at our US factory by US vetted employees. So lots of exciting things happening within the supply chain, not just to shore up our own supply chain risk, but also to provide our customers the most secure products. So, that announcement. >>All right, thank you. You know, that was a great setup, because I think you gotta really appreciate the whole effort that we're putting into, you know, bringing these online. But one of the, just transparently, the gaps we had as we proved this out was, as you heard, this initial proof was delivered with assembly in the US factory by US employees. You know, a fantastic program, really successful in all our target industries, and even expanding to places we didn't really expect it to. But it kind of gets to the point that security isn't just for one industry or one set of customers, right? We're seeing it in our partners, we're seeing it in different industries than we have in the past. But the challenge was, we couldn't get this global right out of the gate, right? This has been a really heavy, transparently, a US federally activated focus, right? If you've been tracking what's going on since May of last year, there's been a call to action to improve the nation's cybersecurity. So we've been all in on that, and we have an opinion and we're working hard on that, but we're a global company, right? How can we get this out to the rest of the world? Well, guess what, this month we figured it out, and, well, it's taken a lot more than this month; we did a lot of work to figure it out. And we have launched a comparable service globally called the Server Security Optimization Service, right? HPE Server Security Optimization Service for ProLiant. I like to call it, you know, S-S-O-S sauce, right? If you wanna be clever, HPE sauce that we can now deploy globally. We get that product hardened in the supply chain, right? Because if you take the best of your supply chain and you take your technical innovations that you've engineered into the server, you can deliver a better experience for your customers, right? So the supply chain equals server technology, and our awesome, you know, services teams deliver supply chain security at that last mile. And we can deliver it in the European markets and now in the Asia Pacific markets. Right now, we could always just ship it from the US to other markets, so we could always fulfill this promise, but I think it's just having that local access into your partner ecosystem and such just makes more sense. But it is a big deal for us, because now we have activated a meaningful supply chain security benefit for our entire global network of partners and customers, and we're excited about it. And we hope our customers are too. >>That's huge, Cole. And in terms of the significance of the impact that HPE is delivering through its partner ecosystem globally, as the supply chain continues to be one of the terms on everyone's lips here, I'm curious, Cole, we were just at Discover a couple months ago. Can you talk about what HPE is doing here from a security perspective, this global approach that it's taking, as it relates to what HPE was talking about at Discover, in terms of we wanna secure the enterprise to deliver these experiences from edge to cloud? >>You know, I feel like, for me, and I think if you look at the shared responsibility models and, you know, other frameworks out there, the way I believe it to be is, this is a solution, right? There's not one thing, you know, if you use HPE supply chain, the end, or if you buy an HPE ProLiant, the end, right? It is an integrated connectedness with our as-a-service platform, our service and support commitments, you know, our extensive partner ecosystem, our alliances; all of that comes together to ultimately offer that assurance to a customer. And I think these are specific, meaningful proof points in that chain of custody, right, that chain of trust, if you will. Because as the world becomes more zero trust, we are gonna have to prove ourselves more, right? And these are those kinds of technical credentials and identities and, you know, capabilities that a modern approach to security needs. >>Excellent, great work there. And Ann, let's go ahead and take us home. Take the audience through what you think, ultimately, what HPE is doing, really infusing security at that 360 degree approach level that we talked about. What are some of the key takeaways that you want the audience that's watching here today to walk away with? >>Right, right. Thank you. Yeah. You know, with the increase in cybersecurity threats everywhere, affecting all businesses globally, it's gonna require everyone in our industry to continue to evolve in our supply chain security and in our product security in order to protect our customers and our business continuity. Protecting our supply chain is something that HPE is very committed to and takes very seriously. So, you know, I think regardless of whether our customers are looking for an on-prem solution or a GreenLake service, you know, HPE is proactively looking for and mitigating any security risk in the supply chain, so that we can provide our customers with the most secure products and services. >>Awesome. Ann and Cole, thank you so much for joining me today, talking about what HPE is doing here and why it's important, as our program is called, to be confident and trust your server security with HPE, and how HPE is doing that. Appreciate your insights and your time. >>Thank you so much for having us. >>Thank you, Lisa. >>For Cole Humphreys and Ann Potten, I'm Lisa Martin. We wanna thank you for watching this segment in our series, Be Confident and Trust Your Server Security with HPE. We'll see you soon.
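Ann's description of platform certificates in the conversation above, cryptographically verifying that a server arrived untampered from the factory, can be illustrated with a small sketch. This is a conceptual example only and is not HPE's actual mechanism: real platform certificates follow TCG and X.509 conventions and chain back to the manufacturer's certificate authority, and the manifest fields and key handling below are purely hypothetical. It requires the third-party `cryptography` package.

```python
# Conceptual sketch of the "verify it wasn't tampered with" idea behind
# platform certificates. A toy Ed25519 key pair stands in for the vendor's
# signing infrastructure; in practice the customer only ever sees the public
# key (via a certificate chain) and the signature, never the private key.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Hypothetical manifest of what the factory says it shipped.
manifest = b"server=DL380; nic=ocp-2port; firmware=1.2.3; assembled=US"

factory_key = ed25519.Ed25519PrivateKey.generate()  # stays with the manufacturer
signature = factory_key.sign(manifest)
public_key = factory_key.public_key()                # distributed to the customer

def verify_manifest(received_manifest: bytes) -> bool:
    """Return True only if the received manifest matches what the factory signed."""
    try:
        public_key.verify(signature, received_manifest)
        return True
    except InvalidSignature:
        return False

print(verify_manifest(manifest))                            # True: untouched
print(verify_manifest(manifest.replace(b"1.2.3", b"9.9")))  # False: tampered
```

The point is simply that the customer checks the signature against what physically arrived; any change to the recorded components or firmware fails verification, which is the assurance the trusted supply chain servers are meant to give.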
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa Martin | PERSON | 0.99+ |
Anne Potton | PERSON | 0.99+ |
Anne | PERSON | 0.99+ |
Ann | PERSON | 0.99+ |
Lisa | PERSON | 0.99+ |
2018 | DATE | 0.99+ |
Ann Potten | PERSON | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
Cole Humphreys | PERSON | 0.99+ |
Cole | PERSON | 0.99+ |
two guests | QUANTITY | 0.99+ |
first line | QUANTITY | 0.99+ |
360 degree | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
2025 | DATE | 0.99+ |
Asia Pacific | LOCATION | 0.99+ |
360 degree | QUANTITY | 0.99+ |
one set | QUANTITY | 0.98+ |
over 10 and a half trillion dollars | QUANTITY | 0.98+ |
two years ago | DATE | 0.98+ |
ILO | ORGANIZATION | 0.97+ |
may | DATE | 0.97+ |
couple months ago | DATE | 0.96+ |
this month | DATE | 0.95+ |
one industry | QUANTITY | 0.94+ |
GreenLake | ORGANIZATION | 0.94+ |
three | QUANTITY | 0.93+ |
one | QUANTITY | 0.93+ |
last year | DATE | 0.92+ |
one example | QUANTITY | 0.92+ |
three and a half million cyber roles | QUANTITY | 0.91+ |
single servers | QUANTITY | 0.91+ |
double edge | QUANTITY | 0.9+ |
pandemic | EVENT | 0.9+ |
Ukraine | LOCATION | 0.83+ |
zero trust | QUANTITY | 0.8+ |
one server | QUANTITY | 0.78+ |
over half | QUANTITY | 0.77+ |
one thing | QUANTITY | 0.71+ |
COVID | OTHER | 0.69+ |
S S O | ORGANIZATION | 0.67+ |
next few years | DATE | 0.64+ |
Russian | OTHER | 0.63+ |
European | OTHER | 0.55+ |
bite | QUANTITY | 0.54+ |
months | QUANTITY | 0.46+ |
Snehal Antani, Horizon3.ai | CUBE Conversation
(upbeat music) >> Hey, everyone. Welcome to theCUBE's presentation of the AWS Startup Showcase, season two, episode four. I'm your host, Lisa Martin. This topic is cybersecurity detect and protect against threats. Very excited to welcome a CUBE alumni back to the program. Snehal Antani, the co-founder and CEO of Horizon3 joins me. Snehal, it's great to have you back in the studio. >> Likewise, thanks for the invite. >> Tell us a little bit about Horizon3, what is it that you guys do? You were founded in 2019, got a really interesting group of folks with interesting backgrounds, but talk to the audience about what it is that you guys are aiming to do. >> Sure, so maybe back to the problem we were trying to solve. So my background, I was a engineer by trade, I was a CIO at G Capital, CTO at Splunk and helped grow scale that company. And then took a break from industry to serve within the Department of Defense. And in every one of my jobs where I had cyber security in my responsibility, I suffered from the same problem. I had no idea I was secure or that we were fixing the right vulnerabilities or logging the right data in Splunk or that our tools and processes and people worked together well until the bad guys had showed up. And by then it was too late. And what I wanted to do was proactively verify my security posture, make sure that my security tools were actually effective, that my people knew how to respond to a breach before the bad guys were there. And so this whole idea of continuously verifying my security posture through security testing and pen testing became a passion project of mine for over a decade. And through my time in the DOD found the right group of an early people that had offensive cyber experience, that had defensive cyber experience, that knew how to build and ship and deliver software at scale. And we came together at the end of 2019 to start Horizon3. >> Talk to me about the current threat landscape. We've seen so much change in flux in the last couple of years. Globally, we've seen the threat actors are just getting more and more sophisticated as is the different types of attacks. What are you seeing kind of horizontally across the threat landscape? >> Yeah, the biggest thing is attackers don't have to hack in using Zero-days like you see in the movies. Often they're able to just log in with valid credentials that they've collected through some mechanism. As an example, if I wanted to compromise a large organization, say United Airlines, one of the things that an attacker's going to go off and do is go to LinkedIn and find all of the employees that work at United Airlines. Now you've got say, 7,000 pilots. Of those pilots, you're going to figure out quickly that their user IDs and passwords or their user IDs at least are first name, last initial @united.com. Cool, now I have 7,000 potential logins and all it takes is one of them to reuse a compromised password for their corporate email, and now you've got an initial user in the system. And most likely, that initial user has local admin on their laptops. And from there, an attacker can dump credentials and find a path to becoming a domain administrator. And what happens oftentimes is, security tools don't detect this because it looks like valid behavior in the organization. And this is pretty common, this idea of collecting information on an organization or a target using open source intelligence, using a mix of credential spraying and kind of low priority or low severity exploitations or misconfigurations to get in. 
And then from there, systematically dumping credentials, reusing those credentials, and finding a path towards compromise. And less than 2% of CVEs are actually used in exploits. Most of the time, attackers chain together misconfigurations, bad product defaults. And so really the threat landscape is, attackers don't hack in, they log in. And organizations have to focus on getting the basics right and fundamentals right first before they layer on some magic easy button that is some security AI tools hoping that that's going to save their day. And that's what we found systemically across the board. >> So you're finding that across the board, probably pan-industry that a lot of companies need to go back to basics. We talk about that a lot when we're talking about security, why do you think that is? >> I think it's because, one, most organizations are barely treading water. When you look at the early rapid adopters of Horizon3's pen testing product, autonomous pen testing, the early adopters tended to be teams where the IT team and the security team were the same person, and they were barely treading water. And the hardest part of my job as a CIO was deciding what not to fix. Because the bottleneck in the security process is the actual capacity to fix problems. And so, fiercely prioritizing issues becomes really important. But the tools and the processes don't focus on prioritizing what's exploitable, they prioritize by some arbitrary score from some arbitrary vulnerability scanner. And so we have as a fundamental breakdown of the small group of folks with the expertise to fix problems tend to be the most overworked and tend to have the most noise to need to sift through. So they don't even have time to get to the basics. They're just barely treading water doing their day jobs and they're often sacrificing their nights and weekends. All of us at Horizon3 were practitioners at one point in our career, we've all been called in on the weekend. So that's why what we did was fiercely focus on helping customers and users fix problems that truly matter, and allowing them to quickly reattack and verify that the problems were truly fixed. >> So when it comes to today's threat landscape, what is it that organizations across the board should really be focused on? >> I think, systemically, what we see are bad password or credential policies, least access privileged management type processes not being well implemented. The domain user tends to be the local admin on the box, no ability to understand what is a valid login versus a malicious login. Those are some of the basics that we see systemically. And if you layer that with it's very easy to say, misconfigure vCenter, or misconfigure a piece of Cisco gear, or you're not going to be installing, monitoring security observability tools on that HPE Integrated Lights Out server and so on. What you'll find is that you've got people overworked that don't have the capacity to fix. You have the fundamentals or the basics not well implemented. And you have a whole bunch of blind spots in your security posture. And defenders have to be right every time, attackers only have to be right once. And so what we have is this asymmetric fight where attackers are very likely to get in, and we see this on the news all the time. >> So, and nobody, of course, wants to be the next headline, right? 
Talk to me a little bit about autonomous pen testing as a service, what you guys are delivering, and what makes it unique and different than other tools that have been out, as you're saying, that clearly have gaps. >> Yeah. So first and foremost was the approach we took in building our product. What we set upfront was, our primary users should be IT administrators, network engineers, and that IT intern who, in three clicks, should have the power of a 20-year pen testing expert. So the whole idea was empower and enable all of the fixers to find, fix, and verify their security weaknesses continuously. That was the design goal. Most other security products are designed for security people, but we already know they're task saturated, they've got way too many tools under the belt. So first and foremost, we wanted to empower the fixers to fix problems that truly matter. The second part was, we wanted to do that without having to install credentialed agents all over the place or writing your own custom attack scripts, or having to do a bunch of configurations and make sure that it's safe to run against production systems so that you could test your entire attack surface. Your on-prem, your cloud, your external perimeter. And this is where AWS comes in to be very important, especially hybrid customers where you've got a portion of your infrastructure on AWS, a portion on-prem, and you use Horizon3 to be able to attack your complete attack surface. So we can start on-prem and we will find say, the AWS credentials file that was mistakenly saved on a shared drive, and then reuse that to become admin in the cloud. AWS didn't do anything wrong, the cloud team didn't do anything wrong, a developer happened to share a password or save a password file locally. That's how attackers get in. So we can start from on-prem and show how we can compromise the cloud, start from the cloud and show how we can compromise on-prem. Start from the outside and break in. And we're able to show that complete attack surface at scale for hybrid customers. >> So showing that complete attack surface sort of from the eyes of the attacker? >> That's exactly right, because while blue teams or the defenders have a very specific view of their environment, you have to look at yourself through the eyes of the attacker to understand what are your blind spots, what do they see that you don't see. And it's actually a discipline that is well entrenched within military culture. And that's also important for us as the company. We're about a third of Horizon3 served in US special operations or the intelligence community with the United States, and then DOD writ large. And a lot of that red team mindset, view yourself through the eyes of the attacker, and this idea of training like you fight and building muscle memory so you know how to react to the real incident when it occurs is just ingrained in how we operate, and we disseminate that culture through all of our customers as well. >> And at this point in time, every business needs to assume an attacker's going to get in. >> That's right. There are way too many doors and windows in the organization. Attackers are going to get in, whether it's a single customer that reused their Netflix password for their corporate email, a patch that didn't get applied properly, or a new Zero-day that just gets published. A piece of Cisco software that was misconfigured, not buy anything more than it's easy to misconfigure these complex pieces of technology. Attackers are going to get in. 
And what we want to understand as customers is, once they're in, what could they do? Could they get to my crown jewel data and systems? Could they burrow in and prepare for a much more complicated attack down the road? If you assume breach, now you want to understand what can they get to, how quickly can you detect that breach, and what are your ways to stifle their ability to achieve their objectives. And culturally, we need a shift from talking about how secure I am to how defensible are we. Security is kind of a point-in-time state of your organization. Defensibility is how quickly you can adapt to the attacker to stifle their ability to achieve their objective. >> As things are changing constantly. >> That's exactly right. >> Yeah. Talk to me about a typical customer engagement. You mentioned folks treading water, and obviously there's the huge cybersecurity skills gap that we've been talking about for a long time now, that's another factor there. But when you're in customer conversations, who are you talking to? Typically, what are they coming to you for help with? >> Yeah. One big thing is, you're not going to win a customer by taking 'em out to steak dinners. Not anymore. The way we focus on our go-to-market and our sales motion is cultivating champions. At the end of the proof of concept, our internal measure of success is: is that person willing to get a Horizon3 tattoo? And you do that, not through steak dinners, not through cool swag, not through marketing, but by letting your results do the talking. Now, part of those results should not require professional services or consulting. The whole experience should be self-service, frictionless, and insightful. And that really is how we've designed the product and designed the entire sales motion. So a prospect will learn or discover about us, whether it's through LinkedIn, through social, through the website, but often because one of their friends or colleagues heard about us, saw our results, and is advocating on our behalf when we're not in the room. From there, they're going to be able to self-service, just log in to our product through their LinkedIn ID, their Google ID. They can engage with a salesperson if they want to. They can run a pen test right there on the spot against their home without any interaction with a sales rep. Let those results do the talking, use that as a starting point to engage in a more complicated proof of value. And the whole idea is we don't charge for these, we let our results do the talking. And at the end, after they've run us to find problems, they've gone off and fixed those issues, and they've rerun us to verify that what they've fixed was properly fixed, then they're hooked. And we have a hundred percent technical win rate with our prospects when they hit that find-fix-verify cycle, which is awesome. And then we get the tattoo for them, at least give them the template. And then we're off to the races. >> Sounds like you're making the process more simple. There's so much complexity behind it, but allowing users to be able to actually test it out themselves in a simplified way is huge. Allowing them to really focus on becoming defensible. >> That's exactly right. And the value is, especially now in security, there's so much hype and so much noise. There's a lot more time being spent self-discovering and researching technologies before you engage in a commercial discussion.
And so what we try to do is optimize that entire buying experience around enabling people to discover and research and learn. The other part, remember, is, offensive cyber and ethical hacking and so on is very mysterious and magical to most defenders. It's such a complicated topic with many nuanced tools that they don't have the time to understand or learn. And so if you surface the complexity of all those attacker tools, you're going to overwhelm a person that is already overwhelmed. So we needed the experience to be incredibly simple and optimize that find-fix-verify aha moment. And once again, be frictionless and be insightful. >> Frictionless and insightful. Excellent. Talk to me about results, you mentioned results. We love talking about outcomes. When a customer goes through the PoC, PoV that you talked about, what are some of the results that they see that hook them? >> Yeah, the biggest thing is, what attackers do today is they will find a low from machine one plus a low from machine two equals compromised domain. What they're doing is they're chaining together issues across multiple parts of your system or your organization to own your environment. What attackers don't do is find a critical vulnerability and exploit that single machine. It's always a chain, always multiple steps in the attack. And so the entire product and experience and, actually, our underlying tech is around attack paths. Here is the path, the attack path an attacker could have taken, that NodeZero, our product, took. Here is the proof of exploitation for every step along the way. So you know this isn't a false positive. In fact, you can copy and paste the attacker command from the product and rerun it yourself and see it for yourself. And then here is exactly what you have to go fix and why it's important to fix. So that path, proof, impact, and fix action is what the entire experience is focused on. And that is the results doing the talking, because remember, these folks are already overwhelmed, they're dealing with a lot of false positives. And if you tell them you've got another critical to fix, their immediate reaction is "Nope, I don't believe you. This is a false positive. I've seen this plenty of times, that's not important." So you have to, in your product experience and sales process and adoption process, immediately cut through that defensive reflex. And it's path, proof, impact. Here's exactly what you fix, here are the exact steps to fix it, and then you're off to the races. What I learned at Splunk was, you win hearts and minds of your users through amazing experience, product experience, amazing documentation. >> Yes. >> And a vibrant community of champions. Those are the three ingredients of success, and we've really made that the core of the product. So we win on our documentation, we win on the product experience, and we've cultivated a pretty awesome community. >> Talk to me about some of those champions. Is there a customer story that you think really articulates the value of NodeZero and what it is that you are doing? >> Yeah, I'll tell you a couple. Actually, I just gave this talk at Black Hat on war stories from running 10,000 pen tests. And I'll try to be gentle on the vendors that were involved here, but the reality is, you got to be honest and authentic. So a customer, a healthcare organization, ran a pen test and they were using a very well-known managed security services provider as their security operations team.
And so they initiate the pen test and they wanted to audit the response time of their MSSP. So they run the pen test and we're in and out. The whole pen test runs two hours or less. And in those two hours, the pen test compromises the domain, gets access to a bunch of sensitive data, laterally maneuvers, rips the entire environment apart. It took seven hours for the MSSP to send an email notification to the IT director that said, "Hey, we think something suspicious is going on." >> Wow. >> Seven hours! >> That's a long time. >> We were in and out in two, seven hours for notification. And the issue with that healthcare company was, they thought they had hired the right MSSP, but they had no way to audit their performance. And so we gave them the details and the ammunition to get services credits, to hold them accountable, and also to have a conversation about switching to somebody else. >> Accountability is key, especially when we're talking about the threat landscape and how it's evolving day to day. >> That's exactly right. Accountability of your suppliers or your security vendors, accountability of your people and your processes, and not having to wait for the bad guys to show up to test your posture. That's what's really important. Another story that's interesting. This customer did everything right. It was a banking customer, large environment, and they had Fortinet installed as their EDR type platform. And they initiate a pen test with us and we're able to get code execution on one of their machines. And from there, laterally maneuver to become a domain administrator, which in security is a really big deal. So they came back and said, "This is absolutely not possible. Fortinet should have stopped that from occurring." And it turned out, because we showed the path and the proof and the impact, Fortinet was misconfigured on three machines out of 5,000. And they had no idea. >> Wow. >> So it's one of those things: don't trust that your tools are working, don't trust your processes, verify them. Show me we're secure today. Show me we're secure tomorrow. And then show me again we're secure next week. Because my environment's constantly changing and the adversary always has a vote. >> Right, the constant change and flux is a huge challenge for organizations, but those results clearly speak for themselves. You talked about speed in terms of time, how quickly can a customer deploy your technology, identify and remedy problems in their environment? >> Yeah, this find-fix-verify aha moment, if you will. So traditionally, a customer would have to maybe run one or two pen tests a year. And then they'd go off and fix things. They have no capacity to test them 'cause they don't have the internal attack expertise. So they'd wait for the next pen test and figure out that they were still exploitable. Usually, this year's pen test results look identical to last year's. That isn't sustainable. So our customers shift from running one or two pen tests a year to 40 pen tests a month. And they're in this constant loop of finding, fixing, and verifying all of the weaknesses in their infrastructure. Remember, there's infrastructure pen testing, which is what we are really good at, and then there's application-level pen testing that humans are much better at solving. >> Okay. >> So we focus on the infrastructure side, especially at scale. But can you imagine, 40 pen tests a month, they run from the perimeter, the inside from a specific subnet, from work-from-home machines, from the cloud.
And they're running these pen tests from many different perspectives to understand what does the attacker see from each of these locations in their organization and how do they systemically fix those issues? And what they look at is, how many critical problems were found, how quickly were they fixed, how often do they reoccur. And that third metric is important because you might fix something, but if it shows up again next week because you've got bad automation, you're in a rat race. So you want to look at that reoccurrence rate also. >> The reoccurrence rate. What are you most excited about as, obviously, the threat landscape continues to evolve, but what are you most excited about for the company and what it is that you're able to help organizations across industries achieve in such tumultuous times? >> Yeah. One of the coolest things is, because I was a customer for many of these products, I despised threat intelligence products. I despised them. Because they were basically generic blog posts. Maybe delivered as a data feed to my Splunk environment or something. But they were always really generic. Like, "You may have a problem here." And as a result, they weren't very actionable. So one of the really cool things that we do, it's just part of the product, is this concept of flares, flares that we shoot up. And the idea is not to cause angst or anxiety or panic, but rather we look at threat intelligence and then, because of all of the insights we have from your pen test results, we connect those two together and say, "Your VMware Horizon instance at this IP is exploitable. You need to fix it as fast as possible, or it is very likely to be exploited. And here is the threat intelligence in the news from CISA and elsewhere that shows why it's important." So I think what is really cool is we're able to take threat intelligence out in the wild combined with a very precise understanding of your environment to give you very accurate and actionable starting points for what you need to go fix or test or verify. And when we do that, what we see is almost like, imagine this ball bouncing, that is the first drop of the ball, and then that drives the first major pen test. And then they'll run all these subsequent pen tests to continue to find and fix and verify. And so what we see is this tremendous amount of excitement from customers that we're actually giving them accurate, detailed information to take advantage of, and we're not causing panic and we're not causing alert fatigue as a result. >> That's incredibly important in this type of environment. Last question for you. Autonomous pen testing is obviously critical and has a tremendous amount of potential for organizations, but it's only part of the equation. What's the larger vision? >> Yeah, we are not a pen testing company and that's something we decided upfront. Pen testing is a sensor. It collects and understands a tremendous amount of data for your attack surface. So the natural next thing is to analyze the pen test results over time to start to give you a more accurate understanding of your governance, risk, and compliance posture. So now what happens is, we are able to allow customers to go run 40 pen tests a month. And that kind of becomes the initial land or flagship product.
But then from there, we're able to upsell or increase value to our customers and start to compete and take out companies like SecurityScorecard or RiskIQ and other companies like that, where there tended to be, and I was a user of all those tools, a lot of garbage in, garbage out. Where you can't fill out a spreadsheet and get an accurate understanding of your risk posture. You need to look at your detailed pen test results over time and use that to accurately understand what are your hotspots, what's your recurrence rate and so on. And being able to tell that story to your auditors, to your regulators, to the board. And actually, it gives you a much more accurate way to show return on investment of your security spend also. >> Which is huge. So where can customers and those that are interested go to learn more? >> So horizon3.ai is the website. That's a great starting point. We tend to very much rely on social channels, so LinkedIn in particular, to really get our stories out there. So finding us on LinkedIn is probably the next best thing to go do. And we're always at the major trade shows and events also. >> Excellent. Snehal, it's been a pleasure talking to you about Horizon3, what it is that you guys are doing, why, and the greater vision. We appreciate your insights and your time. >> Thank you, likewise. >> All right. For my guest, I'm Lisa Martin. We want to thank you for watching the AWS Startup Showcase. We'll see you next time. (gentle music)
Tony Taylor, HPE | CUBE Conversation, August 2022
>> Hey everyone. Lisa Martin here with you. I'm with HPE right now. Tony Taylor joins me, the Director of Global Test and Supply Chain Cybersecurity at HPE. Tony, it's great to have you on theCUBE. >> Hi, thank you, Lisa. Pleased, pleased to be here. >> Tell me a little bit about your role and your background. >> I've been in the computer industry for about 33 years. I've done a variety of roles throughout operations, fulfillment, R&D, doing different things. My current role here at HPE is to lead the organization responsible for developing test solutions in our PCA manufacturing process and our systems integration team. And then we implement a supply chain cybersecurity process that's focused on internal aspects of development, activities, and strategies, and then how we will drive our supply chain, our suppliers, to make sure that they adhere to these guidelines. >> And your background is engineering. I saw on LinkedIn a little bit of science in there. Tell me a little bit about your background and how you got to where you are now. >> Oh, that's a, that's a long story. Going through school and doing that type of work, I got a phone call, too many years ago, and got involved in the computer industry, going from a user working on those processes and then changing that to building product, introducing new product, developing new solutions and ideas, working on innovation and design of new products, new hardware, working on new software processes. I did heuristics-level customer testing. So it's just a wide variety of activities. I've spanned a lot of different things over the years, been very fortunate to travel the world, live in different parts of the world, to bring up these activities. >> I always love to hear people's back stories on how they got to where they were. Whether it was a zigzaggy path or kind of a straight path. >> I got a phone call from a buddy one day: "Hey, we're doing this. You wanna do it?" And that's where I ended up. >> And the rest is history. So a lot of dynamics in the last couple of years, obviously we've been hearing so much about the supply chain in the news for various reasons, but what are you seeing in the marketplace with regards to security and the trusted supply chain, obviously a big focus there. What are you seeing? >> A lot of changes have been occurring over time, and especially in the last couple of years, the things that we're seeing geopolitically are changing our environments. The threat vectors that we're seeing in cybersecurity are changing. They're becoming more sophisticated. They're coming in in different areas. What we're seeing is greater penetration of our customers. We're seeing a greater number of incidents in the field, and it's becoming a bigger impact for our customers and the supply chain. So we've seen attacks at the root of the cause, where, with neon gas, we're no longer having those activities coming into the space. You're seeing greater ransomware activity and additional challenges associated with the cost of these programs. The infiltration from a hardware perspective, we've looked at those types of processes going through the supply chain. Processes are getting hacked more with that increased sophistication, even at the user level with phishing and spear phishing, those kinds of things. And then you're seeing the changes in the geopolitical market.
That's beginning to drive, you know, governmental aspects and things like that are coming in. So we've seen roughly about, what, 10 and a half trillion worth of cybersecurity losses estimated in 2025. Our loss on an annual basis across the globe is right around a hundred billion, and 45% of organizations have experienced or will be experiencing an attack. So it's just on the rise and it's creating a lot of concern with our customers. >> Yeah, it's really not a matter of if we get hit these days, it's when. So organizations right across every industry have to be prepared. What does HPE see as opportunities? Obviously the threat landscape is changing dramatically, but there are opportunities there for your customers to truly tighten security. What are some of those opportunities through the HPE lens? >> The opportunities, as we're looking at it: from an internal perspective, we need to begin focusing on all the activities and work that we're doing. How do we harden our environments? How do we grow those things? And then begin to investigate the things that we need to do in the supply base, as those customers are beginning to look at things: hardening their environments, looking at their IT systems, where are the areas for penetration within their environments? When you look at the process, we think cybersecurity a lot of times is just about IT attacks. Counterfeit is a big aspect associated with this, and that can impact many different types of organizations. So what we've done is we've created a heat map, looking at the different places where we believe those penetrations can take place internally. And that's our communication back out to our customers: look at the areas where you can be penetrated, and then where do you think are the areas that you really need to focus on? And then look for that remediation plan. I think that's the opportunity for our customers: to harden, you know, have a zero trust but verify type process. >> Right? That's critical these days, as we know the threat landscape has changed so much recently and is only going to continue to change. As we said, it's not a matter of if, it's now a matter of when, and organizations need to be ready for that. So you talked about the heat map. From a technology perspective, what is HPE doing to help organizations really achieve a 360-degree approach to security? >> From an HPE focus, it starts with our Chief Technology Office, right? So we're looking at all the strategies as they're coming down. We look at designing our hardware solutions to be able to support those activities. We're designing our systems and the integration programs, around things like GreenLake as a service, that we're able to provide to our customers to support that. And then, you know, as we continue to do that, we will look within the supply chain at what are the things that we can do there to help drive the improvements to really ensure that the products that are being delivered will meet those customers' requirements. >> And I understand you might have a teaser for me in terms of what we can expect going forward with HPE, with respect to cybersecurity in the supply chain. >> Lots of really good things that are coming up. And from a supply chain perspective, look for an announcement coming up in October for cybersecurity month, about what our next steps are and how we're really going to attack this problem. >> Excellent.
And we'll be waiting for cybersecurity month in October, and to hear that announced from HPE. Tony, thanks so much for joining me on theCUBE today, talking a little bit about your background, how you got to where you are now, the trusted supply chain, and what HPE is doing there to really help customers mitigate the risk. We appreciate your insights and your time. >> Thank you. I appreciate your time. >> All right, for Tony Taylor, I'm Lisa Martin. Thank you so much for watching this conversation. We'll see you next time.
Uri May, Hunters | CUBE Conversation, August 2022
(upbeat music) >> Hey everyone. And welcome to this CUBE Conversation which is part of the AWS Startup Showcase. Season two, episode four of our ongoing series. The theme of this episode is cybersecurity, detect and protect against threats. I'm your host, Lisa Martin, and I'm pleased to be joined by the founder and CEO of Hunters.AI, Uri May. Uri, welcome to theCUBE. It's great to have you here. >> Thank you, Lisa. It's great to be here. >> Tell me a little bit about your background and the founder's story. This company was only founded in 2018, so you're quite young. But gimme that backstory about what you saw in the market that really determined, this is needed. >> Yeah, absolutely. So, I mean, I think the biggest thing for us was the understanding that significant things have happened in the cybersecurity landscape for customers and technology stayed the same. I mean, we kept trying to solve the same... We kept trying to solve a big problem with the same old tools when we actually noticed that the problem had changed significantly. And we saw that change happening in two different dimensions. The first is the types of attacks that we're defending against. A decade ago, we were mostly focused on these highly sophisticated nation state efforts that included unknown techniques and tactics and highly sophisticated kind of methods. Nowadays, we're talking a lot about cyber crime gangs, groups of people that are financially motivated, using off-the-shelf tools, off-the-shelf malware, coordinating in the dark web, attacking for money and ransom basically, versus sophisticated intelligence kind of objectives. And at the same time that was happening, we also saw what we like to refer to as an explosion of the security stack. So some of our customers are using more than 60 or 70 different security tools that are generating sometimes tens of terabytes a day of flows. That explosion of data, together with a very persistent and consistent threat that is continuously affecting customers, creates a very different environment, where you need to analyze a big variety of data and you need to constantly defend yourself against stuff that is happening all the time. And that was kind of like our wake-up moment, when we understood that the tools that are out there now might have been the right tools a decade ago, but they are probably not the right tools to solve the problem now. So yeah, I think that that was kind of what led us to Hunters. And at the same time, I think that that's my personal kind of story behind it. We used to talk a lot about the fact that we want to solve a fundamental problem. And we, as part of the ideation around Hunters and us zooming in on exactly the areas that we want to focus on in security, we talked with a lot of CSOs, we talked with a lot of industry experts, and everyone directed us to the security operation center. I mean the notion that there's a lot of tools and there's always going to be a lot of tools, but eventually decisions are being made by people that are running security operation centers, that are actually acting as the first line of defense. And that's where you feel that the processes are broken. That's where you feel that the technology doesn't really meet the rubber, and the rubber doesn't really meet the road. And for us, it was a very clear sign that this is where we need to focus. And that set us on a journey to explore threat hunting and then understand that we can solve something bigger than that. And then eventually get to where we are today, which is going to market around
a holistic platform that can help SOC analysts do the day-to-day job of defending their organizations. >> So you saw back in 2018, probably even before that, that the SIEM market was primed and ripe for disruption. And only in a four year time period, there's been some pretty significant milestones and accomplishments that the team at Hunters has made in that short timeframe. Talk to me about some of those big milestones that the company has reached in just four years. >> Yeah, I think that the biggest thing, and I know that it's going to sound like a cliche, but we actually believe that it's the team. I mean, we were able to grow to an organization of around 150 employees, all over the world, across, I think, the last time that I checked, like 15 countries. That's the most amazing feeling that you can have. That ability to attract people to a single mission from all over the world and to get them to collaborate and do amazing things and achieve unbelievable accomplishments, I think that's the biggest thing. The other thing for us was customers. I mean, think about it, like, SIEM is such a central and critical system. So for us as a young startup from Tel Aviv to go out to Enterprise America and convince the biggest enterprises around the world to rip and replace the existing solutions that are being built by the biggest software brands out there and install Hunters instead, that's a huge leap of trust, that we are very grateful for, and we're trying to handle with a lot of care and a lot of responsibility. And obviously, I think that other than that, it's all of the investors that we were able to attract that basically enabled all of that customer acquisition and team building and product development. And we're very fortunate to work with the biggest names out there, both from a strategic perspective and also from tier one VCs, mainly from the U.S., but from all over the world actually, that are backing us. >> Great customers, solid foundation. Hunters is built for the cloud, is powered by Snowflake. This is built on AWS. Talk to me about what's in it for me from an AWS customer perspective. What's that value in it for them? >> Yeah, so I think that the most important thing, in my opinion, at least, is the security value that you're getting from it. Other than the fact that Hunters is a multi-tenant SaaS application running in AWS, it's also a system that is highly tuned and specifically built to be very effective at detecting threats inside AWS environments. So we invested a lot of time in research, in analyzing the way attackers are operating inside cloud environments, specifically in AWS. And then we model these techniques and tactics and procedures into the system. We're leveraging data sets like AWS CloudTrail and CloudWatch and VPC Flow Logs, obviously AWS GuardDuty, which is an amazing detection system that AWS offers to its customers, and we're able to leverage it, correlate it with other signals. And at the same time, there's also the commercial aspect and the business aspect. I mean, we're allowing AWS customers to leverage their AWS credits through the marketplace to fund SIEM projects like Hunters, which comes with a lot of efficiencies also. And with a lot of additional capabilities like I mentioned earlier. >> So let's crack open Hunters.AI. What makes this approach different? You talked about the challenges that you guys saw in the market, that there were gaps there, and why technology needed to come in from a disruption standpoint. But describe the differentiators.
When you're talking to prospective customers, what are those key differentiators that Hunters brings to the table? >> Yeah, absolutely. So we like to divide it into three main pillars. The first pillar is everything that we do with data, which is very different from our competitors. We believe that data should be completely liberated from the analytical layer. And that's why we're storing data in a dedicated data warehouse. Snowflake, as you mentioned earlier, is one of our go-to data warehouses. And that gives customers the ability to own their own data. So you as a customer can opt into using Hunters on top of your Snowflake. It's not the only way. You can also get Snowflake bundled as part of your Hunters subscription, but for some customers that ability to reduce vendor lock-in risk, own your data on your own, and also leverage security data for other kinds of workflows is something that is really huge. So that's the first thing that is very different. The second thing is what we like to call security engineering as a service. So when you buy Hunters, you don't just buy a data platform. You actually buy a system, a SOC platform, that is already populated with use cases. So what we are saying is that in today's world the threats that we're handling as a SOC, as security operations center professionals, are actually shared by 80% of the customers out there. So 80% of the customers share around 80% of the threats. And what we're basically saying is, let us as a vendor solve the detection and response around that 80%, so you as a customer could focus on the 20% that is unique to your environment and, in a lot of cases, generates 80% of the impact. So that means that you are getting a lot of prebuilt tools and detections, data modeling for your integrations, automatic investigations, scoring, correlations. All of these things are being continuously deployed and delivered by us because we're a multi-tenant SaaS. And that also allows you, again, to get this effortless, turnkey kind of solution that is very different from your experience with your current SIEM tools, which usually involve a lot of tuning, professional services, configuration, et cetera. And the last aspect of it is everything that we're doing around automation. We're leveraging very unique graph technology and what we call automatic investigation enrichments that allow us to take all of these signals that we're extracting from all over the attack surface, AWS included, but also the endpoint and the email and the network and IoT environments and whatnot, automatically investigate them, load them into a graph, and then automatically correlate them into what we call stories, which are basically representations of incidents that are happening across your attack surface. And that's a very unique capability that we bring to the table that demonstrates our focus on the analytical layer. So it's not just a log aggregation, querying, and dashboarding kind of system. It's actually a security analytics system that is able to drive real insights on top of the data that you're plugging into it. >> So talk to me, Uri, when you're in customer conversations these days, there are so many dynamics and so much flux that customers are dealing with. Obviously, the threat landscape continues to expand and really become quite amorphous as that perimeter blends. What are some of the specific challenges that security operation center or SOC teams come to you saying, help us eliminate this. We have so many tools, we've probably got limited resources.
What are those challenges and how does Hunters really wipe those off the plate? >> Yeah, so I think, first and foremost, it has to do with the second pillar that I mentioned earlier, and that's security engineering. So for most security operations centers and most organizations around the world, the feeling is that they're kind of like stuck on this hamster wheel. They keep on buying tools and then implementing these tools and then writing rules and then generating noise and then fine tuning the rules. And then testing the rules and understanding that the fine tuning actually generated misdetections. And they're kind of like stuck in this vicious cycle. And no one can really help because a lot of the stuff that they're building, they're building it in their environment. And what we're saying is that, let us do it for you, with that 80% that we've mentioned earlier, and allow you to really focus on the stuff that you're doing and even offset your talent. So, we're not really talking about a talent reduction, because everyone needs more talent in cybersecurity nowadays, but we're talking a lot about offset. I mean, if we had a team of five people investing efforts in building rules, building automation, and now three or four of these people can go and do advanced investigations, incident response, threat hunting in general, that's meaningful. For a lot of SOCs, in a lot of cases that means either identifying and analyzing a threat in time or missing it. So, I mean, I think that that's the biggest thing. And the other thing has to do with the first thing that I mentioned earlier, and these are the data challenges. Data challenges in terms of cost, performance, the ability to absorb data sets that today's tools can't really support. I mean, for example, one of the biggest data sets that we're loading that is tremendously helpful is raw data from EDR products. Raw data from EDR products in large enterprises can get to 10, 15, 20 terabytes a day. In today's SIEMs and SOC platforms that the customers are using, this thing is just cost prohibitive for the SOC. They can't really analyze it because it's so costly. So what we're seeing is a lot of customers either not analyzing it at all, or saving it for a very little amount of time, a couple of days, because they can't support the retention around it. So the ability to store huge data sets for longer periods of time makes it something that a lot of big enterprises need. And to be honest, I think that in the next couple of years they would also be forced to have these kinds of capabilities, even from a compliance perspective. >> So in terms of outcomes, I'm hearing reduction in costs, really helping security teams utilize their resources, the ability to analyze growing volumes of data. That's only going to continue to increase as we know. Is there a customer story, Uri, that you have where the value proposition of Hunters really shines through? >> Yeah, I think that one thing comes to mind from the hospitality vertical, and actually it's a reference customer. I mean, we can share the name. The name is booking.com. It's also publicly shown on our website. And I think the coolest thing that we were able to do with booking is give them that capability to stay up to date with the threats that they're facing. So it's not just that we saved a lot of effort for them because we came with a lot of out of the box capabilities that they can use. We also kept them up to date with everything that they were facing.
And there were a couple of cases where we were able to detect threats that were very recent from a threat perspective, based on our ability to invest research time and effort in everything that is going on in the ecosystem. And the feedback that we got from the customer, and it's not a single piece of feedback, like we're getting it a lot, is that, without you guys we wouldn't be able to do the effective research and then the threat modeling and the implementation of these things in time. And working with you kind of like made the difference between analyzing it and reacting in time and potentially blocking like a very serious breach versus maybe finding out when it's too late. >> Huge impact there. And I'm kind of thinking Hunters.AI might be one of the reasons that booking.com's tagline is booking.com, booking.yeah. Yeah, we're secure. We know it if we can demonstrate that to everyone that uses our service. Kind of wrapping things up here, Uri, I noticed that back in, I think it was January of 2022, Hunters raised about 60 million in series C. You talked about kind of being in the GTM phase, where are some of those strategic investments? What have you been doing, focusing on this year, and what's to come as we round out '22? >> Yeah, absolutely. So, I mean, there's a lot of building going on. Yeah. Still, right. I mean, we're getting into that scale mode and scale phase, but we're very much also building our capabilities, building our infrastructure, building our teams, building our business processes. So there's a lot of effort going into that, but at the same time, I mean, we've been able to really deepen our relationship with DataBlitz, which is a very important partner of ours. And we got some big news coming up on that. And they were a strategic investor that participated in our series C. And at the same time we're working in the EMEA market, which is a very interesting market for us. And we get a lot of support from another strategic investor that joined the series C, Deutsche Telekom. And they are a huge provider in IT and security and email, among a lot of other things, including T-Systems and T-Mobile and everything that has to do with that. So we're getting a lot of support from them. And regardless, I think, and that ties back to what we've mentioned earlier, the ability for us to come to really big customers with the quality of investors that we have is a very important external validation. It's basically saying this company is here to stay. We're aiming at disrupting the market. We're building something big. You can count on us in replacing this critical system that we're talking about. And sometimes it makes a difference, like sometimes for some of the customers, it means that this is something that I can rely on. Like it's not a startup that is going to be sold two months after I'm deploying it. And it's not a founder that is going to disappear on me. And for a lot of customers, these things happen, especially in an ecosystem like cybersecurity, that is so big with such a huge variety of different systems. So, yeah, I think that we're getting ready for that scale mode and hopefully it'll happen sooner than what we think. >> A lot of growth already, as we mentioned in the beginning of the program, since just 2018. It sounds like, from a foundation perspective, you guys are strong, you're rocking away and ready to really take things into 2023 with such force.
Uri, thank you so much for joining me on the program, talking about what Hunters.AI is up to and how you're different and why you're disrupting the SIEM market. We appreciate your insights and your time. >> Absolutely. Lisa, the pleasure was all mine. Thank you for having me. >> Likewise. For Uri May, I'm Lisa Martin. Thank you for watching our CUBE Conversation as part of the AWS Startup Showcase. Keep it right here for more action on theCUBE, your leader in tech coverage. (upbeat music)
Rakesh Narasimhan, Anitian | CUBE Conversation, August 2022
(bright upbeat music) >> Welcome, everyone, to this CUBE Conversation. It's part of our season two, episode four of the ongoing AWS Startup Showcase Series. Today's theme, "Cybersecurity: Detect and Protect Against Threats." I'm your host, Lisa Martin. I've got one of our alumni back with us. Rakesh Narasimhan joins me, President and CEO of Anitian. Rakesh, it's great to have you back on the program. >> Thank you very much. Pleasure to be here. >> So some congratulations are in order. I see that Anitian was recently awarded nine Global InfoSec Awards at RSA Conference just this year, including a couple of great titles: Hot Company and Security Company of the Year. Talk to the audience who may not know Anitian: what is it doing to enable and empower the digital transformation for enterprises? I mean, we've been talking about the acceleration of digital transformation. How is Anitian an enabler of that? >> Thank you again for the opportunity. I think the big change that we brought to the table at Anitian is really to what is typically a very manual, complex, time consuming, and quite expensive process. We've just brought software innovations to it, and really, for customers who are trying to do compliance or security in the cloud, we just provide a platform that basically accelerates a customer's application migration to the cloud. And so that ability is the software innovation that we were able to bring to the space, and that just wasn't there before. And so we're just happy that we took the opportunity to innovate there and just bring it to the customers. >> So let's now talk to and address those AWS customers. When you're talking to prospects, existing AWS customers, what do you say are the differentiators that make Anitian so unique in AWS? >> That's a great question. I think the biggest innovation, the biggest thing that we bring to the table, is really an acceleration in the timeline and completion of their application. So if you're a customer and you're trying to get into a new market for compliance, for example, or you're trying to basically get a new application up and running in a secure environment, in either one of those cases, we have a product offering, a platform offering, that enables you to quickly get up and running and get to production. And that's been the reason why we've enjoyed enormous success in the marketplace in the AWS customer base. >> One of the areas where I see that Anitian has been very successful is in helping cloud software vendors get FedRAMP compliance and be able to access what is a huge federal market. How are you able to do that? >> Yeah, I think the big thing that we focused on was, you have a complete class of SaaS vendors out there who provide enormous innovation that they bring to the marketplace, but the government market in general has not been able to participate in it because, again, like I said, it's very complex. It takes time and it's very expensive. And so we focused on that opportunity to really make it easier for all these cloud service providers to be able to bring their innovations to the government market, for example, with FedRAMP. And so we help with the automation and the acceleration with our platform offering on top of cloud providers like AWS, and that enables the SaaS provider to offer that opportunity that hitherto was not available, to now make it available in the government marketplace. And that's a huge buyer, if you will; their budgets are huge.
They're still buying even in a downturn in the market, even as commercial vendors look at that market and everybody's nervous about it. But if you look at the government market, they have budget, they're buying, and that needs to be provided to the installed base. And so we help make that happen. >> How does that make you unique from a competitive perspective, to be able to accelerate FedRAMP for AWS customers in particular? >> I think the biggest issue has always been three things, right? It's complex, it's time consuming, but most importantly, how quickly a company can make their software innovations available to a large market has always been sort of the challenge, especially in the federal market. So we basically pre-engineered a platform, taking care of all the requirements of the standard in compliance and security, and then essentially help the customer bring that innovation on top of the AWS environment, making that available to the customers in record time. That's the reason why we're able to enjoy the success. Historically, the space has been very, very focused on a lot of consulting folks really providing consulting on an hourly basis. We thought of actually bringing a software oriented approach, just like people buy email, they buy a service, and then all the innovations come along with it for the subscription that you pay. It's a very similar concept we brought to this space. Prior to this, either people did it themselves or they hired a lot of consulting folks to tell them what to do. And that could take a long time, and then not just time and expense, but every single time they made a change they would still, again, have to go redo all that work. We just brought a platform approach, which is well understood by now in the industry: you pay a subscription, you buy a platform, and all the innovations come along with it. So that's huge productivity, time to market, but most importantly it enables them to achieve their revenue goals because they're trying to get to market and service the customer, right? So we help them accomplish that in record time. >> So you are really impacting your customers' bottom line. You've been very successful in helping AWS public sector customers to accelerate FedRAMP. As you talked about FedRAMP compliance, how are you now switching gears to focus on the AWS commercial customers and even enterprise DevOps teams to be able to accelerate cloud application security? >> Yeah, I think, again, we started from a place of humility, if you will. You know, there's a lot of vendors, a lot of folks make a lot of claims. We wanted to make sure that, first, we were very good at doing something. And that something was really going after the federal market, and the success we achieved in that marketplace gave us a few insights, which were that people really struggle in all kinds of environments, not just public sector. And what we found is that commercial customers are also trying to go to cloud. They're also dealing with the issues of security and securing their environments. And it's really the DevOps and DevSecOps folks on whom this burden falls. And they have to answer to so many different constituencies in an enterprise company. And so, time and time again while we did the work in FedRAMP, we learned that, you know, it's not just about compliance. It's also about securing on a base of standards.
So how could we provide the same pre-engineered environment for DevOps and DevSecOps teams to be able to run that environment for their applications? That became an 'aha' for us, because we were running into it all the time on the public sector side. So we went and talked to a few customers and said, 'Hey, how about we do the same thing on the commercial side for you?' And I wish I could take credit for this, but it's actually not true. It's actually customers who came to us and said, 'Hey, you did this really well for us on the public sector side. Could you provide the same thing for us on the commercial side?', where it's not about all the documentation and all the audits and things that happen on the compliance side of the house. I just want you to provide an environment so that our DevOps teams could just operate in that environment and Devs can work on it. Can you do that? And we'll pay you. And that's really how our idea of secure cloud enterprise was born. Our primary offering historically has been secure cloud compliance, a compliance business if you will, where people could go to market and have a completely new market to go after. Whereas on the enterprise side, we took those innovations, those learnings, and brought them to a commercial market. And so that's the new product, if you will, that we're launching to service that customer base, if you will. >> So if I'm an AWS customer, when do I know it's time to contact Anitian and say, 'Guys, we need help and we think you're the right ones to help us accelerate.' >> Yeah, I think it's really straightforward. If you are a commercial SaaS vendor, if you will, that runs on AWS and you want to go after a new market, then you come to us and we can help you quickly get to all the compliance standards so that you can go sell in the government marketplace. That's an offering we already have. Or you are a brand new company, a B2B company, and you're developing an application and you want a pre-engineered environment that passes all the security standards so that you don't have to worry about it. You have a subscription to AWS and you have a subscription to us. And then that basically provides you a secure environment in which you can start developing your applications and start deploying them, much like your DevOps cycle would work. So we provide that basis already for you. So if you're a customer on the B2B side and you're going to cloud to get your applications to the marketplace on AWS, we're a great solution for you to actually have that engineered platform in place already. So those are the two areas where you can contact us and we can help you out. >> And talk to me about when you are in customer conversations, especially as we've had such challenging times the last couple of years. How have those customer conversations changed and evolved? Are you seeing an acceleration up the C-suite stack? Is this a key priority for the CEO and his or her team? >> Yeah, I think it's a phenomenal point. I think security's always been top of mind for folks, not just the C-suite, but in boardrooms as well. But you know, the key thing we found is that even in a down market, like the environment that is playing out in the macro environment right now, the thing that has not changed is people are still trying to figure out how to make their dollar go further. And how do I get a better return on investment? So if you look at our compliance business, that's what that growth is all about, and that market is growing.
There's still opportunity, and people still have budgets and are spending. So commercial companies are still trying to figure out how can I extend my market reach into new markets? So that's an area that the C-suite is really interested in. Funny enough, you would think in the cyber world it's the CSOs who are the ones who actually are looking for solutions from us. That's certainly an audience, but CEOs and CROs are the folks who really clamor for our solution, because it is their ability to enter a new market and go after a new budget that can grow their business and have an ROI pretty quickly. That's the ability for them to make that decision. So it's very pertinent to their buying behavior that we have aligned ourselves to. Very simply put, by engaging us, they get to go after a new market to establish a new line of revenue they didn't have before. So that's always interesting to any C-suite member as you can imagine. And that's the compliance side. >> Absolutely, establishing new revenue streams is huge and that's a big competitive differentiator. We've seen a lot of customers, in any industry, that weren't able to do that during the challenging pandemic times. And that is a game changer for organizations across industries. >> Exactly, exactly. And we're seeing that play out, not just on that side, but even on the commercial side where people are also trying to figure out how do I basically make sure it's pre-done so that it's one less thing for me to have to worry about, so that I can be more productive. I can get to market pretty quickly, which means I can, again, deliver to my customers quickly, which means revenue for them as well. So we are in the security business, but really if you notice we're solving a business problem for our customers and we're aligned to their ROI so that it's relatively easier for them to make a decision. They certainly get security and compliance, but the bigger benefit for them is to grow their business itself. So we are trying to accelerate that momentum for them. >> That's critical, and I'm sure your customers really appreciate the impact that you're having on their growth, their ability to deliver to what I can only presume is their demanding customers. As one of the things I know that's been in short supply the last couple of years is patience and tolerance. Is there, Rakesh, a customer story that you think really articulates the value of what Anitian is delivering? Maybe a favorite customer story that you mention when you're giving talks? >> Sure, sure. We really have a very diverse customer base across the landscape. If you think about our compliance business, Smartsheet is a great example who partnered early. They were not even in the cloud before. And then that's a great example with AWS where the three of us work together to offer Smartsheet, the collaboration software public SaaS company, if you will, who really established themselves and differentiated themselves in the marketplace by offering that on AWS. And we helped them accomplish their FedRAMP itself not just once, but you know, they've been great customers of ours, multiple renewals over the years, and every single year the business that they get on the federal side has increased because of the work that they did first with us. And so, you know, we look for more opportunities with them, certainly on that part. And increasingly we start thinking about where else can we help them grow?
Because typically most customers have a thing to solve on a compliance standard, but it turns out that the compliance journey is, you know, some companies are trying to do SOC 2 to be able to even sell. Then you want to do electronic commerce, you might have to do PCI, or you want to sell to the federal government, you'll have to do FedRAMP, and FedRAMP has moderate and high, but depending on the customers you have, including DOD, and once you get to DOD, they'll ask for IL4 and IL5. So these are different compliance regimes. If you will, think of them as a journey, and we want to be the company that provides a seamless progression for customers as they're on that journey so that we can actually deliver something of value. We're not interested in nickel-and-diming customers and charging them by the hour, we're a platform player. We want to make sure that they use it to basically get their ROI and growth happening. And we just take care of the hard part of making sure that they're in compliance, right? And similarly, we're bringing the same idea, like the Smartsheet one I told you about, to a commercial marketplace of customers who can do the same thing for commercial apps in the cloud. And so that gives us a very clean way for customers to really become not just productive, but satisfy their customers quickly and hence grow their business. And we celebrate that collaboration, and all of that happens because of AWS and our ability to focus on those customers. >> Sounds like a great partnership and definite synergy there. I know, and you know as well, how customer obsessed, in their own words, AWS is. Speaking of customers, one more question for you in terms of being on that journey, that compliance journey, which isn't a destination, right? It's probably a zigzaggy path. Do you work with customers both that haven't started the process on their FedRAMP plans, or those that maybe have, with a competitor, and are running into roadblocks? Are those both routes to market for you? >> Yeah, interestingly enough, historically we used to see a lot of folks who have tried to do it themselves and found it hard, or for a variety of reasons they just gave up. And so they would come to us. We have also examples of customers who have tried to go down the consulting path and it has not worked, and they come to us with sort of a broken project. We start from there, but a majority of our business is people who've gotten a contract from one of the agencies. Then they're like, 'oh now what!' We need to get this done before September. And so, what's the quickest way to get there? And generally that's where we can help you, because we are the best, fastest way to get there. And so we get that mix of customers: people who have already tried and it hasn't worked out, people who have tried with other folks and it hasn't worked out, but a majority of the folks are people who don't even know, you know, how to go about doing it, but they know they have to do it in order for them to keep the customer that they've won at one of the agencies, if you will. So that has given us a very healthy perspective on how to help customers of different kinds in that journey. The other thing is, you know, we've grown tremendously in the last couple of years. And the other thing we learned is every customer is different. And we tried to bring a very common approach to addressing this problem. Even though customers come in all shapes and forms, we have startup companies in, you know, early forms of maturity.
And we have like really iconic, you know, unicorn companies who we've helped go through FedRAMP. So the gamut is large, but you know, we're learning a lot by doing this. And I think that's the key thing for me. I want our company to be one that is growing with innovation, but at the same time keeping flexibility in our approach so that we are not just learning new things, we're delivering on the harder problems our customers are facing. 'Cause I think that's where software innovation can really play a big differentiating role. And that's the reason why I always enjoyed being at Anitian and growing the business and keeping the company really fast moving and innovative. >> Speaking of being fast moving and innovative, here we are coming up on the fourth quarter of calendar year '22, what's next for Anitian? What are some of the exciting things that have you pumped up? Have you got your mojo going for what's next for the rest of the year? >> Yeah, I think a big portion of my enthusiasm for the company and the road ahead is, I think it's rare if you look at the industry, oftentimes you see companies that start out with a single solution and then are able to grow from there. One of the best advantages Anitian has is this platform centric approach to do compliance on the journey I talked about. So if you think about that journey, every customer that is going to cloud has this challenge that they either have to comply with a bunch of standards, one or many. And then how do I do that in a platform approach, in a common way, so that I don't have to worry about it? I pay a subscription and I am just protected by that. And I actually get to the marketplace. So that's a tremendous journey we are on. We've only done a few of them and we have a whole new set of compliance standards coming on our platform. So that's one way, look forward to that. The other one I'm really looking forward to is the commercial customers. There's a huge opportunity for people to really know that they're sitting on top of a very secure environment in AWS. And how do I quickly propel myself into the marketplace so that I can be differentiated, I can get to market quickly, but I can also make sure my innovations are getting to the marketplace as a customer, right? So I think I'm really excited about the things we are bringing to market, not just this year, but next year, early next year, on the compliance side as well as the commercial side, that'll actually differentiate us and make it a lasting part of a customer's journey. And that's, I think, the best thing you can hope for, building a lasting company where your innovations are powering the productivity of your customers in a meaningful manner. And I always feel proud of the team. You mentioned the awards, but honestly more than anything else, we've put together a great team. And the team does a tremendous job with a very good ecosystem of partners. And our humility is, it's not just us, it's the ecosystem together. And the partnership with Amazon that helps us be the company we are able to be. We live in really storied times and we're lucky to be part of this opportunity, if you will. >> Yeah, better together. That ecosystem is incredibly powerful. Thank you so much Rakesh for talking about what's going on at Anitian, how you're helping customers accelerate FedRAMP compliance, what you're doing in the commercial space and how you're helping your customers really improve their bottom line. We thank you so much for partnering with the Cube for season two, episode four of the AWS startup showcase.
>> My pleasure. Thank you very much. >> And we want to thank you for watching but keep it right here for more action on the Cube which as you know, is your leader in tech coverage. I'm Lisa Martin. See you next time. (lively music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
AWS | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Rakesh | PERSON | 0.99+ |
Rakesh Narasimhan | PERSON | 0.99+ |
August 2022 | DATE | 0.99+ |
two areas | QUANTITY | 0.99+ |
next year | DATE | 0.99+ |
Anitian | PERSON | 0.99+ |
Anitian | ORGANIZATION | 0.99+ |
Anition | ORGANIZATION | 0.99+ |
three things | QUANTITY | 0.99+ |
FedRAMP | ORGANIZATION | 0.99+ |
Cybersecurity: Detect and Protect Against Threats | TITLE | 0.99+ |
one | QUANTITY | 0.99+ |
both | QUANTITY | 0.98+ |
three | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
Today | DATE | 0.97+ |
this year | DATE | 0.96+ |
Smartsheet | TITLE | 0.96+ |
One | QUANTITY | 0.95+ |
early next year | DATE | 0.94+ |
DOD | TITLE | 0.93+ |
single solution | QUANTITY | 0.92+ |
one way | QUANTITY | 0.88+ |
one more question | QUANTITY | 0.88+ |
Anitian | TITLE | 0.85+ |
last couple of years | DATE | 0.83+ |
one of | QUANTITY | 0.81+ |
RSA conference | EVENT | 0.81+ |
calendar year 22 | DATE | 0.79+ |
September | DATE | 0.75+ |
DevSecops | ORGANIZATION | 0.72+ |
couple great titles | QUANTITY | 0.71+ |
once | QUANTITY | 0.71+ |
Startup Showcase Series | EVENT | 0.7+ |
season two | QUANTITY | 0.68+ |
FedRAMP | TITLE | 0.67+ |
urth | DATE | 0.67+ |
four | OTHER | 0.67+ |
IL5 | ORGANIZATION | 0.66+ |
episode four | OTHER | 0.63+ |
single time | QUANTITY | 0.63+ |
nine global | QUANTITY | 0.62+ |
single | QUANTITY | 0.62+ |
IL4 | ORGANIZATION | 0.6+ |
agencies | QUANTITY | 0.59+ |
Cube | COMMERCIAL_ITEM | 0.59+ |
Ann Potten & Cole Humphreys | CUBE Conversation, August 2022
(upbeat music) >> Hi, everyone, welcome to this program sponsored by HPE. I'm your host, Lisa Martin. We're here talking about being confident and trusting your server security with HPE. I have two guests here with me to talk about this important topic. Cole Humphreys joins us, global server security product manager at HPE, and Ann Potten, trusted supply chain program lead at HPE. Guys, it's great to have you on the program, welcome. >> Hi, thanks. >> Thank you. It's nice to be here. >> Ann let's talk about really what's going on there. Some of the trends, some of the threats, there's so much change going on. What is HPE seeing? >> Yes, good question, thank you. Yeah, you know, cybersecurity threats are increasing everywhere and it's causing disruption to businesses and governments alike worldwide. You know, the global pandemic has caused limited employee availability originally, this has led to material shortages, and these things opens the door perhaps even wider for more counterfeit parts and products to enter the market, and these are challenges for consumers everywhere. In addition to this, we're seeing the geopolitical environment has changed. We're seeing rogue nation states using cybersecurity warfare tactics to immobilize an entity's ability to operate, and perhaps even use their tactics for revenue generation. The Russian invasion of Ukraine is one example. But businesses are also under attack, you know, for example, we saw SolarWinds' software supply chain was attacked two years ago, which unfortunately went unnoticed for several months. And then, this was followed by the Colonial Pipeline attack and numerous others. You know, it just seems like it's almost a daily occurrence that we hear of a cyberattack on the evening news. And, in fact, it's estimated that the cyber crime cost will reach over $10.5 trillion by 2025, and will be even more profitable than the global transfer of all major illegal drugs combined. This is crazy. You know, the macro environment in which companies operate in has changed over the years. And, you know, all of these things together and coming from multiple directions presents a cybersecurity challenge for an organization and, in particular, its supply chain. And this is why HPE is taking proactive steps to mitigate supply chain risk, so that we can provide our customers with the most secure products and services. >> So, Cole, let's bring you into the conversation. Ann did a great job of summarizing the major threats that are going on, the tumultuous landscape. Talk to us, Cole, about the security gap. What is it, what is HPE seeing, and why are organizations in this situation? >> Hi, thanks, Lisa. You know, what we're seeing is as this threat landscape increases to, you know, disrupt or attempt to disrupt our customers, and our partners, and ourselves, it's a kind of a double edge, if you will, because you're seeing the increase in attacks, but what you're not seeing is an equal to growth of the skills and the experiences required to address the scale. So it really puts the pressure on companies, because you have a skill gap, a talent gap, if you will, you know, for example, there are projected to be 3 1/2 million cyber roles open in the next few years, right? So all this scale is growing, and people are just trying to keep up, but the gap is growing, just literally the people to stop the bad actors from attacking the data. And to complicate matters, you're also seeing a dynamic change of the who and the how the attacks are happening, right? 
The classic attacks that you've seen, you know, in the espionage in all the, you know, the history books, those are not the standard plays anymore. You'll have, you know, nation states going after commercial entities and, you know, criminal syndicates, as Ann alluded to, that there's more money in it than the international drug trade, so you can imagine the amount of criminal interest in getting this money. So you put all that together and the increase in attacks, it just is really pressing down on, literally, I mean, the reports we're reading, over half of everyone. Obviously, the most critical infrastructure cares, but even just mainstream computing requirements need to have their data protected, "Help me protect my workloads," and they don't have the people in-house, right? So that's where partnership is needed, right? And that's where we believe, you know, our approach with our partner ecosystem, this is not HPE delivering everything ourselves, but all of us in this together, is really what we believe is the only way we're going to be able to get this done. >> So, Cole, let's double-click on that. HPE and its partner ecosystem can provide expertise that companies in every industry are lacking. You're delivering HPE's 360-degree approach to security. Talk about what that 360-degree approach encompasses. >> Thank you, it is an approach, right? Because I feel that security is a thread that will go through the entire construct of a technical solution, right? There isn't a, "Oh, if you just buy this one server with this one feature, you don't have to worry about anything else." It's really, it's everywhere, at least the way we believe it, it's everywhere. And in a 360-degree approach, the way we like to frame it is, it's this beginning with our supply chain, right? We take a lot of pride in the designs, you know, the really smart engineering teams, the design of technology, our awesome, world-class global operations team working in concert to deliver some of these technologies into the market, that is, you know, a great capability, but also a huge risk to customers. 'Cause that is the most vulnerable place, that if you inject some sort of malware or tampering at that point, you know, the rest of the story really becomes moot, because you've already been defeated, right? And then, you move on to, you've physically deployed that through our global operations, now you're in an operating environment. That's where automation becomes key, right? We have software innovations in, you know, our iLO management product inside those single servers, and we have really cool new GreenLake for compute operations management services out there that give customers more control back and more information to deal with this scaling problem. And then, lastly, as you begin to wrap up, you know, the natural life cycle, and you need to move to new platforms and new technologies, we think about the exit of that life cycle, and how do we make sure we dispose of the data and move those products into a secondary life cycle, so that we can move back into this kind of circular 360-degree approach. We don't want to leave our customers hanging anywhere in this entire journey. >> That 360-degree approach is so critical, especially given, as we've talked about already in this segment, the changes, the dynamics in the environment. Ann, as Cole said, this 360-degree approach that HPE is delivering, beginning in the manufacturing supply chain, seems like the first line of defense against cyberattackers.
Talk to us about why that's important and where did the impetus come from? Was that COVID, was that customer demand? >> Yep, yep. Yeah, the supply chain is critical, thank you. So in 2018, we could see all of these cybersecurity issues starting to emerge and predicted that this would be a significant challenge for our industry. So we formed a strategic initiative called the Trusted Supply Chain Program designed to mitigate cybersecurity risk in the supply chain, and really starting with the product life cycle, starting at the product design phase and moving through sourcing and manufacturing, how we deliver products to our customers and, ultimately, a product's end of life that Cole mentioned. So in doing this, we're able to provide our customers with the most secure products and services, whether they're buying their servers for their data center or using our own GreenLake services. So just to give you some examples, something that is foundational to our Trusted Supply Chain Program we've built a very robust cybersecurity supply chain risk management program that includes assessing our risk at all factories and our suppliers, okay? We're also looking at strengthening our software supply chain by developing mechanisms to identify software vulnerabilities and hardening our own software build environments. To protect against counterfeit parts, that I mentioned in the beginning, from entering our supply chain, we've recently started a blockchain program so that we can identify component provenance and trace parts back to their original manufacturers. So our security efforts, you know, continue even after product manufacturing. We offer three different levels of secured delivery services for our customers, including, you know, a dedicated truck and driver, or perhaps even an exclusive use vehicle. We can tailor our delivery services to whatever the customer needs. And then, when a product is at its end of life, products are either recycled or disposed using our approved vendors. So our servers are also equipped with the One-Button Secure Erase that erases every byte of data, including firmware data. And talking about products, we've taken additional steps to provide additional security features for our products. Number one, we can provide platform certificates that allow the user to cryptographically verify that their server hasn't been tampered with from the time it left the manufacturing facility to the time that it arrives at the customer's facility. In addition to that, we've launched a dedicated line of trusted supply chain servers with additional security features, including Secure Configuration Lock, Chassis Intrusion Detection, and these are assembled at our U.S. factory by U.S. vetted employees. So lots of exciting things happening within the supply chain not just to shore up our own supply chain risk, but also to provide our customers with the most secure product. And so with that, Cole, do you want to make our big announcement? >> All right, thank you. You know, what a great setup though, because I think you got to really appreciate the whole effort that we're putting into, you know, bringing these online. But one of the, just transparently, the gaps we had as we proved this out was, as you heard, this initial proof was delivered with assembly in the U.S. factory employees. You know, fantastic program, really successful in all our target industries and even expanding to places we didn't really expect it to. 
But it's kind of going to the point of security isn't just for one industry or one set of customers, right? We're seeing it in our partners, we're seeing it in different industries than we have in the past. But the challenge was we couldn't get this global right out the gate, right? This has been a really heavy, transparently, a U.S. federal activated focus, right? If you've been tracking what's going on since May of last year, there's been a call to action to improve the nation's cybersecurity. So we've been all in on that, and we have an opinion and we're working hard on that, but we're a global company, right? How can we get this out to the rest of the world? Well, guess what? This month we figured it out and, well, it's take a lot more than this month, we did a lot of work, but we figured it out. And we have launched a comparable service globally called Server Security Optimization Service, right? HPE Server Security Optimization Service for ProLiant. I like to call it, you know, SSOS Sauce, right? Do you want to be clever? HPE Sauce that we can now deploy globally. We get that product hardened in the supply chain, right? Because if you take the best of your supply chain and you take your technical innovations that you've innovated into the server, you can deliver a better experience for your customers, right? So the supply chain equals server technology and our awesome, you know, services teams deliver supply chain security at that last mile, and we can deliver it in the European markets and now in the Asia Pacific markets, right? We could ship it from the U.S. to other markets, so we could always fulfill this promise, but I think it's just having that local access into your partner ecosystem and stuff just makes more sense. But it is a big deal for us because now we have activated a meaningful supply chain security benefit for our entire global network of partners and customers and we're excited about it, and we hope our customers are too. >> That's huge, Cole and Ann, in terms of the significance of the impact that HPE is delivering through its partner ecosystem globally as the supply chain continues to be one of the terms on everyone's lips here. I'm curious, Cole, we just couple months ago, we're at Discover, can you talk about what HPE is doing here from a security perspective, this global approach that it's taking as it relates to what HPE was talking about at Discover in terms of we want to secure the enterprise to deliver these experiences from edge to cloud. >> You know, I feel like for me, and I think you look at the shared-responsibility models and, you know, other frameworks out there, the way I believe it to be is it's a solution, right? There's not one thing, you know, if you use HPE supply chain, the end, or if you buy an HPE ProLiant, the end, right? It is an integrated connectedness with our as-a-service platform, our service and support commitments, you know, our extensive partner ecosystem, our alliances, all of that comes together to ultimately offer that assurance to a customer, and I think these are specific meaningful proof points in that chain of custody, right? That chain of trust, if you will. Because as the world becomes more zero trust, we are going to have to prove ourselves more, right? And these are those kind of technical credentials, and identities and, you know, capabilities that a modern approach to security need. >> Excellent, great work there. Ann, let's go ahead and take us home. 
Take the audience through what you think, ultimately, what HPE is doing really infusing security at that 360-degree approach level that we talked about. What are some of the key takeaways that you want the audience that's watching here today to walk away with? >> Right, right, thank you. Yeah, you know, with the increase in cybersecurity threats everywhere affecting all businesses globally, it's going to require everyone in our industry to continue to evolve in our supply chain security and our product security in order to protect our customers and our business continuity. Protecting our supply chain is something that HPE is very committed to and takes very seriously. So, you know, I think regardless of whether our customers are looking for an on-prem solution or a GreenLake service, you know, HPE is proactively looking for and mitigating any security risk in the supply chain so that we can provide our customers with the most secure products and services. >> Awesome, Anne and Cole, thank you so much for joining me today talking about what HPE is doing here and why it's important, as our program is called, to be confident and trust your server security with HPE, and how HPE is doing that. Appreciate your insights and your time. >> Thank you so much for having us. >> Thank you, Lisa. >> For Cole Humphreys and Anne Potten, I'm Lisa Martin, we want to thank you for watching this segment in our series, Be Confident and Trust Your Server Security with HPE. We'll see you soon. (gentle upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa Martin | PERSON | 0.99+ |
Anne Potten | PERSON | 0.99+ |
Cole | PERSON | 0.99+ |
Ann | PERSON | 0.99+ |
Ann Potten | PERSON | 0.99+ |
2018 | DATE | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
August 2022 | DATE | 0.99+ |
Anne | PERSON | 0.99+ |
Cole Humphreys | PERSON | 0.99+ |
Lisa | PERSON | 0.99+ |
Discover | ORGANIZATION | 0.99+ |
360-degree | QUANTITY | 0.99+ |
Asia Pacific | LOCATION | 0.99+ |
SolarWinds' | ORGANIZATION | 0.99+ |
two guests | QUANTITY | 0.99+ |
May | DATE | 0.99+ |
U.S. | LOCATION | 0.99+ |
over $10.5 trillion | QUANTITY | 0.99+ |
first line | QUANTITY | 0.99+ |
two years ago | DATE | 0.99+ |
2025 | DATE | 0.99+ |
today | DATE | 0.99+ |
couple months ago | DATE | 0.98+ |
one example | QUANTITY | 0.98+ |
one set | QUANTITY | 0.97+ |
one | QUANTITY | 0.97+ |
This month | DATE | 0.96+ |
ProLiant | ORGANIZATION | 0.94+ |
zero trust | QUANTITY | 0.93+ |
GreenLake | ORGANIZATION | 0.92+ |
single | QUANTITY | 0.92+ |
three | QUANTITY | 0.9+ |
one industry | QUANTITY | 0.89+ |
this month | DATE | 0.89+ |
pandemic | EVENT | 0.89+ |
SSOS Sauce | ORGANIZATION | 0.85+ |
double | QUANTITY | 0.81+ |
3 1/2 million cyber roles | QUANTITY | 0.78+ |
over half | QUANTITY | 0.77+ |
one feature | QUANTITY | 0.76+ |
last year | DATE | 0.75+ |
one server | QUANTITY | 0.75+ |
next few years | DATE | 0.73+ |
Supply Chain Program | OTHER | 0.72+ |
Be Confident and Trust | TITLE | 0.72+ |
Ukraine | LOCATION | 0.71+ |
Number one | QUANTITY | 0.7+ |
HPE | COMMERCIAL_ITEM | 0.68+ |
Ramesh Prabagaran, Prosimo | CUBE Conversation
(upbeat music) >> Hello, welcome to this Cube Conversation here in Palo Alto, California. I'm John Furrier, host of theCube. We have a returning Cube alumni, Ramesh Prabagaran, who is the co-founder and CEO of Prosimo.io. Great to see you, Ramesh. Thanks for coming in to our studio, and welcome to the new layout. >> Thanks for having me here, John. After a series of Zoom conversations, it's great to be live and in the flesh! >> Great to be in person. We also got a new stage for our Supercloud event, which we've been opening up to the community, looking forward to getting your perspective on that soon as well. But I want to keep the conversation really about you guys. I want to get the story down. You guys came out of stealth, Multicloud, Supercloud is right in your wheelhouse. >> Exactly. >> You got to love Supercloud. >> Yeah. As I walked in, I saw Supercloud all over the place, and it just gives you a jolt of energy. >> Well, you guys are in the middle of the action. Your company, I want you to explain this in a minute, is in the middle of this next wave. Because we had the structural change I called Cloud One. Amazon, use case, developers, no need to build a data center, all that goodness happens, higher level services of abstraction are happening, and then Azure comes in. More PaaS, and then more install base, now they're nipping at the heels. So full on hyperscale, CapEx growth, great for everybody. Now comes new use cases. Cloud to cloud, app to app, you see Databricks, Snowflake, MongoDB, all doing extremely well by leveraging the CapEx, now it's an ops problem. >> Exactly. >> Now ops and security. >> Yeah. It's speed of applications. >> How are you guys vectoring into that? Explain what you guys do. >> Absolutely. So let me take kind of the customer pain point first, right? Because it's always easier to explain that, and then we explain what is it that we do. So, it's no surprise. Applications are moving into the cloud, or people are building apps in the cloud en masse. The infrastructure that's sitting in front of these applications, cutting across networking, security, the operational piece associated with that, does not move at the same speed. The apps sometimes get upgraded two, three times a day, the infrastructure gets touched one time a week at best. And so increasingly, the cloud platform teams, the developers are all like, "Hey, why? Why? Why?" Right? "I thought things were supposed to move fast in the cloud." It doesn't. Now, if you double click on that, really, it's two reasons. One, those that want to have consistency across the stack that they had in the data center, they bring a virtual form factor of that stack and line it up in the cloud, and before you know it, it's cost, it's operational complexity, there are multiple single panes of glass, all the fun stuff associated...
14 services from AWS, 15 from Azure, 14 more from GCP, even if you are in a single cloud. They just keep it to that. I need to know how to put this together. Because all these services are great, but how do I put this together? And enterprises don't have just one application, they have hundreds of these applications. So the requirements of a database are different than a service mesh, different than a serverless application, different than a web application. And before you know it, "How do I put all these things together?" And so we looked at this problem, and we said, "Okay. We subscribe to the fact that cloud-native is the way to go, right, but something needs to be there to make this simple." Right? And so, first thing that we did was bring all these cloud-native services together, we help orchestrate that, and we said, "okay, know what, Mr. Enterprise? We got you covered." Right? But now, it doesn't stop there. That's like, 10% of the value, right? What do you really need? What do you care about now? Because the apps are in the center of the universe, and who's talking to it? It's another application sitting either in the same cloud, or in a different cloud, or it's a user connecting into the application. So now, let's talk about what are the networking, security, and operational requirements required for these apps to talk to each other, or the user to talk to the application. That's really what we focus on. >> Yeah. And I think one of the things that's driving this opportunity for you, and I want to get your reaction to this, is that the modern application movement is all about cloud-native. Okay, they're obviously doing great. Now, kind of the kumbaya moment in enterprise is that the security team and ops teams have to play ball and be friends with the developer, and vice versa. So harmony's coming there. So the little harmony. And two, the business is driving apps. IT is transforming over. This is why the Supercloud idea is interesting to Dave and I. Because when we coined that term, multi-cloud was not a market. Everyone has multiple clouds, 'cause they have Microsoft Office, that's now in the cloud, they got SQL Server, I mean it's really kind of Microsoft Cloud. >> Exactly. >> So you have a cloud. But do you have ops teams building on the stack? What about the network layer? This is where the rubber meets the road. >> Absolutely, yeah. And if you look at the challenges there, if you just focus on networking and security, right? When applications need to talk to each other, you have a whole bunch of underlying services, but somebody needs to put this thing on top. Because what you care about is "can these group of users talk to these class of applications." Or, "these group of applications, can they talk to each other," right? This whole notion of connectivity is just table stakes. Everybody just assumes it's there, right? It's the next layer up, which is, "how do I bring Zero Trust access? How do I get the observability?" And observability is not just a bunch of pretty donut charts. I have had people look at me in my previous company, the start-up, and say, "okay, give me all these nice donut charts, but so what? What do you want me to do with this?" And so you have to translate that into real actions, right? "How do I bring Zero Trust capabilities? How do I bring the observability capabilities? How do I understand cloud-native and networking and bring those things together so that you can help solve for the problem."
>> It's interesting, one of the questions I had here to ask you was "what does it mean to be cloud-native, and why now?" And you brought up Zero Trust, trust and verify, these are security concepts. But if you look at what's going on at KubeKon and CNCF and Linux Foundation, software supply chain's a huge issue, where trust is the issue. They want trust there, so you got Zero Trust here. What is it? Zero Trust or trust? I mean, what's there? Is one hardware based, perimeter, networking? That kind of perimeter's dead, ton of... >> No, the whole- >> Trust or Zero Trust. >> The whole concept of Zero Trust is don't trust what is underlying, just trust what you're talking to. So if you and I talking to each other, John, you need to trust me, I need to trust you, and be able to have this conversation. >> You've been verified. >> Exactly, right? But in the application world, if you talk about two apps that are talking to each other, let's say there is a web application in one AWS region talking to a database in a different region, right? Now, do you want to make sure you are able to build that trust all the way from the application to the application? Or do you want to move the trust boundary to the two entities that are talking to each other so that irrespective of what they go on underneath the covers, you can be always sure that these two things are trusted. >> So, Ramesh, I was on LinkedIn yesterday, I wrote a comment, Dave Vallante wrote a post on Supercloud, we're talking about it, and I wrote, "Cloud as a commodity," question, and then a bunch of other stuff that we're going to talk about, and Keith Townsend jumped on that, and got on Twitter, put a poll, "Is cloud a commodity? Source: me." So, it started a big thread. And the reaction was interesting. And my point was to be provocative on "Cloud isn't commodity, but there's commodity elements." EC2 and S3, you can look at that and say, "that's commodity IaaS," but Amazon Web Services has done an amazing job for higher level services. Okay, so how does that translate into the use cases that you see that you guys are going after and solving, because it's the same kind of concept. IaaS and SaaS have to work together to solve problems, but that's in an integrated environment, say, in a native-cloud. How does that work across clouds? >> Yeah, no, you bring up a great point, John. So, let's take the simple use case, right? Let's keep the user to app thing to the side. Let us say two apps need to talk to each other, right? There are multiple ways in which you can solve this problem. You can build highways. That's what our customers call it. I'll build highways. I don't care what goes on those highways, I'll just build highways. You bring any kind of application workload on it, I just make sure that the highways are good, right? That's kind of the lowest common denominator. It's the path to least resistance. You can get stuff done, but it's not going to move the needle, right? Then you have really modern, kind of service networking, where, okay, I'm looking at every single HTTP, API, n:point, whatnot, and I'm optimizing for that. Right? Great if you know what you're doing, but, like, if you have thousands of these applications, it's not going to be really feasible to do that. 
And so, what we have seen customers do, actually, is employ a mixed approach, where they say, "I'm going to build these highways, the highways are going to make sure that I can go from one place to another, and maybe within regions, across clouds, whatnot, but then, I have specific requirements that my business needs, that actually needs tweaking, right? And so I'm going to tweak those things. That's why, what we call as like, full stack transit, is exactly that, right, which is, I'll build you the guts of it so that hey, you know what, if somebody screams at you, "Hey, why is my application not accessible?" You don't have that problem. It is always accessible. But then, the requirements for performance, the requirements for Zero Trust, the requirements for segmentation, and all of that are things that... >> That's a hard problem. >> That's a hard problem to solve. >> And you guys are solving that? >> Absolutely, exactly. >> So, let me throw this at you. So, okay, I get that. And by the way, that's exactly what we're seeing. Dave and I were also debating about multi-cloud as what it is. Now, the nirvana definition is, "Well, I have a workload, that's going to work the same, and just magically just shift to Azure." (Ramesh laughs) >> Like, 'cause there's better resources. >> There is no magic there. >> So, but this brings up the point of operations. Now, Databricks and Snowflake, they're building their software to run on multi-cloud seamlessly. Now they can do that, 'cause it's their application. What is the multi-cloud use case, so that's a Supercloud use case in your mind, because right now it's not yet there. What is the Supercloud use case that's going to allow this seamless management or workloads. What's your view? >> Yeah, so if you take enterprise, right? Large enterprise in particular. They invariably have some workloads that are on, let's say, if the primary cloud is AWS, there are some workloads in Azure. Maybe they have acquired a new company, maybe a start-up that uses GCP, whatnot. So they have sprinkles of workloads in other clouds. >> So that's the breed kind of thing. >> Yeah, exactly. That's not what causes anybody to wake up in the morning and say, "I need to have a Supercloud strategy." That's not the thing, right? But now, increasingly you're seeing "pick the right cloud for the appropriate workload." That is going to change quite a bit. Because I have my infrastructure heavy workloads in AWS. I have quite a bit of like, analytics and mining type of applications that are better on GCP. I have all of my package applications work well on Azure, right? How do I make sure all of this. And it's not apps of this kind. Even simple things like VDI. VDI always used to be, "I have this instance I run up" and whatnot. Now every single cloud provider is giving you their own flavor of virtual desktop. And so, how do you make sure all of these things work together, right? And once again, what we have seen customers do is they settle on one cloud as their primary, but then you always have sprinkles of workloads across all of the clouds. Now, you could also go down the path, and you're increasingly seeing this, you could go down the path of, "Hey, I'm using cloud as backbone," right? Cloud providers have invested massive amounts of dollars to make sure that the infrastructure reaches there. Literally almost to the extent that every user in a metro city is ten milliseconds from the public cloud. And so they have allowed for that. 
Now, you can actually use cloud backbones to get the availability, the reliability and whatnot. So these are some new use cases that we have seen actually blow up with customers. >> I was just doing an interview, and the topic was the innovator's dilemma. And one of the panelists said, "It's not the innovator's dilemma, it's the integrator's dilemma." Because if you have commodity, and you have choices on, say, backbones and whatnot for transit, the integration is the key glue now. What's your reaction to that? >> Absolutely. And we have seen, we used to spend quite a bit of time in kind of what is the day zero problem, right? Like, how do I put this together? Conversations have moved past that, because there are multiple ways in which you can do that right now, right? Conversations are moving to kind of, "this is more of an operational problem for me." It's not just operations in the form of, "Hey, I need to find out where the problem is, troubleshoot it, and so forth," but "I need to make, like, really high quality decisions." And those decisions are going to be guided by data. We have enterprise customers that acquire new companies. Or they have a new site that they open up. >> It's a mishmash. >> Yeah, exactly. It's a New York based company and they acquire a team out in Sydney, Australia, right? Does your cloud tell you today that you have new users, or new applications that are in Sydney, and naturally just extend? No, it doesn't. Somebody has to look at the macro problem, look at "Where are all my workloads?" Do a bunch of engineering to make that work, right? We took it upon ourselves to say "Hey, you know what, twenty-four hours later, you're going to get a recommendation in the platform that says, 'okay, you have a new set of applications, a new set of users coming from Sydney, Australia, what have you done about it?' Click a button, and then you expand on it.
And so, where we are right now is essentially, I have a branch of repeatable use cases that many customers are employing us for. So again, building highways, many different ways to build highways, at the same time take care of the micro-segmentation requirements, and then importantly, this whole NetDevOps, right? This whole NetDevOps is a cultural shift that we have seen. So if you are a network engineer, NetDevOps seems like it's a foreign term, right? But if you are an operational engineer, then NetDevOps, you know exactly what to do. So bringing all those principles together, making sure that the networking teams are empowered to essentially embrace the cloud that I created, the single biggest thing that we have done, I would say done well, is we have built very well on top of the cloud provider. So we don't go against cloud-native services. They have done that really, really well. It makes no sense to go say, "I have a better transit gateway than you." No. Hands down, an AWS transit gateway, or an Azure V1 and whatnot, are some of the best services that they have provided. But what does that mean? >> How do you build software into it? >> Exactly, right? And so how can you build a layer of software on top, so that when you attach that into the applications, right, that you can actually get the experience required, you can get the security requirements and so forth. So that's kind of where we are. We're also humbled by essentially some of the mega partners that have taken a bet on us, sometimes to the extent that, we're a 70% company, and some of the partners that we are talking to actually are quite humbling, right? >> Hey, lot more resource. >> Exactly, yeah. >> And how many rounds of financing have you done? >> So we have done two rounds of financing, we have raised about 55,000,000 in capital, again, really great set of investors backing us up, and a strong sense of conviction, on kind of where we are going. >> Do you think you're early, or not? 'Cause, that's always probably the biggest scary, I can see the smile, is that what keeps you up at night? >> So, yeah, exactly, I go through these phases internally in my head. >> The vision's right on the money, no doubt about it. >> So when you win an opportunity, and we have like, a few dozen of these, right, when you win an opportunity, you're like, "Yes, absolutely, this is where it is," right, and you go for a week and you don't win something, and you're like, "Hey man, why are we not seeing this?" Right, and so you go through these cycles, but I'll tell you with conviction, the fact that customers are moving workloads into the public cloud, not in dozens but in like, the hundreds and the thousands, essentially means that they need something like this. >> And the cloud-native wave is driving big time. >> Exactly, right. And so, when the customer as a conversation with AWS, Azure, GCP, and they are privy to all the services, and we go in after that and talk about, "How do I put this together and help you focus on your outcomes?" That mentally moves them. >> It's a day zero opportunity, and then you got headroom beyond that. >> Exactly. So that's the positive side of it, and enterprises certainly are sometimes a little cautious about when they're up new technologies and so forth. It's a natural cycle. Fortunately, again we are humbled by the fact that we have a few dozen of the pioneering customers that are using our platform. That gives you the legitimacy for a start-up. >> You got great pedigree on clients. Real quick, final question. 
30 seconds. What's the pain point, for people watching, when do they call you in? What's their environment look like, what are some of the things that give the signals that you guys got to get the call? >> If you have more than, let's say five or ten VPCs in the cloud, and you have not invested in building a networking platform that gives you the connectivity, the security, the observability, and the performance requirements, you absolutely have to do that, right? Because we have seen many, many customers, it goes from 5 to 50 to 100 within a week, and so you don't want to be caught essentially in the midst of that. >> One more final final question. Since you're a seasoned entrepreneur, you've been there, done that previous times, >> Yeah, I've got scars. (laughs) >> Yes, we've all got scar tissue. We've been doing theCube for 12 years, we've seen a lot of stuff. What's the difference now in this market that's different than before? What's exciting you? What's the big change? What's, in your opinion, happening now that's really important that people should pay attention to? >> Absolutely. A lot of it is driven by one, the focus on the cloud itself, right? That's driving a sense of speed like never before. Because in the infrastructure world, yeah you do it today, oh, you do it six months from now, you had some leeway. Here, networking security teams are being yelled at almost every single day, by the cloud guy saying, "You guys are not moving fast enough, fast enough, fast enough." So that thing is different. So it helps, going to shrink the sale cycle for us. So second big one is, nobody knows, essentially, the new set of use cases that are coming about. We are seeing patterns emerge in terms of new use cases almost every single day. Some days it's like completely on the other end of the spectrum. Like, "I'm only serverless and service mesh." On the other end, it's like, "I have a package application, I'm moving it to the cloud." Right? And so, we're learning a lot as well. >> A great time for Supercloud. >> Exactly. >> Do the cloud really well, make it super, bring it to other use cases, stitch it all together, make it easy to use, reduce the complexity, it's just evolution. >> Yeah. And our goal is essentially, enterprise customers should not be focused so much on building infrastructure this way, right? They should focus on users, application services, let vendors like us worry about the nitty-gritty underneath. >> Ramesh, thank you for this conversation. It's a great Cube conversation. In the middle of all the action, Supercloud, multi-cloud, the future is going to be very much cloud-based, IaaS, SaaS, connecting environments. This is the cloud 2.0, Superclouds. And this is what people are going to be working on. I'm John Furrier with theCube, thanks for watching. (soft music)
SENTIMENT ANALYSIS :
Ameya Talwalkar, Cequence Security | CUBE Conversation
(upbeat music) >> Hello, and welcome to this CUBE Conversation. I'm John Furrier, host of theCUBE here in Palo Alto, California for a great remote interview with Ameya Talwalkar, CEO of Cequence Security. Protecting APIs is the name of the game. Ameya thanks for coming on this CUBE Conversation. >> Thank you, John. Thanks for having us. >> So, I mean, obviously APIs, cloud, it runs everything. It's only going to get better, faster, more containers, more Kubernetes, more cloud-native action, APIs are at the center of it. Quick history, Cequence, how you guys saw the problem and where is it today? >> Yeah, so we started building the company or the product, the first product of the company focused on abuse or business logic abuse on APIs. We had design partners in large finance FinTech companies that are now customers of Cequence that were sort of API first, if you will. There were products in the market that were, you know, solving this problem for them on the web and in some cases mobile applications, but since these were API first very modern FinTech and finance companies that deal with lot of large enterprises, merchants, you have it, you name it. They were struggling to protect their APIs while they had protection on web and mobile applications. So that's the genesis. The problem has evolved exponentially in terms of volume size, pain, the ultimate financial losses from those problems. So it has, it's been a interesting journey and I think we timed it perfectly in terms of when we got started with the problem we started with. >> Yeah, I'm sure if you look at the growth of APIs, they're just exponentially growing because of the development, cloud-native development wave plus open source driving a lot of action. I was talking to a developer the other day and he's like, "Just give me a bag of Lego blocks and I'll build whatever application." I mean, this essentially- >> Yeah. >> API first is, has got us here, and that's standard. >> Yeah. >> Everyone's building on top of APIs, but the infrastructure going cloud-native is growing as well. So how do you secure APIs without slowing down the application velocity? Which everyone's trying to make go faster. So you got faster velocity on the developer side and (chuckles) more APIs coming. How do you secure the API infrastructure without slowing down the apps? >> Yeah, I'll come to the how part of it but I'll give you a little bit of commentary on what the problem really is. It's what has happened in the last few years is as you mentioned, the sort of journey to the cloud whether it's a public cloud or a private cloud, some enterprises have gone to a multi-cloud strategy. What really has happened is two things. One is because of that multi-environment deployment there is no defined parameter anymore to your applications or APIs. And so the parameter where people typically used to have maybe a CDN or WAF or other security controls at the parameter and then you have your infrastructure hosting these apps and APIs is completely gone away, that just doesn't exist anymore. And even more so for APIs which really doesn't have a whole lot of content to be cashed. They don't use CDN. So they are behind whatever API gateways whether they're in the cloud or whatever, they're hosting their APIs. And that has become your micro parameter, if you will, as these APIs are getting spread. And so the security teams are struggling with, how do I protect such a diverse set of environments that I am supposed to manage and protect where I don't have a unified view. 
I don't even have, like, a complete view, if you will, of these APIs. And back in the days when phones or the modern iPhones and Android phones became popular, there used to be a sort of ad campaign I remember that said, "There is an app for that." >> Yeah. >> So fast forward to today, it's like, "There's an API for that." So everything you wanted to do today as a consumer or a business- >> John: Yeah. >> You can call an API and get your business done. And that's the challenge, that's the explosion in APIs. >> Yeah. >> (laughs) Go ahead. >> It's interesting you have the API life cycle concept developing. Now you got, everyone knows- >> Right. >> The application life cycle, you know, CI/CD pipelining, shifting left, but the surface area, you got web app firewalls which everyone knows is kind of like outdated, but you got API gateways. >> Yep. >> The surface area- >> Yeah. >> Is only increasing. So I have to ask you, do the existing API security tools out there bring that full application- >> Yeah. >> And API life cycle together? 'Cause you got to discover- >> Yep. >> The environment, you got to know what to protect and then also net new functionality. Can you comment? >> Right. Yeah. So that actually goes to your how question from, you know, the previous section, which is really what Cequence has defined as an API protection life cycle. And it's this concrete six-step process in which you protect your APIs. And the reason why we say it's a life cycle is it's not something that you do once and forget about it. It's a continuous process that you have to keep doing because your DevOps teams are publishing new APIs almost every day, every other day, if you will. So the start of that journey of that life cycle is really about discovering your external facing API attack surface, which is where we highlight new hosting environments. We highlight accidental exposures. People are exposing their staging APIs. They might have access to production data. They are exposing Prometheus or performance monitoring servers. We find PKCS 7 files. We find Log4j vulnerabilities. These are things that you can just get a view of from outside looking in and then go about prioritizing which API environments you want to protect. So that's step number one. Step number two, really quick, is do an inventory of all your APIs once you figure out which environments you want to protect or prioritize. And so that inventory includes a runtime inventory. Also creating specifications for these APIs. In a lot of places, we find unmanaged APIs, shadow APIs, and we create the API inventory and also push them towards sort of a central API management program. The third step is really looking at the risk of these APIs. Make sure they are using appropriate security controls. They're not leaking any sensitive information, PCI, PHI, PII, or other sort of industry-specific sensitive information. They are conforming to their schema. Sometimes the APIs deviate at runtime from their schema, and then that can cause a risk. So that's the first, sort of first half of this life cycle, if you will, which is really making sure your APIs are secure, they're using proper hygiene. The second half is about attack detection and prevention. So the fourth step is attack detection. And here again, we don't stop just at the OWASP Top 10 category of threats, a lot of other vendors do. They just do the OWASP API Top 10, but we think it's more than that. And we go deeper into business logic abuse, bots, and all the way to fraud.
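(As a rough illustration of the schema-conformance piece of step three above, the sketch below validates a captured API response against a declared JSON Schema and reports drift. The endpoint shape, the field names, and the forbidden "ssn" property are hypothetical; this is not Cequence's implementation.)

```python
# Hypothetical sketch: flag API responses that drift from their declared schema.
# Requires the "jsonschema" package (pip install jsonschema).
from jsonschema import Draft7Validator

# Assumed schema for an example /users/{id} response, not a real Cequence artifact.
USER_SCHEMA = {
    "type": "object",
    "required": ["id", "email"],
    "properties": {
        "id": {"type": "integer"},
        "email": {"type": "string"},
        # A sensitive field the schema says must never appear in responses:
        "ssn": False,
    },
    "additionalProperties": True,
}

def schema_drift(response_body):
    """Return human-readable drift/risk findings for one captured response."""
    validator = Draft7Validator(USER_SCHEMA)
    return [err.message for err in validator.iter_errors(response_body)]

# Example: a response leaking a field the schema forbids.
for finding in schema_drift({"id": 42, "email": "a@example.com", "ssn": "123-45-6789"}):
    print("DRIFT:", finding)
```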
And that's sort of the attack detection piece of this journey. Once you detect these attacks, you start about, think about prevention of these attacks, also natively with Cequence. And the last step is about testing and making sure your APIs are secure even before they go live. >> What's- >> So that's a journey. Yeah. >> What's the secret sauce? What makes you different? 'Cause you got two sides to that coin. You got the auditing, kind of figure things out, and then you got the in-built attacks. >> Yeah. >> What makes you guys different? >> Yeah. So the way we are different is, first of all, Cequence is the only vendor that can, that has all these six steps in a single platform. We talked about security teams just lacking that complete view or consistent and uniform view of all your, you know, parameter, all your API infrastructure. We are combining that into a single platform with all the six steps that you can do in just one platform. >> John: Yeah. >> Number two is the outside looking in view which is the external discovery. It's something Cequence is unique in this space, uniquely doing this in this space. The third piece is the depth of our detection which is we don't just stop at the OWASP API Top 10, we go to fraud, business logic abuse, and bot attacks. And the mitigation, this will be interesting to you, which is a lot of the API security vendors say you come into existence because your WAF is not protecting your APIs, but they turn around when they detect the attacks to rely on a WAF to mitigate this or prevent these threats. And how can you sort of comprehend all that, right? >> Yeah. >> So we are unique in the sense we can prevent the attacks that we detect in the same platform without reliance on any other third-party solution. >> Yeah, I mean we- >> The last part is, sorry, just one last. >> Go ahead. Go ahead. >> Which is the scale. So we are serving largest of the large Fortune 100, Fortune 50 enterprises. We are processing 6 billion API calls per day. And one of the large customers of ours is processing 1 billion API calls per day with Cequence. So scale of APIs that we can process and how we can scale is also unique to Cequence. >> Yeah, I think the scale thing's a huge message. There, just, I put a little accent on that. I got to comment because we had an event last week called Supercloud which we were trying to talking about, you know, as clouds become more multicloud, you get more super capabilities. But automation, with super cloud comes super hackers. So as things advance, you're seeing the step function, the bad guys are getting better too. You mentioned bots. So I have to ask you what are some of the sophisticated attacks that you see that look like legitimate traffic or transactions? Can you comment on what your scale and your patterns are showing? Because the attacks are coming in fast and furious >> Correct. So APIs make the attack easier because APIs are well documented. So you want your partners and, you know, programmers to use your API ecosystem, but at the same time the attackers are getting the same information and they can program against those APIs very easily which means what? They are going to write a bunch of bots and automation to cause a lot of pain. The kind of sophistication we have seen is I'll just give a few examples. Ulta Beauty is one of our customers, very popular retailer in the US. And we recently found an interesting attack. 
They were selling some high-end hair curling irons which are very high in demand, very expensive, very hard to find. And so this links the sort of physical world to API security, think about it, which is the bad guys were using a bot to scrape a third-party service which was giving local inventory information available to people who wanted to search for these items which are high in demand, low in supply. And they wrote a bot to find which locations have these items in supply, and they went and sort of broke into these showrooms and stole those items. So not only are we saving them from physical theft and all the other problems that they have- >> Yeah. >> But also, they were paying about $25,000 per month extra- >> Yeah. >> For this geo-location service that was looking at their inventory. So that's the kind of abuse that can go on with APIs. Even when the APIs are perfectly secure and they're using appropriate security controls, these can go on. >> You know, that's a really great example. I'm glad you brought that up because I observed at AWS re:Inforce in Boston that Steven Schmidt has changed his title from chief information security officer to just chief security officer, to the point when asked he said, "Physical security is now tied together with the online." So to your point- >> Yeah. >> About the surveillance and attack setup- >> Yeah. >> For the physical, you got warehouses- >> Yep. >> You've got brick and mortar. This is the convergence of security. >> Correct. Absolutely. I mean, we do deal with many other, sort of, governance cases. We help a Fortune 50 finance company which operates worldwide. And their biggest concern is, if an API is hosted in a certain country in Europe which has the most sort of aggressive data privacy and data regulations that they have to deal with, they want to make sure the consumer of that API is within a certain geo location whereby they're not subject to liabilities from GDPR and other data residency regulations. And we are the ones that are giving them that view. And we can even restrict and make sure they're compliant with that regulation that they have to sort of comply with. >> I could only imagine that that geo-regional view and the intelligence and the scale gives you insights- >> Yeah. >> Into attacks that aren't really kind of, aren't supposed to be there. In other words, if you can keep the data in the geo, then you could look- >> Yep. >> At anything else as that, you know, you don't belong here kind of track. >> You don't belong here. Exactly. Yeah, yeah. >> All right. So let's get to the API. >> Yeah, I mean- >> So the API visibility is an issue, right? So I can see that, check, sold me on that, protection is key, but what's the current security team makeup? Are they buying into this or are they just kind of hair on fire? What are security development teams doing? 'Cause they're under a lot of pressure to do the hardcore security work. And APIs, again, the surface area's wide open, they're part of everyone's access. >> Yeah. So I mentioned the six-step journey of the life cycle. Right? We see customers come to us with a very acute pain point and they say, "Our hair is on fire. (John laughing) Solve this problem for us." Like one large US telco company came to us with just a simple problem: do the inventory and risk assessment of all our APIs. That's our number one pain point. Ended up starting with them on those two pain points, or those two stops on their life cycle.
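(The inventory-scraping story above is ultimately automation hitting a legitimate endpoint at inhuman volume. A toy version of that kind of behavioral check, counting lookups per client over a sliding window, is sketched below; the threshold and names are invented, and real bot detection as described in the interview uses far richer signals.)

```python
# Toy sketch of volume-based abuse detection on an inventory-lookup API.
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 60
MAX_LOOKUPS_PER_WINDOW = 30   # assumed threshold; tune per endpoint

_recent = defaultdict(deque)  # client_id -> timestamps of recent lookups

def record_lookup(client_id, now=None):
    """Record one inventory lookup; return True if the client looks automated."""
    now = time.time() if now is None else now
    q = _recent[client_id]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_LOOKUPS_PER_WINDOW

# Example: a scraper sweeping every store location in a tight loop.
for i in range(200):
    flagged = record_lookup("client-abc", now=1000.0 + i * 0.1)
print("flagged as automated:", flagged)
```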
And then we ended up solving all the six steps with them because once we started creating an inventory and looking at the risk profile, we also observed that these same APIs were targeted by bots and fraudsters doing all kinds of bad things. So once we discovered those problems, we expanded the scope to sort of have the whole life cycle covered with the Cequence platform. And that's the typical experience, which is, it's typically the security team. There are developer communities that are coming to us with sort of the testing aspect of it, which integrates into DevOps toolchains and CI/CD pipelines. But otherwise, it's all about security challenges, acute pain points, and then expanding into the whole journey. >> All right. So you got the detection, you got the alerting, you got the protection, you got the mitigation. What's the advice- >> Yeah. >> To the customer or the right approach to set up with Cequence so that they can have the best protection. What's the motion? What's the initial engagement look like? How do they engage? How do they operationalize? >> Yeah. >> You guys take me through that. >> Yeah. The simple way of engaging with Cequence is get that external assessment, which will map your APIs for you, it'll create an assessment for you. We'll present that assessment, you know, to your security team. And like 90% of the time, customers have an aha moment, (John chuckles) that they didn't know something that we are showing them. They find APIs that were not supposed to be public. They will find hosting environments that they didn't know about. They will find API gateways that were, like, not commissioned, but being used. And so start there, start their journey with an assessment with Cequence, and then work with us to prioritize what problems you want to solve next once you have that assessment. >> So really making sure that their inventory of APIs is legit. >> Yep. Yep, absolutely. >> It's basically- >> Yep. >> I mean, you're starting to see more of this in the cloud-native, you know, SBOMs, they call 'em, you know, (indistinct) materials.
Zero trust is part of it in the sense that you have to not trust sort of any API consumer as a completely trusted entity. Just like I gave you the Ulta Beauty example. They had trusted this third party to be absolutely safe and secure, you know, no controls necessary to sort of monitor their traffic, whereas they can be abused by their end consumers and cause you a lot of pain. So there is a sort of a linkage between the two. Never trust anybody until you verify, that's the sort of angle, that's sort of the connection between API security and zero trust. >> Ameya, thank you for coming on theCUBE. Really appreciate the conversation. I'll give you the final word. What should people know about Cequence Security? How would you give the pitch? You go, you know, quick summary, what's going on? >> Yeah. So very excited to be in this space. We sort of are the largest API security vendor in the space in terms of revenue, the largest volume of API traffic that we process. And we are just getting started. This is an exciting journey we are on, we are very happy to serve the, you know, Fortune 50, you know, global 200 customers that we have, and we are expanding into many geographies and locations. And so look for some exciting updates from us in the coming days. >> Well, congratulations on your success. Love the approach, love the scale. I think scale's a new competitive advantage. I think that's the new lock-in if you're good, and your scaling is providing a lot of benefits. So Ameya, thank you for coming, sharing the story. Looking forward to chatting again soon. >> Thank you very much. Thanks for having us. >> Okay. This is a CUBE Conversation. I'm John Furrier, here at Palo Alto, California. Thanks for watching. (cheerful music)
Sam Kassoumeh, SecurityScorecard | CUBE Conversation
(upbeat music) >> Hey everyone, welcome to this CUBE conversation. I'm John Furrier, your host of theCUBE here in Palo Alto, California. We've got Sam Kassoumeh, co-founder and chief operating officer at SecurityScorecard, here remotely coming in. Thanks for coming on, Sam. >> Thank you, John. Thanks for having me. >> Love the security conversations. I love what you guys are doing. I think this idea of managed services, SaaS. Developers love it. Operations teams love getting into tools easily and having the value of what you guys got with SecurityScorecard. So let's get into what we were talking about before we came on. You guys have a unique solution around ratings, but also it's not your grandfather's pen-test-wannabe security app. Take us through what you guys are doing at SecurityScorecard. >> Yeah. So just like you said, it's not a point in time assessment and it's similar to a traditional credit rating, but also a little bit different. You can really think about it in three steps. In step one, what we're doing is we're doing threat intelligence data collection. We invest really heavily into the R&D function. We never stop investing in R&D. We collect all of our own data across the entire IPv4 space. All of the different layers. Some of the data we collect is pretty straightforward. We might crawl a website like the example I was giving. We might crawl a website and see that the website says copyright 2005, but we know it's 2022. Now, while that signal isn't enough to go hack and break into the company, it's definitely a signal that someone might not be keeping things up to date. And if a hacker saw that, it might encourage them to dig deeper. To more complex signals where we're running one of the largest DNS sinkhole infrastructures in the world. We're monitoring command and control malware and its behaviors. We're essentially collecting signals and vulnerabilities from the entire IPv4 space, the entire network layer, the entire web app layer, leaked credentials. Everything that we think about when we talk about the security onion, we collect data at each one of those layers of the onion. That's step one. And we can do all sorts of interesting insights and information and reports just out of that threat intel. Now, step two is really interesting. What we do is we go identify the attack surface area, or what we call the digital footprint, of any company in the world. So as a customer, you can simply type in the name of a company and we identify all of the domains, subdomains, subsidiaries, organizations that are identified on the internet that belong to that organization. So every digital asset of every company, we go out and we identify that and we update that every 24 hours. And step three is the rating. The rating is probabilistic and it's deterministic. The rating is a benchmark. We're looking at companies compared to their peers of similar size within the same industry and we're looking at how they're performing. And it's probabilistic in the sense that companies that have an F are about seven to eight times more likely to experience a breach. We're an A through F scale, universally understood. Ds and Fs, more likely to experience a breach. A's, we see fewer breaches. Like I was mentioning before, it doesn't mean that an F is always going to get hacked or an A can never get hacked. If a nation state targets an A, they're going to eventually get in with enough persistence and budget.
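(As a concrete aside, the stale-copyright example Sam gives is about the simplest signal in that collection layer. A hedged sketch of that one check is below; the URL is a placeholder and the freshness threshold is made up, and SecurityScorecard's real collectors are obviously far broader than this.)

```python
# Illustrative single-signal collector: flag sites whose copyright notice looks stale.
# Uses the "requests" package; the target URL is a placeholder.
import re
from datetime import date
import requests

def stale_copyright(url, max_age_years=3):
    """Return True if the newest copyright year on the page looks several years old."""
    html = requests.get(url, timeout=10).text
    years = [int(y) for y in re.findall(r"(?:©|copyright)\s*(\d{4})", html, re.IGNORECASE)]
    if not years:
        return False                      # no signal found, not a finding
    return date.today().year - max(years) > max_age_years

if __name__ == "__main__":
    print(stale_copyright("https://example.com"))
```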
If the pizza shop on the corner has an F, they may never get hacked because no one cares, but natural correlation, more doors open to the house equals higher likelihood someone unauthorized is going to walk in. So it's really those three steps. The collection, we map it to the surface area of the company and then we produce a rating. Today we're rating about 12 million companies every single day. >> And how many people do you have as customers? >> We have 50,000 organizations using us, both free and paid. We have a freemium tier where just like Yelp or a LinkedIn business profile. Any company in the world has a right to go claim the score. We never extort companies to fix the score. We never charge a company to see the score or fix it. Any company in a world without paying us a cent can go in. They can understand what we're seeing about them, what a hacker could see about their environment. And then we empower them with the tools to fix it and they can fix it and the score will go up. Now companies pay us because they want enterprise capabilities. They want additional modules, insights, which we can talk about. But in total, there's about 50,000 companies that at any given point in time, they're monitoring about a million and a half organizations of the 12 million that we're rating. It sounds like Google. >> If you want to look at it. >> Sounds like Google Search you got going on there. You got a lot of search and then you create relevance, a score, like a ranking. >> That's precisely it. And that's exactly why Google ventures invested in us in our Series B round. And they're on our board. They looked and they said, wow, you guys are building like a Google Search engine over some really impressive threat intelligence. And then you're distilling it into a score which anybody in the world can easily understand. >> Yeah. You obviously have page rank, which changed the organic search business in the late 90s, early 2000s and the rest is history. AdWords. >> Yeah. >> So you got a lot of customer growth there potentially with the opt-in customer view, but you're looking at this from the outside in. You're looking at companies and saying, what's your security posture? Getting a feel for what they got going on and giving them scores. It sounds like it's not like a hacker proof. It's just more of a indicator for management and the team. >> It's an indicator. It's an indicator. Because today, when we go look at our vendors, business partners, third parties were flying blind. We have no idea how they're doing, how they're performing. So the status quo for the last 20 years has been perform a risk assessments, send a questionnaire, ask for a pen test and an audit evidence. We're trying to break that cycle. Nobody enjoys it. They're long tail. It's a trust without verification. We don't really like that. So we think we can evolve beyond this point in time assessment and give a continuous view. Now, today, historically, we've been outside in. Not intrusive, and we'll show you what a hacker can see about an environment, but we have some cool things percolating under the hood that give more of a 360 view outside, inside, and also a regulatory compliance view as well. >> Why is the compliance of the whole third party thing that you're engaging with important? Because I mean, obviously having some sort of way to say, who am I dealing with is important. I mean, we hear all kinds of things in the security landscape, oh, zero trust, and then we hear trust, supply chain, software risk, for example. 
There's a huge trust factor there. I need to trust this tool or this container. And then you got the zero trust, don't trust anything. And then you've got trust and verify. So you have all these different models and postures, and it just seems hard to keep up with. >> Sam: It's so hard. >> Take us through what that means 'cause pen tests, SOC reports. I mean the clouds help with the SOC report, but if you're doing agile, anything DevOps, you basically would need to do a pen test like every minute. >> It's impossible. The market shifted to the cloud. We watched and it still is. And that created a lot of complexity, not to date myself. But when I was starting off as a security practitioner, the data center used to be in the basement and I would have lunch with the database administrator and we talk about how we were protecting the data. Those days are long gone. We outsource a lot of our key business practices. We might use, for example, ADP for a payroll provider or Dropbox to store our data. But we've shifted and we no longer no who that person is that's protecting our data. They're sitting in another company in another area unknown. And I think about 10, 15 years ago, CISOs had the realization, Hey, wait a second. I'm relying on that third party to function and operate and protect my data, but I don't have any insight, visibility or control of their program. And we were recommended to use questionnaires and audit forms, and those are great. It's good hygiene. It's good practice. Get to know the people that are protecting your data, ask them the questions, get the evidence. The challenge is it's point in time, it's limited. Sometimes the information is inaccurate. Not intentionally, I don't think people intentionally want to go lie, but Hey, if there's a $50 million deal we're trying to close and it's dependent on checking this one box, someone might bend a rule a little bit. >> And I said on theCUBE publicly that I think pen test reports are probably being fudged and dates being replicated because it's just too fast. And again, today's world is about velocity on developers, trust on the code. So you got all kinds of trust issues. So I think verification, the blue check mark on Twitter kind of thing going on, you're going to see a lot more of that and I think this is just the beginning. I think what you guys are doing is scratching the surface. I think this outside in is a good first step, but that's not going to solve the internal problem that still coming and have big surface areas. So you got more surface area expanding. I mean, IOT's coming in, the Edge is coming fast. Never mind hybrid on-premise cloud. What's your organizations do to evaluate the risk and the third party? Hands shaking, verification, scorecards. Is it like a free look here or is it more depth to it? Do you double click on it? Take us through how this evolves. >> John it's become so disparate and so complex, Because in addition to the market moving to the cloud, we're now completely decentralized. People are working from home or working hybrid, which adds more endpoints. Then what we've learned over time is that it's not just a third party problem, because guess what? My third parties behind the scenes are also using third parties. So while I might be relying on them to process my customer's payment information, they're relying on 20 vendors behind the scene that I don't even know about. I might have an A, they might have an A. It's really important that we expand beyond that. 
So coming out of our innovation hub, we've developed a number of key capabilities that allow us to expand the value for the customer. One, you mentioned, outside in is great, but it's limited. We can see what a hacker sees and that's helpful. It gives us pointers where to maybe go ask double click, get comfort, but there's a whole nother world going on behind the firewall inside of an organization. And there might be a lot of good things going on that CISO security teams need to be rewarded for. So we built an inside module and component that allows teams to start plugging in the tools, the capabilities, keys to their cloud environments. And that can show anybody who's looking at the scorecard. It's less like a credit score and more like a social platform where we can go and look at someone's profile and say, Hey, how are things going on the inside? Do they have two-factor off? Are there cloud instances configured correctly? And it's not a point in time. This is a live connection that's being made. This is any point in time, we can validate that. The other component that we created is called an evidence locker. And an evidence locker, it's like a secure vault in my scorecard and it allows me to upload things that you don't really stand for or check for. Collateral, compliance paperwork, SOC 2 reports. Those things that I always begrudgingly email. I don't want to share with people my trade secrets, my security policies, and have it sit on their exchange server. So instead of having to email the same documents out, 300 times a month, I just upload them to my evidence locker. And what's great is now anybody following my scorecard can proactively see all the great things I'm doing. They see the outside view. They see the inside view. They see the compliance view. And now they have the holy grail view of my environment and can have a more intelligent conversation. >> Access to data and access methods are an interesting innovation area around data lineage. Tracing is becoming a big thing. We're seeing that. I was just talking with the Snowflake co-founder the other day here in theCUBE about data access and they're building a proprietary mesh on top of the clouds to figure out, Hey, I don't want to give just some tool access to data because I don't know what's on the other side of those tools. Now they had a robust ecosystem. So I can see this whole vendor risk supply chain challenge around integration as a huge problem space that you guys are attacking. What's your reaction to that? >> Yeah. Integration is tricky because we want to be really particular about who we allow access into our environment or where we're punching holes in the firewall and piping data out out of the environment. And that can quickly become unwieldy just with the control that we have. Now, if we give access to a third party, we then don't have any control over who they're sharing our information with. When I talk to CISOs today about this challenge, a lot of folks are scratching their head, a lot of folks treat this as a pet project. Like how do I control the larger span beyond just the third parties? How do I know that their software partners, their contractors that they're working with building their tools are doing a good job? And even if I know, meaning, John, you might send me a list of all of your vendors. I don't want to be the bad guy. I don't really have the right to go reach out to my vendors' vendors knocking on their door saying, hi, I'm Sam. I'm working with John and he's your customer. 
And I need to make sure that you're protecting my data. It's an awkward chain of conversation. So we're building some tools that help the security teams hold the entire ecosystem accountable. We actually have a capability called automatic vendor discovery. We can go detect who are the vendors of a company based on the connections that we see, the inbound and outbound connections. And what often ends up happening John is we're bringing to the attention to our customers, awareness about inbound and outbound connections. They had no idea existed. There were the shadow IT and the ghost vendors that were signed without going through an assessment. We detect those connections and then they can go triage and reduce the risk accordingly. >> I think that risk assessment of vendors is key. I was just reading a story about this, about how a percentage, I forget the number. It was pretty large of applications that aren't even being used that are still on in companies. And that becomes a safe haven for bad actors to hang out and penetrate 'cause they get overlooked 'cause no one's using them, but they're still online. And so there's a whole, I called cleaning up the old dead applications that are still connected. >> That happens all the time. Those applications also have applications that are dead and applications that are alive may also have users that are dead as well. So you have that problem at the application level, at the user level. We also see a permutation of what you describe, which is leftover artifacts due to configuration mistakes. So a company just put up a new data center, a satellite office in Singapore and they hired a team to go install all the hardware. Somebody accidentally left an administrative portal exposed to the public internet and nobody knew the internet works, the lights are on, the office is up and running, but there was something that was supposed to be turned off that was left turned on. So sometimes we bring to company's attention and they say, that's not mine. That doesn't belong to me. And we're like, oh, well, we see some reason why. >> It's his fault. >> Yeah and they're like, oh, that was the contractor set up the thing. They forgot to turn off the administrative portal with the default login credentials. So we shut off those doors. >> Yeah. Sam, this is really something that's not talked about a lot in the industry that we've become so reliant on managed services and other people, CISOs, CIOs, and even all departments that have applications, even marketing departments, they become reliant on agencies and other parties to do stuff for them which inherently just increases the risk here of what they have. So there inherently could be as secure as they could be, but yet exposed completely on the other side. >> That's right. We have so many virtual touch points with our partners, our vendors, our managed service providers, suppliers, other third parties, and all the humans that are involved in that mix. It creates just a massive ripple effect. So everybody in a chain can be doing things right. And if there's one bad link, the whole chain breaks. I know it's like the cliche analogy, but it rings true. >> Supply chain trust again. Trust who you trust. Let's see how those all reconcile. So Sam, I have to ask you, okay, you're a former CISO. You've seen many movies in the industry. Co-founded this company. You're in the front lines. You've got some cool things happening. I can almost imagine the vision is a lot more than just providing a rating and score. 
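(Sam's automatic vendor discovery boils down to comparing the outbound connections you actually observe with the vendors you think you have. A minimal, purely illustrative version over a hypothetical egress log follows; the domains, log format, and vendor list are made up.)

```python
# Illustrative "ghost vendor" finder: observed outbound destinations vs. declared vendors.
from urllib.parse import urlparse

declared_vendors = {"adp.com", "dropbox.com"}           # what procurement knows about
outbound_log = [                                         # hypothetical egress records
    "https://api.adp.com/payroll",
    "https://content.dropbox.com/upload",
    "https://telemetry.unknown-analytics.io/collect",    # nobody signed off on this one
]

def registered_domain(url):
    """Crude last-two-labels domain extraction; real tooling would use a public suffix list."""
    host = urlparse(url).hostname or ""
    return ".".join(host.split(".")[-2:])

observed = {registered_domain(u) for u in outbound_log}
ghost_vendors = observed - declared_vendors
print("undeclared outbound destinations:", sorted(ghost_vendors))
```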
I'm sure there's more vision around intelligence, automation. You mentioned vault, wallet capabilities, exchanging keys. We heard at re:Inforce automated reasoning, metadata reasoning. You got all kinds of crypto and quantum. I mean, there's a lot going on that you can tap into. What's your vision where you see SecurityScorecard going? >> When we started the company, the rating was the thing that we sold and it was a language that helped technical and non-technical folks alike level the playing field and talk about risk and use it to drive their strategy. Today, the rating just opens the door to that discussion and there's so much additional value. I think in the next one to two years, we're going to see the rating becomes standardized. It's going to be more frequently asked or even required or leveraged by key decision makers. When we're doing business, it's going to be like, Hey, show me your scorecard. So I'm seeing the rating get baked more and more the lexicon of risk. But beyond the rating, the goal is really to make a world a safer place. Help transform and rise the tide. So all ships can lift. In order to do that, we have to help companies, not only identify the risk, but also rectify the risk. So there's tools we build to really understand the full risk. Like we talked about the inside, the outside, the fourth parties, fifth parties, the real ecosystem. Once we identified where are all the Fs and bad things, will then what? So couple things that we're doing. We've launched a pro serve arm to help companies. Now companies don't have to pay to fix the score. Anybody, like I said, can fix the score completely free of charge, but some companies need help. They ask us and they say, Hey, I'm looking for a trusted advisor. A Sherpa, a guide to get me to a better place or they'll say, Hey, I need some pen testing services. So we've augmented a service arm to help accelerate the remediation efforts. We're also partnered with different industries that use the rating as part of a larger picture. The cyber rating isn't the end all be all. When companies are assessing risk, they may be looking at a financial ratings, ESG ratings, KYC AML, cyber security, and they're trying to form a complete risk profile. So we go and we integrate into those decision points. Insurance companies, all the top insurers, re-insurers, brokers are leveraging SecurityScorecard as an ingredient to help underwrite for cyber liability insurance. It's not the only ingredient, but it helps them underwrite and identify the help and price the risk so they can push out a policy faster. First policy is usually the one that's signed. So time to quote is an important metric. We help to accelerate that. We partner with credit rating agencies like Fitch, who are talking to board members, who are asking, Hey, I need a third party, independent verification of what my CISO is saying. So the CISO is presenting the rating, but so are the proxy advisors and the ratings companies to the board. So we're helping to inform the boards and evolve how they're thinking about cyber risk. We're helping with the insurance space. I think that, like you said, we're only scratching the surface. I can see, today we have about 50,000 companies that are engaging a rating and there's no reason why it's not going to be in the millions in just the next couple years here. 
>> And you got the capability to bring in more telemetry and see the new things, bring that into the index, bring that into the scorecard and then map that to potential any vulnerabilities. >> Bingo. >> But like you said, the old days, when you were dating yourself, you were in a glass room with a door lock and key and you can see who's two folks in there having lunch, talking database. No one's going to get hurt. Now that's gone, right? So now you don't know who's out there and machines. So you got humans that you don't know and you got machines that are turning on and off services, putting containers out there. Who knows what's in those payloads. So a ton of surface area and complexity to weave through. I mean only is going to get done with automation. >> It's the only way. Part of our vision includes not attempting to make a faster questionnaire, but rid ourselves of the process all altogether and get more into the continuous assessment mindset. Now look, as a former CISO myself, I don't want another tool to log into. We already have 50 tools we log into every day. Folks don't need a 51st and that's not the intent. So what we've done is we've created today, an automation suite, I call it, set it and forget it. Like I'm probably dating myself, but like those old infomercials. And look, and you've got what? 50,000 vendors business partners. Then behind there, there's another a hundred thousand that they're using. How are you going to keep track of all those folks? You're not going to log in every day. You're going to set rules and parameters about the things that you care about and you care depending on the nature of the engagement. If we're exchanging sensitive data on the network layer, you might care about exposed database. If we're doing it on the app layer, you're going to look at application security vulnerabilities. So what our customers do is they go create rules that say, Hey, if any of these companies in my tier one critical vendor watch list, if they have any of these parameters, if the score drops, if they drop below a B, if they have these issues, pick these actions and the actions could be, send them a questionnaire. We can send the questionnaire for you. You don't have to send pen and paper, forget about it. You're going to open your email and drag the Excel spreadsheet. Those days are over. We're done with that. We automate that. You don't want to send a questionnaire, send a report. We have integrations, notify Slack, create a Jira ticket, pipe it to ServiceNow. Whatever system of record, system of intelligence, workflow tools companies are using, we write in and allow them to expedite the whole. We're trying to close the window. We want to close the window of the attack. And in order to do that, we have to bring the attention to the people as quickly as possible. That's not going to happen if someone logs in every day. So we've got the platform and then that automation capability on top of it. >> I love the vision. I love the utility of a scorecard, a verification mark, something that could be presented, credential, an image, social proof. To security and an ongoing way to monitor it, observe it, update it, add value. I think this is only going to be the beginning of what I would see as much more of a new way to think about credentialing companies. >> I think we're going to reach a point, John, where and some of our customers are already doing this. They're publishing their scorecard in the public domain, not with the technical details, but an abstracted view. 
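(The set-it-and-forget-it rules Sam describes are essentially event-driven policies over score changes. A schematic, entirely hypothetical check for the "tier one vendor drops below a B" case is sketched below, with print stubs standing in for the Slack, Jira, and ServiceNow integrations he mentions.)

```python
# Schematic rule evaluation over vendor score changes; actions are stand-in stubs.
from dataclasses import dataclass

GRADE_ORDER = "ABCDF"   # lower index means a better grade

@dataclass
class ScoreChange:
    vendor: str
    old_grade: str
    new_grade: str

tier_one_watchlist = {"Acme Payments", "Example Logistics"}   # hypothetical vendors

def on_score_change(event):
    dropped_below_b = GRADE_ORDER.index(event.new_grade) > GRADE_ORDER.index("B")
    if event.vendor in tier_one_watchlist and dropped_below_b:
        # In a real deployment these would be Slack/Jira/ServiceNow API calls.
        print(f"[questionnaire] sending targeted questionnaire to {event.vendor}")
        print(f"[ticket] opening remediation ticket for {event.vendor} ({event.new_grade})")

on_score_change(ScoreChange("Acme Payments", "B", "D"))
```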
And thought leaders, what they're doing is they're saying, Hey, before you send me anything, look at my scorecard securityscorecard.com/securityrating, and then the name of their company, and it's there. It's in the public domain. If somebody Googles scorecard for certain companies, it's going to show up in the Google Search results. They can mitigate probably 30, 40% of inbound requests by just pointing to that thing. So we want to give more of those tools, turn security from a reactive to a proactive motion. >> Great stuff, Sam. I love it. I'm going to make sure when you hit our site, our company, we've got camouflage sites so we can make sure you get the right ones. I'm sure we got some copyright dates. >> We can navigate the decoys. We can navigate the decoys sites. >> Sam, thanks for coming on. And looking forward to speaking more in depth on showcase that we have upcoming Amazon Startup Showcase where you guys are going to be presenting. But I really appreciate this conversation. Thanks for sharing what you guys are working on. We really appreciate. Thanks for coming on. >> Thank you so much, John. Thank you for having me. >> Okay. This is theCUBE conversation here in Palo Alto, California. Coming in from New York city is the co-founder, chief operating officer of securityscorecard.com. I'm John Furrier. Thanks for watching. (gentle music)
Ed Casmer, Cloud Storage Security | CUBE Conversation
(upbeat music) >> Hello, and welcome to this "theCUBE" conversation here in Palo Alto, California. I'm John Furrier, host of "theCUBE," got a great security conversation, Ed Casmer, who's the founder and CEO of Cloud Storage Security, the great Cloud background, Cloud security, Cloud storage. Welcome to theCUBE Conversation, Ed. Thanks for coming on. >> Thank you very much for having me. >> I got FOMO on that background. You got the nice look there. Let's get into the storage blind spot conversation around Cloud Security. Obviously, re:Inforce came up a ton, you heard a lot about encryption, automated reasoning, but ransomware was still hot. All these things are continuing to be issues in security, but they all center on data and storage, right? So this is a big part of it. Tell us a little bit about how you guys came about, the origination story. What is the company all about? >> Sure, so, we're a pandemic story. We started in February right before the pandemic really hit and we've survived and thrived because it is such a critical thing. If you look at the growth that's happening in storage right now, we saw this at re:Inforce. We saw it even at a recent AWS Storage Day. S3, in particular, houses over 200 trillion objects. If you look just 10 years ago, in 2012, Amazon touted how they were housing one trillion objects, so in a 10 year period, it's grown to 200 trillion, and really most of that has happened in the last three or four years, so the pandemic and the shift in the ability and the technologies to process data better has really driven the need and driven the Cloud growth.
This is really where a rethink kind of comes around, so can you share how you guys are surviving and thriving in that kind of crazy world that we're in? >> Yeah, absolutely. So, data has been the critical piece, and moving to the Cloud has really been this notion of how do I protect my access into the Cloud? How do I protect who's got it? How do I think about the networking aspects, my east-west traffic after I've blocked them from coming in? But no one's thinking about the data itself, and ultimately, you want to make that data very safe for the consumers of the data. They have an expectation, and almost a demand, that the data they consume is safe, and so companies are starting to have to think about that. They haven't thought about it. It has been a blind spot, you mentioned that before. In regards to "I am protecting my management plane," we use posture management tools, we use automated services. If you're not automating, then you're struggling in the Cloud. But when it comes to the data, everyone thinks, "Oh, I've blocked access. I've used firewalls. I've used policies on the data," but they don't think about the data itself. It is that packet that you talked about that moves around to all the different consumers and the workflows, and if you're not ensuring that that data is safe, then you're in big trouble, and we've seen it over and over again. >> I mean, it's definitely a hot category and it's changing a lot, so I love this conversation because it's a primary one; primary and secondary covers data and storage, kind of a good joke there. But all kidding aside, it's hard. You've got data lineage; tracing is a big issue right now. We're seeing companies come out there on kind of an observability tangent. The focus on this is huge. I'm curious, what was the origination story? What got you into the business? Was it like, were you having a problem with this? Did you see an opportunity? What was the focus when the company was founded? >> It's definitely to solve the problems that customers are facing. What's been very interesting is that they're out there needing this. They're needing to ensure their data is safe. As the whole story goes, they're putting it to work more; we're seeing this. I thought it was a really interesting series, one of your last series about data as code, and you saw all the different technologies that are processing and managing that data and that companies are leveraging today. But still, once that data is ready and it's consumed by someone, it's causing real havoc if it's not either protected from being exposed or safe to use and consume, and so that's been the biggest thing. So we saw a niche. We started with this notion of Cloud storage being object storage, and there was nothing there protecting that. Amazon has the notion of access, and that is how they protect the data today, but not the packets themselves, not the underlying data. And so, we created the solution to say, "Okay, we're going to ensure that that data is clean. We're also going to ensure that you have awareness of what that data is, the types of files you have out in the Cloud, wherever they may be, especially as they drift outside of the normal platforms that you're used to seeing that data in." >> It's interesting, people were storing data in data lakes: oh yeah, just store it, we might need it, and then it became a data swamp. That's kind of like, go back six, seven years ago, that was the conversation. Now, the conversation is: I need data. It's got to be clean. It's got to feed the machine learning.
This is going to be a critical aspect of the business model for the developers who are building the apps, hence the data-as-code reference, which we've focused on. But then you say, "Okay, great. Does this increase our surface area for potential hackers?" So there's all kinds of things that kind of open up as we start doing cool, innovative things like that. So, what are some of the areas that you see that your tech solves, around some of the blind spots or with object store, the things that people are overlooking? What are some of the core things that you guys are seeing that you're solving? >> So, it's a couple of things. Right now, still the biggest thing you see in the news is configuration issues, where people are losing their data or accidentally opening up writes. That's the worst case scenario. Reads are a bad thing too, but if you open up writes, and we saw this with a major API vendor in the last couple of years, they accidentally opened writes to their buckets. Hackers found it immediately and put malicious code into their APIs that were then downloaded and consumed by many, many of their customers. So, it is happening out there. So the notion of ensuring configuration is good and proper, ensuring that data has not been augmented inappropriately, and that it is safe for consumption is where we started, and we created a lightweight, highly scalable solution. At this point, we've scanned billions of files for customers and petabytes of data, and we're seeing that it's such a critical piece to make sure that that data's safe. The big thing, and you brought this up as well, is that they're getting data from so many different sources now. It's not just data that they generate. You see one centralized company taking in from numerous sources, consolidating it, creating new value on top of it, and then releasing that, and the question is, do you trust those sources or not? And even if you do, they may not be safe. >> We had an event around supercloud; it's a topic we brought up to bring attention to the complexity of hybrid, which is on premise, which is essentially Cloud operations. And the successful people that are doing things on the software side are essentially abstracting up the benefits of the infrastructure as a service from the hyperscalers like AWS, right, which is great. Then they innovate on top, so they have to abstract that; storage is a key component of where we see the innovations going. How do you see your tech kind of connecting with that trend that's coming, which is everyone wants infrastructure as code? I mean, that's not new. I mean, that's the goal and it's getting better every day, but DevOps, the developers, are driving the operations, and security teams have to keep pace. So we're seeing a lot of policy, seeing some cool things going on that are abstracting up from, say, storage and compute, but then those are being put to use as well, so you've got this new wave coming around the corner. What's your reaction to that? What's your vision on that? How do you see that evolving? >> I think it's great, actually. I think the biggest thing you have to do, as someone who is helping them with that process, is make sure you don't slow it down. So, just like Cloud at scale, you must automate, you must provide different mechanisms to fit into workflows that allow them to do it just how they want to do it, and don't slow them down.
Don't hold them back, and so we've come up with different measures to provide pretty much a fit for any workflow that any customer has come to us with so far. "We do data this way. I want you to plug in right here. Can you do that?" And so it's really about being able to plug in where you need to be, and don't slow 'em down. That's what we found so far. >> Oh yeah, I mean exactly that, you don't want to solve complexity with more complexity. That's the killer problem right now, so take me through the use case. Can you just walk me through how you guys engage with customers? How do they consume your service? How do they deploy it? You've got some deployment scenarios. Can you talk about how you guys fit in and what's different about what you guys do? >> Sure. So, what we're seeing is, and I'll go back to this, data coming from numerous sources. We see different agencies, different enterprises taking data in, and maybe their solution is intelligence on top of data, so they're taking these data sets in, whether it's topographical information or whether it's investing-type information. Then they process it and they scan it and they distribute it out to others. So we see that happening as a big common piece, through data ingestion pipelines; that's where these folks are getting most of their data. The other is where it's the data itself, the document or the document set, the actual critical piece that gets moved around, and we see that in pharmaceutical studies, we see it in the mortgage industry and FinTech and healthcare. And so, anywhere that, let's just take a very simple example, I have to apply for insurance. I'm going to upload my Social Security information. I'm going to upload a driver's license, whatever it happens to be. I want to, one, know which of my information is personally identifiable, so I want to be able to classify that data. But because you're trusting, or because you're taking data from untrusted sources, then you have to consider whether or not it's safe for your own folks to use and then also for the downstream users as well. >> It's interesting, in the security world we hear zero trust, and then we hear supply chain, software supply chains, where you've got to trust everybody, so you've got kind of two things going on. You've got the hardware, kind of like all the infrastructure guys, saying, "Don't trust anything 'cause we have a zero trust model," but as you start getting into the software side, like containers and Cloud native services, trust is critical. You guys are kind of on that balance where you're saying, "Hey, I want data to come in. We're going to look at it. We're going to make sure it's clean." That's the value here. Is that what I'm hearing, you're taking it and you're saying, "Okay, we'll ingest it, and during the ingestion process we'll classify it. We'll do some things to it with our tech and put it in a position to be used properly." Is that right? >> That's exactly right. That's a great summary. But ultimately, if you're taking data in, you want to ensure it's safe for everyone else to use, and there are a few ways to do it. Safety doesn't just mean whether it's clean or not, is there malicious content or not. It means that you have complete coverage and control and awareness over all of your data, and so, I know where it came from.
I know whether it's clean, and I know what kind of data is inside of it. And the interesting aspect is that the cleanliness factor is so critical in the workflow, but we see the classification expand outside of that, because if your data drifts outside of what your standard workflow was, that's when you have concerns: why is PII information over here? And that's what you have to stay on top of, just like the AWS control plane. You have to manage it all. You have to make sure you know what services have all of a sudden been exposed publicly or not, or whether something's been taken over or not, and you control that. You have to do that with your data as well. >> So how do you guys fit into the security posture? Say it's a large company that might want to implement this right away. Sounds like it's right in line with what developers want and what people want. It's easy to implement from what I see; it's about 10, 15, 20 minutes to get up and running. It's not hard. It's not a heavy lift to get in. How do you guys fit in once you get operationalized, when you're successful? >> It's a lightweight, highly scalable serverless solution. It's built on Fargate containers and it goes in very easily. And then we offer either native integrations through S3 directly, or we offer APIs, and the APIs are what a lot of our customers who want inline, real-time scanning leverage. We also are looking at offering the actual proxy aspects. So for those folks who use the S3 APIs that are native to AWS, puts and gets, we can actually leverage our put and get as an endpoint, and when they retrieve the file or place the file in, we'll scan it on access as well. So it's not just a one-time scan of data at rest; it can be data in motion, as you're retrieving the information, as well. >> We were talking with our friends the other day, and we were talking about companies like Datadog. This is the model people want: they want to come in, and developers are driving a lot of the usage and operational practice. So I have to ask you, this fits kind of right in there, but you also have the corporate governance policy police that want to make sure that things are covered, so how do you balance that? Because that's an important part of this as well. >> Yeah, we're really flexible for the different ways they want to consume and interact with it. But then also, that is such a critical piece. So many of our customers, we probably have a 50/50 breakdown of those inside the US versus those outside the US, and so you have those in California with their information protection act, you have GDPR in Europe, and you have Asia having their own policies as well. And the way we solve for that is we scan close to the data and we scan in the customer's account, so we don't require them to lose chain of custody and send data outside of the account. That is so critical to that aspect. And then we don't ask them to transfer it outside of the region, so that's another critical piece: data residency has to be involved as part of that compliance conversation. >> How much does Cloud enable you to do this in ways that you couldn't really do before? I mean, this really shows the advantage of natively being in the Cloud, to kind of take advantage of the IaaS-to-SaaS components to solve these problems. Share your thoughts on how this is possible. What if there was no Cloud, what would you do? >> It really makes it a piece of cake.
As silly as that sounds, when we deploy our solution, we provide a management console for them that runs inside their own accounts. So again, no metadata or anything has to come out of it, and it's all push-button click. And because the Cloud makes it scalable, because Cloud offers infrastructure as code, we can take advantage of that. And then, when they say "go protect data in the Ireland region," they push a button, we stand up a stack right there in the Ireland region and scan and protect their data right there. If they say, "we need to be in GovCloud and operate in GovCloud East," there you go, push the button and you can behave in GovCloud East as well. >> And with serverless and the region support and all the goodness, it really makes a good opportunity to manage these Cloud native services with the data interaction, so really good prospects. Final question for you. I mean, we love the story. I think it is going to be a really changing market in this area in a big way. I think the data storage relationship relative to higher level services will be huge as Cloud native continues to drive everything. What's the future? I mean, do you guys see yourselves as an all-encompassing, all-singing-and-dancing storage platform, or a set of services that you're going to enable developers with and drive that value? Where do you see this going? >> I think that it's a mix of both. Ultimately, you saw even on Storage Day the announcement of File Cache, and File Cache creates a new common namespace across different storage platforms. And so, the notion of being able to use one area to access your data and have it come from different spots is fantastic. That's been in the on-prem world for a couple of years, and it's finally making it to the Cloud. I see us following that trend in helping support it. We're super laser-focused on Cloud storage itself, so EBS volumes; we keep having customers come to us and say, "I don't want to run agents in my EC2 instances. I want you to snap and scan it," and, "I've got all this EFS and FSx out there that we want to scan." And so, we see that all of the Cloud storage platforms, Amazon WorkDocs, EFS, FSx, EBS, S3, will all come together, and we'll provide a solution that's super simple and highly scalable that can meet all the storage needs. So that's our goal right now and what we're working towards. >> Well, Cloud Storage Security, you couldn't get a more descriptive name for what you guys are working on. And again, I've had many contacts with Andy Jassy when he was running AWS, and he always loves to quote "The Innovator's Dilemma," written by one of his teachers at Harvard Business School. We were riffing on that the other day, and I want to get your thoughts. It's not so much "The Innovator's Dilemma" anymore relative to Cloud, 'cause that's kind of a done deal. It's "The Integrator's Dilemma," and so the integrations are so huge now. If you don't integrate the right way, that's the new dilemma. What's your reaction to that? >> 100% agreed. It's been super interesting. Our customers have come to us for a security solution, and they don't expect us to be, 'cause we don't want to be either, our own engine vendor; we're not the ones creating the engines. We are integrating other engines in, and so we can provide a multi-engine scan that gives you higher efficacy.
So this notion of offering simple integrations without slowing down the process, that's the key factor here; it's what we've been after. So we are about simplifying the Cloud experience of protecting your storage, and it's been so funny, because I thought customers might complain that we're not a name-brand engine vendor, but they love the fact that we have multiple engines in place and that we're bringing them this higher-efficacy, multi-engine scan. >> I mean, the developer trends can change on a dime. You make it faster, smarter, higher velocity and more protected; that's a winning formula in the Cloud. So Ed, congratulations, and thanks for spending the time to riff on and talk about Cloud Storage Security, and congratulations on the company's success. Thanks for coming on "theCUBE." >> My pleasure, thanks a lot, John. >> Okay, this has been a CUBE conversation here in Palo Alto, California. I'm John Furrier, host of "theCUBE." Thanks for watching.
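As a rough illustration of the scan-on-upload workflow Ed describes, here is a minimal sketch of an event-driven S3 scan in Python. It is an assumption-laden example, not Cloud Storage Security's actual product code: the scan_bytes stub, the quarantine bucket name, and the tagging scheme are all hypothetical, and as Ed notes, a real deployment would run the scanning engines inside the customer's own account to preserve chain of custody.

```python
# Hypothetical AWS Lambda handler: scan newly written S3 objects, then tag or
# quarantine them. Illustrative only; scan_bytes() stands in for a real
# multi-engine malware scan and is not a vendor API.
import boto3

s3 = boto3.client("s3")
QUARANTINE_BUCKET = "example-quarantine-bucket"  # assumed bucket name


def scan_bytes(data: bytes) -> bool:
    """Placeholder for an engine scan; returns True if the payload looks clean."""
    return b"EICAR" not in data  # toy check only


def handler(event, context):
    # S3 put-notification events carry the bucket and key of each new object.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        clean = scan_bytes(body)

        # Tag the object so downstream consumers can check its scan status.
        s3.put_object_tagging(
            Bucket=bucket,
            Key=key,
            Tagging={"TagSet": [{"Key": "scan-result",
                                 "Value": "clean" if clean else "infected"}]},
        )

        # Move flagged objects out of the workflow rather than deleting data.
        if not clean:
            s3.copy_object(
                Bucket=QUARANTINE_BUCKET,
                Key=key,
                CopySource={"Bucket": bucket, "Key": key},
            )
            s3.delete_object(Bucket=bucket, Key=key)
```

The same idea could sit behind a put/get proxy endpoint for the data-in-motion case Ed mentions, scanning objects on access rather than only on upload.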
SUMMARY :
John Furrier talks with Ed Casmer, founder and CEO of Cloud Storage Security, about the blind spot that data itself represents in cloud security. They cover the growth of S3 to over 200 trillion objects, why access controls and posture management alone don't make data safe, and how a lightweight, serverless, multi-engine scanning service that runs inside the customer's account can classify data, catch malicious content, and preserve chain of custody and data residency across ingestion pipelines and data in motion.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Ed Casper | PERSON | 0.99+ |
Ed Casmer | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
California | LOCATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
2012 | DATE | 0.99+ |
US | LOCATION | 0.99+ |
John | PERSON | 0.99+ |
200 trillion | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
February | DATE | 0.99+ |
Ireland | LOCATION | 0.99+ |
Europe | LOCATION | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
65 million | QUANTITY | 0.99+ |
S3 | TITLE | 0.99+ |
10% | QUANTITY | 0.99+ |
information protection act | TITLE | 0.99+ |
15 | QUANTITY | 0.99+ |
FSX | TITLE | 0.99+ |
Ed | PERSON | 0.99+ |
Datadog | ORGANIZATION | 0.99+ |
one time | QUANTITY | 0.99+ |
GDPR | TITLE | 0.99+ |
10 years ago | DATE | 0.99+ |
one trillion objects | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
100% | QUANTITY | 0.98+ |
billions of files | QUANTITY | 0.98+ |
20 minutes | QUANTITY | 0.98+ |
Harvard Business School | ORGANIZATION | 0.98+ |
Asia | LOCATION | 0.98+ |
both | QUANTITY | 0.98+ |
67 years ago | DATE | 0.98+ |
over 200 trillion objects | QUANTITY | 0.98+ |
50/50 | QUANTITY | 0.97+ |
Cloud Storage Security | ORGANIZATION | 0.97+ |
one | QUANTITY | 0.96+ |
pandemic | EVENT | 0.96+ |
today | DATE | 0.95+ |
HN AWS | ORGANIZATION | 0.95+ |
Cloud | TITLE | 0.94+ |
The Integrator's Dilemma | TITLE | 0.94+ |
theCUBE | ORGANIZATION | 0.94+ |
EC2 | TITLE | 0.93+ |
zero trust | QUANTITY | 0.93+ |
last couple of years | DATE | 0.93+ |
about 10 | QUANTITY | 0.93+ |
EFS | TITLE | 0.9+ |
one area | QUANTITY | 0.88+ |
The Innovator's Dilemma | TITLE | 0.87+ |
10 year period | QUANTITY | 0.81+ |
GovCloud | TITLE | 0.78+ |
Cloud Storage | TITLE | 0.77+ |
The Innovator's Dilemma | TITLE | 0.75+ |
Lafomo | PERSON | 0.75+ |
EBS | TITLE | 0.72+ |
last three | DATE | 0.71+ |
Storage Day | EVENT | 0.7+ |
Cloud Security | TITLE | 0.69+ |
CUBE | ORGANIZATION | 0.67+ |
Fortune 1000 | ORGANIZATION | 0.61+ |
EBS | ORGANIZATION | 0.59+ |
Andrew Elvish & Christian Morin | CUBE Conversation
>>Welcome to this CUBE conversation. I'm Dave Nicholson. And today we are joined by Andrew Elvish and Christian Morin, both from Genetec. Andrew is the vice president of marketing. Christian is the, uh, vice president of product engineering. Gentlemen, welcome to theCUBE. >>Welcome David. Thanks for having us. Hey, >>David, thanks for having us on your show. >>Absolutely. Give us just, let's start out by, uh, giving us some background on, on Genetec. How would you describe it to a relative coming over and asking you what you do for a living? What does Genetec do? >>Well, I'll take a shot at that. I'm the marketing guy, David, but, uh, I think the best way to think of Genetec first and foremost is as a software company. We, uh, we do a really good job of bringing together all of that physical security sensor network onto a platform, so people can make sense out of the data that comes from video surveillance cameras, access control readers, license plate recognition cameras, and from a whole host of different sensors that can live out there in the world: temperature sensors, microwaves, all sorts of stuff. So we're a company that's really good at making sense of complex data from sensors. That's kind of, I think that's kind of what we >>Do and, and we focus specifically on larger, complex, critical infrastructure type projects, whether they be airports, uh, large enterprise campuses and whatnot. So we're not necessarily your well-known consumer type brand. >>So you mentioned physical, you mentioned physical security. Um, what about the intersection between physical security and, and cyber security? Who are, who are the folks that you work with directly as customers, and where do they sit in that spectrum of cyber versus physical? >>So we predominantly work with physical security professionals and, uh, they typically are responsible for the security of a facility, a campus, a certain area. We'll talk about security cameras. We'll talk about access control devices with card readers and, and locks, uh, intrusion detection systems, fences, and whatnot. So anything that you would see that physically protects a facility. And, uh, what's actually quite interesting is that, you know, cybersecurity, we, we hear about cybersecurity in the press all the time, right, and who's been hacked this week is typically like, uh, a headline that we're all looking at, uh, we're looking for in the news. Um, so we actually do quite a lot of, I would say, education work with the physical security professional as it pertains to the importance of cyber security in the physical security system, which in and of itself is an information system, right. Um, so you don't wanna put a system in place to protect your facility that is full of cybersecurity holes, because at that point, you know, your physical security system becomes, uh, your weakest link in your security chain. Uh, the way I like to say it is, you know, there's no such thing as physical security versus cyber security, it's just security. Uh, it's really just the concept, or the context, of what threat vectors this specific control or mechanism actually protects against. >>Those seem to be words to live by, but are, are they aspirational? I mean, do you, do you see gaps today, uh, between the worlds of cyber and physical security? >>I mean, for sure, right? Like, physical security evolved from a different part of the enterprise, uh, structure than did IT or cyber security.
Um, so, you know, for a long time, the two worlds didn't really meet. Uh, but now what we're seeing, I would say in the last 10 years, and Christian can speak to that, is a huge convergence of cyber security with physical security: IT, so information technology, with operational technology really coming together quite tightly in the industry. And I think leading companies and sophisticated CISOs are really giving big-picture thought to what's going on across the organization, not just in cybersecurity. >>Yeah. I think we've come a long way from CCTV, which stands for closed circuit television, uh, which was typically like literally separated from the rest of the organization, often managed by the facilities, uh, part of any organization. Uh, and now we're seeing more and more organizations where this is converging together, but there's still ways to go, uh, to get this proper convergence in place. But, you know, we're getting there. >>How, how does Genetec approach its addressable market? Is this, is this a direct model? Uh, do you work with partners? What, what does that look like in your world? >>Well, we're a, we're a partner-led company. Genetec, you know, our model on many fronts is all about our partners. So we go to market through our integration channel. So we work with really great integrators all around the world. Um, and they bring together our software platform, which usually forms the nucleus of sort of any OT security network. Uh, they bring that together with all sorts of other things, such as the sensor network, the cabling, all of that. It's a very complex multiplayer world. And also in that, you know, partnership ecosystem, and Christian, this is more your world, we have to build deep integrations with all of these companies that build sensors, whether that's Axis, Bosch, Canon, uh, Hanwha; you know, we're, we're really working with them. And of course with our storage and server partners >>Like Dell >>Mm-hmm <affirmative>. Yeah. So we have, we have like hundreds of, I would say, ecosystem partners, right? Camera manufacturers, uh, access control reader and controller manufacturers, intrusion detection manufacturers, LiDAR, radar, you know, the list goes on and on and on. And, and basically we bring this all together. The system integrator really is going to pick best of breed based on a specific end customer's, I would say, requirements, and then roll out the system accordingly. >>That's very interesting. You know, at, at SiliconANGLE and on theCUBE, um, we've initiated coverage of this subject of the question, does hardware still matter? And, and you know, of course we're, we're approaching that primarily from kind of the traditional IT, uh, perspective, but you said at the outset, you're a software company mm-hmm <affirmative>, but clearly, correct me if I'm wrong, your software depends upon all of these hardware components, and as they improve, I imagine you can do things that maybe you couldn't do before those improvements. The first thing that comes to mind is just camera resolution. Um, you know, sort of default today is 4K, uh, go back five years, 10 years. I imagine that some of the sophisticated things that you can do today weren't possible because the hardware was lagging. Is that, is that a, is that a fair assessment? >>Oh, that's a fair assessment. Just going back 20 years ago, uh, just VGA resolution on a security camera was like out-of-this-world resolution, uh, even more so if it was like full motion, 30 images per second.
So you typically had like, probably even like 320 by 240 at four images per second, like really lousy resolution. Just from a resolution perspective, the, the imaging sensors have, have really increased in terms of what they can provide, but even more so is the horsepower of these devices. Mm-hmm, <affirmative> now it's not uncommon to have, uh, pretty, pretty powerful silicon in those devices that can actually run machine learning models, and you can actually do computer vision and analytics straight on the device. Uh, as you know, in some of the initial years, you would actually run this on kind of racks of servers in the data center. >>Now you can actually distribute those workloads across the edge. And what we're seeing is, you know, the power that the edge provides: us as a software company, we have the opportunity to actually bring our workloads where it makes most sense. And in some cases we'll actually also have a ground station kind of in between the sensors and potentially the cloud, uh, because the use case just, uh, calls for it. Uh, just looking at it from a, from a video security perspective, you know, when you have hundreds or thousands of cameras on an airport, it's just not economical, or not even feasible in some cases, to bring all that footage to the cloud, even more so when 99% of that footage is never watched by anybody. So what's the point? Uh, so you just wanna provide the clips that, that actually do matter to the cloud for longer-term retention. You also want to be able to have sometimes more resilient systems, right? So what happens if the cloud disconnects? You can't stop the operations of that airport or stop the operations of that, of that prison, right? It needs to continue to operate, and therefore you need higher levels of resiliency. So you do need that hardware. So it's really a question of what the use case calls for, and having the right size and type of hardware so that you don't overly complexify the installation, uh, and, and actually get the job done. >>Are you comparing airports to prisons, Christian? >>Well, nowadays they're pretty much comparable <laugh>. >>But I mean, this is exactly it, David. I mean, this payload, especially from the video surveillance, like the, the workload that's going through to the, these ground stations, really demands flexible deployment, right? So we think about it as edge to cloud, and, uh, you know, that's what's really getting us excited, because it gives so much more flexibility to the, you know, the CISO and security professionals in places like prisons, airports, also large-scale retail and banking, and, uh, other places, >>Universities, the list goes on and on and on, >>And the flexibility of deployment just becomes so much easier, because these are lightweight, you're usually deploying on a Linux box, and it can connect seamlessly with like large-scale head-end storage or directly to, uh, cloud providers. It's, it's really a sophisticated new way of looking at how you architect out these networks. >>You've just given, you've just given a textbook example of why, uh, folks in the IT world have been talking about hybrid cloud for, for such a long time, and some have scoffed at the idea, but you just, you just presented a perfect use case for that combination of leveraging cloud with, uh, on-premises hardware and tracking with hardware advances, um, uh, on, on the subject of camera resolution.

I don't know if you've seen this meme, but there's a great one with the, the first deep field image from the, from the, I was gonna say Hubble, the James Webb Space Telescope, uh, in contrast with a security camera photo, which is a really blurry shot of someone in your driveway <laugh>, uh, which is, which is, uh, sort of funny. The reality, though, is I've seen some of these latest generation security cameras, uh, you know, beyond 4K resolution, and it's amazing just, you know, the kind of detail that you can get. But talk about what's, what's exciting in your world. What's, what's Genetec doing, you know, over the next, uh, several quarters that's, uh, particularly interesting? What's on the leading edge of your, of your world? >>Well, I think right now what's on the leading edge is being driven by our end users. So the, so the, the companies, the governments, the organizations that are implementing our software into these complex IoT networks, they wanna do more with that data, right? It's not just about, you know, monitoring surveillance. It's not just about opening and closing doors or reading license plates, but more and more we're seeing organizations taking this bigger-picture view of the data that is generated in their organizations and how they can take value out of existing investments that they've made in sensor networks, uh, and take greater insight into operations, whether that's asset utilization, customer service, efficiency. It becomes about way more than just, you know, either physical security or cyber security. It becomes really an enterprise-shaping OT network. And to us, that is like a massive, massive opportunity, uh, in the, in the industry today. >>Yeah. >>Now you're, you're, you're... oh, go ahead. I'm sorry, Christian, go ahead. Yeah, >>No, it's, it's, it's good. But, you know, going back to a comment that I mentioned earlier about how it was initially siloed: now, you know, we're kind of discovering this diamond in the rough, in terms of all these sensors that are out there, which a lot of organizations didn't even know existed or didn't even know they had. And how can you bring that on kind of across the organization for non-security-related applications? So that's kind of one very interesting, uh, direction that we're, that we've been undergoing for the last few years. And then, you know, security, uh, and physical security for that matter, often is kind of the bastard stepchild. It doesn't get all the budget, and, you know, there's lots of opportunities to help them increase and improve their operations, uh, as, as Andrew pointed out, and really help bring them into the 21st century. >>Yeah. >>And you're, you're headquartered in Montreal, correct? >>Yes. >>Yeah. So, so the reason, the reason why that's interesting is because, um, and, you know, correct me if I'm, if I'm off base here, but, but you're sort of the bridge between North America and Europe. Uh, and, and, uh, and so you sit at that nexus where, uh, you probably have more of an awareness of, uh, trends in security, which overlap with issues of privacy. Yeah. Where Europe has led in a lot of cases. Um, some of those European-like rules are coming to North America. Um, is there anything in your world that is particularly relevant or that concerns you about North America catching up, um, or, or do those worlds of privacy and security not overlap as much as I might think they do? >>Ah, thank you. Any >>Thoughts? >>Absolutely not. No, no. <laugh> Joking aside.
This is, this is, this is... >>Leave me hanging. >><laugh> Uh, this is actually core to our DNA. And, and we, we often say out loud how Europe has really paved the way for a different way, uh, of, of looking at privacy from a security setting, right? And they're not mutually exclusive, right? You can have high security all while protecting people's privacy, and it's all a question of ensuring, you know, that you kind of, I would say, uh, ethically, uh, use said technology, and we can actually put some safeguards in it, so as to minimize the likelihood of there being abuse, right? There's, there's something that we do, which we call the Privacy Protector, which, you know, for all intents and purposes, it's not that complex of an idea. It's, it's really the concept of: you have security cameras in a public space or a more sensitive location, and you have your security guards that can actually watch that footage when nothing really happens. You, you want to protect people's privacy in these situations. Uh, however, you still want to be able to provide a view to the security guard so they can still make out that, you know, there, there's actually people walking around, or there's a fight that broke out. And in the event that something did happen, then you can actually view the overall footage, with the detail that the cameras you have, you know, the super-high-megapixel cameras, will provide. So we blur the images of the individuals; we still keep the background. And once you have the proper authorization, and this is based on the governance of the organization, so it can be a four-eyes principle, where it could be the chief security officer with the chief privacy officer who need to authorize this footage to be, kind of, un-blurred, at that point you can un-blur the footage and provide it to law enforcement for the investigation, for example. >>Excellent. Andrew, if you wanted to add to that... well, so I, I have a final question for you. And this comes out of a game that, uh, some friends of mine and I devised over the years; primarily this is played with strangers that you meet on airplanes as you're traveling. But the question you ask is: in your career, what you're doing now and over the course of your careers, um, what's the most shocking thing <laugh> that people would learn from what you know? What do you, what do you find? What's the craziest thing? When you go in to look at these environments, what do you see that people should maybe address? Um, well, go ahead and start with you, Andrew. >>I, >>The most shocking thing you see every day in your world. >>It's very interesting. The most shocking thing I think we've seen in the industry is how willing, uh, some professionals are in our industry to install any kind of device on their networks without actually taking the time to do due diligence on what kind of security risks these devices can have on a network. Because I think a lot of people don't think about a security camera as, first and foremost, a computer. And it's a computer with an IP address on a network, and it has a visual sensor, but we always get pulled in by that visual sensor, right? And it's like, oh, it's a camera. No, it's a computer.

And, you know, over the last, I would say, eight years in the industry, we've spent a lot of time trying to sensitize the industry to the fact that, you know, you can't just put devices on your, your network without understanding the supply chain, without understanding the motives behind who's put these together and their track record on cybersecurity. So probably the weirdest thing that I've seen in my, um, you know, career in this industry is just the willingness of people not to take the time to do due diligence before they hook something up onto their corporate network where, you know, data can start leaking out, being exfiltrated by those devices and malevolent actors behind them. So you gotta ask questions about what you put on your network. >>Christian, did he steal your, did he steal your thunder? Do you have any other, any other thoughts? >>Well, so first of all, there's things I just cannot say on TV. Okay? >>But you can't... okay, you can't. Yeah, yeah, yeah. Saying that you're shocked that not everyone speaks French doesn't count. Okay. Let's just get, let's get past that, but, but go, but yeah, go ahead. Any thoughts? >>So, uh, you know, I, I would say something that I've seen a lot, and specifically with customers that were starting to shop for a new system: you'd be surprised. First of all, there's a camera, but the likelihood of somebody actually watching it live while you're actually in the field of view of that camera is close to nil, first and foremost. Second, there's also a good likelihood that that camera doesn't even record; it actually is not even functional. And, and I would say a lot of organizations often realize that, you know, that camera was not functioning when they actually did need to get the footage. And we've seen this with some large incidents, uh, very, uh, bad incidents that happened, uh, whether in the UK or in Boston or whatnot, uh, when law enforcement is trying to get footage and they realize that a lot of cameras actually weren't recording. And, and, and it goes back to Andrew's point in terms of the selection process for these devices. >>Yeah. Image resolution is important, like, because you need an, an image that is actually usable so that you can actually do something with it forensically, but, you know, these cameras need to be recorded by a reliable system, and should something happen with the device, and there's always going to be something, you know, power, uh, a bird ate the lens, I don't know what it might be, or a squirrel ate the wire, um, and the camera doesn't work anymore, so you have to replace it. So having a system that provides, you know, you with like health insights in terms of, of whether it's working or not is, is actually quite important. It needs to be managed like any IT environment, right? Yeah. You have all these devices, and if one of them goes down, you need to manage it. And for most organizations it's fire and forget: I signed a purchase order, I bought my security system, I installed it, it's done, we move on to the next one. And seven years later, something bad happens. And it's like, uh-oh. >>It's not a CCTV system. It's a network. Yeah. Lifecycle management counts. >>Well, uh, I have to say on that, uh, I'm gonna be doing some research on Canadian birds and squirrels. I, I had no idea. >>Very hungry. >>Andrew, Christian, thank you so much. Great conversation, uh, from all of us here at theCUBE. Thanks for tuning in. Stay tuned. theCUBE, from SiliconANGLE Media: we are your leader in tech coverage.
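To make the privacy-by-default idea Christian describes more concrete, here is a minimal sketch that blurs detected people in an operator view while archiving the original footage for authorized review. It is a toy built on OpenCV's stock pedestrian detector, not Genetec's Privacy Protector: the detector choice, blur kernel, and file paths are placeholders, and a real system would add the four-eyes authorization, audit trail, and retention rules discussed above.

```python
# Toy privacy-masking sketch: blur detected people in each frame for the
# operator copy while writing the untouched original to a protected archive.
# Illustrative only; not Genetec's product. Paths and parameters are assumed.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("camera_feed.mp4")            # assumed input source
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
fps = cap.get(cv2.CAP_PROP_FPS) or 15
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
original = cv2.VideoWriter("archive_original.mp4", fourcc, fps, size)
redacted = cv2.VideoWriter("operator_view.mp4", fourcc, fps, size)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    original.write(frame)                             # full-detail archive

    # Detect people and blur only those regions in the operator copy.
    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    masked = frame.copy()
    for (x, y, w, h) in rects:
        roi = masked[y:y + h, x:x + w]
        masked[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    redacted.write(masked)

cap.release()
original.release()
redacted.release()
```

In a production design, this kind of masking would typically run on the edge device or ground station, with the unblurred stream kept under access control rather than written to an open file.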
SUMMARY :
Dave Nicholson talks with Andrew Elvish and Christian Morin of Genetec about bringing software to the physical security world. They discuss the convergence of physical and cyber security, Genetec's partner-led ecosystem of camera and sensor manufacturers, the shift of computer vision workloads from racks of servers to powerful edge devices and ground stations, privacy features that blur individuals in footage until authorized reviewers un-blur it, and the supply chain due diligence and lifecycle management that network-connected cameras require.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Nicholson | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Andrew | PERSON | 0.99+ |
Canon | ORGANIZATION | 0.99+ |
Bosch | ORGANIZATION | 0.99+ |
Genotech | ORGANIZATION | 0.99+ |
Gentech | ORGANIZATION | 0.99+ |
Hanoi | ORGANIZATION | 0.99+ |
Chris John | PERSON | 0.99+ |
Montreal | LOCATION | 0.99+ |
Chris Y Moran | PERSON | 0.99+ |
hundreds | QUANTITY | 0.99+ |
99% | QUANTITY | 0.99+ |
Boston | LOCATION | 0.99+ |
Christian Morin | PERSON | 0.99+ |
UK | LOCATION | 0.99+ |
21st century | DATE | 0.99+ |
eight years | QUANTITY | 0.99+ |
Andrew Elvish | PERSON | 0.99+ |
Chris | PERSON | 0.99+ |
2 | QUANTITY | 0.99+ |
Andrew ish | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
north America | LOCATION | 0.99+ |
Neil | PERSON | 0.99+ |
north America | LOCATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
20 | QUANTITY | 0.99+ |
10 years | QUANTITY | 0.99+ |
this week | DATE | 0.98+ |
both | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
seven years later | DATE | 0.98+ |
John | PERSON | 0.98+ |
two worlds | QUANTITY | 0.97+ |
second | QUANTITY | 0.96+ |
five years | QUANTITY | 0.95+ |
20 years ago | DATE | 0.94+ |
first | QUANTITY | 0.93+ |
nexus | ORGANIZATION | 0.92+ |
4k | QUANTITY | 0.92+ |
Christian | ORGANIZATION | 0.91+ |
44 images per second | QUANTITY | 0.91+ |
Linux | TITLE | 0.9+ |
first thing | QUANTITY | 0.89+ |
French | OTHER | 0.89+ |
one | QUANTITY | 0.87+ |
James | ORGANIZATION | 0.85+ |
last 10 years | DATE | 0.83+ |
three | QUANTITY | 0.83+ |
30 images per second | QUANTITY | 0.81+ |
thousands of cameras | QUANTITY | 0.81+ |
first deep | QUANTITY | 0.8+ |
European | OTHER | 0.79+ |
Canadian | OTHER | 0.78+ |
four | QUANTITY | 0.77+ |
Europe | ORGANIZATION | 0.68+ |
Silicon | ORGANIZATION | 0.64+ |
last | DATE | 0.61+ |
O | ORGANIZATION | 0.56+ |
years | DATE | 0.5+ |
one | DATE | 0.38+ |