Breaking Analysis: Supercloud is becoming a thing
>> From theCUBE studios in Palo Alto and in Boston, bringing you data-driven insights from theCUBE and ETR. This is breaking analysis with Dave Vellante. >> Last year, we noted in a breaking analysis that the cloud ecosystem is innovating beyond the idea or notion of multi-cloud. We've said for years that multi-cloud is really not a strategy but rather a symptom of multi-vendor. And we coined this term supercloud to describe an abstraction layer that lives above the hyperscale infrastructure that hides the underlying complexities, the APIs, and the primitives of each of the respective clouds. It interconnects, whether it's On-Prem, AWS, Azure, or Google, stretching out to the edge, and creates a value layer on top of that. So our vision is that supercloud is more than running an individual service in cloud native mode within an individual cloud; rather, it's this new layer that builds on top of the hyperscalers, does things irrespective of location, and adds value, and we'll get into that in more detail. Now it turns out that we weren't the only ones thinking about this. Not surprisingly, the majority of the technology ecosystem has been working towards this vision in various forms, including some examples that actually don't try to hide the underlying primitives, and we'll talk about that, but give a consistent experience across the DevSecOps tool chain. Hello, and welcome to this week's Wikibon Cube insights powered by ETR. In this breaking analysis, we're going to share some recent examples and direct quotes about supercloud from the many Cube guests that we've had on over the last several weeks and months. And we've been trying to test this concept of supercloud. Is it technically feasible? Is it business rational? Is there a business case for it? And we'll also share some recent ETR data to put this into context with some of the players that we think are going after this opportunity and where they are in their supercloud build out. 
And as you can see, I'm not in the studio; everybody's got COVID, so the studio is shut down temporarily, but breaking analysis continues. So here we go. Now, first thing is we uncovered an article from earlier this year by Lori MacVittie entitled "Supercloud: The '22 Answer to Multi-Cloud Challenges." What a great title. Of course we love it. Now, what really interested us here is not just the title, but the notion that it really doesn't matter what it's called. Who cares? Supercloud, distributed cloud, someone even called it Metacloud recently, and we'll get into that. But Lori is a technologist. She's a developer by background. She works at F5, and she's partial to the supercloud definition that was put forth by Cornell. You can see it here: that's a cloud architecture that enables application migration as a service across different availability zones or cloud providers, et cetera. And the supercloud provides interfaces to allocate, migrate, and terminate resources, and can span all major public cloud providers as well as private clouds. Now, of course, we would take that to the edge as well. So sure, that sounds about right and provides further confirmation that something new is really happening out there. And that was our initial premise when we put this forth last year. Now we want to dig deeper and hear from the many Cube guests that we've interviewed recently, probing about this topic. We're going to start with Chuck Whitten. He's Dell's new Co-COO and most likely part of the Dell succession plan, many years down the road hopefully. He coined the phrase multi-cloud by default versus multi-cloud by design, and he provides a really good business perspective. He's not a deep technologist. We're going to hear from Chuck a couple of times today, including one where John Furrier asks him about leveraging hyperscale CapEx. That's an important concept that's fundamental to supercloud. 
Now, Ashesh Badani heads products at Red Hat, and he talks about what he calls Metacloud. Again, it doesn't matter to us what you call it, but it's the ecosystem gathering and innovating, and we're going to get his perspective. Now we have a couple of clips from Danny Allan. He is the CTO of Veeam. He's a deep technologist and super into the weeds, which we love. And he talks about how Veeam abstracts the cloud layer, again, a concept that's fundamental to supercloud, and he describes what a supercloud is to him. And we also bring with Danny the edge discussion to the conversation. Now the bottom line from Danny, and what we want to know, is: is supercloud technically feasible, and is it a thing? And then we have Jeff Clarke. Jeff Clarke is the Co-COO and Vice Chairman of Dell, a super experienced individual. He lays out his vision of supercloud and what John Furrier calls a business operating system. You're going to hear from John a couple of times. And Jeff Clarke has a drop-the-mic moment, where he says, if we can do this X, and we'll describe what X is, it's game over. Okay. So of course we wanted to then go to HPE, one of Dell's biggest competitors, and Patrick Osborne is the vice president of the storage business unit at Hewlett Packard Enterprise. And so given Jeff Clarke's game over strategy, we want to understand how HPE sees supercloud. And the bottom line, according to Patrick Osborne, is that it's real. So you'll hear from him. And now Raghu Raghuram is the CEO of VMware. He threw a curve ball at this supercloud concept. And he flat out says, no, we don't want to hide the underlying primitives. We want to give developers access to those. We want to create a consistent developer experience in that DevSecOps tool chain and Kubernetes runtime environments, and connect all the elements in the application development stack. So that's a really interesting perspective that Raghu brings. And then we end on Itzik Reich. 
Itzik is a technologist and a technical team leader who's worked as a go-between for customers and product developers for a number of years. And we asked Itzik, is supercloud technically feasible, and will it be a reality? So let's hear from these experts, and you can decide for yourselves how real supercloud is today and where it is. Run the sizzle. >> Operative phrase is multi-cloud by default. That's kind of the buzz from your keynote. What do you mean by that? >> Well, look, customers have woken up with multiple clouds, multiple public clouds, On-Premise clouds, and increasingly, as the edge becomes much more a reality for customers, clouds at the edge. And so that's what we mean by multi-cloud by default. It's not yet been designed strategically. I think our argument yesterday was, it can be and it should be. It is a very logical place for architecture to land, because ultimately customers want the innovation across all of the hyperscale public clouds. They will see workloads and use cases where they want to maintain an On-Premise cloud. On-Premise clouds are not going away; I mentioned edge clouds. So it should be strategic. It's just not today. It doesn't work particularly well today. So when we say multi-cloud by default, we mean that's the state of the world today. Our goal is to bring multi-cloud by design, as you heard. >> Really great question, actually. Since you and I talked, Dave, I've been spending some time noodling just over that. And you're right. There's probably some terminology, something that will get developed either by us or in collaboration with the industry, where we sort of almost have the next, almost like a Metacloud, that we're working our way towards. >> So we manage both the snapshots and we convert it into the Veeam portable data format. And here's where the supercloud comes into play. Because if I can convert it into the Veeam portable data format, I can move that OS anywhere. 
I can move it from physical to virtual, to cloud, to another cloud, back to virtual; I can put it back on physical if I want to. It actually abstracts the cloud layer. There are things that we do when we go between clouds, some use BIOS, some use UEFI, but we have the data in backup format, not snapshot format, that's theirs, but we have it in backup format that we can move around and abstract workloads across all of the infrastructure. >> And your catalog is in control of that. Is that right? Am I thinking about that the right way? >> Yeah it is, 100%. And you know what's interesting about our catalog, Dave? The catalog is inside the backup. Yes. So here's what's interesting about the edge, two things. On the edge you don't want to have any state, if you can help it. And so containers help with that. You can have stateless environments, some persistent data storage. But we not only provide the portability in operating systems, we also do this for containers. And that's true if you go to the cloud and you're using, say, EKS with relational database services, RDS, for the persistent data layer; we can pick that up and move it to GKE or move it to OpenShift On-Premises. And so that's why I call this the supercloud. We have all of this data. Actually, I think you coined the term supercloud. >> Yeah. But thank you for... I mean, I'm looking for a confirmation from a technologist that it's technically feasible. >> It is technically feasible and you can do it today. >> You said also technology and business models are tied together and an enabler. If you believe that, then you have to believe that it's a business operating system that they want. They want to leverage whatever they can. And at the end of the day, they have to differentiate what they do. >> Well, that's exactly right. 
If I take that and what Dave was saying and I summarize it the following way: if we can take these cloud assets and capabilities, combine them in an orchestrated way to deliver a distributed platform, game over. >> We have a number of platforms that are providing, whether it's compute or networking or storage, running those workloads that they plumb up into the cloud. They have an operational experience in the cloud, and now they have data services that are running in the cloud for us in GreenLake. So it's a reality; we have a number of platforms that support that. We're going to have a set of big announcements coming up at HPE Discover. So we led with Alletra, and we have a block service. We have VM backup as a service and DR on top of that. So that's something that we're providing today. GreenLake has, I think it's actually over 60 services right now that we're providing in the GreenLake platform itself. Everything from security, single sign on, customer IDs, everything. So it's real. We have the proof point for it. >> Yeah. So I want to clarify something that you said, because this tends to be very commonly confused by customers. I use the word abstraction. And usually when people think of abstraction, they think it hides capabilities of the cloud providers. That's not what we are trying to do. In fact, that's the last thing we are trying to do. What we are trying to do is to provide a consistent developer experience regardless of where you want to build your application, so that you can use the cloud provider services if that's what you want to use. But the DevSecOps tool chain, the runtime environment, which turns out to be Kubernetes, and how you control the Kubernetes environment, how do you manage and secure and connect all of these things: those are the places where we are adding the value. 
And so really the VMware value proposition is you can build on the cloud of your choice, but providing these consistent elements, number one, you can make better use of your scarce developer or operator resources and expertise. And number two, you can move faster. And number three, you can just spend less as a result of this. So that's really what we are trying to do. We are not... so I just wanted to clarify the word abstraction. In terms of where are we? We are still, I would say, in the early stages. So if you look at what customers are trying to do, they're trying to build these greenfield applications, and there is an entire ecosystem emerging around Kubernetes. Still, Kubernetes is not a developer platform. The developer experience on top of Kubernetes is highly inconsistent. And so those are some of the areas where we are introducing new innovations with our Tanzu Application Platform. And then if you take enterprise applications, what does it take to have enterprise applications running all the time, be entirely secure, et cetera? >> Well, look, the multi-cloud by default today are isolated clouds. They don't work together. Your data is siloed. It's locked up, and it is expensive to move and make sense of it. So I think the word you and I were batting around before, this is an interconnected tissue. That's what the world needs. They need the clouds to work together as a single platform. That's the problem that we're trying to solve. And you saw it in some of our announcements here, that we're starting to make steps on that journey to make multi-cloud work together much simpler. >> It's interesting, you mentioned the hyperscalers and all that CapEx investment. Why wouldn't you want to take advantage of a cloud and build on the CapEx, and then ultimately have the solutions, machine learning as one area? You see some specialization with the clouds. 
But you start to see the rise of superclouds, Dave calls them, and that's where you can innovate on a cloud, then go to the multiple clouds. Snowflake is one; we see a lot of examples of supercloud... >> Project Alpine was another one. I mean, it's early, but it's clearly where you're going. The technology is just starting to come around. I mean, it's real. >> Yeah. I mean, why wouldn't you want to take advantage of all of the cloud innovation out there? >> Is that something that's... that supercloud idea is a reality from a technologist perspective? >> I think it is. So for example, Katie Gordon, who I believe you've interviewed earlier this week, was demonstrating the Kubernetes data mobility aspect, which is another project. That's exactly part of the rationale, the rationale of customers being able to move some of their Kubernetes workloads to the cloud and back and between different clouds. Why are we doing it? Because customers want to have the ability to move between different cloud providers, using a common API that will be able to orchestrate all of those things, with a self-service that may be offered via the APEX console itself. So it's all around enabling developers and meeting them where they are today, and also meeting them in tomorrow's world, where they actually may have changed their mind to do those things. So yes, we are working on all of those different aspects. >> Okay. Let's take a quick look at some of the ETR data. This is an X-Y graph. You've seen it a number of times on breaking analysis. It plots the net score, or spending momentum, on the Y-axis and overlap, or pervasiveness in the ETR dataset, on the X-axis, which used to be called market share. I think that term was off-putting to some people, but anyway, it's an indicator of presence in the dataset. Now, that red dotted line, that's rarefied air, where anything above that line is considered highly elevated. Now you can see we've plotted Azure and AWS in the upper right. 
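For readers unfamiliar with the metric just described: net score, as commonly explained on this show, is roughly the share of survey respondents increasing spend on a platform minus the share decreasing it. A minimal sketch with made-up responses follows; the response labels are illustrative assumptions, not ETR's exact survey taxonomy.

```python
from collections import Counter

def net_score(responses):
    """Compute an ETR-style net score: percentage of respondents adding
    or increasing spend minus percentage decreasing or replacing.
    'responses' is a list of response labels, one per respondent."""
    n = len(responses)
    counts = Counter(responses)
    positive = counts["adoption"] + counts["increase"]
    negative = counts["decrease"] + counts["replacing"]
    return 100.0 * (positive - negative) / n

# Hypothetical survey: 300 respondents for one vendor
sample = (["adoption"] * 60 + ["increase"] * 90 +
          ["flat"] * 105 + ["decrease"] * 30 + ["replacing"] * 15)
score = net_score(sample)  # (150 - 45) / 300 * 100 -> 35.0
```

The red dotted "highly elevated" line on the chart is then just a horizontal threshold drawn across the resulting Y-axis values.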
GCP is in there, and Kubernetes. We've done that as reference points. They're not necessarily building supercloud platforms; we'll see if they ever want to do so. And Kubernetes, of course, is not a company, but we put 'em in there for context. And we've cherry picked a few players that we believe are building out or are important for supercloud build out. Let's start with Snowflake. We've talked a lot about this company. You can see they're highly elevated on the vertical axis. We see the data cloud as a supercloud in the making. You've got Pure Storage in there. They made public the early part of their supercloud journey at Accelerate 2019, when they unveiled a hybrid block storage service inside of AWS; it connects their On-Prem to AWS and creates that singular experience for Pure customers. We see HashiCorp as enabling infrastructure as code across different clouds and different locations. You see Nutanix. They're embarking on their multi-cloud strategy, but they're doing so in a way that we think is supercloud-like now. Now Veeam, we were just at VeeamON. And this company has tied Dell for the number one revenue player in data protection. That's according to IDC. And we think it won't be long before it holds that position alone at the top, as it's growing faster than Dell in the space. We'll see; Dell is kind of waking up a little bit and putting more resources on that. But Veeam, they're a pure play vendor in data protection, and you heard their CTO Danny Allan's view on supercloud: they're doing it today. And we heard extensive comments as well from Dell. That's clearly where they're headed; project Alpine was an early example from Dell Technologies World of supercloud in our view. And HPE, with GreenLake, is finally beginning to talk about that cross cloud experience. Initially HPE has been more focused on the private cloud, so we'll continue to probe. 
We'll be at HPE Discover later in the spring, actually the end of June, and we'll continue to probe to see what HPE is doing specifically with GreenLake. Now, finally, Cisco. We put them on the chart. We don't have direct quotes from recent shows and events, but this data really shows you the size of Cisco's footprint within the ETR data set; that's on the X-axis. Now, the cut of this ETR data includes all sectors across the ETR taxonomy, which is not something that we commonly show, but you can see the magnitude of Cisco's presence. It's impressive. Now, they had better, Cisco that is, had better be building out a supercloud in our view, or they're going to be left behind. And I'm quite certain that they're actually going to do so. So we have a lot of evidence that we're putting forth here and seeing in the marketplace for what we said last year: the ecosystem is taking shape, supercloud is forming and becoming a thing, and really, in our view, is the future of cloud. But there are always risks to these predictive scenarios, and we want to acknowledge those. So first, look, we could end up with a bunch of bespoke superclouds. Now, one supercloud is better than three separate cloud native services that do fundamentally the same thing from the same vendor, one for AWS, one for GCP, and one for Azure. So maybe that's not all that bad. But to point number two, we hope there evolves a set of open standards for self-service infrastructure, federated governance, and data sharing that will evolve as a horizontal layer versus a set of proprietary vendor specific tools. Now, maybe a company like Veeam will provide that as a data management layer, or some of Veeam's competitors, or maybe it'll emerge again as open source. As well, in this next point, we see the potential for edge disruptions changing the economics of the data center. Edge, in fact, could evolve on its own, independent of the cloud. In fact, David Floyer sees the edge somewhat differently from Danny Allan. 
Floyer says he sees a requirement for distributed stateful environments that are ephemeral, where recovery is built in. And I said, David, stateful? Ephemeral? Stateful ephemeral? Isn't that an oxymoron? And he responded that, look, if it's not ephemeral, the costs are going to be prohibitive. He said the biggest mistake companies could make is thinking that the edge is simply an extension of their current cloud strategies. We're seeing that a lot. Dell largely talks about the edge as retail. Now, Telco is a little bit different, but back to Floyer's comments, he feels companies have to completely reimagine an integrated file and recovery system which is much more data efficient. And he believes that the technology will evolve with massive volumes and eventually seep into enterprise cloud and distributed data centers with better economics. In other words, as David Moschella recently wrote, we're about 15 years into the most recent cloud cycle, and history shows that every 15 years or so, something new comes along that is a blind spot and highly disruptive to existing leaders. So number four here is really important. Remember, in 2007, before AWS introduced the modern cloud, IBM outspent Amazon and Google in R&D and CapEx and was really comparable to Microsoft. But instead of inventing cloud, IBM spent hundreds of billions of dollars on stock buybacks and dividends. And so our view is that innovation rewards leaders. And while it's not without risks, it's what powers the technology industry; it always has and likely always will. So we'll be watching that very closely, how companies choose to spend their free cash flow. Okay. That's it for now. Thanks for watching this episode of The Cube Insights, powered by ETR. Thanks to Stephanie Chan, who does some of the background research. Alex Morrison is on production and is going to compile all this stuff. Thank you, Alex. We're all remote this week. 
Kristen Nicole and Cheryl Knight do Cube distribution and social distribution and get the word out, so thank you. Robert Hof is our editor in chief. Don't forget to check out etr.ai for all the survey action. Remember, I publish each week on wikibon.com and siliconangle.com, and you can check out all the breaking analysis podcasts. All you have to do is search "breaking analysis podcast" so you can pop in the headphones and listen while you're on a walk. You can email me at david.vellante@siliconangle.com if you want to get in touch, or DM me at DVellante, and you can always hit me up in a comment on our LinkedIn posts. This is Dave Vellante. Thank you for watching this episode of breaking analysis. Stay safe, be well, and we'll see you next time. (upbeat music)
Rick Farnell, Protegrity | AWS Startup Showcase: The Next Big Thing in AI, Security, & Life Sciences
(gentle music) >> Welcome to today's session of the AWS Startup Showcase The Next Big Thing in AI, Security, & Life Sciences. Today we're featuring Protegrity for the life sciences track. I'm your host for theCUBE, Natalie Erlich, and now we're joined by our guest, Rick Farnell, the CEO of Protegrity. Thank you so much for being with us. >> Great to be here. Thanks so much Natalie, great to be on theCUBE. >> Yeah, great, and so we're going to talk today about the ransomware game, and how it has changed with kinetic data protection. So, the title of today's video segment makes a bold claim, how are kinetic data and ransomware connected? >> So first off kinetic data, data is in use, it's moving, it's not static, it's no longer sitting still, and your data protection has to adhere to those same standards. And I think if you kind of look at what's happening in the ransomware kind of attacks, there's a couple of different things going on, which is number one, bad actors are getting access to data in the clear, and they're holding that data ransom, and threatening to release that data. So kind of from a Protegrity standpoint, with our protection capabilities, that data would be rendered useless to them in that scenario. So there's lots of ways in which kind of backup data protection, really wonderful opportunities to do both data protection and kind of that backup mixed together really is a wonderful solution to the threat of ransomware. And it's a serious issue and it's not just targeting the most highly regulated industries and customers, we're seeing kind of attacks on pipeline and ferry companies, and really there is no end to where some of these bad actors are really focusing on and the damages can be in the hundreds of millions of dollars and last for years after from a brand reputation. 
So I think if you look at how data is used today, there are those kind of opposing forces, where the business wants to use data at the speed of light to produce more machine learning and more artificial intelligence, and predict where customers are going to be, and have wonderful services at their fingertips. But at the same time, they really want to protect their data, and sometimes those architectures can be at odds, and at Protegrity, we're really focusing on solving that problem. So free up your data to be used in artificial intelligence and machine learning, while making sure that it is absolutely bulletproof from some of these ransomware attacks. >> Yeah, I mean, you bring a really fascinating point that's really central to your business. Could you tell us more about how you're actually making that data worthless? I mean, that sounds really revolutionary. >> So, it sounds novel, right? To kind of make your data worthless in the wrong hands. And I think from a Protegrity perspective, our kind of policy and protection capability follows the individual piece of data no matter where it lives in the architecture. And we do a ton of work, as the world does, with Amazon Web Services, so kind of helping customers really blend their hybrid cloud strategies with their on-premise and their use of AWS is something that we thrive at. So protecting that data, not just at rest or while it's in motion, but with a continuous protection policy, we can basically preserve the privacy of the data but still keep it unique for use in downstream analytics and machine learning. >> Right, well, traditional security is rather stifling, so how can we fix this, and what are you doing to amend that? >> Well, I think if you look at cybersecurity, and we certainly play a big role in the cybersecurity world, but like any industry, there are many layers. 
And traditional cybersecurity investment has been at the perimeter level, at the network level keeping bad actors out, and once people do get through some of those fences, if your data is not protected at a fine grain level, they have access to it. And I think from our standpoint, yes, we're last line of defense but at the same time, we partner with folks in the cybersecurity industry and with AWS and with others in the backup and recovery to give customers that level of protection, but still allow their kinetic data to be utilized in downstream analytics. >> Right, well, I'd love to hear more about the types of industries that you're helping, and specifically healthcare obviously, a really big subject for the year and probably now for years to come, how is this industry using kinetic protection at the moment? >> So certainly, as you mentioned, some of the most highly regulated industries are our sweet spot. So financial services, insurance, online retail, and healthcare, or any industry that has sensitive data and sensitive customer data, so think first name last name, credit card information, national ID number, social security number blood type, cancer type. That's all sensitive information that you as an organization want to protect. So in the healthcare space, specifically, some of the largest healthcare organizations in the world rely on Protegrity to provide that level of protection, but at the same time, give them the business flexibility to utilize that data. So one of our customers, one of the leaders in online prescriptions, and that is an AWS customer, to allow a wonderful service to be delivered to all of their customers while maintaining protection. If you think about sharing data on your watch with your insurance provider, we have lots of customers that bridge that gap and have that personal data coming in to the insurance companies. 
All the way to, if in a use case in the future, looking at the pandemic, if you have to prove that you've been vaccinated, we're talking about some sensitive information, so you want to be able to show that information but still have the confidence that it's not going to be used for nefarious purposes. >> Right, and what is next for Protegrity? >> Well, I think continuing on our journey, we've been around for 17 years now, and I think the last couple, there's been an absolute renaissance in fine-grained data protection or that connected data protection, and organizations are recognizing that continuing to protect your perimeter, continuing to protect your firewalls, that's not going to go away anytime soon. Your access points, your points of vulnerability to keep bad actors out, but at the same time, recognizing that the data itself needs to be protected but with that balance of utilizing it downstream for analytic purposes, for machine learning, for artificial intelligence. Keeping the data of hundreds of millions if not billions of people saved, that's what we do. If you were to add up the customers of all of our customers, the largest banks, the largest insurance companies, largest healthcare companies in the world, globally, we're protecting the private data of billions of human beings. And it doesn't just stop there, I think you asked a great question about kind of the industry and yes, insurance, healthcare, retail, where there's a lot of sensitive data that certainly can be a focus point. 
But in the IOT space, kind of if you think about GPS location or geolocation, if you think about a device, and what it does, and the intelligence that it has, and the decisions that it makes on the fly, protecting data and keeping that safe is not just a personal thing, we're stepping into intellectual property and some of the most valuable assets that companies have, which is their decision-making on how they use data and how they deliver an experience, and I think that's why there's been such a renaissance, if you will, in kind of that fine grain data protection that we provide. >> Yeah, well, what is Protegrity's role now in future proofing businesses against cyber attacks? I mean, you mentioned really the ramifications of that and the impact it can have on businesses, but also on governments. I mean, obviously this is really critical. >> So there's kind of a three-step approach, and this is something that we have certainly kind of felt for a long, long time, and we work on with our customers. One is having that fine-grain data protection. So tokenizing your data so that if someone were to get your data, it's worthless, unless they have the ability to unlock every single individual piece of data. So that's number one, and then that's kind of what Protegrity provides. Number two, having a wonderful backup capability to roll kind of an active-active, AWS being one of the major clouds in the world where we deploy our software regularly and work with our customers, having multi-regions, multi-capabilities for an active-active scenario where if there's something that goes down or happens you can bring that down and bring in a new environment up. And then third is kind of malware detection in the rest of the cyber world to make sure that you rinse kind of your architecture from some of those agents. 
And I think when you kind of look at it, ransomware: they take your data, they encrypt your data, and they force you to give them Bitcoin or whatnot, or they'll release some of your data. If that data is rendered useless, that's one huge step in your discussions with these nefarious actors, because you can say: you could release it, but there's nothing there, you're not going to see anything. And then second, if you have a wonderful backup capability where you wind down the environment that has been infiltrated, prove that the new environment is safe, get your production data rolling again and then wind that back up, you're back in business. You don't have to notify your customers, you don't have to deal with the ransomware players. So it's really a three-step process, but ultimately it starts with protecting your data and tokenizing your data, and that's something that Protegrity does really, really well. >> So you're basically able to eliminate the financial impact of a breach? >> Honestly, we dramatically reduce the risk of customers being exposed to ransomware attacks, 100%. Now, tokenizing data and moving in that direction is something that's not trivial. We are literally replacing production data with a token, and then making sure that all downstream applications have the ability to utilize that, and making sure that the analytic systems, machine learning systems and artificial intelligence applications that are built downstream on that data have the ability to execute. But that is something that, from our patent portfolio and what we provide to our customers, again, some of the largest organizations in retail, in financial services, in banking, and in healthcare, we've been doing for a long time. We're not just saying that we can do this while we're on version one of our product; we've been doing this for years, supporting the largest organizations with a 24 by seven capability.
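To make the tokenization idea concrete, here is a minimal sketch of a format-preserving token vault. It is a toy illustration with invented names, not Protegrity's implementation: each sensitive value is swapped for a random token of the same shape, the mapping is held in a vault, and only detokenization through the vault recovers the original.

```python
import secrets
import string

class TokenVault:
    """Toy format-preserving tokenizer: digits become random digits,
    everything else (dashes, letters) is left in place, so downstream
    systems still see a well-formed value."""

    def __init__(self):
        self._forward = {}  # original value -> token (stable per value)
        self._reverse = {}  # token -> original value

    def tokenize(self, value: str) -> str:
        if value in self._forward:
            return self._forward[value]
        while True:
            token = "".join(secrets.choice(string.digits) if ch.isdigit() else ch
                            for ch in value)
            if token != value and token not in self._reverse:
                break
        self._forward[value] = token
        self._reverse[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._reverse[token]

vault = TokenVault()
card = "4111-1111-1111-1111"
token = vault.tokenize(card)
# The token keeps the card-number shape, so analytics and ML built on this
# field keep working, while a stolen copy of the data is worthless without
# access to the vault.
```

A real deployment adds key management, access control and auditing around the vault; the point here is only the shape-preserving swap.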
>> Right, and tell us a bit about the competitive landscape. Where do you see your offering compared to your competitors? >> So, kind of historically, let's call it an era ago, maybe even before cloud and hybrid cloud became a thing, there were a handful of players that got acquired into much larger organizations. Those organizations have been dusting off those acquired assets, and we're seeing them come back in. There are some new entrants into our space that have some protection mechanisms, whether it be encryption or whether it be anonymization, but unless you're doing fine-grained tokenization, you're not going to be able to allow that data to participate in the artificial intelligence world. So we see kind of a range of competition there. And then I'd say probably the biggest competitor, Natalie, is customers not doing tokenization. They're saying, "No, we're okay, we'll continue protecting our firewall, we'll continue protecting our access points, we'll invest a little bit more in maybe some governance, but that fine-grained data protection, maybe it's not for us." And that is the big shift that's happening. You look at the beginning of this year with the SolarWinds attack and the vulnerability that it caused; very large and important organizations found themselves, in the last few weeks, with all the ransomware attacks that are happening on meat processing plants and facilities, shutting down meat production, and on pipelines, stopping oil and gas, and kind of that. So we're seeing a complete shift in the types of organizations and the industries that need to protect their data. It's not just the healthcare organizations, or the banks, or the credit card companies; it is every single industry, every single size of company. >> Right, and I've got to ask you this question: what is your defining contribution to the future of cloud scale?
>> Well, ultimately we kind of have a charge here at Protegrity where we feel like we protect the world's most sensitive data, and when we come into work every day, that's what every single employee thinks at Protegrity. We are standing behind billions of individuals who are customers of our customers, and that's a cultural thing for us, and we take it very seriously. We have maniacal customer support, supporting our biggest customers with a follow-the-sun, 24 by seven global capability. So that's number one. So I think our part in this is really helping to educate the world that there is a solution for this ransomware and for some of these things that don't have to happen. Now, naturally, with any solution there's going to be some investment, there are going to be some architecture changes, but with partnerships like AWS, and our partnership with pretty much every data provider, data storage provider and data solution provider in the world, we want to provide fine-grained data protection: any data, in any system, on any platform. And that's our mission. >> Well, Rick Farnell, this has been a really fascinating conversation, thank you so much. The CEO of Protegrity, really great to have you on this program for the AWS Startup Showcase, talking about how the ransomware game has changed with kinetic data protection. Really appreciate it. Again, I'm your host Natalie Erlich, thank you again very much for watching. (light music)
Ariel Assaraf, Coralogix | AWS Startup Showcase: The Next Big Thing in AI, Security, & Life Sciences
(upbeat music) >> Hello and welcome to today's session of the AWS Startup Showcase, the next big thing in AI, Security and Life Sciences, featuring Coralogix for the AI track. I'm your host, John Furrier with theCUBE. We're joined by Ariel Assaraf, CEO of Coralogix. Ariel, great to see you calling in remotely, videoing in from Tel Aviv. Thanks for coming on theCUBE. >> Thank you very much, John. Great to be here. >> So you guys are featured as a hot, next big thing startup. And one of the things that you guys do, which we've been covering for many years, is log analytics; from a data perspective, you guys decouple the analytics from the storage. This is a unique thing. Tell us about it. What's the story? >> Yeah. So what we've seen in the market is that, probably because of the great job that a lot of the earlier generation products have done, more and more companies see the value in log data. What used to be a couple of rows that you add whenever you have something very important to say became a standard way to document all communication between different components: infrastructure, network, monitoring, and the application layer, of course. And what happens is that data grows extremely fast. All data grows fast, but log data grows even faster. What we always say is that, for sure, data grows faster than revenue. So as fast as a company grows, its data is going to outpace that. And so we found ourselves thinking: how can we help companies still get the full coverage they want, without cherry picking data or deciding exactly what they want to monitor and what they're taking a risk with, while still giving them the real time analysis that they need to make sure they get the full insight suite for the entire data, wherever it comes from? And that's why we decided to decouple the analytics layer from storage.
So instead of ingesting the data, then indexing and storing it, and then analyzing the stored data, we analyze everything, and then we only store what matters. So we go from the insights backwards. That allowed us to reduce the amount of data, reduce the digital exhaust that it creates, and also provide better insights. So the idea is that as this world of data scales, the need for real time streaming analytics is going to increase. >> So what's interesting is we've seen this decoupling of storage and compute be a great success formula at cloud scale; for instance, that's a known best practice. You're taking it a little bit differently. I love how you're coming at it backwards, you're working backwards from the insights, almost doing some intelligence on the front end of the data, which probably saves a lot of storage costs. But I want to get specifically back to this real time. How do you do that? And how did you come up with this? What's the vision? How did you guys come up with the idea? What was the magic light bulb that went off for Coralogix? >> Yes, the Coralogix story is very interesting. Actually, it was no light bulb; it was a road of pain for years and years. We started by just, you know, doing the same, maybe faster, a couple more features. And it didn't work out too well; for the first few years the company was not very successful. And we've grown tremendously in the past three years, almost 100X since we've launched this, and it came from a pain. So once we started scaling, we saw that the side effects of accessing the storage for analytics, the latency it creates, the dependency on schema, the price that it poses on our customers, became unbearable. And then we started thinking: okay, how do we get the same level of insights? Because there's this perception in the world of storage, and now it has started to happen in analytics also, that talks about tiers.
So you want to get a great experience, you pay a lot; you want to get a less than great experience, you pay less, it's a lower tier. And we decided that we're looking for a way to give the same level of real time analytics and the same level of insights, only without the issue of dependencies, decoupling all the storage schema issues and latency. And we built our real time pipeline; we call it Streama. Streama is the Coralogix real time analysis platform that analyzes everything in real time, also the stateful things. So stateless analytics in real time is something that's been done in the past, and it always worked well. The issue is: how do you give a stateful insight on data that you analyze in real time, without storing it? And I'll explain. How can you tell that a certain issue happened that did not happen in the past three months, if you did not store the past three months? Or how can you tell that behavior is abnormal if you did not store what's normal, if you did not store the state? So we created what we call the state store, which holds the state of the system, the state of the data, with a snapshot of that state for the entire history. And then, instead of our state being the storage: so, you know, you asked me, how is this compared to last week? Instead of me going to the storage and comparing to last week, I go to the state store and, you know, like a record bag, I just scroll fast and find one piece of state. And I say, okay, this is how it looked last week; compared to this week, it changed in ABC. And once we started doing that, we onboarded more and more services to that model. And our customers came in and said: hey, you're doing everything in real time, we don't need more than that. There's a very small portion of data we actually need to store and frequently search; how about you guys fit into our use cases, and not just sell on quota?
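The state store idea can be approximated with compact running summaries: keep O(1) state per metric instead of months of raw records, and judge each new value against that state. This is a deliberately simplified sketch with an invented threshold rule, not how Streama actually works.

```python
class StateStore:
    """Toy state store: per key, keep only (count, running mean) -- a
    snapshot of 'normal' -- and flag values that deviate from it."""

    def __init__(self, threshold: float = 3.0):
        self.state = {}  # key -> (count, mean); no raw history retained
        self.threshold = threshold

    def observe(self, key: str, value: float) -> bool:
        count, mean = self.state.get(key, (0, 0.0))
        # stateful insight without a storage round trip: compare against
        # the snapshot, not against re-read history
        abnormal = count > 0 and abs(value - mean) > self.threshold * max(mean, 1.0)
        count += 1
        mean += (value - mean) / count  # fold the value into the snapshot
        self.state[key] = (count, mean)
        return abnormal

store = StateStore()
for v in [100, 102, 98, 101]:
    store.observe("errors_per_minute", v)
spike = store.observe("errors_per_minute", 900)  # judged against state only
```

A production system would track richer state (variance, seasonality, per-window snapshots), but the memory cost still scales with the number of keys, not with the volume of raw records.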
And we decided to basically allow our customers to choose what the use case is that they have, and route the data through different use cases. And then each log record stops at the relevant stops in our data pipeline based on the use case. So just like in the supermarket: you fill a bag, you go out, they weigh it and they say, you know, it's two kilograms, you pay this amount, because different products have different costs and different meaning to you. That same way, exactly, we analyze the data in real time, so we know the importance of the data, and we allow you to route it based on your use case and pay a different amount per use case. >> So this is really interesting. So essentially you guys capture insights and store those, you call them states, and then you don't have to go through the data. So it's like you're eliminating the old problem of, you know, going back to the index and recovering the data to get the insights: did we have that? So anyway, it's a round trip query, if you will, and you guys start saving all that data mining cost and time. >> We call it zero side effects. That round trip that you described is exactly it: no side effects to an analysis that is done in real time. I don't need to get the latency from the storage, a bit of latency from the database that holds the model, a bit of latency from the cache; everything stays in memory, everything stays in stream. >> And so basically, it's like the definition of insanity: doing the same thing over and over again and expecting a different result. Here, that's kind of what that is. The old model of insight is: go query the database and get something back. You're actually doing the real time filtering on the front end, capturing the insights, if you will, storing those, and replicating that per use case. Is that right? >> Exactly. But then, you know, there's still the issue of customers saying: yeah, but I need that data.
Some data I need to really frequently search, I don't know, you know, the unknown unknowns; or some of the data I need for compliance, and I need an immutable record that stays in my compliance bucket forever. So we allowed customers, we have this screen that we call the TCO optimizer, to define those use cases. And they can always access the data by querying their remote storage from Coralogix, or querying the hot data that is stored with Coralogix. So it's all about use cases, and it's all about how you consume the data, because it doesn't make sense for me to pay the same amount, or give the same amount of attention, to a record that is just there for the record, or for a compliance audit that may or may not happen in the future, as I do with the most critical exception in my application log that has immediate business impact. >> What's really good too is you can actually set some policy up: if you want, for certain use cases, okay, store that data. So it's not to say you don't want to store it, but you might want to store it only in certain use cases. So I can see that. So I've got to ask the question: how does this differ from the competition? How do you guys compete? Take us through a use case of a customer. How do you guys go to the customer and just say, hey, we got so much scar tissue from this, we learned the hard way, take it from us? How does it go? Take us through an example. >> So an interesting example is actually a company that is not your typical early adopter, let's call it this way: very advanced in technology, a smart company, but a huge one, one of the largest telecommunications companies in India. And they were actually cherry picking about 100 gigs of data per day and sending it to one of the legacy providers, which has a great solution that does give value.
But they weren't even thinking about sending their entire data set, because of cost, because of scale, because of, you know, just clutter. Whenever you search, you have to sift through millions of records, and many of them are not that important. And we helped them actually analyze their data and worked with them to understand it. These guys had over a terabyte of data that held incredible insights; it was like a goldmine of insights. They just needed to prioritize it by use case, and they went from 100 gig with the other legacy solution to a terabyte, at almost the same cost, with more advanced insights, within one week, which at that scale of an organization is something that is out of the ordinary; it took them four months to implement the other product. But now, when you go from the insights backwards, you understand your data before you have to store it, you understand the data before you have to analyze it, or before you have to manually sift through it. So if you ask about the difference, it's all about the architecture. We analyze and only then index, instead of indexing and then analyzing. It sounds simple, but of course, when you look at the stateful analytics, it's a lot more, a lot more complex. >> Take me through your growth story, because first of all, I'll get back to the secret sauce in the same way. I want to get back to how you guys got here. (indistinct) you had this problem? You kind of broke through, you hit the magic formula. Talk about the growth: where's the growth coming from? And what's the real impact? What's the situation relative to the company's growth? >> Yeah, so we had a rough first three years that I kind of mentioned. And I was not the CEO at the beginning; I'm one of the co-founders, more of the technical guy, I was the product manager. And I became CEO after the company was kind of on the verge of closing at the end of 2017.
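The "analyze and only then index" order can be sketched as a stream filter: every record passes through the real time analysis step, but reaching storage is a routing decision rather than a default. The record shape and rules below are hypothetical simplifications of the architecture described above.

```python
from dataclasses import dataclass, field

@dataclass
class AnalyzeFirstPipeline:
    """Toy pipeline that inverts the classic order: analysis happens on
    every record in-stream; indexing/storage only happens afterwards,
    and only for records worth keeping."""
    stored: list = field(default_factory=list)
    alerts: list = field(default_factory=list)

    def ingest(self, record: dict) -> None:
        # 1. real time analysis runs on *everything*
        if record["level"] in ("ERROR", "CRITICAL"):
            self.alerts.append(record["msg"])
        # 2. storage is a downstream routing decision
        if record["level"] != "DEBUG":
            self.stored.append(record)

pipe = AnalyzeFirstPipeline()
for rec in [{"level": "DEBUG", "msg": "cache hit"},
            {"level": "INFO", "msg": "user login"},
            {"level": "ERROR", "msg": "payment failed"}]:
    pipe.ingest(rec)
# All three records were analyzed; only two were stored; one raised an alert.
```

In the classic order the alert would only fire after the record had been indexed and queried back; here the insight is produced before any storage decision is made.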
And the CTO left, the CEO left, the VP of R&D became the CTO, I became the CEO. We were five people with $200,000 in the bank, and you know that's not a long runway. And we kind of changed attitudes. So first we launched this product, and then we understood that we need to go bottoms up: you can't go to enterprises and try to sell something that is out of the ordinary, or that changes how they're used to working, or just, you know, sell something, (indistinct) five people with under $1,000 in the bank. So we started going bottoms up, with the earlier adopters. And it's still, until today, you know, the more advanced companies, the more advanced teams; this is how you'll find Coralogix on Gartner: the preferred solution for advanced DevOps and platform teams. So they started adopting Coralogix, and then it grew to the larger organizations, and they were actually pushing; there are champions within their organizations. And ever since: until the beginning of 2018 we had raised about $2 million, and our sales were marginal. Today, we have over 1,500 paying accounts, and we've raised almost $100 million more. >> Wow, what a great pivot. That was a great example of kind of catching the right wave here, the cloud wave. You said in terms of customers you had the DevOps kind of (indistinct) initially, and now you said you've expanded out to a lot more traditional enterprises; can you take me through the customer profile? >> Yeah, so I'd say the core is still cloud native and (indistinct) companies. These are the typical ones; we have very tight integration with AWS, all the services, all the integrations required. We know how to read from and write back to the different services and analysis platforms in AWS. Also for Azure and GCP, but mostly AWS. And then we do have quite a few big enterprise accounts; actually, five of the largest 50 companies in the world use Coralogix today.
And it grew from those DevOps and platform evangelists into the level of IT execs and even (indistinct). So today we have our security product that already sells to some of the biggest companies in the world; it's a different profile. And the idea for us is that, you know, once you solve that issue of too much data, too expensive, not proactive enough, too coupled with the storage, you can actually expand from observability, logging and metrics, now into tracing, and then into security, and maybe even to other fields where the cost and the productivity are an issue for many companies. >> So let me ask you this question, then, Ariel, if you don't mind. If a customer has a need for Coralogix, is it because the data fall? Or they just got data kind of sprawled all over the place? Or is it that storage costs are going up on S3? What's some of the signaling that you would see that would be telling you, okay, what's the opportunity to come in and either clean house or fix the mess or whatnot? Take us through what you see. What do you see as the trend? >> Yeah. So the typical customer (indistinct) Coralogix will be someone using one of the legacy solutions and growing very fast. That's the easiest way for us to know. >> What grows fast? The storage, the storage is growing fast? >> The company is growing fast. >> Okay. And you remember, the data grows faster than revenue, and we know that. So if I see a company that grew from, you know, 50 people to 500 in three years, specifically if it's a cloud native or internet company, I know that their data grew not 10X, but 100X. So I know that that company might have started with a legacy solution at, like, you know, $1,000 a month, and they're happy with it. And you know, for $1,000 a month, if you don't have a lot of data, those legacy solutions, you know, they'll do the trick. But now I know that they're going to get asked to pay 50, 60, $70,000 a month. And this is exactly where we kick in.
Because now, when it doesn't fit the economic model, when it doesn't fit the unit economics, it starts damaging the margins of those companies. Because remember, for those internet and cloud companies, these are not the classic costs that you'll see in an enterprise; they're actually damaging your unit economics and the valuation of the business, which is a bigger deal. So now, when I see that type of organization, we come in and say: hey, better coverage, more advanced analytics, easier integration within your organization. We support all the common open source syntaxes and dashboards, you can plug it into your entire environment, and the costs are going to be a quarter of whatever you're paying today. So once they see that, they see, you know, the dev friendliness of the product, the ease of scale, the stability of the product, it makes a lot more sense for them to engage in a PoC. Because at the end of the day, if you don't prove value, you know, you can come with a 90% discount and it doesn't do anything; you have to prove the value to them. So it's a great door opener, but from then on, you know, it's a PoC like any other. >> Cloud is all about the PoC or pilot, as they say. So take me through the product today, and what's next for the product; take us through the vision of the product and the product strategy. >> Yeah, so today the product allows you to send any log data, metric data or security information and analyze it a million ways. We have one of the most extensive alerting mechanisms in the market, automatic anomaly detection, data clustering, and all the, you know, real time pipeline things that help companies make their data smarter and more readable: parsing, enriching, getting external sources to enrich the data, and so on, so forth.
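The unit-economics argument can be made concrete with a toy routing-and-pricing model in the spirit of the TCO optimizer mentioned earlier. The tier names and per-GB prices are invented for illustration only; they are not Coralogix's actual tiers or pricing.

```python
# Hypothetical monthly price per GB for three use-case tiers.
TIERS = {
    "frequent_search": 1.00,  # hot, fully indexed
    "monitoring":      0.30,  # analyzed in stream, lightly stored
    "compliance":      0.05,  # straight to a cheap immutable archive
}

def route(record: dict) -> str:
    """Pick how far down the pipeline a record travels."""
    if record.get("audit") or record.get("pii"):
        return "compliance"
    if record["level"] in ("ERROR", "CRITICAL"):
        return "frequent_search"
    return "monitoring"

def monthly_cost(volume_gb: dict) -> float:
    """volume_gb: tier name -> data volume in GB for the month."""
    return sum(TIERS[tier] * gb for tier, gb in volume_gb.items())

routed_bill = monthly_cost({"frequent_search": 50,
                            "monitoring": 700,
                            "compliance": 250})
flat_bill = monthly_cost({"frequent_search": 1000})
# Routing 1 TB by use case instead of indexing all of it as hot data cuts
# the toy bill from 1000.0 to 272.5, roughly the "quarter of whatever
# you're paying" order of saving described in the conversation.
```

The supermarket analogy maps directly: `route` decides which shelf each record came from, and `monthly_cost` is the weigh-and-pay step at checkout.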
Where we're stepping in now is actually to make the final step of decoupling the analytics from storage, what we call the dataless data platform, in which no data will sit or reside within the Coralogix cloud. Everything will be analyzed in real time, stored in a storage of our customers' choice, and then we'll allow our customers to remotely query that with incredible performance. So that'll bring our customers the first ever true SaaS experience for observability. Think about no quota plans, no retention limits: you send whatever you want, you pay only for what you send, you retain it however long you want to retain it, and you get all the real time insights much, much faster than with any other product that keeps it on hot storage. So that'll be our next step, to really make sure that, you know, we're not reselling cloud storage. Because a lot of the time, when you are dependent on storage, and you know, we're a cloud company like I mentioned, you've got to keep your unit economics. So what do you do? You sell storage to the customer, you add your markup, and then you charge for it. And this is exactly where we don't want to be. We want to sell the intelligence and the insights and the real time analysis that we know how to do, and let the customers enjoy, you know, the wealth of opportunities and choices their cloud providers offer for storage. >> That's a great vision. In a way, the hyperscalers' early days showed that decoupling compute from storage, which I mentioned earlier, was a huge category creation. Here, you're doing it for data. We'll call it hyper data scale, or, like, maybe there's got to be a name for this. What do you see about five years from now? Take us through the trajectory of the next five years, because certainly observability is not going away. I mean, it's data management, monitoring, real time, asynchronous, synchronous, linear, all that stuff's happening. What's the five year vision?
>> Now add security to observability, which is something we started preaching for, because no one can say "I have observability into my environment" when people, you know, come in and out and steal data; that's no observability. But the thing is that, because data grows exponentially, because it grows faster than revenue, what we believe is that in five years there's not going to be a choice: everyone is going to have to analyze the data in real time, extract the insights, and then decide whether to store it in, you know, a long term archive or not, or not store it at all. You still want to get the full coverage and insights. But you know, when you think about observability, unlike many other things, the more data you have, many times, the less observability you get. So think of log data, unlike statistics: if my system, while recording everything, was only generating 10 records a day, I'd have full, incredible observability; I'd know everything that it has done. What happens at scale is that you pay more, you get less observability, and more uncertainty. So I think that, you know, with time, we'll start seeing more and more real time streaming analytics, and a lot less storage based and index based solutions. >> You know, Ariel, I've always been saying to Dave Vellante on theCUBE, many times, that insights need to be the norm, not the exception, and that ultimately there would be a database of insights. I mean, at the end of the day, the insights become more plentiful. You have the ability to actually store those insights, refresh them, challenge them, update the models, verify them, and either sunset them or add to them. And, you know, when you start getting more data into your organization, AI and machine learning prove that pattern recognition works. So why not grab those insights? >> And use them as your baseline to know what's important, and not have to start by putting everything in a bucket.
So we're going to have new categories, like insight-first software (indistinct). >> Go from insights backwards; that'll be my tagline if I have to, but I'm a terrible marketer (indistinct). >> Yeah, well, I mean, everyone's like cloud first, data first, data driven, insight driven; what you're basically doing is moving into the world of insight driven analytics, really, as a way to kind of bring that forward. So congratulations, great story. I love the pivot, love how you guys entrepreneurially put it all together, had the problem as your own problem, and brought it out to the rest of the world. And certainly the DevOps and cloud scale wave is just getting bigger and bigger and taking over the enterprise. So great stuff. Real quick while you're here, give a quick plug for the company. What you guys are up to, stats, vitals, hiring, what's new: give the commercial. >> Yeah, so like I mentioned, over 1,500 paying customers, growing incredibly in the past 24 months. We're hiring, almost doubling the company in the next few months, with offices in Israel, the East and West US, the UK and Mumbai. We're looking for talented engineers to join the journey and build the next generation of dataless data platforms. >> Ariel Assaraf, CEO of Coralogix. Great to have you on theCUBE, and thank you for participating in the AI track for our next big thing in the Startup Showcase. Thanks for coming on. >> Thank you very much, John, really enjoyed it. >> Okay, I'm John Furrier with theCUBE. Thank you for watching the AWS Startup Showcase presented by theCUBE. (calm music)
Toni Manzano, Aizon | AWS Startup Showcase | The Next Big Thing in AI, Security, & Life Sciences
(up-tempo music) >> Welcome to today's session of theCUBE's presentation of the AWS Startup Showcase: The Next Big Thing in AI, Security, and Life Sciences. Today we'll be speaking with Aizon as part of our life sciences track, and I'm pleased to welcome the co-founder and chief science officer of Aizon, Toni Manzano, who will be discussing how artificial intelligence is driving key processes in pharma manufacturing. Welcome to the show. Thanks so much for being with us today. >> Thank you, Natalie, to you and to your introduction. >> Yeah. Well, as you know, Industry 4.0 is revolutionizing manufacturing across many industries. Let's talk about how it's impacting biotech and pharma, as well as Aizon's contributions to this revolution. >> Well, actually, pharma 4.0 is introducing a totally new concept of how to manage processes. Nowadays the industry considers that everything is basically static, that nothing changes, and this is because they don't have the ability to manage the complexity and the variability around the biotech and the driving factors in processes. Nowadays, with technologies like cloud computing, IoT, and AI, we can get all those data. We can understand the data, and we can interact in real time with processes. This is how things are going nowadays. >> Fascinating. Well, as you know, COVID-19 really threw a wrench into a lot of activity in the world, our economies, and also people's way of life. How did it impact manufacturing in terms of scale up and scale out? And what are your observations from this year? >> You know, the main problem when you want to do a scale-up process is not only the equipment; it is also the knowledge that you have around your process. When you're doing a vaccine at a smaller scale in your lab, the parameters you're controlling in your lab have to be escalated when you go from five liters to 2,500 liters. How do you manage this difference of scale?
Well, AI is helping nowadays to detect and identify the most relevant factors involved in the process: the critical relationships between the variables, and the final control of the full process following continued process verification. This is how we can help nowadays, using AI and cloud technologies to accelerate and scale up vaccines like the COVID-19 one. >> And how do you anticipate pharma manufacturing will change in a post-COVID world? >> This is a very good question. Nowadays we have some assumptions that we are still trying to overcome with human effort. With the new situation, with the pandemic that we are living in, in the next evolution humans will take care of the good practices and the new knowledge that we have to generate. So AI will manage the repetitive tasks, all the routine activity that we are doing now. That will be done by AI, and humans will never again do repetitive tasks in this way. They will manage complex problems and supervise AI output. >> So you're driving more efficiencies in the manufacturing process with AI. You recently presented at the United Nations Industrial Development Organization about the challenges brought by COVID-19 and how AI is helping with the equitable distribution of vaccines and therapies. What are some of the ways that companies like Aizon can now help with that kind of response? >> Very good point. Imagine you're a big company, a top pharma company, and you have the intellectual property of a COVID-19 vaccine based on the mRNA principle, and you are going to, or you would like to, expand this vaccination, not only to deliver the vaccination but also to manufacture the vaccine elsewhere. What if you try to manufacture these vaccines in South Africa, or in Asia, in India? So the secret is to transport not only the raw material, not only the equipment, but also the knowledge.
How to operate and how to control the full process, from the initial phase till the packaging and the vial filling. So this is how we are contributing: AI is packaging all this knowledge in AI models. This is the secret. >> Interesting. Well, what are the benefits for pharma manufacturers when considering the implementation of AI and cloud technologies? And how can they progress in their digital transformation by utilizing them? >> One of the benefits is that you are able to manage the variability, the real complexity of the world. You cannot create processes to manufacture drugs just assuming that the raw material you're using never changes. You cannot assume that all the equipment works in the same way. You cannot assume that your recipe will work the same way in Brazil as in Singapore. So the complexity and the variability must be understood as part of the process. This is one of the benefits. The second benefit is that when you use cloud technologies, you don't have to worry much about compute licenses, software updates, antivirus, or scaling up your computing hardware. Everything is done in the cloud. So those are two main benefits. There are more, but these are maybe the two main ones. >> Yeah. Well, that's really interesting how you highlight that there's a big shift in how you handle this in different parts of the world. So, what role do compliance and regulation play here? And of course we see differences in the way that's handled around the world as well. >> Well, I think this is the first time in the pharma experience, let me say, that the human race has a very strong commitment from the regulatory bodies, you know, to push forward using these kinds of technologies. Actually, for example, the FDA, they are using cloud to manage their own systems. So why not use it in pharma? >> Yeah. Well, how do AWS and Aizon help manufacturers address these kinds of considerations?
>> Well, we have a very great partner. AWS, for us, simplifies our life a lot. We are a, let me say, different startup company, Aizon, because we have a lot of PhDs in the company. We are not the classical geeky company with guys developing all day. We have a lot of science inside the company, and this is our value. So everything that is provided by Amazon, why would we aim to recreate it again? We can rely on SageMaker, we can rely on Cognito, we can rely on Lambda, we can rely on S3 to have encrypted data with automatic backup. So AWS simplifies our life a lot, and we can dedicate all our knowledge and all our efforts to the things that we know: pharma compliance. >> And how do you anticipate that pharma manufacturing will change further in the 2021 year? >> Well, we are participating not only with business cases. We also participate with the community, because we are leading an international project in order to anticipate these kinds of new breakthroughs. So we are working with, let me say, initiatives in the - association; we are collaborating in two different projects to apply AI in computer certification in order to create a more robust process for the mRNA vaccine. We are collaborating with the - university creating the standards for AI application in GXP. We are collaborating on different initiatives with the pharma community in order to create the foundation to move forward during this year. >> And how do you see the competitive landscape? What do you think Aizon provides compared to its competitors? >> Well, good question. Probably you can find a lot of AI services, platforms, and programs that can run in the industrial environment. But I think it will be very difficult to find a full GXP-compliant platform working on cloud with AI, where the AI is already qualified. I think that no one is doing that nowadays.
And one of the demonstrations of that is that we are also writing scientific papers describing how to do it. So you will see that Aizon is the only company doing that nowadays. >> Yeah. And how do you anticipate that pharma manufacturing will change, or, excuse me, how do you see that it is providing a defining contribution to the future of cloud scale? >> Well, there are no limits in cloud. As soon as you accept that everything is varied and complex, you will need computing power. The only way to manage this complexity is running a lot of computation, and cloud is the only system, let me say, that allows that. Well, the thing is that, you know, pharma will also have to be compliant with the cloud providers. And for that, we created a new layer around the platform that we call qualification as a service. We are creating this layer in order to continuously qualify any kind of cloud platform that wants to work in this environment. This is how we are doing that. >> And in what areas are you looking to improve? How are you constantly trying to develop the product and bring it to the next level? >> We always have the patient in mind, you know. Aizon is a patient-centric company. Everything that we do is to improve processes in order, at the end, to deliver the right medicine at the right time to the right patient. This is how we are focusing all our efforts, in order to bring this opportunity to everyone around the world. For this reason, for example, we want to work with this project where we are delivering value to create vaccines for COVID-19, for example, everywhere, just packaging the knowledge using AI. This is how we envision it and how we are acting. >> Yeah. Well, you mentioned the importance of science and compliance. What do you think are the key themes that are the foundation of your company? >> The first thing is that we enjoy the task that we are doing. This is the first thing.
The other thing is that we are learning every day with our customers, on real topics. So we are serving the patients. And everything that we do is enjoying science, enjoying how to achieve new breakthroughs in order to improve life in the factory, knowing that at the end it will be delivered to the final patient. So: enjoying making science and creating breakthroughs; being innovative. >> Right, and do you think, in the sense that we were lucky, in light of COVID, that we've already had these kinds of technologies moving in this direction for some time, that we were somehow able to mitigate the tragedy and the disaster of this situation because of these technologies? >> Sure. We are lucky because of this technology, because we are breaking the distance, the physical distance, and we are bringing together people, which was so difficult to do before, in all the different aspects. So nowadays we are able to be closer to the patients, to the people, to the customer, thanks to these technologies. Yes. >> So now that we're moving out of, I mean, hopefully out of, this kind of COVID reality, what's next for Aizon? Do you see more collaboration? You know, what's next for the company? >> What's next for the company is to deliver AI models that are able to be encapsulated in the drug manufacturing, for vaccines, for example. And that will be delivered with the full process: not only materials, equipment, personnel, and recipes; the AI models will also go together as part of the recipe. >> Right, well, we'd love to hear more about your partnership with AWS. How did you get involved with them? And why them, and not another partner? >> Well, let me tell you a secret. Seven years ago, we started with another top cloud provider, but we saw very soon that this other cloud provider was not well aligned with the GXP requirements. For this reason, we met with AWS. We went together to some seminars and conferences with top pharma communities and pharma organizations.
We went there to make speeches and talks. We felt that we fit very well together, because AWS has a GXP white paper describing very well how to rely on AWS components, one by one. So this, for us, is a very good credential when we go to our customers. Do you know that when customers are acquiring and establishing the Aizon platform in their systems, they are auditing us? They are auditing Aizon. Well, we also have to audit AWS, because this is the normal chain in the pharma supply chain. That means that we need this documentation; we need all this transparency between AWS and our partners. This is the main reason. >> Well, this has been a really fascinating conversation, to hear how AI and cloud are revolutionizing pharma manufacturing at such a critical time for society all over the world. Really appreciate your insights, Toni Manzano, the chief science officer and co-founder of Aizon. I'm your host, Natalie Erlich, for theCUBE's presentation of the AWS Startup Showcase. Thanks very much for watching. (soft upbeat music)
SUMMARY :
of the AWS startup showcase. and to your introduction. contributions to this revolution. and the variability around the biotech in a lot of activity in the world, the knowledge that you the next evolution that we are doing in the manufacturing process with AI. So the secret is to transport, considering the implementation You cannot consider that all the equipment And of course we see differences from the 30 bodies, you and Aizon help manufacturers to the things that we in order to create the is that we are also to the future of cloud-scale? So cloud is the only system, at the right time to the right patient. the importance of science and compliance. the task that we are doing. and we are putting in the drug manufacturing love to hear more about This is the main reason. of the AWS startup showcase.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Toni Monzano | PERSON | 0.99+ |
Natalie Erlich | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Natalie | PERSON | 0.99+ |
Aizon | ORGANIZATION | 0.99+ |
Singapore | LOCATION | 0.99+ |
Brazil | LOCATION | 0.99+ |
South Africa | LOCATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Asia | LOCATION | 0.99+ |
COVID-19 | OTHER | 0.99+ |
one | QUANTITY | 0.99+ |
2,500 liters | QUANTITY | 0.99+ |
five liters | QUANTITY | 0.99+ |
2021 year | DATE | 0.99+ |
30 bodies | QUANTITY | 0.99+ |
Today | DATE | 0.99+ |
second benefit | QUANTITY | 0.99+ |
India | LOCATION | 0.99+ |
Toni Manzano | PERSON | 0.99+ |
One | QUANTITY | 0.99+ |
two main benefits | QUANTITY | 0.99+ |
pandemic | EVENT | 0.98+ |
today | DATE | 0.98+ |
two different projects | QUANTITY | 0.98+ |
COVID | OTHER | 0.97+ |
Seven years ago | DATE | 0.97+ |
two main ones | QUANTITY | 0.97+ |
this year | DATE | 0.96+ |
Landon | ORGANIZATION | 0.95+ |
first thing | QUANTITY | 0.92+ |
FDA | ORGANIZATION | 0.89+ |
MRA | ORGANIZATION | 0.88+ |
Cube | ORGANIZATION | 0.85+ |
United nations | ORGANIZATION | 0.82+ |
first time | QUANTITY | 0.8+ |
Sage Maker | TITLE | 0.77+ |
Startup Showcase | EVENT | 0.73+ |
GXP | ORGANIZATION | 0.64+ |
Esri | ORGANIZATION | 0.64+ |
GXP | TITLE | 0.6+ |
Cogito | ORGANIZATION | 0.6+ |
Aizon | TITLE | 0.57+ |
benefits | QUANTITY | 0.36+ |
GXP | COMMERCIAL_ITEM | 0.36+ |
Gil Geron, Orca Security | AWS Startup Showcase: The Next Big Thing in AI, Security, & Life Sciences
(upbeat electronic music) >> Hello, everyone. Welcome to theCUBE's presentation of the AWS Startup Showcase: The Next Big Thing in AI, Security, and Life Sciences. In this segment, we feature Orca Security as a notable trendsetter within, of course, the security track. I'm your host, Dave Vellante. And today we're joined by Gil Geron, who's the co-founder and Chief Product Officer at Orca Security. And we're going to discuss how to eliminate cloud security blind spots. Orca has a really novel approach to cybersecurity problems, without using agents. So welcome, Gil, to today's session. Thanks for coming on. >> Thank you for having me. >> You're very welcome. So Gil, you're a disruptor in security, and cloud security specifically, and you've created an agentless way of securing cloud assets. You call this side scanning. We're going to get into that, and probe a little bit into the how, and why agentless is the future of cloud security. But I want to start at the beginning. What were the main gaps that you saw in cloud security that spawned Orca Security? >> I think that the main gaps that we saw when we started Orca were pretty similar in nature to gaps that we saw in legacy infrastructures, in more traditional data centers. But when you look at the cloud, when you look at the nature of the cloud, the ephemeral nature, the technical possibilities and disruptive way of working with a data center, we saw that the usage of traditional approaches like agents in these environments is lacking: it's actually not working as well as it did in the legacy world, and it's providing less value.
And in addition, we saw that the friction between the security team and the IT, the engineering, the DevOps teams in the cloud is much worse than it was, and we wanted to find a way, we wanted them to work together to bridge that gap and to actually allow them to leverage the cloud technology as it was intended, to gain superior security than what was possible in the on-prem world. >> Excellent, let's talk a little bit more about agentless. I mean, maybe we could talk a little bit about why agentless is so compelling. I mean, it's kind of obvious: it's less intrusive, you've got fewer processes to manage. But how did you create your agentless approach to cloud security? >> Yes, so I think the basis of it all is around our mission and what we try to provide. We want to provide seamless security, because we believe it will allow the business to grow faster. It will allow the business to adopt technology faster, and to be more dynamic and achieve goals faster. And so we've looked at what are the problems, what are the issues that slow you down? And one of them, of course, is the fact that you need to install agents, that they cause performance impact, that they are technically segregated from one another, meaning you need to install multiple agents and they need to somehow not interfere with one another. And we saw this friction cause organizations to slow down their move to the cloud, or slow down their adoption of technology. In the cloud, it's not only having servers, right? You have containers, you have managed services, you have so many different options and opportunities. And so you need a different approach to how to secure that.
And so when we understood that this is the challenge, we decided to attack it using three pillars: one, trying to provide complete security and complete coverage with no friction; two, trying to provide comprehensive security, which is taking a holistic approach, a platform approach, and combining the data in order to provide you visibility into all of your security assets; and last but not least, of course, context awareness, meaning being able to understand and find the 1% that matters in the environment, so you can actually improve your security posture and improve your security overall. And to do so, you had to have a technique that does not involve agents. And so what we've done, we've found a way that utilizes the cloud architecture in order to scan the cloud itself. Basically, when you integrate Orca, you are able within minutes to understand, to read, and to view all of the risks. We are leveraging a technique that we are calling side scanning, which uses the API. It uses the infrastructure of the cloud itself to read the block storage device of every compute instance in the environment, and then we can deduce the actual risk of every asset. >> So that's a clever name, side scanning. Tell us a little bit more about that. Maybe you could double-click on how it works. You've mentioned it's looking into block storage and leveraging the API, which is actually quite innovative. But help us understand in more detail how it works and why it's better than traditional tools that we might find in this space. >> Yes, so the way that it works is that by reading the block storage device, we are able to actually deduce what is running on your computer, meaning what operating system, packages, and applications are running. And then by combining the context, meaning understanding what kind of services you have connected to the internet, what is the attack surface for these services? What will be the business impact?
Will there be any access to PII, or any access to the crown jewels of the organization? You can not only understand the risks; you can also understand the impact, and then understand what should be our focus in terms of securing the environment. A differentiating factor is the fact that we are doing it using the infrastructure itself: we are not installing any agents, and we are not running any packets through your network. You do not need to change anything in your architecture, or in the design of how you use the cloud, in order to utilize Orca. Orca works in a pure SaaS way. And so it means that there is no impact, not on cost and not on the performance of your environment, while using Orca. And so it reduces any friction that might happen with other parts of the organization when you improve your security in the cloud. >> Yeah, and no process management intrusion. Now, I presume, Gil, that you eat your own cooking, meaning you're using your own product. First of all, is that true? And if so, how has your use of Orca as Chief Product Officer helped you scale Orca as a company? >> So it's a great question. I think that something that we understood early on is that there is quite a significant difference between the way you architect your security in cloud and the way that things reach production, meaning there's a gap, like in everything in life, between how you imagine things will be and how they are in real life, in production. And so, even though we have amazing customers that are extremely proficient in security and have thought of a lot of ways of how to secure the environment, we, of course, are trying to secure our environment as much as possible. We are using Orca because we understand that no one is perfect. We are not perfect. My engineers might make mistakes, like every organization's. And so we are using Orca because we want to have complete coverage.
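The mechanics Gil describes, reading an offline copy of a workload's block storage device and deducing from it what software is installed, without running anything inside the workload itself, can be sketched roughly as follows. This is an illustrative sketch only, not Orca's implementation; the file layout, helper names, and advisory list here are hypothetical.

```python
# Illustrative sketch of "side scanning": inspect an offline filesystem
# snapshot (e.g., a mounted copy of a block storage volume), deduce the
# installed packages, and flag versions that appear in a toy advisory
# list. NOT Orca's actual logic; advisories and paths are hypothetical.
from pathlib import Path

# Hypothetical advisory data: package name -> known-vulnerable versions.
ADVISORIES = {"openssl": {"1.0.1f"}, "log4j": {"2.14.1"}}

def parse_dpkg_status(status_text: str) -> dict:
    """Parse a Debian-style dpkg status file into {package: version}."""
    packages, name = {}, None
    for line in status_text.splitlines():
        if line.startswith("Package: "):
            name = line.split(": ", 1)[1]
        elif line.startswith("Version: ") and name:
            packages[name] = line.split(": ", 1)[1]
            name = None
    return packages

def scan_snapshot(mount_point: str) -> list:
    """Side-scan a mounted snapshot: read package metadata straight off
    the disk image (no agent runs in the workload) and return the
    (package, version) pairs that match a known advisory."""
    status_file = Path(mount_point) / "var/lib/dpkg/status"
    if not status_file.exists():
        return []
    installed = parse_dpkg_status(status_file.read_text())
    return [(pkg, ver) for pkg, ver in installed.items()
            if ver in ADVISORIES.get(pkg, set())]
```

In a real pipeline the mount point would come from snapshotting the target volume through the cloud provider's API and attaching the copy to a scanner instance, which is what keeps the cost and performance impact on the workload at zero.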
We want to understand if we are making any mistakes. And sometimes the gap between the architecture and a hole in the security, the gap that you have in your security, could take years to appear. And you need a tool that will constantly monitor your environment. And so that's why we have been using Orca from day one, not to find bugs or to do QA; we're doing it because we need security for our cloud environment that will provide these values. And so we've also passed compliance audits like SOC 2 and ISO using Orca, and it expedited and allowed us to do these processes extremely fast, because of having all of these guardrails and metrics. >> Yeah, so, okay. So you recognized that you potentially had, and did have, the same problem as your customers have. Has it helped you scale as a company, obviously, but how has it helped you scale as a company? >> So it helped us scale as a company by increasing the trust, the level of trust, customers have in Orca. It allowed us to adopt technology faster, meaning we need much less diligence or exploration of how to use technology, because we have these guardrails. So we can use the richness of the technology that we have in the cloud without the need to stop, to install agents, to try to re-architect the way that we are using the technology. We simply use the technology that the cloud offers, as it is. And so it allows you rapid scalability. >> It allows you to move at the speed of cloud. Now, I'm going to ask you, as a co-founder, you've got to wear many hats: first of all, co-founder, and the leadership component there, and also Chief Product Officer. You've got to go out and get early customers, but even more importantly, you have to keep those customers. So maybe you can describe how customers have been using Orca. What was the aha moment that you've seen customers react to when you showcase the product?
And then, how have you been able to keep them as loyal partners? >> So I think that we are very fortunate; we are blessed with our customers. Many of our customers are vocal about what they like about Orca. And something that comes up a lot of times is that this is the solution they have been waiting for. I can't express how many times I get on a call and a customer says, "I must say, I must share: this is the solution I've been looking for." And I think that, in that respect, Orca is creating a new standard of what is expected from a security solution, because we are transforming security in the company from an inhibitor to an enabler. You can use the technology, you can use new tools, you can use the cloud as it was intended. And so (coughs) one of these cases is a customer, a big company, that has a lot of data, and they were all super scared about using S3 buckets. We all heard about these incidents of S3 buckets being breached, of people connecting to an S3 bucket and downloading the data. So they had a policy saying, "S3 buckets should not be used. We do not allow any use of S3 buckets." And obviously you do need to use S3 buckets; it's a powerful technology. And so the engineering team in that customer environment simply installed a VM, installed an FTP server, with a very easy password on that FTP server. And obviously, two years later, someone also put all of the customer databases on that FTP server, open to the internet, open to everyone. And so I think it was a hard moment, for him and for us as well. He planned that no data would be leaked, but actually what happened was way worse. The data was open to the world, in a technology that has existed for a very long time and is probably being scanned by attackers all the time. But after that, he not only allowed them to use S3 buckets, because he knew that now he could monitor.
Now, you can understand that they are using the technology as intended, now that they are using it securely. It's not open to everyone; it's open in the right way. And there was no PII in that S3 bucket. And so, the way he described it is that now, when he comes to a meeting about things that need to be improved, people are waiting for this meeting, because he actually knows more about the environment than they do. And I see it really so many times, where a simple mistake, or something that looks benign, when you look at the environment in a holistic way, when you look at the context, you understand that there is a huge gap that could be the breach. And another cool example was a case where a customer allowed access from a third-party service that everyone trusts to the crown jewels of the environment. And he did it in a very traditional way: he allowed a certain IP to be open to that environment. So overall it sounds like the correct way to go; you allow only a specific IP to access the environment. But what he failed to notice is that everyone in the world can register for free for this third-party service and access the environment from this IP. And so, even though it looks like you have access from a trusted third-party service, when it's a SaaS service it can actually mean that everyone can use it to access the environment. And using Orca, you saw immediately the access, you saw immediately the risk. And I see it time after time: people are simply using Orca to monitor, to guardrail, to make sure that the environment stays safe over time, and to communicate better in the organization, to explain the risk in a very easy way.
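The misconfiguration stories above boil down to one question a posture tool keeps asking: does this resource effectively grant access to everyone? A toy version of that check, for an S3-style bucket policy, might look like the sketch below. This is a hypothetical illustration, not Orca's logic; real analyzers also evaluate ACLs, account-level public access blocks, and the meaning of each condition.

```python
# Toy misconfiguration check in the spirit of the stories above: decide
# whether a bucket-policy-style document grants anonymous ("*") access.
# Hypothetical sketch only; real posture tools inspect far more context.

def allows_anonymous(policy: dict) -> bool:
    """Return True if any Allow statement applies to every principal
    and carries no Condition narrowing who can use it."""
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        is_everyone = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*")
        if is_everyone and not stmt.get("Condition"):
            return True
    return False
```

Note how the second story in the interview shows the limit of a check like this: an IP-based Condition looks restrictive, yet if anyone in the world can sit behind that IP (a self-service SaaS, for instance), the resource is still effectively public, which is exactly the kind of contextual judgment the speaker says the tooling has to add.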
And I would say the statistics show that within a few weeks, more than 85% of the different alerts and risks are fixed, and I think it goes to show how effective it is in improving your posture, because people are taking action. >> Those are two great examples. And of course, we have often said that the shared responsibility model is often misunderstood, and those two examples underscore it: thinking, "oh, I hear all this, I see all this press about S3," but it's up to the customer to secure the endpoint, the components, et cetera; configure it properly, is what I'm saying. So what an unintended consequence. But Orca plays a role in helping the customer with their portion of that shared responsibility; obviously AWS is taking care of its side. Now, as part of this program, we ask a little bit of a challenging question to everybody, because look, as a startup, you want to do well, you want to grow a company, you want to have your employees grow and help your customers, and that's great, and grow revenues, et cetera. But we feel like there's more. And so we're going to ask you, because the theme here is all about cloud scale: what is your defining contribution to the future of cloud at scale, Gil? >> So I think that cloud has brought a revolution to the data center, okay? The way that you are building services, the way that you are allowing technology to be more adaptive, dynamic, ephemeral, accurate; and you see that it is being adopted across all vendors, all types of industries, across the world. I think that Orca is the first company that allows you to use this technology to secure your infrastructure in a way that was not possible in the on-prem world, meaning that when you're using the cloud technology, and you're using technologies like Orca, you're actually gaining superior security to what was possible in the pre-cloud world.
And I think that, in that respect, Orca is going hand in hand with the evolution, and actually revolutionizes the way that you expect to consume security, the way that you expect to get value from security solutions, across the world. >> Thank you for that, Gil. And so we're at the end of our time, but we'll give you a chance for a final wrap-up. Bring us home with your summary, please. >> So I think that Orca is building the cloud security solution that actually works, with its innovative agentless approach to cybersecurity: to gain complete coverage, a comprehensive solution, and to understand the complete context of the 1% that matters in your security challenges across your data centers in the cloud. We are bridging the gap between the security teams and the business's need to grow, and doing so at the pace of the cloud. I think the approach of being able to install a security solution within minutes and get a complete understanding of your risk goes hand in hand with the way you expect to adopt cloud technology. >> That's great, Gil. Thanks so much for coming on. You guys are doing awesome work. Really appreciate you participating in the program. >> Thank you very much. >> And thank you for watching this AWS Startup Showcase. We're covering the next big thing in AI, Security, and Life Sciences on theCUBE. Keep it right there for more great content. (upbeat music)
Rohan D'Souza, Olive | AWS Startup Showcase | The Next Big Thing in AI, Security, & Life Sciences.
(upbeat music) (music fades) >> Welcome to today's session of theCUBE's presentation of the AWS Startup Showcase, I'm your host Natalie Erlich. Today, we're going to feature Olive, in the life sciences track. And of course, this is part of the future of AI, security, and life sciences. Here we're joined by our very special guest Rohan D'Souza, the Chief Product Officer of Olive. Thank you very much for being with us. Of course, we're going to talk today about building the internet of healthcare. I do appreciate you joining the show. >> Thanks, Natalie. My pleasure to be here, I'm excited. >> Yeah, likewise. Well, tell us about AI and how it's revolutionizing health systems across America. >> Yeah, I mean, we're clearly living at this time of a lot of hype with AI, and there's a tremendous amount of excitement. Unfortunately for us, or, you know, depending on if you're an optimist or a pessimist, we had to wait for a global pandemic for people to realize that technology is here to really come to the aid of everybody in healthcare, not just on the consumer side, but on the industry side, and on the enterprise side of delivering better care. And it's truly an exciting time, but there's a lot of buzz, and we play an important role in trying to define that a little bit better, because you can't go too far today without hearing the term AI being used or misused in healthcare.
And we have looked to solve the problem of technology not talking to each other by using humans. And so we set out to really go into the trenches of healthcare and bring about core automation technology. And you might be sitting there wondering, well, why are we talking about automation under the umbrella of AI? And that's because we are challenging the very status quo of siloed automation, and we're building what we say is the internet of healthcare. And more importantly, what we've done is we've brought a human, very empathetic approach to automation, and we're leveraging technology by saying when one Olive learns, all Olives learn, so that we take advantage of the network effect of a single Olive worker in the trenches of healthcare, sharing that knowledge and wisdom, both with her human counterparts, but also with her AI worker counterparts that are showing up to work every single day in some of the most complex health systems in this country. >> Right. Well, when you think about AI and, you know, computer technology, you don't exactly think of, you know, humanizing kind of potential. So how are you seeking to make AI really humanistic, and empathetic, potentially? >> Well, most importantly, the way we're starting with that is that we are treating Olive just like we would any single human counterpart. We don't want to think of this as just purely a technology player. Most importantly, healthcare is deeply rooted in this idea of investing in outcomes, and not necessarily investing in core technology, right? So we have learned that from the early days of us doing some really robust integrated AI-based solutions, but we've humanized it, right? Take, for example, we treat Olive just like any other human worker: she shows up to work, she's onboarded, she has an obligation to her customers and to her human worker counterparts. And we care very deeply about the cost of the false positive that exists in healthcare, right?
And we do this in various different ways. Most importantly, we do it in an extremely transparent and interpretable way. By transparent I mean Olive provides deep insights back to her human counterparts in the form of reporting and status reports, and we even have a term internally that we call a sick day. So when Olive calls in sick, we don't just tell our customers Olive's not working today, we tell our customers that Olive is taking a sick day, just like a human worker might need to stay home and recover. In our case, we just happened to have to rewire a certain portal integration because a portal went through a massive change, and Olive has to take a sick day in order to make that fix, right? And this is, you know, just helping our customers understand, or feel like they can achieve success with, AI-based deployments, and not sort of this robot hanging over them where we're waiting for Skynet to come into place, and truly humanizing the aspects of AI in healthcare. >> Right. Well that's really interesting. How would you describe Olive's personality? I mean, could you attribute a personality? >> Yeah, she's unbiased, data-driven, extremely transparent in her approach, she's empathetic. There are certain days where she's direct, and there are certain ways where she could be quirky in the way she shares stuff. Most importantly, she's incredibly knowledgeable, and we really want to bring that knowledge that she has gained over the years of working in the trenches of healthcare to her customers. >> That sounds really fascinating, and I love hearing about the human side of Olive. Can you tell us about how this AI, though, is actually improving efficiencies in healthcare systems right now? >> Yeah, not too many people know that about a third of every single US healthcare dollar is spent on the administrative burden of delivering care. It's really, really unfortunate.
In the capitalistic world of, just, our system of healthcare in the United States, there is a lot of tail wagging the dog that ends up happening. Most importantly, I don't know the last time you went through a process where you had to go and get an MRI or a CT scan, and your provider told you that we first have to wait for the insurance company to give us permission to perform this particular task. And when you think about that, one, there's, you know, the tail-wagging-the-dog scenario, but two, there's the administrative burden to actually seek the approval for that test that your provider is telling you that you need to perform. Right? And what we've done is, as humans, or as sort of systems, we have just put humans in the supply chain of connecting the left side to the right side. So what we're doing is we're taking advantage of massive distributed cloud computing platforms, I mean, we're fully built on the AWS stack, we take advantage of things that we can very quickly stand up, and spin up. And we're leveraging core capabilities in our computer vision, our natural language processing, to do a lot of the tasks that, unfortunately, we have relegated humans to do, and our goal is, can we allow humans to function at the top of their license? Irrespective of what the license is, right? It could be a provider, it could be somebody working in the trenches of revenue cycle management, or it could be somebody in a call center talking to a very anxious patient that just learned that he or she might need to take a test in order to rule out something catastrophic, like a very adverse diagnosis. >> Yeah, really fascinating. I mean, do you think that this is just like the tip of the iceberg? I mean, how much more potential does AI have for healthcare? >> Yeah, I think we're very much in the early, early, early days of AI being applied in a production, practical sense.
You know, AI has been talked about for many, many, many years in the trenches of healthcare. It has found its place very much in challenging status quos in research, but it has struggled to find its way in terms of just the practicality of the application of AI. And that's partly because, you know, going back to the point that I raised earlier, the cost of the false positive in healthcare is really high. You know, it can't just be a, you know, I bought a pair of shoes online, and it recommended that I buy a pair of socks, and I happened to get the socks and I returned them because I realized that they're really ugly and hideous and I don't want them. In healthcare, you can't do that. Right? In healthcare you can't tell a patient or somebody else, oops, I really screwed up, I should not have told you that. So what that's meant for us, in the trenches of delivery of AI-based applications, is we've been through a cycle of continuous pilots and proofs of concept. Now, though, with AI starting to take center stage, where a lot of what has been hardened in the research world can be applied toward the practicality, to avoid the burnout and the sheer cost that the system is under, we're starting to see this real upward tick of people implementing AI-based solutions, whether it's for decision-making, whether it's for administrative tasks, drug discovery. It's just an amazing, amazing time to be at the intersection of practical application of AI and really, really good healthcare delivery for all of us. >> Yeah, I mean, that's really, really fascinating, especially your point on practicality. Now how do you foresee AI, you know, being able to be more commercial in its appeal? >> I think you have to have a couple of key wins under your belt, that's number one. Number two, you need the standard sort of outcomes-based publications that are required.
Two, I think we need real champions on the inside of systems to support the narrative that we as vendors are pushing heavily, on the AI-driven world or the AI-approachable world, and we're starting to see that right now. You know, it took a really, really long time for providers, first here in the United States, but now internationally, in this adoption and move away from paper-based records to electronic medical records. You know, you still hear a lot of pain from people saying oh my God, I used an EMR, but try to take the EMR away from them for a day or two, and you'll very quickly realize that life without an EMR is extremely hard right now. AI is starting to get to that point where, for us, we always say that Olive needs to pass the Turing test. Right? So when you clearly get this sort of feeling that I can trust my AI counterpart, my AI worker, to go and perform these tasks, because I realize that, you know, as long as it's unbiased, as long as it's data-driven, as long as it's interpretable, and something that I can understand, I'm willing to try this out on a routine basis. But we really, really need those champions on the internal side to promote the use of this safe application. >> Yeah. Well, just another thought here is, you know, looking at your website, you really focus on some of the broken systems in healthcare, and how Olive is uniquely prepared to shine the light on that, where others aren't. Can you just give us an insight into that?
But going back to what I said, 30% of what happens today in healthcare is on the administrative side. And so there's what we call, really, sort of the dark side of healthcare, where it's not the most exciting place to do true innovation, because you're controlled very much by some big players in the house, and that's why we provide sort of this insight in saying we can shine a light on a place that has typically been very dark in healthcare. It's around these mundane aspects of traditional, operational, and financial performance that don't get a lot of love from the tech community. >> Well, thank you Rohan for this fascinating conversation on how AI is revolutionizing health systems across the country, and also the unique role that Olive is now playing in driving those efficiencies that we really need. Really looking forward to our next conversation with you. And that was Rohan D'Souza, the Chief Product Officer of Olive, and I'm Natalie Erlich, your host for the AWS Startup Showcase, on theCUBE. Thank you very much for joining us, and we look forward to you joining us in the next session. (gentle music)
Zach Booth, Explorium | AWS Startup Showcase | The Next Big Thing in AI, Security, & Life Sciences.
(gentle upbeat music) >> Everyone welcome to the AWS Startup Showcase presented by theCUBE. I'm John Furrier, host of theCUBE. We are here talking about the next big thing in cloud featuring Explorium. For the tracks, we've got AI, cybersecurity, and life sciences. Obviously AI is hot, machine learning powering that. Today we're joined by Zach Booth, director of global partnerships and channels at Explorium. Zach, thank you for joining me today remotely. Soon we'll be in person, but thanks for coming on. We're going to talk about rethinking external data. Thanks for coming on theCUBE. >> Absolutely, thanks so much for having us, John. >> So you guys are a hot startup. Congratulations, we just wrote about you on SiliconANGLE, you have $75 million of fresh funding. So you're part of the Amazon partner network and growing like crazy. You guys have a unique value proposition looking at external data and having a platform for advanced analytics and machine learning. Can you take a minute to explain what you guys do? What is this platform? What's the value proposition and why do you exist? >> Bottom line, we're bringing context to decision-making. The premise of Explorium, and this is consistent with the framework of advanced analytics, is that we're helping customers to reach better, more relevant external data to feed into their predictive and analytical models. It's quite a challenge to actually integrate and effectively leverage data that's coming from beyond your organization's walls. It's manual, it's tedious, it's extremely time consuming, and that's a problem. It's really a problem that Explorium was built to solve. And our philosophy is it shouldn't take so long. It shouldn't be such an arduous process, but it is. So we built a company, a technology, that's capable, for any given analytical process, of connecting a customer to relevant sources that are kind of beyond their organization's walls.
And this really impacts decision-making by bringing variety and context into their analytical processes. >> You know, one of the things I see a lot in my interviews with theCUBE and talking to people in the industry is that everyone talks a big game about having some machine learning and AI. They're like, "Okay, I got all this cool stuff". But at the end of the day, people are still using spreadsheets. They're wrangling data. And a lot of it's dominated by these still fenced-off data warehouses, and you start to see the emergence of companies built on the cloud. I saw the Snowflake IPO, you're seeing a whole new shift of new brands emerging that are doing things differently, right? And because there's such a need to just move out of the archaic spreadsheet and data presentation layers, which are slower, antiquated, outdated. How do you guys solve that problem? You guys are on the other side of that equation, you're on the new wave of analytics. What are you guys solving? How do you make that work? How do you get on that wave? >> So basically the way Explorium sees the world, and I think that most analytical practitioners these days see it in a similar way, is that the key to any analytical problem is having the right data. And the challenge that we've talked about, and that we're really focused on, is helping companies reach that right data. Our focus is on the data part of data science. The science part is the algorithmic side. It's interesting. It was kind of the first frontier of machine learning, as practitioners and experts were focused on it, and cloud and compute really enabled that. The challenge today isn't so much "What's the right model for my problem?" but "What's the right data?" And that's the premise of what we do. Your model's only as strong as the data that it trains on. And going back to that concept of just bringing context to decision-making.
Within that framework that we talked about, the key is bringing comprehensive, accurate, and highly varied data into my model. But if my model is only being informed with internal data, which is wonderful data but only internal, then it's missing context. And we're helping companies to reach that external variety through a pretty elegant platform that can connect the right data for my analytical process. And this really has implications across several different industries and a multitude of use cases. We're working with companies across consumer packaged goods, insurance, financial services, retail, e-commerce, even software as a service. And the use cases can range between fraud and risk to marketing and lifetime value. Now, why is this such a challenge today, versus maybe some antiquated or analog means? With a spreadsheet or with a rule-based approach we're pretty limited: it was an effective means of decision-making to generate and create actions, but it's highly limited in its ability to change, to be dynamic, to be flexible. And with modeling and using data, it's really a huge arsenal that we have at our fingertips. The trick is extracting value from within it. There's obviously latent value from within our org, but every day there's more and more data that's being created outside of our org. And that data is a challenge to go out and get, to effectively filter and navigate and connect to. So we've basically built that tech to help us navigate and query for any given analytical question. Find me the right data, rather than starting with what's the problem I'm looking for, now let me think about the right data. Which is kind of akin to going into a library and searching for a specific book. You know which book you're looking for. Instead of saying, there's a world, a universe of data outside there. I want to access it. I want to tap into what's right.
Can I use a tool that can effectively query all that data, find what's relevant for me, connect it and match it with my own, and distill signals or features from that data to provide more variety into my modeling efforts, yielding a robust decision as an output? >> I love that paradigm of just having that searchable kind of experience. I got to ask you one of the big things that I've heard people talk about. I want to get your thoughts on this: how do I know if I even have the right data? Is the data addressable? Can I find it? Can it even be queried? How do you solve that problem for customers when they say, "I really want the best analytics but do I even have the data or is it the right data?" How do you guys look at that? >> So the way our technology was built is that it's quite relevant for a few different profile types of customers. Some of these customers, really the genesis of the company, started with those cloud-based, model-driven-since-day-one organizations, and they're working with machine learning and they have models in production. They're quite mature, in fact. And the problem that they've been facing is, again, our models are only as strong as the data that they're training on. The only data that they're training on is internal data. And we're seeing diminishing returns from those decisions. So now suddenly we're looking for outside data, and we're finding that to effectively use outside data, we have to spend a lot of time. 60% of our time is spent thinking of data, going out and getting it, cleaning it, validating it, and only then can we actually train a model and assess if there's an ROI. That takes months. And if it doesn't push the needle from an ROI standpoint, then it's an enormous opportunity cost, which is very, very painful, which goes back to their decision-making. Is it even worth it if it doesn't push the needle? That's why there had to be a better way.
And what we built is relevant for that audience, as well as companies that are in the midst of their digital transformation. We're data rich, but data science poor. We have lots of data, latent value to extract from within our own data, and at the same time tons of valuable data outside of our org. Instead of waiting 18 to 36 months to transform ourselves, get our infrastructure in place, our data collection in place, and only then start having models in production based on our own data, you can now do this in tandem. And that's what we're seeing with a lot of our enterprise customers. They're using their analysts and their data engineers, and some of them, in their innovation groups or kind of centers of excellence, have a data science group as well. And they're using the platform to inform a lot of their different models across lines of business.
>> Well, it's amazing how fast organizations have been moving onto the cloud over the past year during COVID, and the fact that alternative or external data, depending on how you refer to it, has really, really blown up. And it's really exciting. This is coming in the form of data providers and data marketplaces, and more and more organizations are moving from rule-based decision-making to predictive decision-making, and that's exciting. Now what's interesting about this company, Explorium: we're working with a lot of different types of customers, but our long game has a real high upside. There's more and more companies that are starting to use data and are transformed, or already are in the midst of their transformation. So they need outside data. And that challenge that I described exists for all of them. So how does it really work? Today, if I don't have data outside, I have to think. It's based on hypothesis, and it all starts with that hypothesis, which is already prone to error from the get-go. You and I might be domain experts for a given use case. Let's say we're focusing on fraud. We might think about a dozen different types of data sources, but going out and getting it, like I said, takes a lot of time, and harmonizing it, cleaning it, and being able to use it takes even more time. And that's just for each one. So if we have to do that across dozens of data sources, it's going to take far too much time and the juice isn't worth the squeeze. And so I'm going to forego using that. And a metaphor that I like to use when I try to describe what Explorium does to my mom is buying your first home. It's a very, very important financial decision. When you're buying this home, you're thinking about all the different inputs in your decision-making. It's not just about the blueprint of the house and how many rooms and the criteria you're looking for.
You're also thinking about external variables. You're thinking about the school zone, the construction, the property value, alternative or similar neighborhoods. That's probably your most important financial decision, or one of the largest at least. A machine learning model in production is an extremely important and expensive investment for an organization. Now, the problem is, as a consumer buying a home, we have all this data at our fingertips to find out all of those external inputs. Organizations don't, which is kind of crazy, as I realized when I first got into this world. And so they're making decisions with their first-party data only. First-party data is wonderful data. It's the best; it's representative, it's high quality, it's high value for their specific decision-making and use cases, but it lacks context. And there's so much context, in the form of location-based data and business information, that can inform decision-making that isn't being used. It translates to sub-optimal decision-making, let's say. >> Yeah, and I think one of the insights around looking at signal data in context is that by merging it with first-party data, it creates a huge value window. It gives you observational data, maybe potentially insights into customer behavior. So totally agree, I think that's a huge observation. You guys are definitely on the right side of history here. I want to get into how it plays out for the customer. You mentioned the different industries; obviously data's in every vertical. And vertical specialization with the data, it has to be very metadata-driven. I mean, metadata in oil and gas is different than in fintech. I mean, some overlap, but for the most part you've got to have that context, acute context, for each one. How are you guys working? Take us through an example of someone getting it right, getting that right setup. Take us through the use case of how someone onboards Explorium, how they put it to use, and what are some of the benefits?
So let's break it down into kind of a three-step phase. And let's use that example of fraud from earlier. An organization would have basically past historical data on how many customers were actually fraudulent at the end of the day. So this use case, and it's a core business problem, is with an intention to reduce that fraud. So they would basically provide, going with your description earlier, something similar to an Excel file. This can be pulled from any database out there, we're working with loads of them, and they would provide what's called training data. This training data is their historical data, and it would have as an output the outcome, the conclusion: was this business fraudulent or not? Yes or no. Binary. The platform would understand that data itself to train a model with external context in the form of enrichments. These data enrichments at the end of the day are important, they're relevant, but their purpose is to generate signals. So to your point, signals is the bottom line everyone's trying to achieve and identify and discover, and even engineer, by using data that they have and data that they have yet to integrate with. So the platform would connect to your data, infer and understand the meaning of that data. And based on this matching of internal plus external context, the platform automates the process of distilling signals. In machine learning, these are referred to as features. And these features are really the bread and butter of your modeling efforts. If you can leverage features that are coming from data that's outside of your org, and they're quantifiably valuable, which the platform measures, then you're putting yourself in a position to generate an edge in your modeling efforts. Meaning now, you might reduce your fraud rate. So your customers get a much better, more compelling offer or service or price point. It impacts your business in a lot of ways.
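For illustration only, the three-step flow Zach walks through (labeled first-party training data, external enrichment, then model training) might be sketched like this in Python. The synthetic data, feature names, and model choice are all assumptions for the sake of the sketch, not Explorium's actual pipeline or API:

```python
# Hypothetical sketch: train on internal data alone vs. internal data
# enriched with one external signal. Nothing here is Explorium's real API.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Historical first-party outcomes: 1 = the business turned out fraudulent.
labels = (rng.random(n) < 0.2).astype(int)

# An internal feature that, in this toy setup, carries no fraud signal.
internal_feature = rng.normal(size=n)

# An external enrichment correlated with fraud, standing in for a
# location- or firmographic-based risk score distilled from outside data.
external_signal = labels * 1.5 + rng.normal(scale=0.5, size=n)

X_internal = internal_feature.reshape(-1, 1)
X_enriched = np.column_stack([internal_feature, external_signal])

base_model = LogisticRegression().fit(X_internal, labels)
enriched_model = LogisticRegression().fit(X_enriched, labels)

# The enriched model should fit the training data at least as well,
# which is the "edge" external variety is meant to provide.
print(base_model.score(X_internal, labels))
print(enriched_model.score(X_enriched, labels))
```

In practice the interesting part is not the classifier but where `external_signal` comes from; the measured lift between the two scores is one way a feature's value can be quantified.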
What Explorium is bringing to the table in terms of value is a single access point to a huge universe of external data. It expedites your time to value. So rather than data analysts, data engineers, and data scientists spending a significant amount of time on data preparation, they can now spend most of their time on feature or signal engineering. That's the more fun and interesting part, less so the boring part. And they can scale their modeling efforts. So time to value, access to a huge universe of external context, and scale. >> So I see two things here. Just make sure I get this right 'cause it sounds awesome. So one, the core assets of the engineering side of it, whether it's the platform engineer or data engineering, they're more optimized for getting more signaling, which is more impactful for the context acquisition, looking at contexts that might have a business outcome, versus wrangling and doing mundane, heavy lifting. >> Yeah so with it, sorry, go ahead. >> And the second one is you create a democratization for analysts or business people who are just used to dealing with spreadsheets, who just want to kind of play with data and get a feel for it, or experiment, do querying, try to match planning with policy - >> Yeah, so the way I like to kind of communicate this is Explorium's this one, two punch. It's got this technology layer that provides entity resolution, so matching with external data, which otherwise is a manual endeavor. Explorium's automated that piece. The second is a huge universe of outside data. So this circumvents procurement. You don't have to go out and spend all of these one-off efforts on time finding data, organizing it, cleaning it, etc.
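The entity-resolution half of that one-two punch can be pictured, very roughly, as a join between first-party records and an outside table. A minimal sketch, where every identifier and column name is hypothetical, and where real-world matching on messy business names and addresses is the hard part the platform is said to automate:

```python
# Toy illustration of enriching internal records with external context.
# All tables, IDs, and columns are made up for this sketch.
import pandas as pd

# First-party data: what the organization already knows.
internal = pd.DataFrame({
    "business_id": ["b1", "b2", "b3"],
    "monthly_volume": [1200, 300, 4500],
})

# External context: firmographic-style attributes from outside sources.
external = pd.DataFrame({
    "business_id": ["b1", "b2", "b4"],
    "years_in_operation": [12, 1, 7],
    "region_risk_score": [0.2, 0.8, 0.5],
})

# Entity resolution here is a trivial exact-key join; fuzzy matching on
# names and addresses is what makes the real problem hard.
enriched = internal.merge(external, on="business_id", how="left")

# Distill a simple signal (feature) combining internal + external data.
enriched["volume_per_year"] = (
    enriched["monthly_volume"] / enriched["years_in_operation"]
)
print(enriched.shape)  # (3, 5)
```

The left join keeps every internal record whether or not a match exists, and the derived `volume_per_year` column is a toy example of a signal distilled from the combined data.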
You can use Explorium as your single access point and gateway to external data and match it, so this will accelerate your time to value and ultimately the amount of valuable signals that you can discover and leverage through the platform and feed into your own pipelines or whatever system or analytical need you have. >> Zach, great stuff. I love talking with you and I love the hot startup action here. 'Cause again, you're on the net new wave here. Like anything new, I was just talking to a colleague here. (indistinct) When you have something new, it's like driving a car for the first time. You need someone to give you some driving lessons or figure out how to operationalize it or take advantage of the one, two punch as you pointed out. How do you guys get someone up and running? 'Cause let's just say, I'm like, okay, I'm bought into this. So no brainer, you got my attention. I still don't understand. Do you provide a marketplace of data? Do I need to get my own data? Do I bring my own data to the party? Do you guys provide relationships with other data providers? How do I get going? How do I drive this car? How do you answer that? >> So first, explorium.ai offers a free trial, and we're a product-focused company. So a practitioner, maybe a data analyst, a data engineer, or data scientist, would use this platform to enrich their analytics, so BI decision-making or any models that they're working on either in production or being trained. Now oftentimes models that are being trained don't actually make it to production because they don't meet a minimum threshold. Meaning they're not going to have a positive business outcome if they're deployed. With Explorium you can now bring variety into that and increase your chances that your model that's being trained will actually be deployed because it's being fed with the right data. The data that you need, not just the data that you have. 
So how a business would start working with us would typically be with a use case that has a high business value. Maybe this could be a fraud use case or a risk use case in a B2B, or even B2SMB, context. This might be a marketing use case. We're talking about LTV modeling, lookalike modeling, lead acquisition and generation for CPGs, and field sales optimization. The platform would explore and understand your data, enrich that data automatically, generate and discover new signals from external data plus from your own, and feed this into either a model that you have in-house or end to end in the platform itself. We provide customer success to kind of help you build out your first model, perhaps, and hold your hand through that process. But typically most of our customers are, after a few months' time, running and building multiple models in production on their own. And that's really exciting because we're helping organizations move from more rule-based decision making, with us being their bridge to data science. >> Awesome. I noticed that in your title you handle global partnerships and channels, which I'm assuming means you guys have a network and ecosystem you're working with. What are some of the partnerships and channel relationships that you have that you bring to bear in the marketplace? >> So data and analytics, this space is very much an ecosystem. Our customers are working across different clouds, working with all sorts of vendors, technologies. Basically they have a pretty big stack. We're a part of that stack and we want to symbiotically play within our customer stack so that we can contribute value whether they sit here, there, or in another place. Our partners range from consulting and system integration firms, those that perhaps are building out the blueprint for a digital transformation or actually implementing that digital transformation. And we contribute value in both of these cases as a technology innovation layer in our product. 
And a customer would then consume Explorium afterwards, after that transformation is complete, as a part of their stack. We're also working with a lot of the different cloud vendors. Our customers are all cloud-based, and data enrichment is becoming more and more relevant with some wonderful machine-learning tools, be they AutoML tools or even some data marketplaces that are popping up, which is very exciting. What we're bringing to the table as an edge is accelerating the connection between the data that I think I want as a company and how to actually extract value from that data. Being part of this ecosystem means that we can be working with, and should be working with, a lot of different partners to contribute incremental value to our end customers. >> Final question I want to ask you is if I'm in a conference room with my team and someone says, "Hey, we should be rethinking our external data." What would I say? How would I pound my fist on the table or raise my hand in saying, "Hey, I have an idea, we should be thinking this way." What would be my argument to the team, to re-imagine how we deal with external data? >> So it might be a scenario that rather than banging your hands on the table, you might be banging your heads on the table because it's such a challenging endeavor today. Companies have to think about: what's the right data for my specific use cases? I need to validate that data. Is it relevant? Is it real? Is it representative? Does it have good coverage, good depth and good quality? Then I need to procure that data. And this is about getting a license for it. I need to integrate that data with my own. That means I need to have some in-house expertise to do so. And then of course, I need to monitor and maintain that data on an ongoing basis. 
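The vetting steps Zach lists (is the data relevant, representative, with good coverage and quality, and kept fresh) can be partly automated. A toy sketch of two such checks, coverage and freshness; the thresholds and field names are illustrative, not anyone's actual product logic.

```python
from datetime import date

def coverage(records, field):
    """Fraction of records where `field` is present and non-empty."""
    if not records:
        return 0.0
    filled = sum(1 for r in records if r.get(field) not in (None, ""))
    return filled / len(records)

def is_fresh(last_refreshed, today, max_age_days=90):
    """Has the external source been refreshed recently enough?"""
    return (today - last_refreshed).days <= max_age_days

# A small sample from a hypothetical external firmographics feed.
sample = [
    {"revenue": 1_000_000, "employees": 10},
    {"revenue": None,      "employees": 25},
    {"revenue": 500_000,   "employees": None},
    {"revenue": 250_000,   "employees": 40},
]
rev_cov = coverage(sample, "revenue")                  # 0.75
fresh = is_fresh(date(2021, 5, 1), date(2021, 6, 1))   # 31 days old -> True
stale = is_fresh(date(2021, 1, 1), date(2021, 6, 1))   # 151 days old -> False
```

Checks like these are the "monitor and maintain on an ongoing basis" step; the value a partner adds is running them continuously across a whole universe of sources rather than one feed at a time.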
All of this is a pretty big thing to undertake and undergo, and having a partner to facilitate that external data integration and ongoing refresh and monitoring, and being able to trust that this is all harmonized, high quality, and that I can find the valuable ones without having to manually pick and choose and try to discover it myself, is a huge value add, particularly the larger the organization or partner. Because there's so much data out there. And there's a lot of noise out there too. And so if I can, through a single partner or access point, tap into that data and quantify what's relevant for my specific problem, then I'm putting myself in a really good position and optimizing the allocation of my very expensive and valuable data analysts and engineering resources. >> Yeah, I think one of the things you mentioned earlier was a huge point, a good call out: it goes beyond the first party data. Even just looking at first party data in an internal view, some of the best, most successful innovators that we've been covering at cloud scale are extending their first party data to external providers. So they're in the value chains of solutions that share their first party data with other suppliers. And so that's just, again, more of an extension of the first party data. You're kind of taking it to a whole 'nother level: there's another external set of data beyond it that's even more important. I think this is a fascinating growth area and I think you guys are onto it. Great stuff. 
We're growing in the sense that our offices in San Mateo, New York, and Tel Aviv are growing rapidly. As you mentioned earlier, we raised our series C, so that brings Explorium's total raised to, I think, 127 million over the past two years and some change. And whether you want to partner with Explorium, work with us as a customer, or join us as an employee, we welcome that. And I encourage everybody to go to explorium.ai. Check us out, read some of the interesting content there around data science, around the processes, around the business outcomes that a lot of our customers are seeing, as well as joining a free trial. So you can check out the platform and everything it has to offer, from the machine learning engine to the signal studio, as well as what type of information might be relevant for your specific use case. >> All right Zach, thanks for coming on. Zach Booth, director of global partnerships and channels at explorium.ai. The next big thing in cloud, featuring Explorium, and a part of our AI track. I'm John Furrier, host of theCUBE. Thanks for watching.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John | PERSON | 0.99+ |
Zach Booth | PERSON | 0.99+ |
Explorium | ORGANIZATION | 0.99+ |
Zach | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
60% | QUANTITY | 0.99+ |
$75 million | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
San Mateo | LOCATION | 0.99+ |
two things | QUANTITY | 0.99+ |
Tel Aviv | LOCATION | 0.99+ |
127 million | QUANTITY | 0.99+ |
Excel | TITLE | 0.99+ |
explorium.ai | OTHER | 0.99+ |
first party | QUANTITY | 0.99+ |
Today | DATE | 0.99+ |
first time | QUANTITY | 0.99+ |
first model | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
both | QUANTITY | 0.98+ |
first home | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
three-step | QUANTITY | 0.98+ |
second | QUANTITY | 0.97+ |
two punch | QUANTITY | 0.97+ |
two | QUANTITY | 0.97+ |
first frontier | QUANTITY | 0.95+ |
New York | LOCATION | 0.95+ |
theCUBE | ORGANIZATION | 0.94+ |
AWS | ORGANIZATION | 0.93+ |
explorium.ai | ORGANIZATION | 0.91+ |
each one | QUANTITY | 0.9+ |
second one | QUANTITY | 0.9+ |
single partner | QUANTITY | 0.89+ |
AWS Startup Showcase | EVENT | 0.87+ |
dozens | QUANTITY | 0.85+ |
past year | DATE | 0.84+ |
single access | QUANTITY | 0.84+ |
First party | QUANTITY | 0.84+ |
series C | OTHER | 0.79+ |
COVID | EVENT | 0.74+ |
past two years | DATE | 0.74+ |
36 months | QUANTITY | 0.73+ |
18, | QUANTITY | 0.71+ |
Startup Showcase | EVENT | 0.7+ |
SiliconANGLE | ORGANIZATION | 0.55+ |
tons | QUANTITY | 0.53+ |
things | QUANTITY | 0.53+ |
snowflake IPO | EVENT | 0.52+ |
Breaking Analysis: RPA: Over-Hyped or the Next Big Thing?
>> From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE. Now here's your host, Dave Vellante. >> Hello everyone, and welcome to this week's episode of Wikibon's Cube Insights powered by ETR. In this breaking analysis we take a deeper dive into the world of robotic process automation, otherwise known as RPA. It's one of the hottest sectors in software today; in fact, Gartner says it's the fastest growing software sector that they follow. In this session I want to break down three questions: one, is the RPA market overvalued; two, how large is the total available market for RPA; and three, who are the winners and losers in this space? Now, before we address the first question, here's what you need to know about RPA. The market today is small, but it's growing fast. The software-only revenue for the space was about 1 billion dollars in 2019, and it's growing at between 80 to a hundred percent annually. RPA has been very popular in larger organizations, especially in back-office functions, really in regulated industries like financial services and healthcare. RPA has been successful at automating the mundane, repeatable, deterministic tasks, and most automations today are unattended. The industry is very well funded, with the top two firms raising nearly 1 billion dollars in the past couple of years; they have a combined market value of nearly 14 billion. Now, some people in the industry have said that RPA is hyped and looks like a classic pump-and-dump situation. We're going to look into that and really try to explore the valuation and customer data and come to some conclusions there. We see big software companies like Microsoft and SAP entering the scene, and we want to comment on that a little later in this segment. Now, RPA players have really cleverly succeeded in selling to the business lines and often bypassed IT; sometimes that creates tension. As I said, customers are typically very large organizations who can shell out the hundred-thousand-dollar-plus
entry point to get into the RPA game. The TAM is expanding beyond back office into a broader automation agenda. Hyperautomation is the buzzword of the day, and there are varying definitions. Gartner looks at hyperautomation as the incorporation of RPA along with intelligent business process management (iBPM) and iPaaS, or intelligent platform-as-a-service. Gartner's definition takes a holistic view of the enterprise, incorporating legacy on-prem apps as well as emerging systems. Now, this is good, but I question whether the hyper term applies here, as we see hyperautomation as the extension of RPA to include process mining to discover new automation opportunities, and the use of machine intelligence (ML and AI) applied to process data, where that combination drives intelligent analytics that further drives digital business process transformation across the enterprise. So the point is that we envision a more agile framework and definition for hyperautomation. We see legacy BPM systems informing the transformation but not necessarily adjudicating the path forward. We liken this to the early days of big data, where legacy data warehouses and ETL processes provided useful context, but organizations had to develop a new tech stack that broke the stranglehold of technical debt. We're seeing this emerge in the form of new workloads powered by emerging analytic databases like Redshift and Snowflake, with ML tools applied and cloud driving agile insights in that so-called big data space. So we think a similar renaissance is happening here with automation, really driven by the mandate for digital business transformation, along with machine intelligence and tooling applied toward driving automation across the enterprise in a form of augmentation, with attended bots at scale becoming much more important over time. Okay, now let's shift gears a little bit. The question is, is the RPA market overhyped and overvalued? Now, to answer this,
let's go through a bit of a thought exercise that we've put together and look at some data. What this chart shows is some critical data points that will begin to help answer the question that we've posed. In the top part of the chart we show the company, the VC funding, projected valuations, and revenue estimates for 2019 and 2020, and as you can see, UiPath and Automation Anywhere are the hot companies right now. They're private, so much of this data is estimated, but we know how much money they've raised and we know the valuations that have been reported. So the RPA software market is around a billion dollars today, and we have it almost doubling in 2020. Now, the bottom part of this chart shows the projected market revenue growth and the implied valuations for the market as a whole. So you can see today we show a market that is trading at about 15 to 17 times revenue, which seems like a very high multiple, but over time we show that multiple shrinking and settling in mid-decade at just over 5x, which for software is pretty conservative, especially for high-growth software. Now, what we've done on this next chart is we brought down that market growth and the implied valuation data and highlighted 2025 at seventy-five billion dollars. The market growth will have slowed by then to twenty percent in this model and this thought exercise, with a revenue multiple of five point four x for the overall market. Now, eventually as growth slows, RPA software will start to throw off profits; at least it better. So what we show here is a sensitivity analysis assuming 20%, 25%, 30%, and 35% margins for the market as a whole; we're using that as a proxy, and we show a 20x EBIT multiple, which for a software market growing this fast we think is pretty reasonable, considering tech overall typically is going to have an EBIT multiple of ten to fifteen x. Really, EV over EBIT (enterprise value over EBIT) is a more accurate measure, but this is back-of-the-napkin, not balance sheet data, and not a forecast at all; we're trying to just get to the question: is this market overvalued? And as you can see in the far column, given these assumptions, we're in the range of that seventy-five-billion-dollar market valuation, within some delta. Now, in reality you're going to have some companies growing faster than the market overall, and we'll see a lot of consolidation in this space, but at the macro level it would seem that the company which can lead and win the spoils is going to really benefit. Okay, so these figures actually suggest, in my view, that the market could be undervalued. That sounds crazy, right? But look at companies like ServiceNow and Workday, and look at Snowflake's recent valuation at twelve billion dollars. So are the valuations for UiPath and Automation Anywhere justified? Well, in part it depends on the size of the market, the TAM (total available market), and their ability to break out of back-office niches and deliver these types of revenue figures and growth. You know, maybe my forecasts are a little too aggressive in the early days, but in my experience the traditional forecasts that we see in the marketplace tend to underestimate transformative technologies. You tend to have these sort of ogives where, you know, it takes off, really steepens with a sharp curve, and then tapers off. So we'll see, but let's take a closer look at the TAM. But first I want to introduce a customer viewpoint. Here's Eric Lex, who's an RPA pro at GE, talking about his company's RPA journey. Play the clip. >> I would say, in terms of our journey, 2017 was kind of our year to prove the technology. We wanted to see if this stuff could really work long term and operate at scale. Given that I'm still here, obviously we proved that was correct. And then 2018 was kind of the year of scaling and operationalizing a sustainable model to support our business units across the board from an RPA standpoint, so really building out
a proper structure, building out the governance that goes along with building robots, and building a resource team to continue to support the bots. We were at scale at that point, so maintaining those bots is critically important. That's the direction we're moving in 2019: we've kind of perfected the concept of the back-office robot, the development of those and running those at scale, and now we're moving towards a whole new market when it comes to attended automation and citizen development. >> So this is a story we've heard from many customers, and we've tried to reflect it in this graphic that we're showing here: start small, get some wins, prove out the tech, really, in the back office, and then drive customer-facing activities. We see this as the starting point for more SME-driven digital transformations, where business-line pros are rethinking processes and developing new automations, either in low-code scenarios or with Centers of Excellence. Now, this vision of hyperautomation, we think, comes from the ability to do process mining to identify automation opportunities, and then bring RPA to the table using machine learning and AI to understand text, voice, and visual context, and ultimately use that process data to transform the business. This is an outcome-driven model where organizations are optimizing on business KPIs and incentives are aligned accordingly. So we see this vision as potentially unlocking a very large TAM that perhaps exceeds 30 billion dollars. Now let's bring in some of this spending data and take a look at what the ETR data set tells us about the RPA market. The first thing that jumps out at you is that RPA is one of the fastest growing segments in the data set; you can see that green box and that blue dot at around 20%. That's the change in spending velocity in the 2020 survey versus last year. Now, the one caveat is I'm isolating on Global 2000 companies in this data set, and as you can see in that red bar
up on the left, and remember, RPA today is really hot in large companies, but not nearly as fast growing when you analyze the overall respondent base, which includes smaller organizations. Nonetheless, this chart shows net scores and market shares for RPA across all respondents. Remember, net score is a measure of spending velocity, and market share is a measure of pervasiveness in the survey. And what you see here is that RPA net scores are holding steady at a nice rate, and market shares are creeping up relative to other segments in the data set. Now remember, this is across all companies, but we want to use the ETR data to understand who is winning in this space. What this chart shows is net score, or spending velocity, on the vertical axis and market share, or pervasiveness, on the horizontal axis for each individual player. And as we run through this sequence from the January '18 survey through today, across the nine surveys, look at UiPath and Automation Anywhere, but look at UiPath in particular: they really appear to be breaking away from the pack. Now here's another look at the data. It shows net scores, or spending velocity, for UiPath, Automation Anywhere, Blue Prism, Pegasystems, and WorkFusion. These are all very strong net scores, which are essentially calculated by subtracting the percent of customers spending less from those spending more. The two leaders here are UiPath and Automation Anywhere, but the rest are actually quite good; they're in the green. But look what happens when you isolate on the 349 Global 2000 respondents in the survey: UiPath jumps into the 80 percent net score territory (again, spending velocity), Automation Anywhere dips a little bit, Pegasystems interestingly jumps up nicely, but look at Blue Prism: they fall back in the larger Global 2000 accounts, which is a bit of a concern. Now, the other key point on this chart is that 85% of UiPath customers and 70% of Automation Anywhere customers plan to spend more this year than they spent last
year. That is pretty impressive. Now, as you can see here in this chart, the Global 2000 have been pretty consistent spenders on RPA for the past three survey snapshots, with UiPath again showing net scores, or spending intensity, solidly in the 80%-plus range. And even though it's a smaller N, you can see Pega with a nice uptick in the last two surveys within these larger accounts. Now finally, let's look at what ETR calls market share, which is a measure of pervasiveness in the survey. This chart shows data from all 1000-plus respondents, and as you can see, UiPath appears to be breaking out from the pack, Automation Anywhere and Pega are showing an uptick in the January survey, and Blue Prism is trending down a little bit, which is something to watch. But you can see in the upper right that all four companies are in the green with regard to net score or, again, spending velocity. So let's summarize and wrap up. Is this market overhyped? Well, it probably is overhyped, but is it overvalued? I don't think so. The customer feedback that we have in the community and the proof points are really starting to stack up, so with continued revenue growth and eventually profits, you can make the case that whoever comes out on top will really do well and see huge returns in this market space. Let's come back to that in a moment. How large is this market? I think this market can be very large; a TAM of 30 billion plus is not out of the question in my view. Now, that realization will be a function of RPA's ability to break into more use cases with deeper business integration. RPA has an opportunity, in our view, to cross the chasm and deliver low-code solutions to subject matter experts in business lines that are in a stronger position to drive change. Now, a lot of people poo-poo this notion and this concept, but I think it's something that is a real possibility. This idea of hyperautomation is buzzwordy, but it has meaning. Companies that bring RPA together with process mining and machine intelligence that drives
process analytics have great potential, if organizational stovepipes can be broken down; in other words, put process data and analytics at the core to drive decision-making and change. Now, who wins? Let me say this: the company that breaks out and hits escape velocity is going to make a lot of money here. Unlike what I said in last week's breaking analysis on cloud computing, this is more of a winner-take-all market. It's not a trillion-dollar TAM like cloud; it's tens of billions, maybe north of 30 billion, but it's somewhat of a zero-sum game in my opinion. The number one player is going to make a lot of dough, number two will do okay, and in my view everyone else is going to struggle for profits. Now, the big wildcard is the degree to which the big software players like Microsoft and SAP poison the RPA well. Here's what I think: these big software players are taking an incremental view of the market and are bundling in RPA as a check-off item. They will not be the ones to drive radical process transformation; rather, they will siphon off some demand. But organizations that really want to benefit from so-called hyperautomation will be leaning heavily on software from specialists who have the vision, the resources, the culture, and the focus to drive digital process transformation. All right, that's a wrap. As always, I really appreciate the comments that I get on my LinkedIn posts and on Twitter, where I'm @DVellante, so thanks for that, and thanks for watching everyone. This is Dave Vellante for the Cube Insights powered by ETR, and we'll see you next time.
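The net score metric used throughout the analysis above is simply the percentage of customers spending more minus the percentage spending less. A minimal sketch of that calculation over hypothetical survey responses; the response labels are illustrative, not ETR's exact coding.

```python
def net_score(responses):
    """responses: list of 'more', 'flat', or 'less' spending intentions.
    Returns (% spending more) - (% spending less)."""
    n = len(responses)
    more = sum(1 for r in responses if r == "more") / n * 100
    less = sum(1 for r in responses if r == "less") / n * 100
    return more - less

# 70 accounts spending more, 20 flat, 10 less -> net score of 60
sample = ["more"] * 70 + ["flat"] * 20 + ["less"] * 10
print(net_score(sample))  # 60.0
```

This is why an 80-plus net score in the Global 2000, as cited for UiPath, is so striking: it means the share of accounts increasing spend exceeds the share cutting it by 80 percentage points.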
**Summary and Sentiment Analysis are not shown because of an improper transcript**
ENTITIES
Entity | Category | Confidence |
---|---|---|
January 18 | DATE | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
2019 | DATE | 0.99+ |
twenty percent | QUANTITY | 0.99+ |
2018 | DATE | 0.99+ |
2020 | DATE | 0.99+ |
85% | QUANTITY | 0.99+ |
first question | QUANTITY | 0.99+ |
30 billion | QUANTITY | 0.99+ |
80 percent | QUANTITY | 0.99+ |
seventy-five billion dollars | QUANTITY | 0.99+ |
70% | QUANTITY | 0.99+ |
80% | QUANTITY | 0.99+ |
tens of billions | QUANTITY | 0.99+ |
Dave Volante | PERSON | 0.99+ |
twelve billion dollars | QUANTITY | 0.99+ |
GE | ORGANIZATION | 0.99+ |
35% | QUANTITY | 0.99+ |
20% | QUANTITY | 0.99+ |
David | PERSON | 0.99+ |
30 billion dollars | QUANTITY | 0.99+ |
two leaders | QUANTITY | 0.99+ |
three questions | QUANTITY | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
2017 | DATE | 0.99+ |
last week | DATE | 0.99+ |
hundred thousand dollar | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
August | DATE | 0.99+ |
Delta | ORGANIZATION | 0.99+ |
ten | QUANTITY | 0.99+ |
Gardner | PERSON | 0.99+ |
nine surveys | QUANTITY | 0.98+ |
EGR | ORGANIZATION | 0.98+ |
Boston Massachusetts | LOCATION | 0.98+ |
january | DATE | 0.98+ |
twenty twenty-five | QUANTITY | 0.98+ |
around a billion dollars | QUANTITY | 0.98+ |
first thing | QUANTITY | 0.98+ |
nearly 14 billion | QUANTITY | 0.98+ |
about 1 billion dollars | QUANTITY | 0.97+ |
nearly 1 billion dollars | QUANTITY | 0.97+ |
ServiceNow | ORGANIZATION | 0.97+ |
80 | QUANTITY | 0.97+ |
three | QUANTITY | 0.97+ |
around 20% | QUANTITY | 0.96+ |
a lot of money | QUANTITY | 0.96+ |
this year | DATE | 0.96+ |
one caveat | QUANTITY | 0.96+ |
Eric Lex | PERSON | 0.96+ |
2000 companies | QUANTITY | 0.95+ |
349 | QUANTITY | 0.95+ |
seventy five billion dollar | QUANTITY | 0.95+ |
17 times | QUANTITY | 0.95+ |
first | QUANTITY | 0.95+ |
25% | QUANTITY | 0.95+ |
five | QUANTITY | 0.95+ |
2,000 respondents | QUANTITY | 0.95+ |
this week | DATE | 0.95+ |
one | QUANTITY | 0.93+ |
two firms | QUANTITY | 0.93+ |
30% | QUANTITY | 0.93+ |
ETR | ORGANIZATION | 0.93+ |
D Volante | ORGANIZATION | 0.92+ |
each individual player | QUANTITY | 0.92+ |
fifteen | QUANTITY | 0.91+ |
trillion dollar | QUANTITY | 0.9+ |
uipath | ORGANIZATION | 0.9+ |
pega | LOCATION | 0.89+ |
about 15 | QUANTITY | 0.87+ |
1000 plus respondents | QUANTITY | 0.86+ |
past couple of years | DATE | 0.85+ |
wiki | TITLE | 0.82+ |
ORGANIZATION | 0.81+ | |
over 5x | QUANTITY | 0.79+ |
number one | QUANTITY | 0.78+ |
RP a | ORGANIZATION | 0.77+ |
Centers of Excellence | ORGANIZATION | 0.75+ |
Afghan | LOCATION | 0.72+ |
a hundred percent | QUANTITY | 0.72+ |
ORGANIZATION | 0.71+ |
Ben Brown, BotKit - Cisco DevNet Create 2017 - #DevNetCreate - #theCUBE
(energetic music) >> Announcer: Live from San Francisco, it's the CUBE, covering DevNetCreate 2017, brought to you by Cisco. >> Okay, welcome back everyone. We're live in San Francisco for the inaugural event for Cisco's DevNetCreate, part of their DevNet classic developer community now extending out into the community of open source and cloud native and the dev ops world, where applications and infrastructure are coming together. It's the CUBE's exclusive two days of coverage. I'm John Furrier with my co-host, Peter Burris, head of WikiBon.com research. Our next guest is Ben Brown, CEO of Botkit out of Austin. Welcome to the CUBE. >> Thank you. >> So we were just chatting before we came on about open source and how, essentially with machines and humans workin' together, there's a nice evolving machine learning marketplace for having new kinds of re-imagined recommendation engines. Chat bots that actually work. Integrations, again, back to software. >> Ben: Yeah. >> Tell us what you guys do, how you guys relate to cloud native, and what your role in open source is. >> Sure. So, it's real interesting, you know. Over the last couple of decades, an enormous amount of progress has been made on AI and machine learning, and NLP tools at these big companies like Google and Microsoft, and they are now giving that away, right? Like, it is free to use Facebook's top of the line machine learning algorithm. But, it's sort of a mystery and unfamiliar territory for developers coming from web or mobile. It's a black box that nobody's ever used before. So, what we do at Botkit is provide tools for developers, mostly developers who are coming from the web or coming from mobile development, and give them semantic, easy-to-use, and customized tools for building conversational user interfaces. 
And that can mean chat bots, that can mean voice skills for the Amazon Echo or Cortana or things like that, and give them these open source tools that allow them to take advantage of this exciting NLP and voice to text, and text to voice, and all that to build real software today. So what Botkit is is an open source library. It's free to use, it's MIT-licensed, so very liberally licensed, and it gives the developers tools like hearing and saying, right? So it's not about API calls and NLP classification and utterances and all that. It's about how does a robot think and act, and the metaphors around that. >> So I think of Botkit, I think of Webkit, these are languages of developers. So are you guys actually providing bot kits to create bots, or is it more of a platform? How do you guys describe what you do in open source, and how do you guys stay in business and keep the lights on? (laughter) >> Good question. Yeah, so we're a venture-backed startup. We have an open source toolkit and these kits, right? So if you want to build a Slack bot or a Facebook bot, we will give you 90% of the code that you need to bring that bot up and start talking. And that piece is all free. And we do that for Slack, for Facebook, for Twilio, for Cisco, for Alexa, and Microsoft, and a bunch of other platforms. And what we're really hoping is that we can instill in people, or sort of give to people a skill set that is akin to a web master, right? There's a bunch of skills that are interrelated that you need to actually bring this software to life. >> It saves time. It's tooling to save them time and to get acclimated and get working. >> Absolutely, absolutely. And then, on top of that, we have a set of power tools that sort of complete the process. Botkit, the open source piece is a software development library, but you also need deployment management and operational tools and content management and integrations and things like that. So that's where our business is. 
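Botkit itself is a JavaScript library, but the "hearing and saying" pattern Ben describes is language-agnostic. Here is an illustrative Python sketch of that idea; the class and method names are ours, not Botkit's actual API.

```python
import re

class BotController:
    """A toy controller built around the hears/reply metaphor: register
    handlers for message patterns instead of wiring raw API calls."""

    def __init__(self):
        self.handlers = []  # list of (compiled pattern, handler function)

    def hears(self, pattern, handler):
        """Fire `handler` when an incoming message matches `pattern`."""
        self.handlers.append((re.compile(pattern, re.IGNORECASE), handler))

    def receive(self, message):
        """Dispatch an incoming message; return the bot's reply, if any."""
        for pattern, handler in self.handlers:
            if pattern.search(message):
                return handler(message)
        return None  # no handler "heard" this message

controller = BotController()
controller.hears(r"\bhello\b", lambda msg: "Hi there!")
controller.hears(r"\bhelp\b", lambda msg: "Try saying hello.")

print(controller.receive("Hello, bot"))  # Hi there!
```

The point of the metaphor is exactly what Ben says: the developer thinks about how the robot listens and acts, while the platform-specific API calls and NLP classification live beneath this layer.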
>> The classic freemium model. The first hit's free, as they say. I'm sorry, that's a drug dealer model. (laughter) You get 'em in there but, as they scale, they're already successful, so it's not like you're gouging someone for not getting value out of it. >> Absolutely. I mean, we think about our business model in the same way a lot of other developer APIs do these days. >> Well, let's talk about some of those other developer APIs, because it used to be that you used a language, then you would use a data management system, and then we started talking about web services, and that's all good. But where does this end up going, where you have a specialized toolkit for bots that you can add? You have specialized toolkits for-- Amazon's talkin' about specialized toolkits for voice recognition that you can add. So is it just going to be in the interface? Are there going to be other classes of kits that developers are going to buy and combine together? Where do you see this going? >> Yeah, absolutely. I mean, it's just like all software development that came before, right? Nobody built every line of code for their mobile app. Nobody had to define what a button was for iOS. That was done at a higher level. In the same way, people who are building these conversational apps are composing their own code with third-party services, with open source software, and all that combined. So there's really interesting stuff going on. Like I said, there's NLP tools coming down from all of these big players, but also from small players. There are tools like human takeover, which is a new thing that didn't exist before. You're talking to a bot, you're starting to get angry, IBM Watson can identify your sentiment and say, "Oh, this person is frustrated. Let's bring in a real operator." So there are third-party services to actually manage that kind of thing. >> Male: I want that job, by the way. (laughter) >> Only angry customers.
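The human-takeover pattern Ben mentions can be sketched as a simple routing decision on a sentiment score. The keyword-based `scoreSentiment` below is a crude stand-in for a real NLP service such as a cloud sentiment API; the function and field names are hypothetical:

```javascript
// Sketch of a "human takeover" router: if the user seems frustrated,
// hand the conversation to a human operator instead of the bot.
// scoreSentiment is a toy stand-in for a real sentiment-analysis service.
function scoreSentiment(text) {
  const angryWords = ["angry", "useless", "terrible", "broken"];
  const hits = angryWords.filter((w) => text.toLowerCase().includes(w)).length;
  return -hits; // more negative = more frustrated
}

function route(message, threshold = -1) {
  if (scoreSentiment(message.text) <= threshold) {
    return { handler: "human", reason: "negative sentiment detected" };
  }
  return { handler: "bot" };
}

console.log(route({ text: "This bot is useless and broken!" }).handler); // "human"
console.log(route({ text: "What are your hours?" }).handler);            // "bot"
```

In production the threshold and the scoring model would come from the third-party service; the interesting design point is that escalation is just another routing rule in the conversation pipeline.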
>> Parachute me in just for the angry customers, yeah. >> Does not sound like a great job, yeah. And then almost every kind of component that you might imagine existing in the web stack or the mobile stack is being specialized for conversational stuff, 'cause it's just different enough, right? So analytics and CRM and push notifications. >> I mean, you don't got to be a rocket scientist to figure out that voice is the hottest app in the market. I mean, you got Alexa, you got Siri, Google. I mean, voice interface is here. That's conversational, to your point. >> Ben: Yeah, yeah, absolutely. >> So now software will evolve. So that's kind of where you guys are betting, right? >> Yeah, absolutely. I mean-- >> John: Not just voice, but conversational software. >> Right. I mean, as I was just saying in my session here, I don't think anybody really wanted to sit down at a typewriter attached to a television. That was just the technology that we had at the time. Charles Babbage or whoever was dreaming about the thinking machine. So we're just much, much closer to that now, and we think that, over the next five or ten years, almost all software will have some sort of conversational element, whether that's in the app, or it's an Alexa skill that's embedded in the car, who knows? >> They say never fight fashion, and this is a relevant fashion piece, where we see machine learning get rendered in AI and some of the cool applications like cars and voice. So I got to ask you. You mentioned that all this free stuff's comin' out. It's like Christmas; it's like being a kid in the candy store if you're a developer. How, in your opinion, has that shaped the developer ecosystem? Because, outside of the young kids who are just green and have no idea that it wasn't like this before-- Back in the old days we used to actually program everything. Lots of cool stuff coming in for free from Google, from Facebook, in some cases Amazon.
But I mean, what's the impact? >> I mean, people are able to take advantage of much more sophisticated technology much earlier on in the process, right? For the last 10 years, we've been talking about, "Ah, machine learning, isn't it great if you're Google, and you have ten trillion data points?" But nobody else has that, so it's not even worth talking about. But now, it's possible. You can start on day one, and start training your machine learning models and things like that. And you don't have to actually invest in that technology. And voice to text, things like-- >> It's given them more speed to get to the newer, higher-functioning stuff. >> Yeah, absolutely. And it's bringing that kind of technology that was-- Most of AI has been in academia, right, and in research. And now, all of a sudden, it's on my kitchen counter. My kid now uses NLP technology every day, and that is a big-- Without the independent developers and smaller apps-- >> Well, the IoT's going to be in your wheelhouse, too. As more things get connected, the interfaces will be more human. >> Well, I was going to ask a question about that. Does this technology-- Today, the technology's mainly thought of as part of the interface between the machine and the human being. Does this technology end up in between machines? >> Yeah, absolutely, sort of between bots. Inter-bot communication is very, very interesting. And then also-- So yes, absolutely. But also being on the other side of the human, or between people, right? So customer service representatives using AI to have solutions suggested to them that they can pick from, and things like that, like translation systems that suggest a response, so that you can use it if you so desire. And it makes your job easier, but it's not actually doing the transaction for you. It's really, really interesting, and that's nothing that the end user would actually experience themselves. >> Final question for you.
Cisco has always been the king of networks. I mean, the internet was their wave, and they rode that hard. We all know what they've done. Amazing: connecting routes together, routers, MPLS routing, paths. I mean, they own that. Now they're moving up the stack, so now you're seeing this gesture of going into the community, bringing apps and infrastructure together, to bring true dev ops. Kind of like what you're doing with your interfaces to software. What are your thoughts on this strategy? What's your take and reaction to what Cisco's doing? >> Clearly, the software layer is becoming more and more powerful and prevalent for people, and a bigger part of people's lives. So I think it makes tons of sense. And what Cisco's going to gain by opening these things up is the innovation of the community; they were never going to be able to do the things that people are going to do with the Spark APIs. And the way that things are connected and interwoven with each other-- because I have a smart home, I have all these IoT devices. They don't talk to one another. I am left to weave them together. >> Peter: You mediate. >> I mediate, right. And I'm sophisticated enough to be able to do that. But if they're going to make it as easy as plug-and-play, and drag-and-drop, it's going to open up all sorts of exciting capabilities. >> It's like the old line: waterfall versus agile, which one's faster? Agile. >> Well, but that's exactly why I asked the question about bots reconciling, or bots mediating between different devices or different machines. It could be a way that a human being can understand a set of instructions for how these things engage other stuff, so that it still looks like a set of human interfaces while, at the same time, it's operating at machine speed with machine efficiency. >> This is one of the most interesting things, particularly in the IoT space, that I've seen.
There's an app called Thington that is like a chat room for devices, and the way it works is those devices emit machine messages and human-readable messages, so they can talk to each other in machine language, but you can read it as a dialogue. >> That's SkyNet. That's SkyNet. I'm tellin' you, it's coming. >> Yeah, if SkyNet only turns your lights on and off. >> Machines talking to each other. "Hey, go kill that human over there." (laughter) >> Somebody's going to have to program it to kill first. >> We need algorithms to watch the algorithms. Great stuff. I think this is clearly a move that Cisco has to make. I've been following Cisco for many generations. Over the past 10 years, they were one of the first in smart homes, one of the first in smart cities, first with IoT -- they called it the Internet of Everything -- the human network, the social network. They had the pulse on all the right trends, but could not execute, Peter. And, to your point, they'll never get there without open source, in my opinion. I think this is a signal that Cisco can do that. Now here's the key: they have the keys to the kingdom. It's called the network, and I think that making it programmable and extensible is a great strategy.
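The device chat-room idea described here -- a machine-parseable payload paired with a human-readable rendering, so people can follow the devices' dialogue -- can be sketched in a few lines. The message shape below is a hypothetical format invented for illustration, not the actual app's protocol:

```javascript
// Sketch of a device "chat room" message carrying both a machine payload
// (consumed by other devices) and a human-readable line (read by people).
function deviceMessage(deviceId, event, payload) {
  return {
    machine: { deviceId, event, payload }, // structured, for other devices
    human: `${deviceId}: ${event} (${JSON.stringify(payload)})`, // for the log
  };
}

const msg = deviceMessage("thermostat-1", "temp_report", { celsius: 21 });
// Another device would consume msg.machine; a person reads msg.human.
console.log(msg.human); // 'thermostat-1: temp_report ({"celsius":21})'
```

The dual encoding is what makes the exchange feel like a dialogue: the devices coordinate on the structured half while the transcript stays legible to a human observer.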
>> Power to the developer; developers are in charge, developers are driving network policy in a dynamic way. Congratulations on your success, great to chat with you. I'm going to check out Botkit. I already have some ideas; Peter and I are already lookin' at it for the clips and then the crowd chat, virtually. Great stuff, congratulations. Ben Brown, CEO of Botkit. Check it out: Botkit.ai. We are soon to be replaced by bots here in the CUBE (laughter) with talking machines, but that's down the road, when SkyNet takes over. This is the CUBE here at the inaugural event for Cisco DevNetCreate. I'm John Furrier with Peter Burris. We'll be back after this short break. (electronic music) >> Hi, I'm April Mitchell, and I'm the senior--