Ed Walsh & Thomas Hazel | A New Database Architecture for Supercloud
(bright music) >> Hi, everybody, this is Dave Vellante, welcome back to Supercloud 2. Last August, at the first Supercloud event, we invited the broader community to help further define Supercloud, we assessed its viability, and identified the critical elements and deployment models of the concept. The objectives here at Supercloud 2 are, first of all, to continue to tighten and test the concept, the second is, we want to get real-world input from practitioners on the problems that they're facing and the viability of Supercloud in terms of applying it to their business. So on the program, we've got companies like Walmart, Saks, Western Union, Ionis Pharmaceuticals, NASDAQ, and others. And the third thing that we want to do is we want to drill into the intersection of cloud and data to project what the future looks like in the context of Supercloud. So in this segment, we want to explore the concept of data architectures and what's going to be required for Supercloud. And I'm pleased to welcome one of our Supercloud sponsors, ChaosSearch. Ed Walsh is the CEO of the company, with Thomas Hazel, who's the Founder, CTO, and Chief Scientist. Guys, good to see you again, thanks for coming into our Marlborough studio. >> Always great. >> Great to be here. >> Okay, so there's a little debate, I'm going to put you right on the spot. (Ed chuckling) A little debate going on in the community, started by Bob Muglia, a former CEO of Snowflake, and he was at Microsoft for a long time, and he looked at the Supercloud definition and said, "I think you need to tighten it up a little bit." So, here's what he came up with. He said, "A Supercloud is a platform that provides a programmatically consistent set of services hosted on heterogeneous cloud providers." So he's calling it a platform, not an architecture, which was kind of interesting. And so presumably the platform owner is going to be responsible for the architecture, but Dr.
Nelu Mihai, who's a computer scientist behind the Cloud of Clouds Project, chimed in and responded with the following. He said, "Cloud is a programming paradigm supporting the entire lifecycle of applications with data and logic natively distributed. Supercloud is an open architecture that integrates heterogeneous clouds in an agnostic manner." So, Ed, words matter. Is this an architecture or is it a platform? >> Put us on the spot. So, I'm sure you have concepts, I would say it's an architectural or design principle. Listen, I look at Supercloud as a mega trend, just like cloud, just like data analytics. And some companies are using the principle, the design principles, to literally get dramatically ahead of everyone else. I mean, things you couldn't possibly do if you didn't use cloud principles, right? So I think it's a Supercloud effect, you're able to do things you otherwise couldn't. So I think it's more a design principle, but if you do it right, you get dramatic effect as far as customer value. >> So the conversation that we were having with Muglia, and Tristan Handy of dbt Labs, was, I'll set it up as the following, and, Thomas, I would love to get your thoughts. If you have a CRM, think about applications today, it's all about forms and codifying business processes, you type a bunch of stuff into Salesforce, and all the salespeople do it, and this machine generates a forecast. What if you have this new type of data app that pulls data from the transaction system, the e-commerce, the supply chain, the partner ecosystem, et cetera, and then, without humans, actually comes up with a plan? That's their vision. And Muglia was saying, in order to do that, you need to rethink data architectures, and database architectures specifically, you need to get down to the level of how the data is stored on the disk. What are your thoughts on that? >> Well, first of all, I'm going to cop out, I think it's actually both.
I do think it's a design principle, I think it's not open technology, but open APIs, open access, and you can build a platform on that design principle architecture. Now, I'm a database person, I love solving the database problems. >> I was waiting for you to launch into this. >> Yeah, so I mean, you know, Snowflake is a database, right? It's a distributed database. And we wanted to crack those codes, because, multi-region, multi-cloud, customers wanted access to their data, and their data is in a variety of forms, all these services that you talked about. And so what I saw as a core principle was cloud object storage, everyone streams their data to cloud object storage. From there we said, well, how about we rethink database architecture, rethink file format, so that we can take each one of these services and bring them together, whether distributively or centrally, such that customers can access and get answers, whether it's operational data, whether it's business data, AKA search, or SQL, complex distributed joins. But we had to rethink the architecture. I like to say we're not a first generation, or a second, we're a third-generation distributed database on pure, pure cloud storage, no caching, no SSDs. Why? Because all of that, the availability, the cost, the time, is a struggle, and cloud object storage, we think, is the answer. >> So when you're saying no caching, so when I think about how companies are solving some, you know, pretty hairy problems, take MySQL HeatWave, everybody thought Oracle was going to just forget about MySQL, well, they come out with HeatWave. And the way they solve problems, and you see their benchmarks against Amazon, "Oh, we crush everybody," is they put it all in memory. So you said no caching? You're not getting performance through caching? How is that true, and how are you getting performance? >> Well, so five, six years ago, right?
When you realize that cloud object storage is going to be everywhere, and it's going to be a core foundational, if you will, fabric, what would you do? Well, a lot of times the second generation says, "We'll take it out of cloud storage, put it in SSDs or something, and put it into cache." And that adds a lot of time, adds a lot of cost. But I said, what if, what if we could actually make the first read hot, the first read distributed joins and searching? And so what we went out to do, we said, we can't cache, because that adds time, that adds cost. We have to make cloud object storage high performance, so it feels like a caching SSD. That's where our patents are, that's where our technology is, and we've spent many years working towards this. So, to me, if you can crack that code, a lot of these issues we're talking about, multi-region, multicloud, different services, everybody wants to send their data to the data lake, but then they move it out, we said, "Keep it right there." >> You nailed it, the data gravity. So, Bob's right, the data's coming in, and you need to get the data from everywhere, but you need an environment where you can deal with all that different schema, all the different types of technology, but also at scale. Bob's right, you cannot use memory or SSDs to cache that, that doesn't scale, it doesn't scale cost effectively. But if you could, and what you did, is you made object storage, S3 first, but object storage, the only persistence by doing that. And then we get performance, we should talk about it, it's literally, you know, hundreds of terabytes of queries, and it's done in seconds, it's done without memory caching. We have concepts of caching, but the only caching, the only persistence, is actually when we're doing caching, we're just keeping another side index of things on S3 itself.
So we're using, actually, the object storage to be a database, which is kind of where Bob was saying, we agree, but that's what you started at, people thought you were crazy. >> And maybe make it live. Don't think of it as archival or temporary space, make it live, real-time streaming, operational data. What we do is make it smart, we see the data coming in, we uniquely index it such that you can get your use cases, that are search, observability, security, or backend operational. But we don't have to have this, I dunno, static, fixed, siloed type of architecture technologies that were traditionally built prior to Supercloud thinking. >> And you don't have to move everything, essentially, you can do it wherever the data lands, whatever cloud across the globe, you're able to bring it together, you get the cost effectiveness, because the only persistence is the cheapest storage persistent layer you can buy. But the key thing is you cracked the code. >> We had to crack the code, right? That was the key thing. >> That's where the patents are. >> And then once you do that, then everything else gets easier to scale, your architecture, across regions, across clouds. >> Now, it's a general-purpose database, as Bob was saying, but we use that database to solve a particular issue, which is around operational data, right? So, we agree with Bob. >> Interesting. So this brings me to this concept of data, Zhamak Dehghani is one of our speakers, you know, we talk about data fabric, which is originally a NetApp concept, Gartner's kind of co-opted it. But so, the basic concept is, data lives everywhere, whether it's an S3 bucket, or a SQL database, or a data lake, it's just a node on the data mesh. So in your view, how does this fit in with Supercloud? Ed, you've said that you've built, essentially, an enabler for that, for the data mesh, I think you're an enabler for Supercloud-like principles. This is a big, chewy opportunity, and it requires, you know, a team approach.
There's got to be an ecosystem, there's not going to be one Supercloud to rule them all, so where does the ecosystem fit into the discussion, and where do you fit into the ecosystem? >> Right, so we agree completely, there's not one Supercloud in effect, but we use Supercloud principles to build our platform, and then, you know, the ecosystem's going to be built on leveraging what everyone else's secret powers are, right? So our power, our superpower, based upon what we built, is, if you're having any scale, or cost-effective scale issues, with data, machine-generated data, like business observability or security data, we are your force multiplier, we will take that in singularly, just simply put it in your object storage wherever it sits, and we give you uniform access to that using open API access, SQL, or, you know, the Elasticsearch API. So, that's what we do, that's our superpower. So I'll play it into data mesh, that's a perfect fit, we are a node on a data mesh, but I'll also play out how we see the ecosystem kind of developing, and we talked about it just in the last couple days. Short term, our superpower is, we deal with this data that's coming at these environments, people, customers, building out observability or security environments, or vendors that are selling their own Supercloud, I do observability, the Datadogs of the world, dot dot dot, the Splunks of the world, dot dot dot, and security. So what we do is we fit in naturally. What we do is cost-effective scale, just land it anywhere in the world, we deal with ingest, and it's cost effective, an order of magnitude, or two or three orders of magnitude, more cost effective. It allows them, their customers are asking them to do the impossible, "Give me fast monitoring and alerting. I want it snappy, but I want it to keep two years of data, (laughs) and I want it cost effective." It doesn't work.
They're good at the fast monitoring and alerting, we're good at the long-term retention. And yet there's some gray area between those two, but one plus one is actually cheaper, so we would partner. So the first ecosystem plays, who wants to have the ability to, really, all the data's in those same environments, the security and observability players, they can literally, just through API, drag our data into their pane of glass. We can make it seamless for customers. Right now, we make it helpful to customers. If you're on Datadog, we make it a button, easy to go from Datadog to us for logs, save you money. Same thing with Grafana. But you can also look at the ecosystem, those same vendors, it used to be, a year ago, it was, you know, all about how can you grow, like it's growth at all costs, now it's about COGS. So literally we can go into an environment, you supply what your customer wants, but we can help with COGS. And a one-on-one partnership is better than you trying to build it on your own. >> Thomas, you were saying you make the first read fast, so you think about Snowflake. Everybody wants to talk about Snowflake and Databricks. So, Snowflake, great, but you've got to get the data in there. All right, so can you help with that problem? >> I mean, we want simple in, right? And if you have to have structure in, you're not simple. So the idea is that you have a simple-in data lake, a schema-on-read type philosophy, but schema-on-write type performance. And so what I wanted to do, what we have done, is have that simple lake, and stream that data real time, and those access points of search or SQL, to go after whatever business case you need, security, observability, warehouse integration. But the key thing is, how do I make that click, click, click answer, and do it quickly? And so what we want to do is, that first read has to be fast. Why? 'Cause otherwise you're going to do all this siloing, layers, complexity. If your first read's not fast, you're at a disadvantage, particularly in cost.
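The schema-on-read versus schema-on-write trade-off Thomas describes can be sketched in a few lines. This is a toy illustration, not ChaosSearch's actual implementation; all record and field names here are invented. Schema-on-write fixes the columns at ingest, so a drifting schema loses data or forces migrations, while schema-on-read keeps raw records and interprets them at query time:

```python
import json

raw_events = [
    '{"user": "alice", "latency_ms": 120}',
    '{"user": "bob", "latency_ms": 95, "region": "eu"}',  # schema drifted: new field
]

# Schema-on-write: a fixed layout is enforced at ingest; fields outside
# the declared columns are silently dropped (or would force a migration).
FIXED_COLUMNS = ("user", "latency_ms")

def ingest_schema_on_write(lines):
    table = []
    for line in lines:
        rec = json.loads(line)
        table.append(tuple(rec.get(c) for c in FIXED_COLUMNS))  # extra keys lost
    return table

# Schema-on-read: store the raw lines untouched; each query interprets
# them, so a new field is queryable the moment it appears in the stream.
def query_schema_on_read(lines, field):
    return [json.loads(line).get(field) for line in lines]

print(ingest_schema_on_write(raw_events))          # [('alice', 120), ('bob', 95)]
print(query_schema_on_read(raw_events, "region"))  # [None, 'eu']
```

The "schema-on-write performance" half of the claim is about making that query-time interpretation fast via indexing, which the toy above deliberately omits.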
And nobody says, I want less data, but everyone has to, whether they say we're going to shorten the window, we're going to use AI to choose, but in a security moment, when you don't have that answer, you're in trouble. And that's why we are this service, this Supercloud service, if you will, providing access, well-known search, well-known SQL-type access, because if you just have one access point, you're at a disadvantage. >> We actually talked about Snowflake and BigQuery, and a different platform, Databricks. That's kind of where we see the phase two of ecosystem. One is easy, the low-hanging fruit is observability and security firms. But the next one is, what we do, our superpower is dealing with this messy data where the schema is changing like night and day. Pipelines are tough, and it's changing all the time, but you want these things fast, and it's big data around the world. That's the next point, just use us alongside, or inside, one of their platforms, and now we get the best of both worlds. Our superpower is keeping this messy data as a stream, okay, not a batch thing, we allow you to do that. So, that's the second one. And then to be honest, the third one, which plays into Supercloud, it also plays perfectly in the data mesh, is if you really go to the ultimate thing, what we have done is made object storage, S3, GCS, and Blob Storage, we made it a database. Put, get, complex query with big joins. You know, so back to your original thing, and Muglia teed it up perfectly, we've done that. Now imagine if that's an ecosystem, who would want that? If, again, it's uniformly available across all the regions, across all the clouds, and it's right next to where you are building a service, or where a client is trying to consume it, that's where the ecosystem, I think people are going to use Superclouds for their superpowers. We're really good at this, that allows the short term. I think the Snowflakes and the Databricks are the medium term, you know?
And then I think it eventually gets to, hey, listen, if you can make object storage fast, you can just go after it with simple SQL queries, or Elasticsearch. Who would want that? I think that's where people are going to leverage it. It's not going to be one Supercloud, and we leverage the Superclouds. >> Our viewpoint is smart object storage can be programmable, and so we agree with Bob, but we're not saying do it here, do it here. This core, fundamental layer across regions, across clouds, that everyone has? Simple in. Right now, it's hard to get data in for access for analysis. So we said, simply, we'll automate the entire process, give you API access across regions, across clouds. And again, how do you do a distributed join that's fast? How do you do a distributed join that doesn't cost you an arm and a leg? And how do you do it at scale? And that's where we've been focused. >> So prior to cloud, the object store was a niche. >> Yeah. >> S3 obviously changed that. How standard is, essentially, object store across the different cloud platforms? Is that a problem for you? Is that an easy thing to solve? >> Well, let's talk about it. I mean, we've fundamentally, yeah, we've abstracted it, but fundamentally, cloud object storage is put, get, and list. That's why it's so scalable, 'cause it doesn't have all these other components. That complexity is where we have moved up, and we provide direct analytical API access. So because of its simplicity, and cost, and security, and reliability, it can scale naturally. I mean, really, distributed object storage is easy, it's put-get anywhere, now what we've done is we put a layer of intelligence, you know, call it smart object storage, where access is simple. So whether it's multi-region, do a query across, or multicloud, do a query across, or hunting, searching. >> We've had clients doing Amazon and Google, we have some Azure, but we see Amazon and Google more, and it's a consistent service across all of them.
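The "put, get, and list" surface Thomas describes really is the whole contract, and its smallness is what makes it scale. A toy in-memory stand-in (purely illustrative, nothing like a real S3 client) makes the point: there is no query primitive, only prefix listing, so any analytics capability has to be an intelligence layer built on top:

```python
class ToyObjectStore:
    """Minimal object-store contract: the entire API is put, get, and list."""

    def __init__(self):
        self._objects = {}  # key -> bytes

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

    def list(self, prefix: str = "") -> list:
        # Only operation besides put/get: enumerate keys under a prefix.
        return sorted(k for k in self._objects if k.startswith(prefix))


store = ToyObjectStore()
store.put("logs/2023/01/app.log", b"error: disk full")
store.put("logs/2023/02/app.log", b"ok")
store.put("metrics/cpu.json", b"[0.8, 0.9]")

# No joins, no filters, no indexes -- a "smart object storage" layer has
# to supply those itself, typically by keeping its indexes in the store.
print(store.list("logs/"))  # ['logs/2023/01/app.log', 'logs/2023/02/app.log']
```

Real object stores add versioning, multipart upload, and access control on top of this, but the scalability argument in the conversation rests on exactly this three-verb core.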
Just literally put your data in the bucket of choice, or folder of choice, click a couple buttons, literally click that to say "that's hot," and after that, it's hot, you can see it. But we're not moving data, the data gravity issue, that's the other thing: it's already natively flowing to these pools of object storage across different regions and clouds. We don't move it, we index it right there, we're spinning up stateless compute, back to the Supercloud concept. But now that allows us to do all these other things, right? >> And it's no longer just cheap and deep object storage. Right? >> Yeah, we make it the same, like you have an analytic platform regardless of where you're at, you don't have to worry about that. Yeah, we deal with that, we deal with the stateless compute coming up -- >> And make it programmable. Be able to say, "I want this bucket to provide these answers." Right, that's really the hope, the vision. And the complexity to build the entire stack, and then connect them together, we said, the fabric is cloud storage, we just provide the intelligence on top. >> Let's bring it back to the customers, and one of the things we're exploring in Supercloud 2 is, you know, is Supercloud a solution looking for a problem? Is multicloud really a problem? I mean, you hear, you know, a lot of the vendor marketing says, "Oh, it's a disaster, because it's all different across the clouds." And I talked to a lot of customers, even as part of Supercloud 2, they're like, "Well, I solved that problem by just going mono-cloud." Well, but then you're not able to take advantage of a lot of the capabilities and the primitives that, you know, like Google's data, or you like Microsoft's simplicity, their RPA, whatever it is. So what are customers telling you, what are their near-term problems that they're trying to solve today, and how are they thinking about the future? >> Listen, it's a real problem. I think it started, I think this is a mega trend, just like cloud.
Just, cloud, data, and I always add, analytics, are the mega trends. If you're looking at those, if you're not considering using the Supercloud principles, in other words, leveraging what I have, abstracting it out, and getting the most out of that, and then building value on top, I think you're not going to be able to keep up. In fact, there's no way you're going to keep up with this data volume. It's a geometric challenge, and you're trying to do linear things. So clients aren't necessarily asking, hey, for Supercloud, but they're really saying, I need to have a better mechanism to simplify this and get value across it, and how do you abstract that out to do that? And that's where, obviously, our conversations are, they're more amazed at what we're able to do, and what they're able to do with our platform, because if you think of what we've done, the S3, or GCS, or object storage, they can't imagine the ingest, they can't imagine how easy it is, time to glass, one minute, no matter where it lands in the world, querying hundreds of terabytes in seconds. People are amazed, but that's kind of, so they're not asking for that, but they are amazed. And then when you start talking on it, if you're an enterprise person, you're building a big cloud data platform, or doing data or analytics, if you're not trying to leverage the public clouds, and somehow leverage all of them, and then build on top, then I think you're missing it. So they might not be asking for it, but they're doing it. >> And they're looking for a lens, you mentioned all these different services, how do I bring those together quickly? You know, our viewpoint, our service, is, I have all these streams of data, create a lens where they want to go after it via search, go after it via SQL, bring them together instantly, no ETL-ing out, no defining this table, putting it into this database. We said, let's have a service that creates a lens across all these streams, and then make those connections.
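The "lens across streams" idea, one query surface over several raw streams with no up-front ETL, can be sketched as follows. This is a hypothetical illustration, not ChaosSearch's design; the stream names, fields, and adapter functions are all invented. Each adapter maps a stream's native shape onto shared field names at read time, so the raw streams are never rewritten or moved:

```python
# Two raw streams with different native shapes.
crm_rows = [{"account": "acme", "owner": "dana"}]
ad_rows = [{"campaign": "q1", "account_name": "acme", "clicks": 42}]

# Per-stream adapters unify field names at read time (schema-on-read),
# instead of an ETL pipeline rewriting records into one fixed table.
adapters = {
    "crm": lambda r: {"account": r["account"], "source": "crm", **r},
    "ads": lambda r: {"account": r["account_name"], "source": "ads", **r},
}

def lens_query(streams, predicate):
    """Run one predicate across all streams through their adapters."""
    results = []
    for name, rows in streams.items():
        for row in rows:
            unified = adapters[name](row)
            if predicate(unified):
                results.append(unified)
    return results

hits = lens_query({"crm": crm_rows, "ads": ad_rows},
                  lambda r: r["account"] == "acme")
print([r["source"] for r in hits])  # ['crm', 'ads']
```

A production system would push the adapters and predicate down to indexes rather than scanning rows, but the shape of the idea, join heterogeneous streams at query time through a shared lens, is the same.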
I want to take my CRM with my Google AdWords, and maybe my Salesforce, how do I do analysis? Maybe I want to hunt first, maybe I want to join, maybe I want to add another stream to it. And so our viewpoint is, it's so natural to get into these lake platforms and then provide lenses to get that access. >> And they don't want it separate, they don't want something different here, and different there. They want it basically -- >> So this is our industry, right? If something new comes out, remember virtualization came out, "Oh my God, this is so great, it's going to solve all these problems." And all of a sudden it just got to be this big, more complex thing. Same thing with cloud, you know? It started out with S3, and then EC2, and now hundreds and hundreds of different services. So, it's a complex matter for a lot of people, and this creates problems for customers, especially when you've got divisions that are using different clouds, and you're saying that the solution, or a solution for part of the problem, is to really allow the data to stay in place on S3, use that standard, super simple, but then give it what, Ed, you've called a superpower a couple of times, to make it fast, make it inexpensive, and allow you to do that across clouds. >> Yeah, yeah. >> I'll give you guys the last word on that. >> No, listen, I think, we think Supercloud allows you to do a lot more. And for us, data, everyone says more data, more problems, more budget issues, everyone knows more data is better, and we show you how to do it cost effectively at scale. And we couldn't have done it without the design principles of, we're leveraging the Supercloud to get capabilities, and because we use just the object storage, we're able to get these capabilities of ingest, scale, cost effectiveness, and then we built on top of this.
In the end, a database is a data platform that allows you to go after everything distributed, and to get one platform for analytics, no matter where it lands, that's where we think the Supercloud concepts are perfect, that's where our clients are seeing it, and we're kind of excited about it. >> Yeah, a third-generation database, a Supercloud database, however we want to phrase it, and make it simple, but provide the value, and make it instant. >> Guys, thanks so much for coming into the studio today, I really thank you for your support of theCUBE, and theCUBE community, it allows us to provide events like this and free content. I really appreciate it. >> Oh, thank you. >> Thank you. >> All right, this is Dave Vellante for John Furrier in theCUBE community, thanks for being with us today. You're watching Supercloud 2, keep it right there for more thought-provoking discussions around the future of cloud and data. (bright music)
Kevin Miller and Ed Walsh | AWS re:Invent 2022 - Global Startup Program
Hi, everybody, welcome back to re:Invent 2022, this is theCUBE's exclusive coverage. We're here at the satellite set, it's up on the fifth floor of the Venetian Conference Center, and this is part of the Global Startup Program, the AWS Startup Showcase series that we've been running all through last year and into this year with AWS, featuring some of its Global Partners. Ed Walsh is here, the CEO of ChaosSearch, many times Cube alum, and Kevin Miller, also a Cube alum, Vice President and GM of S3 at AWS. Guys, good to see you again. >> Yeah, great to see you, Dave. >> Hi, Kevin. This is, we call this our Super Bowl, so this must be like your, I don't know, World Cup? It's a pretty big event. >> Yeah, it's the World Cup for sure. >> Yeah. So, a lot of S3 talk, you know, I mean, that's what got us all started in 2006, so, absolutely, what's new in S3? >> Yeah, it's been a great show, we've had a number of really interesting launches over the last few weeks, and a few at the show as well. So, you know, we've been really focused on helping customers that are running massive-scale data lakes, including, you know, whether it's structured or unstructured data. We actually announced, just an hour ago, I think it was, a new capability to give customers cross-account access points for sharing data securely with other parts of the organization. And that's something that we'd heard from customers, is as they are growing and have more data sets, and they're looking to get more out of their data, they are increasingly looking to enable multiple teams across their businesses to access those data sets securely, and that's what we provide with cross-account access points. We also launched yesterday our multi-region access point failover capabilities, and so again, this is where customers have data sets and they're using multiple regions for certain critical workloads, they're now able to use that to control the failover between different regions in AWS. And then one other launch I would just highlight is some
improvements we made to Storage Lens, which is really a very novel and unique capability to help customers really understand what storage they have, where, who's accessing it, and when it's being accessed, and we added a bunch of new metrics. Storage Lens has been pretty exciting for a lot of customers, in fact, we looked at the data and saw that customers who have adopted Storage Lens, typically within six months they saved more than six times what they had invested in turning Storage Lens on. And certainly in this environment right now, we have a lot of customers where it's pretty top of mind, they're looking for ways to optimize their costs in the cloud and take some of those savings and be able to reinvest them in new innovation. So, pretty exciting with the Storage Lens launch. >> I think what's interesting about S3 is that, you know, pre-cloud, object store was this kind of a niche, right? And then of course you guys announced, you know, S3 in 2006, as I said, and okay, great, you know, cheap and deep storage, simple get-put. Now the conversation is about how to enable value from data, analytics, and it's just a whole new world. And Ed, you've talked many times, I love the term, yeah, we built ChaosSearch on the shoulders of giants, right? And so underlying that is S3, but the value that you can build on top of that has been key. >> And I don't think we've talked about "on the shoulders of giants," but we've talked about how we literally, you know, we have a big vision, right? So it's hard to kind of solve the challenge of analytics at scale, we really focus on, you know, this big data coming at you environment, and getting analytics. So we talk about "on the shoulders of giants," obviously Isaac Newton's, you know, metaphor of, I learned from everything before, and we layer on top. So really, when you talk about all the things that come from S3, I just smile, because, like, we picked it up naturally, we went all in on S3. And this is where I think you're going, Dave, but everyone is, so let's just
cut to the chase, any of the data platforms you're using, S3 is what they're built on, but we did it a little bit differently. So at first people used it as cold storage, like you said, and then they ETL it up into different platforms for analytics of different sorts. Now people are using it closer, they're doing caching layers, caching it out, and that's where the attributes of scale and reliability come in. What we did is we actually make S3 a database, so literally we have no persistence outside S3, and that kind of comes in, so it's working really well with clients, because the most important thing is we pick up all these attributes of scale and reliability, and it shows up in the clients' environments. And so when you launch all these new scalable things, we just see it, like, our clients constantly comment. Like one of our biggest customers, a fintech in Europe, they go to Black Friday, and again, Black Friday's not one day, and they scale from, what is it, 58 terabytes a day, and they're going up to 187 terabytes a day, and we don't flinch. They say, "How do you do that?" Well, we built our platform on S3, as long as you can stream it to S3, so they're saying, "I can't overrun S3," and it's a natural play. So it's really nice that we pick up those attributes. But same thing, that's why we're able to, you know, help clients. You know, Equifax is a good example, maybe, they're able to consolidate 12 of their divisions on one platform, we couldn't have done that without the scale and the performance of what you can get with S3, but also they saved 90%. I'm able to do that, but that's really because the only persistence is S3 and what you guys are delivering. But then we really focus on the shoulders of giants, we're innovating on top of your platforms and bringing that out. So things like, you know, we have a unique data representation that makes it easy to ingest this data, because it's kind of coming at you, the four V's of big data, we allow you to do that,
make it performant on s3h so now you're doing hot analytics on S3 as if it's just a native database in memory but there's no memory SSC caching and then multi-model once you get it there don't move it leverage it in place so you know elasticsearch access you know Cabana grafana access or SQL access with your tools so we're seeing that constantly but we always talk about on the shoulders of giants but even this week I get comments from our customers like how did you do that and most of it is because we built on top of what you guys provided so it's really working out pretty well and you know we talk a lot about digital transformation of course we had the pleasure sitting down with Adam solipski prior John Furrier flew to Seattle sits down his annual one-on-one with the AWS CEO which is kind of cool yeah it was it's good it's like study for the test you know and uh and so but but one of the interesting things he said was you know we're one of our challenges going forward is is how do we go Beyond digital transformation into business transformation like okay well that's that's interesting I was talking to a customer today AWS customer and obviously others because they're 100 year old company and they're basically their business was they call them like the Uber for for servicing appliances when your Appliance breaks you got to get a person to serve it a service if it's out of warranty you know these guys do that so they got to basically have a you know a network of technicians yeah and they gotta deal with the customers no phone right so they had a completely you know that was a business transformation right they're becoming you know everybody says they're coming a software company but they're building it of course yeah right on the cloud so wonder if you guys could each talk about what's what you're seeing in terms of changing not only in the sort of I.T and the digital transformation but also the business transformation yeah I know I I 100 agree that I think business 
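The pattern Ed describes, streaming log data straight into S3 as the only persistence tier, can be sketched in a few lines. Everything here is a hypothetical illustration rather than ChaosSearch's actual format: the bucket, key layout, and NDJSON batching are assumptions, and the final upload call is left commented out so the sketch stays self-contained.

```python
# Minimal sketch: batch log events into time-partitioned S3 object keys.
# Key layout, prefix names, and batch format are illustrative assumptions.
import json
from datetime import datetime, timezone

def object_key(prefix: str, ts: datetime, seq: int) -> str:
    # Time-partitioned keys let downstream query engines prune by date/hour.
    return f"{prefix}/{ts:%Y/%m/%d/%H}/batch-{seq:06d}.ndjson"

def to_ndjson(events: list[dict]) -> bytes:
    # Newline-delimited JSON: one log event per line, compact separators.
    return "\n".join(json.dumps(e, separators=(",", ":")) for e in events).encode()

events = [{"level": "info", "msg": "checkout ok"},
          {"level": "error", "msg": "card declined"}]
key = object_key("logs/fintech", datetime(2022, 11, 25, 9, tzinfo=timezone.utc), 1)
body = to_ndjson(events)
print(key)  # logs/fintech/2022/11/25/09/batch-000001.ndjson
# boto3.client("s3").put_object(Bucket="my-log-bucket", Key=key, Body=body)
```

As long as producers can keep writing batches like this, S3 absorbs the volume; the "can't overrun S3" point in the conversation is about exactly this write path.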
>> We talk a lot about digital transformation. Of course, we had the pleasure of sitting down with Adam Selipsky; prior to the show, John Furrier flew to Seattle for his annual one-on-one with the AWS CEO, which is kind of cool. >> Yeah, it's good. It's like studying for the test. >> One of the interesting things he said was, one of our challenges going forward is how do we go beyond digital transformation into business transformation. Okay, that's interesting. I was talking to an AWS customer today, and obviously there are others, a 100-year-old company whose business is basically being the Uber for servicing appliances: when your appliance breaks and it's out of warranty, you've got to get a person to service it, and these guys do that. So they've got to run a network of technicians, and they've got to deal with the customers. That was a business transformation. Everybody says they're becoming a software company, but of course they're building it on the cloud. So I wonder if you guys could each talk about what you're seeing, not only in IT and digital transformation, but also business transformation. >> Yeah, I 100% agree that business transformation is probably one of the top themes I'm hearing from customers of all sizes right now, even in this environment. Customers are looking for what they can do to drive top line, improve bottom line, or just improve their customer experience, and really have that effect of helping their customers get more done. And it is very tricky, because the customers doing that successfully are really getting into the lines of business, and that's probably a different skill set, possibly a different culture, different norms and practices and processes. So it's a lot more than, like you said, just the technology involved. But when we liquidate it down into the data, that's where we absolutely see a critical function: lines of business becoming more comfortable, first off, knowing what data sets they have and what data they could access but possibly aren't using today; then starting to tap into those data sources; and then, as that progresses, figuring out how to share and collaborate with data sets across a company, to correlate across those data sets and drive more insights. And as all that's being done, it's important to measure the results and prove the effect it's having, and I've seen plenty of customers able to show a percentage increase in top or bottom line. That pattern is playing out a lot, and a lot of how we think about where we're going with S3 is related to making it easier for customers to do everything I just described: to understand what data they have and make it accessible. And it's great to have such a great ecosystem of partners building on top of that and innovating to help customers connect really directly with the businesses they're running and drive those insights. >> One of the things I loved that Adam said: Amazon is strategically very, very patient, but tactically really impatient. And the customers out there are asking, how are you going to help me increase revenue, how are you going to help me cut costs? We were talking off camera about how software can actually help do that. It's deflationary. >> I love that quote: software's deflationary. As costs come up, how do you drive them down and also free up the team? You nailed it. Everyone wants to save money, but they're not putting off these projects; the digital and business transformations are actually moving forward, and they're getting a little bit bigger. Everyone's looking for creative ways to look at their architecture as it becomes larger and larger. We talked about a couple of those examples, but take observability: they want to give the same tool set and the same data to all the developers, all their SREs, and the security team, and to do that they need an architecture that can scale and save money simultaneously. So we constantly see people pairing us up with some of the larger platforms: keep your Datadog, keep your Splunk, use us to reduce the cost, because the one plus one is actually cheaper than what you have. They use it either to save money, and we're saving 50 to 80% in hard dollars, but more importantly to free up the team from the toil. Then they turn around, make it budget neutral, and get the same tools to more people across the org, because they're sometimes constrained in giving access to everyone. >> Explain that a little bit more. Let's say I've got Splunk or Datadog and I'm sifting through logs. How exactly do you help? >> It's pretty simple; I'll use the Datadog example. Let's say you're using Datadog for observability, so it's your developers and SREs managing environments. All these platforms are really good at being a monitoring and alerting type of tool. What they're not necessarily great at is keeping the data for longer periods, the log data, the bigger data, and that's where we're strong. With Datadog, say you're keeping 30 days of logs, which is not enough: if you're running your environment and chasing a performance issue, you want to look at last quarter, or last month, or maybe last Black Friday. So 30 days is not enough, but they'll charge you $2.80, two dollars and eighty cents, a gigabyte, and don't focus on just the $2.80. If you just turn the knob and keep seven days there, but keep two years of data on us, which is on S3, it goes down to $0.22, plus our list price of $0.80, for $1.02 compared to $2.80. So here's the thing: they're able to just turn a knob and get more data. We do an integration so you can go right from Datadog or Grafana directly into our platform; the user doesn't see it, but they save money. And a lot of times they don't just save the money; they use it to fund getting Datadog to a lot more people. >> Makes sense. >> So it's creativity: they're looking at their tools. We see the same thing with Grafana. The whole Grafana play is, hey, you can't put it all in one place: Prometheus for metrics or traces, and we fit well with logs. They're using that to bring down their costs, because a lot of this data really bogs down these applications. The alerting and monitoring tools are good at small data; they're not good at the big data, which is what we're really good at. And the one plus one is actually less than you paid for the one, so it works pretty well.
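The retention arithmetic Ed walks through is easy to sanity-check. A quick back-of-the-envelope script, using the per-gigabyte figures quoted in the conversation (which may not reflect anyone's current list pricing):

```python
# Rough cost comparison from the conversation: long log retention in the
# monitoring tool vs. short retention there plus a long tail on S3.
# Prices are the ones quoted on air, used here purely as an illustration.
datadog_per_gb = 2.80   # quoted per-GB cost of long retention in the tool
s3_per_gb      = 0.22   # quoted S3 storage cost per GB
chaos_per_gb   = 0.80   # quoted analytics-layer list price per GB

combined = s3_per_gb + chaos_per_gb
print(f"combined: ${combined:.2f}/GB vs ${datadog_per_gb:.2f}/GB")
print(f"savings: {1 - combined / datadog_per_gb:.0%}")
```

That is the "$1.02 compared to $2.80" in the exchange: roughly a 64% reduction per gigabyte while keeping two years of data instead of thirty days.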
>> Things are really unpredictable right now in the economy. During the pandemic we were in lockdown, and then the stock market went crazy. We kept thinking it was going to end, and then it looked like it was going to end, and then... But last year re:Invent landed just in that sweet spot before Omicron, so we tucked it in, which was awesome. It was a great event, and we really only missed one physical re:Invent, which was very rare. So that's cool. But I've called it the slingshot economy: it feels like you're driving down the highway and you've got to hit the brakes, and then all of a sudden you're going, okay, we're through it, oh no, you're going to hit the brakes again. So it's very, very hard to predict. I was listening to Jassy this morning; he was talking about how consumers are still spending, but they're shopping for more features. They might be buying a TV that's less expensive, more value for the money. So hopefully consumer spending will get us out of this, but you don't really know, and we don't seem to have the algorithms; we've never been through something like this before. So what are you guys seeing in terms of customer behavior, given that uncertainty? >> One thing I would highlight, particularly going back to what we were just talking about as far as business and digital transformation: customers are still appreciating the fact that where yesterday you may have had to put out capital and commit to a large upfront expenditure, today there's real value in being able to experiment, scale up, and, most importantly, scale down dynamically, based on whether the experiment is working out and delivering real value, on a time scale of a day or a week or a few months. That is so important right now, because again, customers are looking for ways to innovate and drive top-line growth, but they can't commit to a multi-year set of costs to do it. And I think plenty of customers are finding that even a few months of experimentation gives them really valuable insight into whether something is going to be successful or not. With S3 and storage, from day one we've been elastic: pay for what you use, and if you're not using the storage, you don't get charged for it. Particularly right now, having the applications and the rest of the ecosystem around the storage and the data able to scale up and scale down is ever more important. >> And when people see that, typically they're looking to do more with it. You usually find these little department projects, but then they see a way to actually move faster and save money, so it's a mix of the two. They're looking to expand, which can be a nightmare for sales cycles because they take longer, but people are saying, why don't you leverage this and go across divisions? So we do see people trying to leverage it. I don't think digital transformation is slowing down, but to be honest, there are a lot more approvals at this point for everything. >> Adam had another great quote in his keynote: if you want to save money, the cloud's the place to do it. >> Absolutely. >> And I read an article recently saying this is the first time AWS has ever seen a downturn, because the cloud was too early back then. I'm like, you weren't paying attention in 2008, because that was the first major inflection point for cloud adoption, where CFOs said, okay, stop the CapEx, we're going to OpEx, and you saw the cloud take off. And then 2010 started this amazing cycle that we really haven't seen anything like, where they were doubling down on investments, real hardcore investment; it wasn't like 1998-99, when it was all just going out the door for no clear reason. So that foundation is now in place, and I think it makes a lot of sense, and it could be here for a while, with people saying, hey, I want to optimize, and I'm going to do that on the cloud. >> Yeah, I certainly agree with Adam's quote. I think that's been in AWS's DNA from day one: the ability to scale costs with actual consumption and pay for what you use. And I think moments like now can really motivate change in an organization in a way that might not have been as palatable when it didn't feel as necessary. >> All right, we've got to go. I'll give you the last word. >> I think it's been a great event. I love all your announcements; it's been a great show. In fact, how many people are here at re:Invent? North of 50,000? >> Yeah, I feel like it's as big, if not bigger, than 2019. People have said 2019 was a record, but I don't know, it feels as big, if not bigger. There's great energy. >> It's quite amazing, and we're thrilled to be part of it. Guys, thanks for coming on theCUBE again; really appreciate it, face to face. >> All right. Thank you for watching. This is Dave Vellante for theCUBE, your leader in enterprise and emerging tech coverage. We'll be right back.
Ed Macosky, Boomi | AWS re:Invent 2022
(upbeat music) >> Hello, CUBE friends and welcome back to Vegas. Lisa Martin here with John Furrier. This is our third day of coverage of AWS re:Invent. There are somewhere between 50,000 and 60, 70,000 people here. The excitement is palpable. The energy in the room has been on fire since Monday night. John, we love talking, we love re:Invent. We love talking about AWS and its incredible ecosystem of partners, and we're going to be doing that next. >> Yeah, I mean 10 years of theCUBE, we've been here since 2013. Watching it grow as the cloud computing invention. And then the ecosystem has just been growing, growing, growing at the same time as innovation. And that's this next segment with the company that we both have covered deeply. Boomi is going to be a great segment. Looking forward to it. >> We have, we have. And speaking of innovation and Boomi, we have a four-time CUBE guest back with us. Ed Macosky joins us, Chief Innovation Officer at Boomi. And it's great to see you in person. >> Yeah, great to be here. Thanks for having me. >> What's going on at Boomi? I mean, I know up and to the right continues, we'll go this way. What's going on? >> Yeah, we continue to grow. We're really focused with AWS on the cloud and app modernization. Most of our projects and many of our customers are in this modernization journey from an enterprise perspective, moving from on-premises, trying to implement multicloud, hybrid cloud, that sort of thing. But what we're really seeing is this modernization choke point that a lot of our customers are facing in that journey where they just can't get over the hump. And a lot of their, they come to us with failing projects where they're saying, "Hey, I've got maybe this anchor of a legacy data source or applications that I need to bring in temporarily or I need to keep filling that."
So we help with integrating these workflows, integrating these applications, help with that lift and shift, save our customers' projects from failing, and quickly bring them to the cloud. >> You know, Ed, we've been talking with you guys for many, many years with theCUBE, and look at the transition, how the market's evolved. If you look at the innovation going on now, I won't say it's an innovator's dilemma, because there's a lot of innovation happening. It's becoming an integrator's dilemma. And I was talking with some of your staff. Booth traffic's up, great leads coming in. You mentioned it on the keynote in a slide. I mean, the world spun in the direction of Boomi with all your capabilities around integration, understanding how data works. All the themes here at re:Invent are in that top conversation track we've been mentioning, and Boomi, you guys have been building around them. Explain why that's happening. Am I right? Am I getting that right, or can you share your thoughts? >> Yeah, absolutely. We're in a great spot. I mean, given the way the economy's going today, people are, again, trying to do more with less. But there is this modernization journey that I talked about, and there's an explosion of SaaS applications, cloud technologies, data sources, et cetera. And not only is it about integrating data sources and automating workflows, but implementing things at scale, making sure you have high data quality, high data governance, security, et cetera. And Boomi sits right in the middle of providing solutions for all of that to make a business more efficient. Not only that, but you can implement things very, very quickly 'cause we're a low-code platform. It's not just about this hardcore technology that's really hard to implement. You can do it really quickly with our platform.
>> Speaking of transformation, one of the things John does every year ahead of re:Invent is he gets to sit down with the CEO of AWS, and it's a great interview; if you haven't seen it, check it out on siliconangle.com. Really kind of a preview of what we're going to expect at the show. And one of the things Adam said to you was CIOs, CEOs are coming to me not wanting to talk about technology. They want to talk about transformation, business transformation. It's not so much about digital transformation anymore, it's about transforming businesses. Are you hearing customers come to you with the same ask: help us transform our business so we can be competitive, so we can meet customer demand? >> Oh, absolutely. It's no longer about tools and technology and providing people with paint to paint on a canvas. We're offering solutions on the AWS marketplace. We have five solutions that we launched this year to get people up and running very quickly based on business problems, from disbursement to lead-to-cash with Salesforce and NetSuite to business-to-business integrations and EDI dashboarding and that sort of thing. We also have our own marketplace that provides these solutions and gives our customers the ability to visualize what they can do with our platform to actually solve business problems. Again, not just about tooling and technology and how to connect things. >> How's the marketplace relationship going for you? Are you guys seeing success there? >> Yeah, we're seeing a lot of success. I mean, in fact, we're going to be doubling down in the next year. We're going to be, we haven't announced it yet, but we're going to be announcing some new solutions. >> John: I guess we're announcing it now. >> No, I'm not going to get to specifics. But we're going to be putting more and more solutions on the marketplace, and we're going to be offering more ways to consume and purchase our platform on the marketplace in the next couple of months.
>> Ed, talk about what's new with Boomi real quick. I know you guys have new connectors in Early Access. What's been announced? What have you guys announced? What's coming? What are the new things folks should pay attention to from a product standpoint? >> Yeah, so you mentioned the connectors. We have 32 new connectors. And by the way, in our ecosystem, our customers have connected 199,970 unique things. Amazon SQS is one of those in that number. So that's the kind of scale. >> What's the number again? >> 199,970. At least that's the last I checked earlier. >> That's a good recall right there. Exact number. >> It's an exciting number 'cause we're scaling very, very rapidly. But the other thing that's exciting is we announced our event streaming service that we want to bring to our cloud. We've relied on partners in the past to do that for us, but it's been a very critical need that our customers have asked for. So we're integrating that into our platform. We're also going to be focusing more and more on our data management capabilities, because, as I mentioned a little earlier, when you're connecting things, if bad data's going in and bad data's going out, bad data's going everywhere. So we have the tools and capability to govern data, manage data, and deliver high-quality solutions. So we're going to invest more and more in that, 'cause that's what our customers are asking us for.
But not only that, the integrator can use data catalog to actually catalog the data and understand what needs to be integrated and how they can make their business more efficient by automating the movement of data and sharing the data across the organization. >> On the innovation side, I want to get back to that again, because I think this integration innovation angle is something that we talked about with Adam Selipsky, and our stories hitting SiliconANGLE right now are all about the partner ecosystems. We've been highlighting some of the bigger players emerging. You guys are out there. You got Databricks, Snowflake, MongoDB, where they're partnering with Amazon, but they're not just an ISV, they're platforms. You guys have your own ISVs. You have your own customers. You were doing low-code before no-code was popular. So where are you guys at on that wave? You got a good customer base, share some names. What's going on with the customers? Are they becoming more developer oriented? 'Cause let's face it, your customers that are working on Boomi, they're developers. >> Yes. >> And so they got tools. You're enablers, so you're a platform on Amazon. >> We are a platform on Amazon.
They're also doing B2B connectivity to bring information from their partners into their ecosystem within their platform. So we handle all of the above. So now we are an independent company and it's nice to be a central part of all of these different ecosystems. And where I find myself in my role a lot of times is literally connecting different platforms and applications and SI partners to solve these problems 'cause nobody can really see it themselves. I had a conversation earlier today where someone would say, "Hey, you're going to talk with that SI partner later today. They're a big SI partner of ours. Why don't they develop solutions that we can go to market together to solve problems for our customers?" >> Lisa, this is something that we've been talking about a lot where it's an and conversation. My big takeaway from Adam's one-on-one and re:Invent so far is they're not mutually exclusive. There's an and. You can be an ISV and this platforms in the ecosystem because you're enabling software developers, ISV as they call it. I think that term is old school, but still independent software vendors. That's not a platform. They can coexist and they are, but they're becoming on your platform. So you're one of the most advanced Amazon partners. So as cloud grows and we mature and what, 13 years old Amazon is now, so okay, you're becoming bigger as a platform. That's the next wave. What happens in that next five years from there? What happens next? Because if your platform continues to grow, what happens next? >> So for us, where we're going is connecting platform providers, cloud providers are getting bigger. A lot of these cloud providers are embracing partnerships with other vendors and things and we're helping connect those. So when I talk about business-to-business and sharing data between those, there are still some folks that have legacy applications that need to connect and bring things in and they're just going to ride them until they go away. 
That is a requirement, but at some point that's all going to fall by the wayside. But where the industry is really going for us is it is about automation and quickly automating things and again, doing more with less. I think Tim Heger had a quote where he said, "I don't need to use Michelangelo to come paint my living room." And that's the way he thinks about low-code. It's not about, you don't want to just sit there and code things and make an art out of coding. You want to get things done quickly and you want to keep automating your business to keep pushing things forward. So a lot of the things we're looking at is not just about connecting and automating data transformation and that's all valuable, but how do I get someone more productive? How do I automate the business in an intelligent way more and more to push them forward. >> Out of the box solutions versus platforms. You can do both. You can build a platform. >> Yes. >> Or you can just buy out of the box. >> Well, that's what's great about us too is because we don't just provide solutions. We provide solutions many times as a starting point or the way I look at it, it's art of the possible a lot of what we give 'cause then our customers can take our low-code tooling and say, wow, I like this solution, but I can really take it to the next step, almost in like an open source model and just quickly iterate and drive innovation that way. And I just love seeing our, a lot of it for me is just our ecosystem and our partners driving the innovation for us. >> And driving that speed for customers. When I had the chance to interview Tim Heger myself last month and he was talking about Boomi integration and Flow are enabling him to do integration 10x faster than before and HealthBridge built their business on Boomi. They didn't replace the legacy solution, but he had experience with some of your big competitors and chose Boomi and said, "It is 10x faster." 
So he's able to deliver to those and it's a great business helping people pay for health issues if they don't have the funds to do that. So much faster than they could have if had they chosen a different technology. >> Yeah, and also what I like about the HealthBridge story is you said they started with Boomi's technology. So I like to think we scale up and scale down. So many times when I talk to prospects or new customers, they think that our technology is too advanced or too expensive or too big for them to go after and they don't think they can solve these problems like we do with enterprises. We can start with you as a startup going with SaaS applications, trying to be innovative in your organization to automate things and scale. As you scale the company will be right there along with you to scale into very very advanced solutions all in a low-code way. >> And also helping folks to scale up and down during what we're facing these macroeconomic headwinds. That's really important for businesses to be able to do for cost optimization. But at the end of the day, that company has to be a data company. They have to be able to make sure that the data matches. It's there. They know what they have. They can actually facilitate communications, conversations and deliver the end user customer is demanding whether it's a retailer, a healthcare organization, a bank, you name it. >> Exactly. And another thing with today's economy, a lot of people forget with integration or automation tooling, once you get things implemented, in many traditional forms you got to manage that long term. You have to have a team to do that. Our technology runs autonomously. I hear from our customers over and over again. I just said it, sometimes I'll walk away for a month and come back and wow, Boomi's still running. I didn't realize it. 'Cause we have technology that continues to patch itself, heal itself, continue running autonomously. 
That also saves in a time like now where you don't have to worry about sending teams out to patch and upgrade things on a continuous basis. We take care of that for our customers. >> I think you guys can see a lot of growth with this recession and looming. You guys fit well in the marketplace. As people figure out how to right size, you guys fit right nicely into that equation. I got to ask you, what's ahead for 2023 for Boomi? What can we expect to see? >> Yeah, what's ahead? I briefly mentioned it earlier, but the new service we're really excited about that 'cause it's going to help our customers to scale even further and bring more workloads into AWS and more workloads that we can solve challenges for our customers. We've also got additional solutions. We're looking at launching on AWS marketplace. We're going to continue working with SIs and GSIs and our ISV ecosystem to identify more and more enterprise great solutions and verticals and industry-based solutions that we can take out of the box and give to our customers. So we're just going to keep growing. >> What are some of those key verticals? Just curious. >> So we're focusing on manufacturing, the financial services industry. I don't know, maybe it's vertical, but higher ed's another big one for us. So we have over a hundred universities that use our technology in order to automate, grant submissions, student management of different aspects, that sort of thing. Boise State is one of them that's modernized on AWS with Boomi technology. So we're going to continue rolling in that front as well. >> Okay. Is it time for the challenge? >> It's time for the challenge. Are you ready for the challenge, Ed? We're springing this on you, but we know you so we know you can nail this. >> Oh no. >> If you were going to create your own sizzle reel and we're creating sizzle reel that's going to go on Instagram reels and you're going to be a star of it, what would that sizzle reel say? 
Like if you had a billboard or a bumper sticker, what's that about Boomi? Boom, powerful story? >> Well, we joked about this earlier, but I'd have to say, Go Boomi it. This isn't real. >> Go Boomi it, why? >> Go Boomi it because it's such a succinct way of saying it. That terminology came to us from our customers, because Boomi becomes a verb within an organization. They'll typically start with us and they'll solve an integration challenge or something like that. And then we become viral in a good way with an organization where our customers, Lisa, you mentioned it earlier before the show, you love talking to our customers 'cause they're so excited and happy and love our technology. They just keep finding more ways to solve challenges and push their business forward. And when a problem comes up, an employee will typically say to another, go Boomi it. >> When you're a verb, that's a good thing. >> Ed: Yes it is. >> Splunk, go Splunk it. That was a verb for log files. Kleenex, tissue. >> Go Boomi it. Ed, thank you so much for coming back on your fourth time. So next time we see you, it'll be the fifth time. We'll get you that five-timers club jacket like they have on SNL next time. >> Perfect, can't wait. >> We appreciate your insight, your time. It's great to hear what's going on at Boomi. We appreciate it. >> Ed: Cool. Thank you. >> For Ed Macosky and John Furrier, I'm Lisa Martin. You're watching theCUBE, the leader in live enterprise and emerging tech coverage. (upbeat music)
Ed Casmer, Cloud Storage Security & James Johnson, iPipeline | AWS Startup Showcase S2 E4
(upbeat music) >> Hello, everyone. Welcome back to theCUBE's presentation of the AWS Startup Showcase. This is season two, episode four of the ongoing series covering the exciting startups from the AWS ecosystem. And talking about cybersecurity. I'm your host, John Furrier. Excited to have two great guests. Ed Casmer, founder and CEO of Cloud Storage Security, back CUBE alumni, and also James Johnson, AVP of Research and Development at iPipeline. Here to talk about cloud storage security antivirus on S3. James, thanks for joining us today. >> Thank you, John. >> Thank you. >> So the topic here is cloud security, storage security. Ed, we had a great CUBE conversation previously, earlier in the month. Companies are modernizing their apps and migrating the cloud. That's fact. Everyone kind of knows that. >> Yeah. >> Been there, done that. Clouds have the infrastructure, they got the OS, they got protection, but the end of the day, the companies are responsible and they're on the hook for their own security of their data. And this is becoming more permanent now that you have hybrid cloud, cloud operations, cloud native applications. This is the core focus right now in the next five years. This is what everyone's talking about. Architecture, how to build apps, workflows, team formation. Everything's being refactored around this. Can you talk about how organizations are adjusting and how they view their data security in light of how applications are being built and specifically around the goodness of say S3? >> Yep, absolutely. Thank you for that. So we've seen S3 grow 20,000% over the last 10 years. And that's primarily because companies like James with iPipeline are delivering solutions that are leveraging this object storage more and above the others. When we look at protection, we typically fall into a couple of categories. The first one is, we have folks that are worried about the access of the data. How are they dealing with it? 
And so they're looking at configuration aspects. But the big thing that we're seeing is that customers are blind to the fact that the data itself must also be protected and looked at. And so we find these customers who do come to the realization that it needs to happen, finding out, asking themselves, how do I solve for this? And so they need lightweight, cloud native built solutions to deliver that. >> So what's the blind spot? You mentioned there's a blind spot. They're kind of blind to that. What specifically are you seeing? >> Well so, when we get into these conversations, the first thing that we see with customers is I need to predict how I access it. This is everyone's conversation. Who are my users? How do they get into my data? How am I controlling that policy? Am I making sure there's no east-west traffic there, once I've blocked the north-south? But what we really find is that the data is the key packet of this whole process. It's what gets consumed by the downstream users. Whether that's an employee, a customer, a partner. And so it's really, the blind spot is the fact that we find most customers not looking at whether that data is safe to use. >> It's interesting. When you talk about that, I think about all the recent breaches and incidents. "Incidents," they call them. >> Yeah. >> They've really been around user configurations. S3 buckets not configured properly. >> Absolutely. >> And this brings up what you're saying, is that the users and the customers have to be responsible for the configurations, the encryption, the malware aspect of it. Don't just hope that AWS has the magic to do it. Is that kind of what you're getting at here? Is that the similar, am I correlating that properly? >> Absolutely. That's perfect. And we've seen it. We've had our own customers, luckily iPipeline's not one of them, that have actually infected their end users because they weren't looking at the data. >> And that's a huge issue. 
So James, let's get in, you're a customer partner. Talk about your relationship with these guys and what's it all about? >> Yeah, well, iPipeline is building a digital ecosystem for life insurance and wealth management industries to enable the sale of life insurance to under-insured and uninsured Americans, to make sure that they have the coverage that they need, should something happen. And our solutions have been around for many years in a traditional data center type of implementation. And we're in process now of migrating that to the cloud, moving it to AWS, in order to give our customers a better experience, better resiliency, better reliability. And with that, we have to change the way that we approach file storage and how we approach scanning for vulnerabilities in those files that might come to us via feeds from third parties or that are uploaded directly by end users that come to us from a source that we don't control. So it was really necessary for us to identify a solution that both solved for these vulnerability scanning needs, as well as enabling us to leverage the capabilities that we get with other aspects of our move to the cloud and being able to automatically scale based on load, based on need, to ensure that we get the performance that our customers are looking for.
And we wanted a solution that was cloud native and would allow us to scan more dynamically without having to manage the underlying details of how many engines do I need to have running for a particular load at a particular time, and being able to scan dynamically. And also being able to move that out of the application layer, being able to scan those files behind the scenes. So scanning when the file's been saved in S3 allows us to scan and release the file once it's been deemed safe, rather than blocking the user while they wait for that scan to take place. >> Awesome. Well, thanks for sharing that. I got to ask Ed, and James, the same question next. It's, how does all this factor into audits and self-compliance? Because when you start getting into this level of sophistication, I'm sure it probably impacts reporting workflows. Can you guys share the impact on that piece of it? The reporting?
There are a number of audits that we go through where this is a question that comes up both from a SOC perspective, as well as our individual customers who reach out and they want to know where we stand from a security perspective and a compliance perspective. And very often this is a question of how are you ensuring that data that is uploaded into the application is safe and doesn't contain any vulnerabilities. >> James, if you don't mind me asking, I have to kind of inquire because I can imagine that you have users on your system but also you have third parties, relationships. How does that impact this? What's the connection? >> That's a good question. We receive data from a number of different locations from our customers directly, from their users and from partners that we have as well as partners that our customers have. And as we ingest that data, from an implementation perspective, the way we've approached this, there's a minimal impact there in each one of those integrations. Because everything comes into the S3 bucket and is scanned before it is available for consumption or distribution. But this allows us to ensure that no matter where that data is coming from, that we are able to verify that it is safe before we allow it into our systems or allow it to continue on to another third party whether that's our customer or somebody else. >> Yeah, I don't mean to get in the weeds there, but it's one of those things where, this is what people are experiencing right now. Ed, we talked about this before. It's not just siloed data anymore. It's interactive data. It's third party data from multiple sources. This is a scanning requirement. >> Agreed. I find it interesting too. I think James brings it up. We've had it in previous conversations that not all data's created equal. Data that comes from third parties that you're not in control of, you feel like you have to scan. And other data you may generate internally. 
You don't have to be as compelled to scan that although it's a good idea, but you can, as long as you can sift through and determine which data is which and process it appropriately, then you're in good shape. >> Well, James, you're living the cloud security, storage security situation here. I got to ask you, if you zoom out and not get in the weeds and look at the board room or the management conversation. Tell me about how you guys view the data security problem. I mean, obviously it's important. So can you give us a level of how important it is for iPipeline and with your customers and where does this S3 piece fit in? I mean, when you guys look at this holistically, for data security, what's the view, what's the conversation like? >> Yeah. Well, data security is critical. As Ed mentioned a few minutes ago, you don't want to be the company that's in the news because some data was exposed. That's something that nobody has the appetite for. And so data security is first and foremost in everything that we do. And that's really where this solution came into play, in making sure that we had not only a solution but we had a solution that was the right fit for the technology that we're using. There are a number of options. Some of them have been around for a while. But this was focused on S3, which we were using to store these documents that are coming from many different sources. And we have to take all the precautions we can to ensure that something that is malicious doesn't make its way into our ecosystem or into our customers' ecosystems through us. >> What's the primary use case that you see the value here with these guys? What's the aha moment that you had? >> With the cloud storage security specifically, it goes beyond the security aspects of being able to scan for vulnerable files, which is, there are a number of options and they're one of those. 
But for us, the key was being able to scale dynamically without committing to a particular load, whether that's undercommitting or overcommitting. As we move our applications from a traditional data center type of installation to AWS, we anticipated a lot of growth over time, and being able to scale up very dynamically, literally moving a slider within the admin console, was key to us to be able to meet our customers' needs without overspending, by building up something that was dramatically larger than we needed in our initial rollout. >> Not a bad testimonial there, Ed. >> I mean, I agree. >> This really highlights the applications using S3 more in the file workflow for the application in real time. This is where you start to see the rise of ransomware and other issues. And scale matters. Can you share your thoughts and reaction to what James just said? >> Yeah. I think it's critical. As the popularity of S3 has increased, so has the fact that it's an attack vector now. And people are going after it, whether that's to plant malicious files or to replace code segments that are downloaded and used in other applications. It is a very critical piece. And when you look at scale and you look at the cloud native capability, there are lots of ways to solve it. You can dig a hole with a spoon, but a shovel works a lot better. And in this case, we take a simple example like James. They did a weekend migration, so they've got new data coming in all the time, but we did a massive migration, 5,000 files a minute being ingested. And like he said, with a couple of clicks, scale up, process that over a sustained period of time and then scale back down. So I've said it before, I said it on the previous one. We don't want to get in the way of someone's workflow. We want to help them secure their data and do it in a timely fashion so that they can continue with their proper processing and their normal customer responses. >> Frictionless has to be key.
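As an aside, the "scan on save, release when deemed safe" flow James and Ed describe — an object lands in S3, gets scanned behind the scenes, and downstream users only consume it once it's cleared — can be sketched roughly as follows. This is a hypothetical illustration of the general pattern only, not Cloud Storage Security's actual product API: the tag key, the verdict values, and the stubbed scanner are all assumptions made for the sketch.

```python
# Hypothetical sketch of the "scan on save, release when safe" pattern.
# The tag key, verdicts, and stubbed scanner are illustrative assumptions,
# not the vendor's real API.

EICAR_SIGNATURE = b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE"

def scan_object(data: bytes) -> str:
    """Stand-in for a real AV engine: flag the EICAR test string."""
    return "infected" if EICAR_SIGNATURE in data else "clean"

def handle_s3_put(event: dict, get_object, tag_object) -> list:
    """Process an S3 event notification: scan each new object and tag it.

    `get_object(bucket, key) -> bytes` and `tag_object(bucket, key, tags)`
    would wrap the storage SDK calls in a real deployment; they are injected
    here so the logic stays testable without any cloud dependency.
    """
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        verdict = scan_object(get_object(bucket, key))
        # Downstream consumers would only release objects tagged "clean",
        # so the user is never blocked waiting on the scan itself.
        tag_object(bucket, key, {"scan-result": verdict})
        results.append((key, verdict))
    return results
```

In a real deployment this handler would be wired to an `ObjectCreated` notification, and a bucket policy would gate reads on the tag; the slider-style scaling mentioned above would correspond to how many such scan workers run in parallel.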
I know you're in the marketplace with your antivirus for S3 on the AWS. People can just download it. So people are interested, go check it out. James, I got to ask you and maybe Ed can chime in over the top, but it seems so obvious. Data. Secure the data. Why is it so hard? Why isn't this so obvious? What's the problem? Why is it so difficult? Why are there so many different solutions? It just seems so obvious. You know, you got ransomware, you got injection of different malicious payloads. There's a ton of things going on around the data. Why is, this so obvious? Why isn't it solved? >> Well, I think there have been solutions available for a long time. But the challenge, the difficulty that I see, is that it is a moving target. As bad actors learn new vulnerabilities, new approaches and as new technology becomes available, that opens additional attack vectors. >> Yeah. >> That's the challenge, is keeping up on the changing world including keeping up on the new ways that people are finding to exploit vulnerabilities. >> And you got sensitive data at iPipeline. You do a lot of insurance, wealth management, all kinds of sensitive data, super valuable. This brings me up, reminds me of the Sony hack Ed, years ago. Companies are responsible for their own militia. I mean, cybersecurity is no government help for sure. I mean, companies are on the hook. As we mentioned earlier at the top of this interview, this really is highlighted that IT departments have to evolve to large scale cloud, cloud native applications, automation, AI machine learning all built in, to keep up at the scale. But also from a defense standpoint. I mean, James you're out there, you're in the front lines, you got to defend yourself basically, and you got to engineer it. >> A hundred percent. And just to go on top of what James was saying is, I think there, one of the big factors and we've seen this. There's skill shortages out there. There's also just a pure lack of understanding. 
When we look at Amazon S3 or object storage in general, it's not an executable file system. So people sort of assume that, oh, I'm safe. It's not executable. So I'm not worried about it traversing my storage network. And they also probably have the assumption that the cloud providers, Amazon, are taking care of this for them. And so it's this aha moment. Like you mentioned earlier, that you start to think, oh it's not about where the data is sitting per se. It's about scanning it as close to the storage spot. So when it gets to the end user, it's safe and secure. And you can't rely on the end user's environment and system to be in place and up to date to handle it. So it's really that lack of understanding that drives some of these folks into this. But for a while, we'll walk into customers and they'll say the same thing you said, John. Why haven't I been doing this for so long? And it's because they didn't understand that it was such a risk. That's where that blind spot comes in. >> James, it's just a final note on your environment. What are your goals for the next year? How's things going over there on your side? How do you look at the security posture? What's on your agenda for the next year? How are you guys looking at the next level? >> Yeah. Well, our goal as it relates to this is to continue to move our existing applications over to AWS to run natively there. Which includes moving more data into S3 and leveraging the cloud storage security solution to scan that and ensure that there are no vulnerabilities that are getting in. >> And the ingestion, are there like bottlenecks, log jams? How do you guys see that scaling up? I mean, what's the strategy there? Just add more S3? >> Well, S3 itself scales automatically for us and the cloud storage solution gives us levers to pull to do that. As Ed mentioned, we ingested a large amount of data during our initial migration which created a bottleneck for us.
As we were preparing to move our users over, we were able to make an adjustment in the admin console and spin up additional processes entirely behind the scenes and broke the log jam. So I don't see any immediate concerns there, being able to handle the load. >> The term cloud native and hyperscale native, cloud native, one cloud's hybrid. All these things are native. We have antivirus native coming soon. And I mean, this is what we're basically doing is making it native into the workflows. Security native. And soon there's going to be security clouds out there. We're starting to see the rise of these new solutions. Can you guys share any thoughts or vision around how you see the industry evolving and what's needed? What's working and what's needed? Ed, we'll start with you. What's your vision? >> So I think the notion of being able to look at and view the management plane and control that has been where we're at right now. That's what everyone seems to be doing and going after. I think there are niche plays coming up. Storage is one of them, but we're going to get to a point where storage is just a blanket term for where you put your stuff. I mean, it kind of already is that. But in AWS, it's going to be less about S3. Less about work docs, less about EVS. It's going to be just storage and you're going to need a solution that can span all of that to go along with where we're already at the management plane. We're going to keep growing the data plane. >> James, what's your vision for what's needed in the industry? What's the gaps, what's working, and where do you see things going? >> Yeah, well, I think on the security front specifically, Ed's probably a little bit better equipped to speak to them than I am since that his primary focus. But I see the need for just expanded solutions that are cloud native that fit and fit nicely with the Amazon technologies. Whether that comes from Amazon or other partners like Cloud Storage Security to fill those gaps. 
We are focused on the financial services and insurance industries. That's our niche. And we look to other partners like Ed to help be the experts in these areas. And so that's really what I'm looking for, is the experts that we can partner with that are going to help fill those gaps as they come up and as they change in the future. >> Well, James, I really appreciate you coming on, sharing your story and I'll give you the final word. Put a quick, spend a minute to talk about the company. I know Cloud Storage Security is an AWS partner with the security software competency and is one of I think 16 partners listed in the competency and the data category. So take a minute to explain what's going on with the company, where people can find more information, how they buy and consume the products. >> Okay. >> Put the plug in. >> Yeah, thank you for that. So we are a fast growing startup. We've been in business for two and a half years now. We have achieved our security competency as John indicated. We're one of 16 data protection security competent ISV vendors globally. And our goal is to expand and grow a platform that spans all storage types that you're going to be dealing with and answer basic questions. What do I have and where is it? Is it safe to use? And am I in proper control of it? Am I being alerted appropriate? So we're building this storage security platform, very laser focused on the storage aspect of it. And if people want to find out more information, you're more than welcome to go and try the software out on Amazon marketplace. That's basically where we do most of our transacting. So find it there. Start of free trial. Reach out to us directly from our website. We are happy to help you in any way that you need it. Whether that's storage assessments, figuring out what data is important to you and how to protect it. >> All right, Ed. Thank you so much. Ed Casmer, founder and CEO of Cloud Storage Security. 
And of course James Johnson, AVP of Research and Development, iPipeline customer. Gentlemen, thank you for sharing your story and featuring the company and the value proposition, certainly needed. This is season two, episode four. Thanks for joining us. Appreciate it. >> Casmer: Thanks John. >> Okay. I'm John Furrier. That is a wrap for this segment of the cybersecurity season two, episode four. The ongoing series covering the exciting startups from Amazon's ecosystem. Thanks for watching. (upbeat music)
Ed Casmer & James Johnson Event Sesh (NEEDS SLIDES EDL)
(upbeat intro music) >> Hello, everyone. Welcome back to theCube's presentation of the AWS Startup Showcase. This is season two, episode four, of the ongoing series covering the exciting startups from the a AWS ecosystem. Talk about cybersecurity. I'm your host, John Furrier. Here, excited to have two great guests. Ed Casmer, Founder & CEO of Cloud Storage Security. Back, Cube alumni. And also James Johnson, AVP of Research & Development, iPipeline here. Here to talk about Cloud Storage Security, antivirus on S3. Gents, thanks for joining us today. >> Thank you, John. >> Thank you. >> So, the topic here is cloud security, storage security. Ed, we had a great Cube conversation previously, earlier in the month. You know, companies are modernizing their apps and migrating to the cloud. That's fact. Everyone kind of knows that. Been there, done that. You know, clouds have the infrastructure, they got the OS, they got protection. But, the end of the day, the companies are responsible and they're on the hook for their own security of their data. And this is becoming more preeminent now that you have hybrid cloud, cloud operations, cloud-native applications. This is the core focus right now. In the next five years. This is what everyone's talking about. Architecture, how to build apps, workflows, team formation. Everything's being refactored around this. Can you talk about how organizations are adjusting, and how they view their data security in light of how applications are being built and specifically, around the goodness of say, S3? >> Yep, absolutely. Thank you for that. So, we've seen S3 grow 20,000% over the last 10 years. And that's primarily because companies like James with iPipeline, are delivering solutions that are leveraging this object storage more and above the others. When we look at protection, we typically fall into a couple of categories. The first one is, we have folks that are worried about the access of the data. How are they dealing with it? 
So, they're looking at configuration aspects. But, the big thing that we're seeing is that customers are blind to the fact that the data itself must also be protected and looked at. And, so, we find these customers who do come to the realization that it needs to happen. Finding out like how asking themselves, "How do I solve for this?" And, so, they need lightweight, cloud-native built solutions to deliver that. >> So, what's the blind spot? You mentioned there's a blind spot. They're kind of blind to that. What specifically are you seeing? >> Well, so when we get into these conversations, the first thing that we see with customers is, "I need to predict how I access it." This is everyone's conversation. "Who are my users? How do they get into my data? How am I controlling that policy? Am I making sure there's no east-west traffic there, once I've blocked the north-south?" But, what we really find is that the data is the key packet of this whole process. It's what gets consumed by the downstream users. Whether that's an employee, a customer, a partner. And, so, it's really the blind spot is the fact that we find most customers not looking at whether that data is safe to use. >> It's interesting. You know, when you talk about that, I think about like all the recent breaches and incidents. "Incidents" they call them. >> Yeah. >> They're really been around user configurations. S3 buckets not configured properly. And this brings up what you're saying, is that the users and the customers have to be responsible for the configurations, the encryption, the malware aspect of it. Don't just hope that AWS has the magic to do it. Is that kind of what you're getting at here? Is that the similar? Am I correlating that properly? >> Absolutely. That's perfect. And, and we've seen it. We've had our own customers, luckily, iPipeline's not one of them, that have actually infected their end users, because they weren't looking at the data. >> Yeah. And that's a huge issue. 
So, James, let's get in, you're a customer-partner. Talk about your relationship with these guys and what's it all about? >> Yeah. Well, iPipeline is building a digital ecosystem for life insurance and wealth management industries to enable the sale of life insurance to underinsured and uninsured Americans, to make sure that they have the coverage that they need should something happen. And, our solutions have been around for many years in a traditional data center type of an implementation. And, we're in process now of migrating that to the cloud, moving it to AWS. In order to give our customers a better experience, better resiliency, better reliability. And, with that, we have to change the way that we approach file storage and how we approach scanning for vulnerabilities in those files that might come to us via feeds from third parties, or that are uploaded directly by end users that come to us from a source that we don't control. So, it was really necessary for us to identify a solution that both solved for these vulnerability scanning needs, as well as enabling us to leverage the capabilities that we get with other aspects of our move to the cloud. Being able to automatically scale based on load, based on need. To ensure that we get the performance that our customers are looking for. >> So, tell me about your journey to the cloud, migrating to the cloud, and how you're using S3. Specifically, what led you to determine the need for the cloud-based AV solution? >> Yeah. So, when we looked to begin moving our applications to the cloud, one of the realizations that we had is that our approach to storing certain types of data, was a bit archaic. We were storing binary files in a database, which is not the most efficient way to do things. And, we were scanning them with the traditional antivirus engines, that would've been scaled in traditional ways. So, as our need grew, we would need to spin up additional instances of those engines to keep up with load. 
And we wanted a solution that was cloud-native, and would allow us to scan more dynamically without having to manage the underlying details of how many engines do I need to have running for a particular load at a particular time, and being able to scan dynamically and also being able to move that out of the application layer, being able to scan those files behind the scenes. So, scanning it when the file's been saved in S3. It allows us to scan and release the file once it's been deemed safe, rather than blocking the user while they wait for that scan to take place. >> Awesome. Well, thanks for sharing that. I got to ask Ed and James the same question. The next is, how does all this factor into audits and self-compliance? Because, when you start getting into this level of sophistication, I'm sure it probably impacts reporting, workflows. Can you guys share the impact on that piece of it? The reporting. >> Yeah, I'll start with a comment, and James will have more applicable things to say. But, we're seeing two things. One is, you don't want to be the vendor whose name is in the news for infecting your customer base. So, that's number one, so you have to put something like this in place and figure that out. The second part is, we do hear that under SOC 2, under PCI, different aspects of it, there are scanning requirements on your data. Traditionally, we've looked at that as endpoint data and the data that you see in your on-prem world. It doesn't translate as directly to cloud data, but, it's certainly applicable. And if you want to achieve SOC 2 or you want to achieve some of these other pieces, you have to be scanning your data as well. >> James, what's your take? As practitioner, you're living it. >> Yeah. That's exactly right.
There are a number of audits that we go through, where this is a question that comes up both from a SOC perspective, as well as our individual customers, who reach out, and they want to know where we stand from a security perspective and a compliance perspective. And, very often, this is a question of "How are you ensuring that the data that is uploaded into the application is safe and doesn't contain any vulnerabilities?" >> James, if you don't mind me asking. I have to kind of inquire, because I can imagine that you have users on your system, but also you have third parties, relationships. How does that impact this? What's the connection? >> That's a good question. We receive data from a number of different locations. From our customers directly, from their users, and from partners that we have, as well as partners that our customers have. And, as we ingest that data, from an implementation perspective, the way we've approached this, there's minimal impact there in each one of those integrations, because everything comes into the S3 bucket and is scanned before it is available for consumption or distribution. But, this allows us to ensure that no matter where that data is coming from, that we are able to verify that it is safe before we allow it into our systems or allow it to continue on to another third party, whether that's our customer or somebody else. >> Yeah. I don't mean to get in the weeds there, but it's one of those things where, you know, this is what people are experiencing right now. You know, Ed, we talked about this before. It's not just siloed data anymore. It's interactive data. It's third party data from multiple sources. This is a scanning requirement. >> Agreed. I find it interesting, too. I think James brings it up. We've had it in previous conversations, that not all data's created equal. Data that comes from third parties that you're not in control of, you feel like you have to scan and other data you may generate internally. 
You don't have to be as compelled to scan that, although it's a good idea. But, you can kind of, as long as you can sift through and determine which data is which, and process it appropriately, then you're in good shape. >> Well, James, you're living the cloud security, storage security situation here. I got to ask you, if you zoom out, not get in the weeds, and look at kind of the boardroom or the management conversation. Tell me about how you guys view the data security problem. I mean, obviously it's important, right? So, can you give us a level of, you know, how important it is for iPipeline and with your customers and where does this S3 piece fit in? I mean, when you guys look at this holistically, for data security, what's the view? What's the conversation like? >> Yeah. Well, data security is critical. As Ed mentioned a few minutes ago, you don't want to be the company that's in the news because some data was exposed. That's something that nobody has the appetite for. And, so, data security is, first and foremost, in everything that we do. And that's really where this solution came into play and making sure that we had not only a solution, but, we had a solution that was the right fit for the technology that we're using. There are a number of options. Some of them have been around for a while. But this is focused on S3, which we were using to store these documents that are coming from many different sources. And, you know, we have to take all the precautions we can to ensure that something that is malicious doesn't make its way into our ecosystem or into our customers' ecosystems through us. >> What's the primary use case that you see the value here with these guys? What's the "aha" moment that you had? >> With Cloud Storage Security, specifically, it really goes beyond the security aspects of being able to scan for vulnerable files. There are a number of options, and they're one of those.
But for us, the key was being able to scale dynamically without committing to a particular load, whether that's under-committing or over-committing. As we move our applications from a traditional data center type of installation to AWS, we anticipated a lot of growth over time. And being able to scale up very dynamically, you know, literally moving a slider within the admin console, was key to us, to be able to meet our customers' needs without overspending by building something that was dramatically larger than we needed in our initial rollout. >> Not a bad testimonial there, Ed. I mean. >> I agree. >> This really highlights the applications using S3 more in the file workflow for the application in real time. This is where you start to see the rise of ransomware, other issues, and scale matters. Can you share your thoughts and reaction to what James just said? >> Yeah, I think it's critical. I mean, as the popularity of S3 has increased, so has the fact that it's an attack vector now, and people are going after it. Whether that's to plant bad, malicious files, whether it's to replace code segments that are downloaded and used in other applications, it is a very critical piece. And when you look at scale, and you look at the cloud-native capability, there are lots of ways to solve it. You can dig a hole with a spoon, but a shovel works a lot better. And, in this case, you know, we take a simple example like James. They did a weekend migration, so, they've got new data coming in all the time. But, we did a massive migration. 5,000 files a minute being ingested. And, like he said, with a couple of clicks, scale up, process that over a sustained period of time, and then scale back down. So, you know, I've said it before. I said it on the previous one. We don't want to get in the way of someone's workflow. We want to help them secure their data and do it in a timely fashion, so that they can continue with their proper processing and their normal customer responses.
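The scan-and-release flow James described earlier, scan when the file lands in S3, hold it until a verdict, then release it so the user is never blocked, can be sketched as a small verdict step. This is a hedged illustration, not iPipeline's or Cloud Storage Security's actual code: the stand-in scanner below only matches the EICAR test signature in place of a real antivirus engine, and the tag names are invented.

```python
# Sketch of a scan-on-upload verdict step for objects landing in S3.
# In a real deployment this logic would run in a worker triggered by an
# S3 put event; here scanning and tagging are pure functions so the flow
# is easy to follow. The EICAR test string stands in for a real AV engine.

EICAR_SIGNATURE = (
    "X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
)

def looks_malicious(data: bytes) -> bool:
    """Stand-in scanner: flags only the EICAR test signature."""
    return EICAR_SIGNATURE.encode() in data

def scan_verdict(data: bytes) -> dict:
    """Return the object tags a downstream consumer would check
    before the file is released from quarantine."""
    if looks_malicious(data):
        return {"scan-result": "infected", "release": "blocked"}
    return {"scan-result": "clean", "release": "allowed"}

clean = scan_verdict(b"policy document, nothing unusual")
bad = scan_verdict(EICAR_SIGNATURE.encode())
```

The point about not blocking the user holds in this shape: the upload completes immediately, and the release tag flips asynchronously once the verdict lands.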
>> Yeah. Friction always has to be low. I know you're in the marketplace with your antivirus for S3 on AWS. People can just download it. So, people are interested, go check it out. James, I got to ask you, and maybe Ed can chime in over the top, but, it seems so obvious. Data. Secure the data. Why is it so hard? Why isn't this so obvious? What's the problem? Why is it so difficult? Why are there so many different solutions? It just seems so obvious. You know, you got ransomware, you got injection of different malicious payloads. There's a ton of things going on around the data. Why is this? This is so obvious. Why isn't it solved? >> Well, I think there have been solutions available for a long time. The challenge, the difficulty that I see, is that it is a moving target. As bad actors learn new vulnerabilities, new approaches. And as new technology becomes available, that opens additional attack vectors. That's the challenge. Is keeping up on the changing world. Including keeping up on the new ways that people are finding to exploit vulnerabilities. >> Yeah. And you got sensitive data at iPipeline. You do a lot of insurance, wealth management, all kinds of sensitive data, super valuable. You know, just brings me up, reminds me of the Sony hack, Ed, years ago. You know, companies are responsible for their own militia. I mean, cybersecurity, there's no government help for sure. I mean, companies are on the hook, as we mentioned earlier at the top of this interview. This really has highlighted that IT departments have to evolve to large-scale cloud, you know, cloud-native applications, automation, AI, machine learning all built in, to keep up at the scale. But, also, from a defense standpoint, I mean, James, you're out there, you're in the front lines. You got to defend yourself, basically, and you got to engineer it. >> A hundred percent. And just to go on top of what James was saying is, I think they're one of the big factors, and we've seen this.
There's skill shortages out there. There's also just a pure lack of understanding. When we look at Amazon S3 or object storage in general, it's not an executable file system. So, people sort of assume that, "Oh, I'm safe. It's not executable. So, I'm not worried about it traversing my storage network." And they also probably have the assumption that the cloud providers, Amazon, is taking care of this for 'em. And, so, it's this "aha" moment, like you mentioned earlier. That you start to think, "Oh, it's not about where the data is sitting, per se, it's about scanning it as close to the storage spot. So, when it gets to the end user, it's safe and secure." And you can't rely on the end users' environment and system to be in place and up to date to handle it. So, it's really that lack of understanding that drives some of these folks into this, but, every once in a while, we'll walk into customers and they'll say the same thing you said, John. "Why haven't I been doing this for so long?" And, it's because they didn't understand that it was such a risk. That's where that blind spot comes in. >> James, it's just a final note on your environment. What are your goals for the next year? How are things going over on your side? How do you look at the security posture? What's on your agenda for the next year? How are you guys looking at the next level? >> Yeah, well, our goal as it relates to this is to continue to move our existing applications over to AWS, to run natively there, which includes moving more data into S3 and leveraging the Cloud Storage Security solution to scan that and ensure that there are no vulnerabilities that are getting in. >> And the ingestion? Are there, like, bottlenecks, log jams? How do you guys see that scaling up? I mean, what's the strategy there? More, just add more S3? >> Well, S3 itself scales automatically for us and, the Cloud Storage Security solution gives us levers to pull to do that.
As Ed mentioned, we ingested a large amount of data during our initial migration, which created a bottleneck for us as we were preparing to move our users over. We were able to, you know, make an adjustment in the admin console and spin up additional processes entirely behind the scenes and broke the log jam. So, I don't see any immediate concerns there. Being able to handle the load. >> You know, the term cloud-native and, you know, hyperscale-native, cloud-native, OneCloud, it's hybrid. All these things are native. We have antivirus-native coming soon. And, I mean, this is what you're basically doing: making it native into the workflows. Security native, and soon there's going to be security clouds out there. We're starting to see the rise of these new solutions. Can you guys share any thoughts or vision around how you see the industry evolving and what's needed, what's working and what's needed? Ed, we'll start with you. What's your vision? >> So, I think the notion of being able to look at and view the management plane and control that has been where we're at right now. That's what everyone seems to be doing and going after. I think there are niche plays coming up, storage is one of them. But, we're going to get to a point where storage is just a blanket term for where you put your stuff. I mean, it kind of already is that, but, in AWS, it's going to be less about S3, less about WorkDocs, less about EBS. It's going to be just storage, and you're going to need a solution that can span all of that, to go along with where we're already at on the management plane. We're going to keep growing the data plane.
But I see the need for just expanded solutions that are cloud-native, that fit nicely with the Amazon technologies, whether that comes from Amazon or other partners like Cloud Storage Security, to fill those gaps. We're focused on, you know, the financial services and insurance industries. That's our niche. And we look to other partners, like Ed, to help be the experts in these areas. And so that's really what I'm looking for is, you know, the experts that we can partner with that are going to help fill those gaps as they come up and as they change in the future. >> Well, James, I really appreciate you coming on and sharing your story. Ed, I'll give you the final word. Put a quick, spend a minute to talk about the company. I know Cloud Storage Security is an AWS partner with the Security Software Competency. And is one of, I think, 16 partners listed in the competency in the data category. So, take a minute to explain, you know, what's going on with the company, where people can find more information, how they buy and consume the products. >> Okay. >> Put the plug in. >> Yeah, thank you for that. So, we are a fast-growing startup. We've been in business for two and a half years now. We have achieved our Security Competency. As John indicated, we're one of 16 data protection Security Competency ISV vendors, globally. And, our goal is to expand and grow a platform that spans all storage types that you're going to be dealing with. And answer basic questions. "What do I have and where is it? Is it safe to use?" And, "Am I in proper control of it? Am I being alerted appropriately?" You know, so we're building this storage security platform, very laser-focused on the storage aspect of it. And, if people want to find out more information, you're more than welcome to go and try the software out on Amazon Marketplace. That's basically where we do most of our transacting. So, find it there, start a free trial, reach out to us directly from our website.
We are happy to help you in any way that you need it, whether that's storage assessments, figuring out what data is important to you, and how to protect it. >> All right, Ed, thank you so much. Ed Casmer. Founder & CEO of Cloud Storage Security and of course James Johnson, AVP Research & Development, iPipeline customer. Gentlemen, thank you for sharing your story and featuring the company and the value proposition. It's certainly needed. This is season two, episode four. Thanks for joining us. Appreciate it. >> Thanks, John. >> Okay. I'm John Furrier. That is a wrap for this segment of the cybersecurity, season two, episode four. The ongoing series covering the exciting startups from Amazon's ecosystem. Thanks for watching. (gentle outro music)
Ed Casmer, Cloud Storage Security | CUBE Conversation
(upbeat music) >> Hello, and welcome to "theCUBE" conversation here in Palo Alto, California. I'm John Furrier, host of "theCUBE," got a great security conversation, Ed Casmer, who's the founder and CEO of Cloud Storage Security, the great Cloud background, Cloud security, Cloud storage. Welcome to "theCUBE Conversation," Ed. Thanks for coming on. >> Thank you very much for having me. >> I got FOMO on that background. You got the nice look there. Let's get into the storage blind spot conversation around Cloud Security. Obviously, re:Inforce came up a ton, you heard a lot about encryption, automated reasoning, but ransomware was still hot. All these things are continuing to be issues on security, but they all bear on data and storage, right? So this is a big part of it. Tell us a little bit about how you guys came about, the origination story. What is the company all about? >> Sure, so, we're a pandemic story. We started in February right before the pandemic really hit and we've survived and thrived because it is such a critical thing. If you look at the growth that's happening in storage right now, we saw this at re:Inforce. We saw it even at a recent AWS Storage Day. Their S3, in particular, houses over 200 trillion objects. If you look just 10 years ago, in 2012, Amazon touted how they were housing one trillion objects, so in a 10 year period, it's grown to 200 trillion, and really most of that has happened in the last three or four years, so the pandemic and the shift in the ability and the technologies to process data better has really driven the need and driven the Cloud growth. >> I want to get into some of the issues around storage. Obviously, the trend on S3, look at what they've done. I mean, I saw Mai-Lan at Storage Day. We've interviewed her. She's amazing. Just EC2 and S3, the core pistons of AWS, obviously, the silicon's getting better, the IaaS layers just getting so much more innovation.
You got more performance, abstraction layers, the PaaS is emerging, Cloud operations on premise now with hybrid is becoming a steady state, and if you look at all the action, it's all these hyper-converged kind of conversations, but it's not hyper-converged in a box, it's Cloud Storage, so there's a lot of activity around storage in the Cloud. Why is that? >> Well, because companies are defined by their data and, if a company's data is growing, the company itself is growing. If it's not growing, they are stagnant and in trouble, and so, what's been happening now, and you see it with the move to Cloud especially over the on-prem storage sources, is people are starting to put more data to work and they're figuring out how to get the value out of it. A recent analyst made a statement that if the Fortune 1000 could just share and expose 10% more of their data, they'd have net revenue increases of 65 million. So it's just the ability to put that data to work, and it's so much more capable in the Cloud than it has been on-prem to this point. >> It's interesting, data portability is being discussed, data access, who gets access, do you move compute to the data? Do you move data around? And all these conversations are kind of around access and security. It's one of the big vulnerabilities around data, whether it's an S3 bucket that has a manual configuration error, or if it's a tool that needs credentials. I mean, how do you manage all this stuff? This is really where a rethink kind of comes around, so, can you share how you guys are surviving and thriving in that kind of crazy world that we're in? >> Yeah, absolutely. So, data has been the critical piece and moving to the Cloud has really been this notion of how do I protect my access into the Cloud? How do I protect who's got it? How do I think about the networking aspects?
My east-west traffic after I've blocked them from coming in, but no one's thinking about the data itself, and ultimately, you want to make that data very safe for the consumers of the data. They have an expectation and almost a demand that the data that they consume is safe, and so, companies are starting to have to think about that. They haven't thought about it. It has been a blind spot, you mentioned that before. In regards to, I am protecting my management plane, we use posture management tools. We use automated services. If you're not automating, then you're struggling in the Cloud. But when it comes to the data, everyone thinks, "Oh, I've blocked access. I've used firewalls. I've used policies on the data," but they don't think about the data itself. It is that packet that you talked about that moves around to all the different consumers and the workflows, and if you're not ensuring that that data is safe, then, you're in big trouble, and we've seen it over and over again. >> I mean, it's definitely a hot category and it's changing a lot, so I love this conversation because it's a primary one, primary and secondary, covering data and storage. It's kind of a good joke there, but all kidding aside, it's hard. You got data lineage, tracing is a big issue right now. We're seeing companies come out there, kind of an observability tangent there. The focus on this is huge. I'm curious, what was the origination story? What got you into the business? Was it like, were you having a problem with this? Did you see an opportunity? What was the focus when the company was founded? >> It's definitely to solve the problems that customers are facing. What's been very interesting is that they're out there needing this. They're needing to ensure their data is safe. As the whole story goes, they're putting it to work more, we're seeing this.
I thought it was a really interesting series, one of your last series about data as code, and you saw all the different technologies that are processing and managing that data and companies are leveraging today, but still, once that data is ready and it's consumed by someone, it's causing real havoc if it's not either protected from being exposed or safe to use and consume, and so that's been the biggest thing. So we saw a niche. We started with this notion of Cloud Storage being object storage, and there was nothing there protecting that. Amazon has the notion of access and that is how they protect the data today, but not the packets themselves, not the underlying data, and so, we created the solution to say, "Okay, we're going to ensure that that data is clean. We're also going to ensure that you have awareness of what that data is, the types of files you have out in the Cloud, wherever they may be, especially as they drift outside of the normal platforms that you're used to seeing that data in." >> It's interesting that people were storing data lakes. Oh yeah, just store whatever we might need, and then it became a data swamp. That's kind of like, go back six, seven years ago. That was the conversation. Now, the conversation is I need data. It's got to be clean. It's got to feed the machine learning. This is going to be a critical aspect of the business model for the developers who are building the apps, hence, the data as code reference, which we've focused on, but then you say, "Okay, great. Does this increase our surface area for potential hackers?" So there's all kinds of things that kind of open up, we start doing cool, innovative things like that, so, what are some of the areas that you see that your tech solves around some of the blind spots or with object store, the things that people are overlooking? What are some of the core things that you guys are seeing that you're solving?
>> So, it's a couple of things. Right now, still the biggest thing you see in the news is configuration issues, where people are losing their data or accidentally opening up writes. That's the worst case scenario. Reads are a bad thing too, but if you open up writes, and we saw this with a major API vendor in the last couple of years, they accidentally opened writes to their buckets. Hackers found it immediately and put malicious code into their APIs that were then downloaded and consumed by many, many of their customers, so, it is happening out there. So the notion of ensuring configuration is good and proper, ensuring that data has not been augmented inappropriately and that it is safe for consumption is where we started, and we created a lightweight, highly scalable solution. At this point, we've scanned billions of files for customers and petabytes of data, and we're seeing that it's such a critical piece to that to make sure that that data's safe. The big thing, and you brought this up as well, is they're getting data from so many different sources now. It's not just data that they generate. You see one centralized company taking in from numerous sources, consolidating it, creating new value on top of it, and then releasing that, and the question is, do you trust those sources or not? And even if you do, they may not be safe. >> We had an event around Supercloud, a topic we brought up to bring the attention to the complexity of hybrid, which is on premise, which is essentially Cloud operations. And the successful people that are doing things in the software side are essentially abstracting up the benefits of the infrastructure as a service from, say, AWS, right, which is great. Then they innovate on top, so they have to abstract that. Storage is a key component of where we see the innovations going. How do you see your tech kind of connecting with that trend that's coming, which is everyone wants infrastructure as code?
I mean, that's not new. I mean, that's the goal and it's getting better every day, but DevOps, the developers are driving the operations and security teams to, like, stay at pace, so, policy, we're seeing some cool things going on that's abstracting up from, say, storage and compute, but then those are being put to use as well, so you've got this new wave coming around the corner. What's your reaction to that? What's your vision on that? How do you see that evolving? >> I think it's great, actually. I think that the biggest problem that you have to do as someone who is helping them with that process is make sure you don't slow it down. So, just like Cloud at scale, you must automate, you must provide different mechanisms to fit into workflows that allow them to do it just how they want to do it and don't slow them down. Don't hold them back, and so, we've come up with different measures to provide pretty much a fit for any workflow that any customer has come so far with. We do data this way. I want you to plug in right here. Can you do that? And so it's really about being able to plug in where you need to be, and don't slow 'em down. That's what we found so far. >> Oh yeah, I mean that exactly, you don't want to solve complexity with more complexity. That's the killer problem right now, so take me through the use case. Can you just walk me through how you guys engage with customers? How they consume your service? How they deploy it? You got some deployment scenarios. Can you talk about how you guys fit in and what's different about what you guys do? >> Sure, so, what we're seeing is, and I'll go back to this data coming from numerous sources, we see different agencies, different enterprises taking data in, and maybe their solution is intelligence on top of data, so they're taking these data sets in, whether it's topographical information or whether it's investing-type information.
Then they process it, they scan it, and they distribute it out to others. So we see that happening as a big common piece through data ingestion pipelines; that's where these folks are getting most of their data. The other is where the data itself, the document or the document set, is the actual critical piece that gets moved around, and we see that in pharmaceutical studies, we see it in the mortgage industry, in FinTech, and in healthcare. So anywhere that, let's just take a very simple example, I have to apply for insurance. I'm going to upload my Social Security information. I'm going to upload a driver's license, whatever it happens to be. I want to, one, know which of my information is personally identifiable, so I want to be able to classify that data. But because you're taking data from untrusted sources, you then have to consider whether or not it's safe for your own folks to use, and then also for the downstream users as well. >> It's interesting. In the security world, we hear zero trust, and then we hear supply chain, software supply chains, we've got to trust everybody. So you've got two things going on. You've got the hardware, all the infrastructure guys, saying, "Don't trust anything, 'cause we have a zero trust model," but as you start getting into the software side, with containers and Cloud native services, trust is critical. You guys are kind of on that balance, where you're saying, "Hey, I want data to come in. We're going to look at it. We're going to make sure it's clean." That's the value here. Is that what I'm hearing? You're taking it and you're saying, "Okay, we'll ingest it, and during the ingestion process, we'll classify it. We'll do some things to it with our tech and put it in a position to be used properly." Is that right? >> That's exactly right.
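The ingest-time classification step just summarized can be illustrated with a toy sketch. Real PII classifiers go well beyond regular expressions; the patterns and category names here, including the driver's-license format, are made-up assumptions for the example.

```python
import re

# Illustrative-only classifier for the two PII types mentioned in the
# conversation: Social Security numbers and driver's licenses. The
# SSN pattern follows the common NNN-NN-NNNN layout; the license
# format "DL-NNNN-AAA" is entirely invented, since real formats vary
# by state and country.

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "drivers_license": re.compile(r"\bDL-\d{4}-[A-Z]{3}\b"),  # made-up format
}

def classify(text):
    """Return the set of PII categories detected in a document."""
    return {label for label, pat in PII_PATTERNS.items() if pat.search(text)}

doc = "Applicant SSN: 123-45-6789, license DL-4821-XYZ on file."
print(sorted(classify(doc)))  # ['drivers_license', 'ssn']
```

The useful property for the workflow described above is that classification happens at ingest, so PII drifting into an unexpected location can be flagged before anyone downstream consumes it.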
That's a great summary. But ultimately, if you're taking data in, you want to ensure it's safe for everyone else to use, and there are a few ways to do it. Safety doesn't just mean whether it's clean or not, whether there's malicious content or not. It means that you have complete coverage, control, and awareness over all of your data: I know where it came from, I know whether it's clean, and I know what kind of data is inside of it. The interesting aspect is that the cleanliness factor is so critical in the workflow, but we see the classification expand outside of that, because if your data drifts outside of what your standard workflow was, that's when you have concerns. Why is PII information over here? And that's what you have to stay on top of. Just like AWS's control plane, you have to manage it all. You have to make sure you know what services have all of a sudden been exposed publicly or not, or whether maybe something's been taken over, and you control that. You have to do that with your data as well. >> So how do you guys fit into the security posture? Say it's a large company that might want to implement this right away. Sounds like it's right in line with what developers want and what people want. It's easy to implement, from what I see. It's about 10, 15, 20 minutes to get up and running. It's not hard. It's not a heavy lift to get in. How do you guys fit in, once you get operationalized, when you're successful? >> It's a lightweight, highly scalable serverless solution. It's built on Fargate containers, and it goes in very easily. Then we offer either native integrations through S3 directly, or we offer APIs, and the APIs are what a lot of our customers who want inline, realtime scanning leverage. We're also looking at offering actual proxy aspects for those folks who use the native AWS S3 APIs, puts and gets.
We can actually leverage our put and get as an endpoint, and when they retrieve the file or place the file in, we'll scan it on access as well. So it's not just a one-time scan of data at rest; it can be data in motion, as you're retrieving the information, as well. >> We were talking with our friends the other day about companies like Datadog. This is the model people want: they want to come in, and developers are driving a lot of the usage and operational practice. So I have to ask you, this fits kind of right in there, but you also have the corporate governance policy police that want to make sure that things are covered. So how do you balance that? Because that's an important part of this as well. >> Yeah, we're really flexible for the different ways they want to consume and interact with it. But also, that is such a critical piece. We probably have a 50/50 breakdown of customers inside the US versus outside the US, so you have those in California with their information protection act, you have GDPR in Europe, and you have Asia with its own policies as well. The way we solve for that is we scan close to the data, and we scan in the customer's account, so we don't require them to lose chain of custody and send data outside of the account. That is so critical to that aspect. And then we don't ask them to transfer it outside of the region, so that's another critical piece: data residency has to be part of that compliance conversation. >> How much does Cloud enable you to do things that you couldn't really do before? I mean, this really shows the advantage of natively being in the Cloud, to take advantage of the IaaS to SaaS components to solve these problems. Share your thoughts on how this is possible. If there were no Cloud, what would you do? >> It really makes it a piece of cake.
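The scan-on-put and scan-on-get proxy pattern described just above can be sketched in miniature. Everything here is a stand-in: an in-memory dict plays the part of the bucket, and simple substring matching plays the part of a real scan engine; none of this is the vendor's actual code.

```python
# Toy proxy around an object store: every write is scanned before it
# lands (data at rest) and every read is scanned before it is
# returned (data in motion / scan on access). The signature list and
# store are placeholders for a real engine and a real S3 bucket.

MALICIOUS_SIGNATURES = [b"EICAR-TEST"]

class ScanError(Exception):
    pass

class ScanningProxy:
    def __init__(self):
        self._objects = {}  # stand-in for a bucket

    def _scan(self, data: bytes):
        if any(sig in data for sig in MALICIOUS_SIGNATURES):
            raise ScanError("malicious content detected")

    def put(self, key, data: bytes):
        self._scan(data)          # scan on ingest
        self._objects[key] = data

    def get(self, key) -> bytes:
        data = self._objects[key]
        self._scan(data)          # scan again on access
        return data

proxy = ScanningProxy()
proxy.put("report.txt", b"quarterly numbers")
print(proxy.get("report.txt"))  # b'quarterly numbers'
```

Scanning on both sides is what turns a one-time data-at-rest check into continuous protection: an object that was clean when written, but whose signatures were unknown at the time, can still be caught when it is read.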
As silly as that sounds, when we deploy our solution, we provide a management console for them that runs inside their own accounts. So again, no metadata or anything has to come out of it, and it's all push-button click. Because the Cloud makes it scalable, because the Cloud offers infrastructure as code, we can take advantage of that. When they say, go protect data in the Ireland region, they push a button, we stand up a stack right there in the Ireland region and scan and protect their data right there. If they say, we need to be in GovCloud and operate in GovCloud East, there you go, push the button, and you can behave in GovCloud East as well. >> And with serverless and the region support and all the goodness, it's a really good opportunity to manage these Cloud native services with the data interaction, so, really good prospects. Final question for you. I mean, we love the story. I think it's going to be a really changing market in this area in a big way. I think the data storage relationship relative to higher-level services will be huge as Cloud native continues to drive everything. What's the future? Do you guys see yourselves as an all-encompassing, all-singing-and-dancing storage platform, or a set of services that you're going to enable developers with and drive that value? Where do you see this going? >> I think it's a mix of both. Ultimately, you saw even on Storage Day the announcement of File Cache, and File Cache creates a new common namespace across different storage platforms. The notion of being able to use one area to access your data and have it come from different spots is fantastic. That's been in the on-prem world for a couple of years, and it's finally making it to the Cloud. I see us following that trend and helping support it. We're super laser-focused on Cloud Storage itself, so, EBS volumes... we keep having customers come to us and say, "I don't want to run agents in my EC2 instances.
I want you to snap and scan. I've got all this EFS and FSx out there that we want to scan." And so, we see that all of the Cloud Storage platforms, Amazon WorkDocs, EFS, FSx, EBS, S3, will all come together, and we'll provide a solution that's super simple and highly scalable that can meet all the storage needs. That's our goal right now and what we're working towards. >> Well, Cloud Storage Security, you couldn't get a more descriptive name for what you guys are working on. And again, I've had many contacts with Andy Jassy when he was running AWS, and he always loves to quote "The Innovator's Dilemma," written by one of his teachers at Harvard Business School. We were riffing on that the other day, and I want to get your thoughts. It's not so much "The Innovator's Dilemma" anymore relative to Cloud, 'cause that's kind of a done deal. It's "The Integrator's Dilemma." The integrations are so huge now; if you don't integrate the right way, that's the new dilemma. What's your reaction to that? >> 100% agreed. It's been super interesting. Our customers have come to us for a security solution, and they don't expect us to be, 'cause we don't want to be either, our own engine vendor. We're not the ones creating the engines; we are integrating other engines in, so we can provide a multi-engine scan that gives you higher efficacy. This notion of offering simple integrations without slowing down the process, that's the key factor we've been after. We are about simplifying the Cloud experience of protecting your storage. And it's been so funny, because I thought customers might complain that we're not a name-brand engine vendor, but they love the fact that we have multiple engines in place and we're bringing them this higher-efficacy, multi-engine scan. >> I mean, the developer trends can change on a dime.
You make it faster, smarter, higher velocity, and more protected; that's a winning formula in the Cloud. So Ed, congratulations, and thanks for spending the time to riff on and talk about Cloud Storage Security, and congratulations on the company's success. Thanks for coming on "theCUBE." >> My pleasure. Thanks a lot, John. >> Okay. This conversation was here in Palo Alto, California. I'm John Furrier, host of "theCUBE." Thanks for watching.
Ed Walsh, ChaosSearch | AWS re:Inforce 2022
(upbeat music) >> Welcome back to Boston, everybody. This is the birthplace of theCUBE. In 2010, May of 2010 at EMC World, right in this very venue, John Furrier called it the chowder and lobster post. I'm Dave Vellante. We're here at RE:INFORCE 2022 with Ed Walsh, CEO of ChaosSearch. Doing a drive-by, Ed. Thanks so much for stopping in. You're going to help me wrap up in our final editorial segment. >> Looking forward to it. >> I really appreciate it. >> Thank you for including me. >> How about that? 2010. >> That's amazing. It was really in this-- >> Really in this building. Yeah, we had to sort of bury our way in, tunnel our way into the Blogger Lounge. We did four days. >> Weekends, yeah. >> It was epic. It was really epic. But I'm glad they're back in Boston. AWS was going to do June in Houston. >> Okay. >> Which would've been awful. >> Yeah, yeah. No, this is perfect. >> Yeah. Thank God they came back. Boston in summer is great. I know it's been hot, and of course you and I are from this area. >> Yeah. >> So how have you been? What's going on? I mean, it's a little crazy out there. The stock market's going crazy. >> Sure. >> We're having the tech lash. What are you seeing? >> So it's an interesting time. I ran a company in 2008, so we've been through this before. By the way, the world's not ending; we'll get through this. But it is an interesting conversation as an investor, but also even with the customers. There's some hesitation, but you have to basically have the right value prop, otherwise things aren't going to get sold. So we are seeing longer sales cycles, but it's nothing that you can't overcome. It can't be a nice-to-have; it has to be a need-to-have. But I think we all get through it. And then, on the VC side, it's now buckle down, let's figure out what to do, which is always a challenge for startup plans. >> Pre-2000, maybe you weren't a CEO, but you were definitely an executive.
And so now it's different, and a lot of younger people haven't seen this. You've got interest rates now rising. Okay, we've seen that before, but it looks like you've got inflation, you've got interest rates rising. >> Yep. >> Consumer spending patterns are changing. You had $6, $7 gas at one point. So you have these weird crosscurrents, >> Yup. >> And people are thinking, "Okay, post-September now, maybe because of the recession, the Fed won't have to keep raising interest rates and tightening." But I don't know what to root for. It's like half full, half empty. (Ed laughing) >> But we haven't been in an environment with high inflation. At least not in my career. >> Right. Right. >> I mean, I got in in '92, that was long gone, right? >> Yeah. >> So it is an interesting regime change that we're going to have to deal with, but there are a lot of analogies between 2008 and now that you still have to work through too, right? So, anyway, I don't think the world's ending. I do think you have to run a tight shop. I think grow-at-all-costs is gone. I do think discipline's back in, which, for most of us, discipline never left, right? So, to me, that's the name of the game. >> What do you tell people, generally? I mean, you've been the CEO of a lot of private companies, and of course one of the things that you do to retain and attract people is you give 'em stock, and it's great and everybody's excited. >> Yeah. >> I'm sure they're excited, 'cause you guys are a rocket ship. But so what's the message now? Okay, the market's down, valuations are down, the trees don't grow to the moon, we all know that. But what are you telling your people? What's their reaction? How do you keep 'em motivated? >> Like anything, you want to over-communicate during these times. So I actually over-communicate. You get all these, you know, the Sequoia decks, 2008 and the recent... >> (chuckles) Rest in peace good times, that one, right? >> I literally share it. Why?
It's like, hey, this is what's going on in the real world. It's going to affect us. It has almost nothing to do with us specifically, but it will affect us, and we can't not pay attention to it. It does change how you're going to raise money, so you've got to make sure you have the right runway to be there. So it does change what you do, but I think you over-communicate. That's what I've been doing, and I'm a bit of a student of the game, so I try to share it. Some appreciate it; to others I'm just saying, this is normal, we'll get through this, and this is what happened in 2008. And trust me, once the market hits bottom, give it another month afterwards, and then everyone says, oh, the bottom's in, and we're back to business. Valuations don't immediately go back up, but right now no one knows where the bottom is, and that's where you get the kind of world's-ending talk. >> Well, it's interesting, because you talked about, I said rest in peace good times >> Yeah >> that was the Sequoia deck, and the message was tighten up. Okay, and I'm not saying you shouldn't tighten up now, but the difference is, there was this period of two years of easy money, and even before that, it was pretty easy money. >> Yeah. >> And so companies are well capitalized, they have runway, so it's like, okay. I was talking to Frank Slootman about this; now of course they're public companies, like, we're not taking the foot off the gas. We're inherently profitable, >> Yeah. >> we're growing like crazy, we're going for it. You know? So that's a little bit of a different dynamic. There's a lot of good runway out there, isn't there? >> But also, you look at the different companies that were either born in or were able to power through those environments, and they're actually better off. You come out stronger and in a more dominant position. So Frank, listen, if you see what Frank's done, it's been unbelievable to watch his career, right?
In fact, he was at Data Domain, I was at Avamar, so... but look at what he's done since; he's crushed it. Right? >> Yeah. >> So for him to say, hey, I'm going to literally hit the gas and keep going, I think that's the right thing for Snowflake and the right thing for a lot of people. But for people in different roles, I literally say that you have to take it seriously. What you can't be is, well, Frank's in a different situation. What is it...? How many billion does he have in the bank? So it's... >> He's over a billion, you know, over a billion. Well, you're on your way, Ed. >> No, no, no, it's good. (Dave chuckles) Okay, I want to ask you about this concept, this term we coined called Supercloud. >> Sure. >> You could think of it as the next generation of multi-cloud. The basic premise is that multi-cloud was largely a symptom of multi-vendor. Okay, I've done some M&A, I've got some Shadow IT spinning up, you know, shadow clouds, projects. But it really wasn't a strategy to have a continuum across clouds, and now we're starting to see ecosystems really build. You know, you've used the term before, standing on the shoulders of giants; you've used that a lot. >> Yep. >> And so we're seeing that. Jerry Chen wrote a seminal piece on Castles in the Cloud, so we coined this term Supercloud to connote this abstraction layer that hides the underlying complexities and primitives of the individual clouds, adds value on top, and can adjudicate and manage, irrespective of physical location. Supercloud. >> Yeah. >> Okay. What do you think about that concept? How does it relate to some of the things that you're seeing in the industry? >> So, standing on the shoulders of giants, right? I always like to do hard tech, whether at big companies or small companies. So we're probably your definition of a Supercloud. We had a big vision: how to literally solve the core challenge of analytics at scale. How are you going to do that?
You're not going to build it on your own. So we're literally leveraging the primitives, everything you can get out of the Amazon cloud, everything you can get out of the Google cloud. In fact, we're even looking at what we can get out of the Snowflake cloud, and how do we abstract that out and add value to it? That's where all our patents are. But it becomes a simplified approach. The customers don't care. Well, they care where their data is, but they don't care how you got there; they just want the end result. So you simplify, but you gain the advantages. One interesting thing is, in this particular company, ChaosSearch, at some point in the sales cycle people always say, no way, hold on, no way that can be that fast, or whatever the particular issue is. And initially we used to try to explain our technology, and I would say 60% of it was explaining the public cloud capabilities, then how we harvest those, I guess, make them better and add value on top, and how what you're able to get is something you couldn't get from the public clouds themselves, and then how we did that across public clouds and abstracted it. So if you think about it, it's the shoulders of giants. But what we now do, literally to avoid that conversation, because it became a lengthy conversation: how do you have a platform for analytics that you can't possibly overwhelm on ingest, all your messy data, no pipelines? Well, you leverage things like S3 and EC2, and you do the different security things. You can go to environments and say, you can't possibly overrun me. I could not say that if I didn't literally build on the shoulders of giants, on all these public clouds. But the value... So if you're going to do hard tech as a startup, you're going to build on the principles of Supercloud.
Maybe you're not the same size of Supercloud as, say, Snowflake, but basically you're going to leverage all of that and abstract it out, and that's how you're able to add a lot of value on top. >> So let me ask you, I don't know if there's a strict definition of Supercloud; we sort of put it out to the community and said, help us define it. So you've got to span multiple clouds; it's not just running in each cloud. There's a metadata layer that kind of understands where you're pulling data from. Like you said, you can pull data from Snowflake. It sounds like you're not running on Snowflake, correct? >> No, complementary to them, for their different customers. >> Yeah. Okay. >> They want to build data apps on top of a data platform. >> Right. And of course they're going cross-cloud. >> Right. >> Is there a PaaS layer in there? We've said there's probably a Super PaaS layer. You're probably not doing that, but you're allowing people to bring their own, a bring-your-own-PaaS sort of thing, maybe. >> So we're a little bit different, but basically we publish open APIs. We don't have a user interface; we say, keep the user interface. Again, we're solving the challenge of analytics at scale. We're not trying to retrain your analysts, your DevOps, or your SOC or SecOps team. They use the tools they already use: Elasticsearch APIs, SQL APIs. So really, they program, they build applications on top of us. Equifax is a good example; there's a case study coming out later this week, after 18 months in production. Basically they're building on the abstraction layer we provide. The quote, and I'm going to butcher it, from Jeff Tincher, who owns all of SRE worldwide there, was to the effect of, hey, I'm able to rethink what I do for my data pipelines. But then he also talked about how he really doesn't have to worry about the data he puts in it. We deal with that, and he just has to query on the other side. That simplicity.
We couldn't have done that without it. So anyway, what I like about the definition is: if you're going to do something hard in the world, why would you try to rebuild what Amazon, Google, Azure, or Snowflake did? You're going to add things on top. We can still create intellectual property; we're still doing patents, five granted patents, all on this. But literally the abstraction layer is the simplification. The end users do not want to know that complexity, even though they ask the questions. >> And I think, too, the other attribute is ecosystem enablement. Whereas I think, >> Absolutely >> in general, in the Multicloud 1.0 era, the ecosystem wasn't thinking about, okay, how do I build on top and abstract that? So maybe it is Multicloud 2.0; we chose to use Supercloud. So I'm wondering, we're at the security conference, RE:INFORCE, is there a security Supercloud? Maybe Snyk has the developer Supercloud, or maybe Okta has the identity Supercloud. I think CrowdStrike maybe not, 'cause CrowdStrike competes with Microsoft. And what's interesting with Microsoft, Merritt Baer was just saying, look, we don't show up in the spending data for security because we're not charging for most of our security; we're not trying to make it a big business. So that's kind of interesting, but is there a potential for the security Supercloud? >> So, I think so. But also, I'll give you one thing: just today I had at least three different conversations where everyone wants to log data. It's a little bit specific to us, but basically they want to do the security data lake. The idea, and Snowflake talks about this too, of putting all the data in one repository, and then how do you abstract it out and get value from it? Maybe not perfect, but it becomes simple to do, though hard to get value out of. So the different players are going to do that. That's what we do.
Once you land it in your S3, or it doesn't matter, your cloud of choice's simple storage, we allow you to get after that data, but we take the primitives and hide them from you. All you do is query the data, and we spin up stateless compute to go after it. So then, if I look around the floor, there are going to be a bunch of these players. I don't think anyone on this floor would try to recreate what Amazon or Google or Azure has; they're going to build on top of it. And now the key thing is, do you leave it standard? We're open APIs; people are building on top of my open APIs. Or do you try to put 'em in a walled garden, and then they're in, now, your Supercloud? Our belief is, part of it is, it needs to be open access and let you go after it. >> Well, and build your applications on top of it openly. >> It comes back to Snowflake. That's what Snowflake's doing. They're basically saying, hey, come into our proprietary environment, and the benefit is... and I think both can win. There's a big market. >> I agree. But I think the benefit of Snowflake's approach is, okay, we're going to have federated governance, we're going to have data sharing, you're going to have access to all the ecosystem players, >> Yep. >> and everything's going to be controlled and you know what you're getting. The flip side of that is Databricks at the other end >> Yeah. >> of that spectrum, which is, no, no, you've got to be open. >> Yeah. >> So what's going to happen, well, what's happening clearly, is Snowflake's saying, okay, we've got Snowpark, we're going to allow Python, we're going to have Apache Iceberg, we're going to have open-source tooling that you can access. By the way, it's not going to be as good as our walled garden. And the flip side of that is you get Databricks coming at it from a data science and data engineering perspective. And there are a lot of gaps in between, aren't there? >> And I think they both win.
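The pattern Ed outlines above, land raw data in cheap object storage and spin up stateless compute per query, can be reduced to a toy sketch. The dict stands in for S3 and the event fields are invented; this is the shape of the idea, not anyone's actual implementation.

```python
import json

# Toy "security data lake": raw JSON events land unmodified, as
# newline-delimited blobs, in an object store (a dict here). Queries
# are stateless scans: read, parse, filter, discard. No pipeline
# reshapes the data on the way in.

lake = {}  # object key -> newline-delimited JSON blob

def land(key, events):
    """Ingest: write events as-is, no transformation."""
    lake[key] = "\n".join(json.dumps(e) for e in events)

def query(predicate):
    """Stateless scan over every object in the lake."""
    hits = []
    for blob in lake.values():
        for line in blob.splitlines():
            event = json.loads(line)
            if predicate(event):
                hits.append(event)
    return hits

land("2022-07-26/auth.log", [
    {"user": "alice", "action": "login", "ok": True},
    {"user": "mallory", "action": "login", "ok": False},
])
failed = query(lambda e: not e["ok"])
print(failed)  # [{'user': 'mallory', 'action': 'login', 'ok': False}]
```

The trade-off described in the conversation is visible even at this scale: landing data is trivial because nothing is structured at ingest, and all the work moves to query time, which is exactly where the abstraction layer earns its keep.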
Like, for instance, we didn't do a Snowpark integration, but we work with people building data apps on top of Snowflake or Databricks, and we can add value to that, again, using all the Supercloud stuff we've done. We deal with the unstructured data, the four V's coming at you; you can't pipeline that to save your life. So we actually could be additive as they're trying to do, say, a security data cloud inside of Snowflake, or the same thing in Databricks. That's where we can play. Now, we play with them at the application level: they get some data from them and some data from us. But I believe there's a partnership there that will do it inside their environment. To us, they're just another large-scale environment where my customers want to get after data, and they want me to abstract it out and give value. >> So it's another repository to you. >> Yeah. >> Okay. So I think Snowflake recently added support for unstructured data. You chose not to do Snowpark because why? >> Well, the way they're doing the unstructured data is not bad. It's JSON data. Basically, this is the dilemma: everyone wants their application developers to be flexible, to move fast, securely, but it's about productivity. So you give 'em flexibility. The problem with that is that analytics on the other end wants structure, to be performant. And this is where Snowflake has to somehow get that raw data, and it's changing every day, because you just let the developers do what they want now, in some loosely structured base, doing what you need to run your business fast and securely. So it completely breaks down. So they have large customers trying to do big integrations with this messy data, and it doesn't quite work, 'cause you literally just can't make the pipelines work. So that's where we're complementary. Now, that particular integration wasn't there; we'd need a little bit deeper integration to do that. So we're integrating, actually, at the data app layer.
But we could. And listen, I think Snowflake's a good actor. They're trying to figure out what's best for the customers, and I think we just participate in that. >> Yeah. And I think they're trying to figure out >> Yeah. >> how to grow their ecosystem. Because they know they can't do it all. In fact, >> And we solve the key thing: they just can't do certain things, and we do those well. Yeah, I have SQL, but that's where it ends. >> Yeah. >> I do the messy data, and how to play with them. >> And when you talk to one of their founders, anyway, Benoit, he comes on theCUBE and he's like, we start with simple. >> Yeah. >> It reminds me of the guy at Pure Storage, that guy Coz; he's always like, no, it can't start to get too complicated. So that's why they said, all right, we're not going to start out trying to figure out how to do complex joins and workload management, and they turned that into a feature. So, like you say, I think both can win. It's a big market. >> I think it's a good model. And I love to see Frank, you know, move. >> Yeah. I forgot, so you, Avamar... >> In the day. >> You guys used to hate each other, right? >> No, no, no. >> No. I mean, it's all good. >> But the thing is, look at what he's done. I wouldn't bet against Frank. I think it's a good message. You can see clients trying to do it. Same thing with Databricks, same thing with BigQuery. We get a lot of the same dynamic in BigQuery. It's good for a lot of things, but it's not everything you need. And there are ways for the ecosystem to play together. >> Well, what's interesting about BigQuery is that it is truly cloud native, as is Snowflake, whereas Amazon Redshift was sort of ParAccel cobbled together. Now, it's great engineering, but BigQuery gets a lot of high marks. But again, there are limitations to everything. That's why companies like yours can exist.
It allows me as a company to participate in that, because I'm leveraging all the underlying pieces. We couldn't be doing what we're doing now without leveraging the Supercloud concepts, right, so... >> Ed, I really appreciate you coming by, helping me wrap up today at RE:INFORCE. Always a pleasure seeing you, my friend. >> Thank you. >> All right. Okay, this is a wrap on day one. We'll be back tomorrow. I'll be solo. John Furrier had to fly out, but we'll be following what he's doing. This is RE:INFORCE 2022. You're watching theCUBE. I'll see you tomorrow.
Ed Bailey, Cribl | AWS Startup Showcase S2 E2
(upbeat music) >> Welcome everyone to theCUBE presentation of the AWS Startup Showcase, the theme here is Data as Code. This is season two, episode two of our ongoing series covering the exciting startups from the AWS ecosystem. And talk about the future of data, future of analytics, the future of development and all kinds of cool stuff in Multicloud. I'm your host, John Furrier. Today we're joined by Ed Bailey, Senior Technical Evangelist at Cribl. Thanks for coming on theCUBE here. >> I thank you for the invitation, thrilled to be here. >> The theme of this session is the observability lake, which I love by the way, I'm getting into that in a second. A breach investigation's best friend, which is a great topic. Couple of things, one, I like the breach investigation angle, but I also like this observability lake positioning, because I think this is a teaser of what's coming, more and more data usage where it's actually being applied specifically for things here, it's observability lake. So first, what is an observability lake? Why is it important? >> Why it's important is technology professionals, especially security professionals, need data to make decisions. They need data to drive better decisions. They need data to understand, just to achieve understanding. And that means they need everything. They don't need what they can afford to store. They don't need what a vendor is going to let them store. They need everything. And I think that's the point of the observability lake, because you couple an observability pipeline with the lake to bring your enterprise of data, to make it accessible for analytics, to be able to use it, to be able to get value from it. And I think that's one of the things that's missing right now in the enterprises. Admins are being forced to make decisions about, okay, we can't afford to keep this, we can afford to keep this, they're missing things. They're missing parts of the picture.
And by being able to bring it together, to be able to have your cake and eat it too, where I can get what I need and I can do it affordably, is just, I think that's the future, and it just drives value for everyone. >> And it just makes a lot of sense, data lake or the earlier concept, throw everything into the lake, and you can figure it out, you can query it, you can take action on it real time, you can stream it. You can do all kinds of things with it. But observability is important because it's the most critical thing people are doing right now for all kinds of things, from QA, administration, security. So this is where the breach piece comes in. I like that's part of the talk because the breach investigation's best friend, it implies that you got the secret sauce behind it, right? So, what is the state of the breach investigation today? What's going on with that? Because we know breaches, we see 'em out there, but like, why is this the best friend of a breach investigator? >> Well, and this is unfortunate, but typically there's an enormous delay between breach and detection. And right now, there's an IBM study, I think it's 287 days from the actual breach to detection and containment. It's an enormous amount of time. And the key is, so when you do detect a breach, you're bringing in your incident response team, and typically without an observability lake, without Cribl solutions around observability pipeline, you're going to have an incomplete picture. The incident response team has to first understand what's the scope of the breach. Is it one server? Is it three servers? Is it all the servers? You got to understand what's been compromised, what's the impact? How did the breach occur in the first place? And they need all the data to stitch that together, and they need it quickly. The more time it takes to get that data, the more time it takes for them to finish their analysis and contain the breach.
I mean, hence the, I think, 87, 90 days to contain a breach. And so by being able to remove the friction, by being able to make it easier to achieve these goals, what shouldn't be hard, but by removing that friction, you speed up the containment and resolution time. Not to mention, many system administrators simply don't have the data, because they can't afford to store the data in their SIEM. Or they have to go to their backup team to get a restore, which can take days. And so that's-- It's just so many obstacles to getting resolution right now. >> I mean, it's just, you're crawling through glass there, right? Because you think about it, like, just the timing aspect. Where is the data? Where is it stored and relevant and-- >> And do you have it at all? >> And you have it at all, and then, you know, that person doesn't work anywhere, they change jobs. I mean, who is keeping track of all this? You guys have now this capability where you can come in and do the instrumentation with the observability lake without a lot of change to the environment, which is not the way it used to be. Used to be, buy a tool, build a platform. Cribl has a solution that eases the struggles with the enterprise. What specifically is that pain point? And what do you guys do specifically? >> Well, I'll start out with kind of an example of what drew me to Cribl, so back in 2018. I'm running the Splunk team for a very large multinational. The complexity of that, we were dealing with the complexity of the data, the demands we were getting from security and operations were just an enormous issue to overcome. I had vendors come to me all the time saying they'll solve your problems, but that means you got to move to our platform, where you have to get rid of Splunk or you have to do this, and I'm losing something. And what Cribl Stream brought in was, I could put it between my sources and my destinations and manage my data. And I would have flow control over the data.
I don't have to lose anything. I could keep continuing to use our existing analytics tools, and that sense of power and control, and I don't have to lose anything. I was like, there's something wrong here. This is too good to be true. And so what we're talking about now, in terms of breach investigation, is that with Cribl Stream, I can create a clone of my data to an object store. So this is, this is almost any object store. So it can be AWS, it could be the other vendor object stores. It could be on-prem object stores. And then I can house my data, I can house all my data at the cheapest possible price. So instead of eating up my most expensive storage, I put all my data in my object store. And I only put the data I need for the detections in my SIEM. So if, and hopefully never, but if you do have a breach, LogStream has a wonderful UI that makes it trivial to then pick my data out of my object store and restore it back into my SIEM, so that my IR team has what it needs to develop a complete picture of how the breach happened. What's the scope? What is their lateral movement? And answer those questions. And it just, it takes the friction away. Just like you said, just no more crawling over glass. You're running to your solution. >> You mentioned object store, and you're streaming that in. You talk about the Cribl Stream tool. I'm assuming there when you're streaming the pipeline stuff, but is there a schema involved? Are there database challenges? What, how do you guys look at that? I know you're vendor agnostic. I like that piece, you plug in and you leverage all the tools that are out there, Splunk, Datadog, whatever. But how about on the database side, what's the impact there? >> Well, so I'm assuming you're talking about the object store itself, so we don't have to apply a schema. We can fit the data to whichever object store it is. We structure the data so it makes it easier to understand.
For example, if I want to see communications from one IP to another IP, we structure it to make it easier to see that and query that, but it is just, we're-- Yeah, it's completely vendor neutral, and this makes it so simple, so simple to enable, I think-- >> So no pre-defined schema needed. >> No, not at all. And this, it made it so much easier. I think we enabled this for the enterprise, I think it took us three hours to do, and we were able to then start, I mean, start cutting our retention costs dramatically. >> Yeah, it's great when you get that kind of value, time to value critical, and all the skeptics fall to the sides pretty quickly. (chuckles) I got to ask you, well, go ahead. >> So I say, I mean, previously, I would have to go to our backup team. We'd have to open up a ticket, we'd have to have a bridge, then we'd have to go through the process of pulling tape, and it could take, you know, hours, hours if not days to restore the amount of data we needed. And just, you know, we were able to run to our goals, and solve business problems instead of focusing on the process steps of getting things done. >> Right, so take me through the architecture here and some customer examples, 'cause you have the Cribl streaming there, observability pipeline. That's key, you mentioned that. >> Yes. >> And then they build out these observability lakes from that. So what is the impact of that? Can you share the customers that are using that solution? What are they seeing for benefits? What are some of the impacts? Can you give us some specifics? >> I mean, I can't share all the exact customer names. I can definitely give you some examples. Like, a referenceable customer would be TransUnion, so, and I came from TransUnion. I was one of the first customers, and it solved an enormous number of problems for us. Autodesk is another great example. The idea that we're able to automate our data practices. I mean, just for example, what we were talking about with backups.
We'd have to, you have to put a lot of time into managing your backups and your analytics platforms, you have to. And then you're locked into custom database schemas, you're locked into vendors. And it's also, it's still, it's expensive. So being able to spend a few hours, dramatically cut your costs, but still have the data available, and that's the key. I didn't have to make compromises, 'cause before I was having to say, okay, we're going to keep this, we're going to just drop this and hope for the best. And we just don't, we just didn't have to do that anymore. I think the same thing for TransUnion and Autodesk, the idea that we're going to lower our cost, we're going to make it easier for our administrators to do their job, and so they can spend more time on business value fundamentals, like responding to a breach. You're going to spend time working with your teams, getting value from observability solutions, and stop spending time on writing custom solutions using open source tools. 'Cause your engineering time is the most precious asset for any enterprise, and you got to focus your engineering time on where it's needed the most. >> Yeah, and they can't underestimate the hassle and cost of ownership, of swapping out pre-existing stuff, just for the sake of having a functionality. I mean that's a big-- >> It's pain, and that's a big thing about LogStream, is that being vendor neutral is so important. If you want to use the Splunk universal forwarder, that's great. If you want to use Beats, that's awesome. If you want to use Fluentd, even better. If you want to use all three, you can do that too. It's the customer's choice, and we're saying to people, use what suits your needs. And if you want to write some of your data to Elastic, that's great. Some of your data to Splunk, that's even better. Some of it to, take your pick, fine as well, or Exabeam. You have the choice to put your own solutions together and put your data where you need it to be.
We're not asking you to work only in our ecosystem, only with our partners. We're letting you pick and choose what suits your business. >> Yeah, you know, that's the direction, I was just talking with the Amazon folks around their serverless. You know, you can use any tool, you know, they have that core architecture for everything, the S3, and then pick whatever you want to use. SageMaker, just that other thing. This is the new way. That's the way it has to be to be effective. How do you guys handle that? What's been the reaction from customers? Do they, like, roll their eyes and doubt you guys, or can you do it? Are they skeptical? How fast can you convert 'em over? (chuckles) >> Right, and that's always the challenge. And that's, I mean, the best part of my day is talking to customers. I love hearing the feedback, what they like, what they don't, and what they need. And of course I was skeptical. I didn't believe it when I first saw it, because, you know, I was used to being locked in. I was used to having to put a lot of effort, a lot of custom code, like, what do you mean? It's this easy? I believe I did the first, this is 2018, and I did our first demos, like 30 minutes in, and I cut about a half million dollars out of our license in the first 30 minutes, in our first demo. And I was stunned, because, I mean, it's like, this is easy. >> Yeah, I mean-- >> Yeah, exactly. I mean, this is, and then this is the future. And then for example, we needed to bring in, so like the security team wanted to bring in a UBA solution that wasn't part of the vendor ecosystem that we were in. And I was like, not a problem. We're going to use LogStream. We're going to clone a copy of our data to the UBA solution. We were able to get value from this UBA solution in weeks, what typically is a six month cycle to start getting value. And it just, it was just too easy, and the best part of it.
And the thing is, what just struck me was, my engineers can now spend their time on delivering value instead of integrations and moving data around. >> Yeah, and also we can spend more time preventing breaches. But what's interesting, and counterintuitive here, is that as you add more flexibility and choice, you'd think it'd be harder to handle a breach, right? So, now let's go back to the scenario. Now you guys, say an organization has a breach, and they have the observability pipeline, they got the lake in place, your observability lake, take me through the investigation. How easy is it, what happens? How do they start it, what goes on? >> So, once your SOC detects a breach, then they bring in, typically you're going to bring in your incident response team. So what we did, and this is one more way that we removed that friction, we cleaned up the glass, is we delegate to the incident response team the ability to restore, we call it-- So Cribl calls it replay, we play data out of our object store back into your SIEM. There's a very nice UI that gives you the ability to say, "I want data from this time period to this time period, I want it to be all the data." Or the ability to filter and say, "I want just this IP." For example, if I detected, okay, this IP has been breached, then I'm going to pull all the data that mentions this IP in this timeframe, hit a button, and it just starts. And then it's going to restore as fast as your IOPS are for your solution. And then it's back in your tool, it's back in your tool. One of the things I also want to mention is we have an amazing enrichment capability. So one of the things that we would do is we would have pipelines, so as the data comes out of the object store, it hits the pipeline, and then we enrich it. We use GeoIP information, reverse DNS. It gets processed through a threat intel feed. So the data's already enriched and ready for the incident response people to do their job.
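The replay workflow Bailey describes — pull a time slice (optionally narrowed to one IP) out of the cheap object-store archive, enrich it with GeoIP, reverse-DNS, and threat-intel lookups, then land it back in the SIEM — can be sketched in plain Python. This is an illustrative sketch only, not Cribl's actual API; the event fields and lookup tables are invented for the example.

```python
from datetime import datetime, timezone

# Hypothetical enrichment sources; in a real pipeline these would be a GeoIP
# database, reverse-DNS lookups, and a live threat-intel feed.
GEOIP = {"203.0.113.7": "AU", "198.51.100.9": "US"}
THREAT_INTEL = {"203.0.113.7"}

def replay(events, start, end, ip=None):
    """Yield enriched copies of events whose timestamp falls in [start, end],
    optionally restricted to a single IP of interest."""
    for event in events:
        ts = datetime.fromisoformat(event["timestamp"])
        if not (start <= ts <= end):
            continue  # outside the requested time window
        if ip is not None and event["src_ip"] != ip:
            continue  # not the IP under investigation
        enriched = dict(event)
        enriched["geo"] = GEOIP.get(event["src_ip"], "unknown")
        enriched["threat_match"] = event["src_ip"] in THREAT_INTEL
        yield enriched

# A tiny stand-in for the object-store archive.
archive = [
    {"timestamp": "2022-07-26T10:00:00+00:00", "src_ip": "203.0.113.7", "action": "login"},
    {"timestamp": "2022-07-26T12:00:00+00:00", "src_ip": "198.51.100.9", "action": "read"},
    {"timestamp": "2022-07-27T09:00:00+00:00", "src_ip": "203.0.113.7", "action": "exfil"},
]

window_start = datetime(2022, 7, 26, tzinfo=timezone.utc)
window_end = datetime(2022, 7, 26, 23, 59, tzinfo=timezone.utc)
hits = list(replay(archive, window_start, window_end, ip="203.0.113.7"))
```

Only the in-window event for the suspect IP comes back, already flagged against the threat-intel list — which is the point: the IR team starts with enriched data instead of raw archive dumps.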
And so it just, it removes the friction of getting to the point where I can start doing my job. >> You know, the theme of this episode for this showcase is Data as Code. Which is, you know, I've been saying this on theCUBE since around 13 years ago, that developers are going to be dealing with data like they deal with software code, and you're starting to see, you mentioned enrichment. Where do you see Data as Code going? How relevant is it now? Because we're really talking about, when you add machine learning in here, that has to be enriched and iterated on too. We're talking about taking things off a branch and putting it back into the core. This is a data discussion, this isn't software, but it sounds the same. >> Right, and the irony is that, I remember the first time saying it to an auditor. I was constantly working with auditors, and that's what I described: I'm going to show you the code that manages the data. This is the data's code that's going to show you how we transform it, how we secure it, where the data goes, how it's enriched. So you can see the whole story, the data life cycle, in one place. And that's how we handled our audits. And I think that is enormously, you know, positive, because it's so easy to be confused. It's so easy to have complexity get in the way of progress. And by being able to represent your Data as Code, it's a step forward, 'cause the amount of data and the complexity of data, it's not getting simpler, it's getting more complex. So we need to come up with better ways to handle it. >> Now you've been on both sides of the fence. You've been in the trenches as a customer, now you're a supplier with a great solution. What are people doing with these data engineering roles? Because there's not enough data engineering. I mean, 'cause if you say Data as Code, if you believe that to be true, and many people do, we do.
And you look at the history of infrastructure as code that enabled DevOps, AIOps, MLOps, DataOps, it's happening, right? So data stack ops is coming. Obviously security is huge in this. How does that data engineering role evolve? Because it just seems more and more that there's going to be a big push towards an SRE version of data, right? >> I completely agree. I was working with a customer yesterday, and I spent a large part of our conversation talking about implementing development practices for administrators. It's a new role. It's a new way to think of things, 'cause traditionally your Splunk or Elastic administrator is talking about operating systems and memory, and talking about how to use proprietary tools from the vendor, and that's just not quite the same. And so we started talking about, you need to start getting used to code reviews. Yeah, the idea of getting used to making sure everything has a comment. One thing I told him was, like, you know, if you have a function, it has to have a comment, just by default, it just has to. Yeah, the standards of how you write things, how you name things, all really start to matter. And also you've got to start considering your skillset. And I mean, probably one of the best hires I ever made was a guy with a math degree, because I needed his help to understand how machine learning works, how to pick the best type of algorithm. And I think this is going to evolve, that you're going to move away from the gray-bearded administrator to some other gray-bearded administrator with a math degree. >> It's interesting, it's a step function. You have a data engineer who's got that kind of capability, like what the SRE did with infrastructure. The step function of enablement, the value creation from really good data engineering, puts the democratization playbook on the table, and changes, >> Thank you very much John. >> And changes that entire landscape.
How do you, what's your reaction to that? >> I completely agree, 'cause operational data, operational security data, is the most volatile data in the enterprise. It changes on a whim, you have developers who change things. They don't tell you what happens, the vendor doesn't tell you what happened, and so that idea, that life cycle of managing data. So the same types of standards and disciplines that database administrators have applied for years have to filter down into the operational areas, and you need tooling that's going to give you the ability to manage that data, manage it in flight in real time, in order to drive detections, in order to drive response. All those business value things we've been talking about. >> So I got to ask you about the larger role that you see with observability lakes. We were talking before we came on camera live here about how exciting this kind of concept is, and you were attracted to the company because of it. I love the observability lake concept because it puts all that data in one spot, you can manage it. But you got machine learning and AI around the corner that also can help. How has all this changed the landscape of data security and things? Because it makes a lot of sense, and I can only see it getting better with machine learning. >> Yeah, it definitely does. >> Totally. And so the core issue is, when you talk about observability, most people assume observability is only an operational or application support process. It's also a security process. The idea is that you're looking for your unknown unknowns. This is what keeps security administrators up at night: I'm being attacked by something I don't know about. How do you find those unknowns? And that's where your machine learning comes in.
And that's where you have to understand there's so many different types of machine learning algorithms. The guy that I hired, I mean, had started educating me about the umpteen number of algorithms, and how they apply to different data, and how you get different value, how you have to test your data constantly. There's no such thing as a magical black box of machine learning that gives you value. You have to implement, just like the developer practices, keep testing over and over again, data scientists, for example. >> The best friend of a machine learning algorithm is data, right? You got to keep feeding that data, and when the data sets are baked and secure and vetted, even better, all cool. Great stuff, great insight. Congratulations Cribl, great solution. Love the architecture, love the pipelining of the observability data and streaming that into a lake. Great stuff. Give a plug for the company, where you guys are at, where people can get information. I know you guys got a bunch of live feeds on YouTube, Twitch, here in theCUBE. Where else can people find you? Give the plug. >> Oh, please, please join our Slack community, go to cribl.io/community. We have an amazing community. This was another thing that drew me to the company, is they have a large group of people who are genuinely excited about data, about managing data. If you want to try Cribl out, we have some great tools, try the Cribl tools out. We have a cloud platform, one terabyte of free data. So go to cribl.io/cloud or cribl.cloud, sign up, and you know, it just never times out. You're not on a 30 day trial, it's forever, up to one terabyte. Try out our new products as well, Cribl Edge. And then finally, come watch Nick Decker and I, every Thursday, 2:00 PM Eastern. We have live streams on Twitter, LinkedIn and YouTube Live. And so, my Twitter handle is EBA 1367. Love to chat, love to have these conversations. And also, we are hiring. >> All right, good stuff.
Great team, great concepts, right? Of course, we're theCUBE here. We got our video lake coming on soon. I love this idea of having these videos. Hey, video's data too, right? I mean, we've got to keep coming to you. >> I love it, I love videos, it's awesome. It's a great way to communicate, it's a great way to have a conversation. That's the best thing about us, having conversations. I appreciate your time. >> Thank you so much, Ed, for representing Cribl here on the Data as Code showcase. This is season two, episode two of the ongoing series covering the hottest, most exciting startups from the AWS ecosystem, talking about the future of data. I'm John Furrier, your host. Thanks for watching. >> Ed: All right, thank you. (slow upbeat music)
Ed Walsh, Courtney Pallotta & Thomas Hazel, ChaosSearch | AWS 2021 CUBE Testimonial
(upbeat music) >> My name's Courtney Pallotta, I'm the Vice President of Marketing at ChaosSearch. We've partnered with theCUBE team to take every one of those assets, tailor them to meet whatever our needs were, and get them out and shared far and wide. And theCUBE team has been tremendously helpful in partnering with us to make that a success. >> theCUBE has been fantastic with us. They are thought leaders in this space. And we have a unique product, a unique vision, and they have an insight into where the market's going. They've had conversations with us about data mesh, and how do we fit into that new realm of data access. And with our unique vision, with our unique platform, and with theCUBE, we've uniquely come out into the market. >> What's my overall experience with theCUBE? Would I do it again, would I recommend it to others? I'd say, I recommend theCUBE to everyone. In fact, I was at IBM, and some of the IBM executives didn't want to go on theCUBE because it's a live interview. Live interviews can be traumatic. But the fact of the matter is, one, yeah, they're tough questions, but they're in line, they're what clients are looking for. So yes, you have to be on the ball. I mean, you're always on your toes, but you get your message out so crisply. So I recommend it to everyone. I've gotten a lot of other executives to participate, and they've all had a great experience. You have to be ready. I mean, you can't go on theCUBE and not be ready, but then you can get your message out. And it has such good distribution. I can't think of a better platform. So I recommend it to everyone. If I had to say ChaosSearch in one word, I'd say digital transformation, with a hyphen.
SUMMARY :
tailor them to meet And with our unique vision, I said, I recommend theCUBE to everyone.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Courtney Pallota | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Ed Walsh | PERSON | 0.99+ |
ChaosSearch | ORGANIZATION | 0.99+ |
Thomas Hazel | PERSON | 0.99+ |
theCUBE | ORGANIZATION | 0.99+ |
one word | QUANTITY | 0.97+ |
Courtney Pallotta | PERSON | 0.9+ |
theCUBE | TITLE | 0.71+ |
one | QUANTITY | 0.56+ |
AWS 2021 | ORGANIZATION | 0.55+ |
Ed Macosky, Boomi | AWS 2021 CUBE Testimonial
(upbeat music) >> So Boomi is a leader in intelligent connectivity and automation. theCUBE's been awesome with our messaging. I think when I look back at the last year, I did theCUBE remotely from AWS re:Invent last year, and I think it was the most watched video that I did from all of it. So it's been a great asset for me to get the message out on things that we want to talk to AWS and our customer community about. So very grateful for that. It's extremely important to get here, get in front of our customers and partners again in-person, have these conversations, and see how we can help solve new challenges that are emerging for our customers, particularly also to get out our vision in the hyperautomation space, and talk to our customers about these new problems that we're helping them solve. I've done theCUBE a few times, everybody's been super professional, courteous. It's been well-organized and well-executed. I would recommend it to anyone, and I certainly would be open and happy to do it again. Well, it's three words. I would sum it up as, "Go Boomi it."
SUMMARY :
and talk to our customers
SENTIMENT ANALYSIS :
Mandy Dhaliwal & Ed Macosky, Boomi | AWS re:Invent 2021
>> Welcome back to theCUBE's continuing coverage of AWS re:Invent 2021, live from Las Vegas. I'm Lisa Martin. We have two live sets here with theCUBE, two remote sets, over 100 guests on the program for three and a half days, talking about the next decade in cloud innovation. And I have two alumni back with me. Please welcome back Mandy Dhaliwal, the CMO of Boomi, and Ed Macosky, the head of product at Boomi. Guys, it's so great to see you.
>> Great to see you, Lisa, thank you. In person versus Zoom, incredible.
>> So in the time since I've seen you, Boomi has become a verb. I can see your cheeks bursting. >> Yeah. Just >> Boomi it. Go Boomi it.
>> Talk to me about what that means, because this is something that you discovered through customers during the pandemic.
>> Absolutely. And really it's a testament to the platform that's been built and the experience of 18,000 customers, a hundred thousand community members. Anytime there's disparate data and it needs to be connected in a way that's secure, reliable, performant, it just works. That confidence and trust, our customers are telling us that they just Boomi it. And so we figured it was a rally cry. And as a marketing team, it was handed to us. We didn't have to push a boulder uphill. Our customers are just Boomi-ing it. And so our rally cry to the market is: take advantage of the experience of those that have come before you and go build what you need to. It works.
>> Period. It works. Well, as the chief marketing officer, there's probably nothing better, nothing better than the validating voice of the customer, right? That's the most honest that you're going to get, but having a customer create the verb for you, there's going to be nothing that prepares you for that.
Nothing like it. But also, how great does that make it when you're having conversations with prospective customers or even partners, that there's that confidence and that trust that your 18,000-plus customers now have, right, in
>> Boomi, right. >> And adding what, eight a day? >> Yeah. Every day we're adding eight new customers.
>> Eight new customers a day. The Boomiverse is what, a hundred thousand strong now? >> Yes. In two years we built that. >> Is that right? >> Yes.
>> Wow. Oh my goodness. During the pandemic, the momentum is incredible. >> Yeah. It's incredible.
>> And then your growth from a usage perspective? >> So yeah, we're skyrocketing.
>> You must need, like, you know, neck braces from whiplash, going so fast. >> Oh, we're ready.
>> Good. I know, I know you are. So talk to me about, you know, we've seen such change in the last 22 months, massive acceleration to the cloud, digital transformation. We're now seeing every company has to be a data company to survive, and actually to be competitive, to be a competitor. But one of the things that used to be okay back in the day was, you know, these experiences that weren't integrated, like when I was back in college and I would go in and pay for this class and that, because everything was disconnected and we didn't know what we didn't know. Now the integrated experience is table stakes for any organization. Talk to me about, when you're talking with customers, where are they, across industries, in going, we don't have a choice, we've got to be able to connect these experiences for our customers, for our employees, and to be a competitor?
>> Okay. Yeah. I mean, for us it used to be about application data integration, that sort of thing. That's where we were born. But particularly through the pandemic, it's become integrated experiences and automation. It's not just about moving data between systems, that sort of thing.
It's about connecting with your end users, your employees, your customers, et cetera, like you were saying, and automating, and using intelligence to continue automating those things faster. Because if you're not moving faster in today's world, you're in peril.
>> And that was one of the themes that we were actually talking about this morning during our kickoff, that what you're hearing is every company is a data company. And if they're not, they're not going to be around much longer. Mandy, talk to me, when you're talking with customers who have to really reckon with that and go, how do we connect these experiences? Because if we can't do that, then we're not going to be around.
>> Yeah. The answer lies in the problems, right? There are real-world problems that need to be solved. We have a customer just north of here, a university. And as they were bringing students back to campus, right, you're trying to deliver a connected campus experience. Well, how do you handle contact tracing, right, for COVID-19? That's a real modern-day problem, right? And so there you're able to now connect disparate data sources to go deliver an automated way to be able to handle that and provide safety to your students.
>> Table stakes. >> Oh, it is, right. Digital identity management, again, in a university setting, critical, right? So these things are now a part of the fabric of the way we live. The consumerization of tech has hit B2B. It's merging. Yeah.
>> And it's good. There are definitely silver linings that have come out of the last 22 months. And I'm sure there will be a few more as we go through Omicron and whatever Greek letter is next in the alphabet, but we don't want to hear that word at re:Invent so much. There's always so much news at re:Invent. Here we are at the 10th re:Invent, can you believe it, the 10th re:Invent. AWS is 15 years old, brand new leader. And of course, yesterday Adam starts the flood of announcements, yesterday, today.
Talk to me about what it's like to be part of that powerful AWS ecosystem from a partner perspective, and how influential are Boomi and its customers and the Boomiverse in the direction that AWS goes in, because they're so customer-obsessed, like you guys are?
>> Well, it was really exciting for us, because we're a customer and a partner of AWS, right? We run our infrastructure on AWS. So we get to take advantage of all the new announcements that they make and all the cool stuff they bring to the table. So we're really excited for that. But also, as all these things come up and customers want to take advantage of them, if they're creating different data sets, different data silos, or opportunity for automation around the business, we're right there for our customers and partners to go take advantage of that and quickly get these things up and running as they get released by AWS. So it's all very exciting. And we look forward to all these different announcements.
>> One of the things also that I felt in the last day and a half, since everything really kicked off yesterday, was the customer flywheel. AWS always talks about, we work backwards from the customer forwards. And that is a resounding theme that I'm hearing throughout all of the partners that I've talked to. They have a massive ecosystem. Boomi has a massive ecosystem too, working with those partners, but also ensuring that, you know, at the end of the day, we're here to help customers resolve problems, problems that are here today, problems that are going to be here tomorrow. How do you help customers deal, Mandy, with some of the challenges of today, when they say, Mandy, help us future-proof our integrations and what we're doing going forward? What does that mean to Boomi?
>> Yeah, I think for us, the way we approach it is, you start with Boomi with a connectivity kind of problem, right? We're able to take disparate data silos and be able to connect and create this backbone of connectivity.
Once you have that, you can go build these workflows and these user engagement mechanisms to automate these processes at scale, right? So that's point one. We have a company called HealthBridge Financial, right? They're a health tech company, a financial services company. They run on AWS. They have a very secure, compliant infrastructure requirement, especially around HIPAA, because they're dealing with healthcare, right? And they have needs to be able to integrate quickly, and not a big budget to start with. They grew very quickly, and Boomi powered their AWS ecosystem. So as their workloads grew on RDS, as well as SQS and S3, we were able to go in and perform these HIPAA-compliant integrations for them, so they could go provide reimbursement on medical spending claims for their end customers. So not only did we give them user engagement and an outstanding customer experience, we were able to help them grow as a business and be able to leverage the AWS ecosystem. That's a win, win, win across the board for all of us.
>> That's one plus one equals three, for sure. >> Yep. >> One of the things, too, that's interesting is, you know, when we see the plethora of AWS services, like I mentioned a minute ago, there's always so many announcements, but there's so much choice for customers, right? When you're talking, Ed, with customers, Boomi customers that are looking for AWS services, tell me about some of those conversations. Do you help guide them along that journey?
I mean, we help them from an architectural standpoint, as far as what services they should choose from AWS to integrate their different data sources within the AWS ecosystem, and maybe to others. We've helped our customers, going back a little bit to the future-proofing: over time, with our platform, we've connected our customers with over 180,000 different data sources, including AWS and others, and as we continue to grow, our customers never need to upgrade. We're a cloud model ourselves, running on AWS. So they just get to keep taking advantage of that as their business grows and evolves. And as AWS grows and evolves for them, and they're modernizing their infrastructure, bringing in AWS, we continue to stay on the forefront with keeping connectivity and automation and integration options.
>> And that's a massive advantage for customers in any industry. Especially, I know one of the first things I thought of when the pandemic first struck, and we saw the rise of the pharma companies working on a vaccine, was Moderna. Moderna's a Boomi customer. If they are, talk to me about some of the things that you've helped them facilitate, because there was obviously that time where everyone was scattered, nobody could get onsite. Having a cloud-native solution must've been a huge advantage.
>> Yeah. >> Well, getting us all back here, really. >> Exactly. First and foremost, getting more people on board into their business to help with the race for the cure, and then being able to connect that data, right, that they were generating, and really find a solution. So we had an integral role to play in that. That's definitely a feather in our cap. We're really proud of that. Again, right, it's about speed and agility and the way we're architected. We're a low-code platform. We're not developer-heavy. You can log in and go and start building right away. What used to take months now takes weeks,
if not days, if you use the Boomi platform. Those brittle code integrations no longer need to be a part of your day-to-day.
>> And that probably was a major instrument in the survival of a lot of businesses in the very beginning, when it was chaotic, right? And it was pivot, pivot, pivot. You know, one of the things we learned during the pandemic is that access to real-time data, real-time integrations, isn't a nice-to-have anymore. It's required. It's fundamental for employee experiences, customer experiences, in every industry.
>> And banking. We've had several banks who were able to stand up and start taking PPP loans. They used to do this in person. They were able to take them literally, some of our banks within four days had the whole process built out.
>> Wow. And so from a differentiation perspective, how have your customer conversations changed? Obviously "go Boomi it" is now something that you do. Do you have t-shirts yet, by the way? >> They're coming. >> And can I get one? >> Yes, absolutely. >> Excellent. But talk to me about how those customer conversations have changed. Is what Boomi enables organizations, is this now at the C-suite, the board level, going, we've got to make sure that these data sources are connected, because they're only gonna keep proliferating?
>> Yeah, I think it's coming, right. We're not quite there yet, but as we're starting to get this groundswell at the integration developer level, at the enterprise architect level, I think the C-suite especially is realizing the value of the delivery of this integrated experience now, right? These data-fueled experiences are the differentiators for new business models. So transformation is something that's required. Obviously you need to modernize.
We heard about that in the keynotes here at the conference, but now it's the innovation layer, and that's where we're squarely focused: once you're able to connect this data and modernize your systems, how do you go build new business models with innovation? That's where the C-suite's leaning in with us.
>> Got it. And that's the opportunity, is to really unlock the value of all this data and identify new products, new services, new target markets, and really that innovation kicks the door wide open on your competitors if you're focused on really becoming a data company, I think. >> Yeah, exactly. >> What are some of the things that you're looking forward to as we wrap up 2021, and let's cross our fingers, we're going into a much better 2022? Same question for both of you, and we'll start with you: what's next for Boomi?
>> So we just recently laid out our hyperautomation vision, right. And what hyperautomation is, is adding intelligence, artificial intelligence and machine learning, to your automation to make you go faster and faster and help you with decisions that you may have been making over and over, as an example, or any workflows you do as an employee. So there is this convergence of RPA and iPaaS that's happening in the market, and we're on the forefront of that around robotic process automation, bringing those types of things into our platform and just helping our customers automate more and more, because that's what they're looking for. That's what "go Boomi it" is all about. They've integrated their stuff. We were taking the lead from our customers, who are automating things. We had blue force tracking as an example, where in Amsterdam they have security guards running around using wearable devices to track them on cameras. And that's not an application integration use case, that's automation.
So we're moving there. We're looking with our customers at how we can help them get faster and better, and provide things like safety in that use case.
>> And where are our customers in terms of embracing hyperautomation? Because, we know, a lot of news around AI and ML the last day and a half, but when you think about kind of, where are most organizations from a maturation perspective, are they ready for hyperautomation?
>> I think they're ready for automation. They're learning about hyperautomation. I think we're pushing the term further ahead. You know, we're on the forefront of that, because industries are thinking, our customers are thinking, about automation. They're thinking about AI/ML. We're introducing them to hyperautomation and kind of explaining to them: you're doing this already. Think more along these lines, how can you drive your business forward with these? And they're embracing it really well.
>> Is that conversation elevating up to the board level yet? Is that a board-level initiative, or?
>> It's a little more grassroots. I think that's where "go Boomi it" came from, because the employee teams are solving problems. They're showcasing these things to their executives and saying, look at the cool stuff we're doing for the business. And the executives are now saying, well, with this problem, can we now go Boomi? Can we Boomi it? Because they're starting to understand what we can do.
>> That's awesome. Oh my goodness. Mandy, you've been the chief marketing officer for over three years now. I can't believe the amount of change that you've seen, not just in the last 22 months, but the last three years. What are you excited about as Boomi heads into 2022?
>> I think new opportunities to get deeper and broader into the market. Our ownership changed, as you know, this past year, and, you know, we have a new leg of growth, if you will, right?
And so a whole new trajectory ahead of us: bigger brand building, more pervasiveness, more ease of use around our platform, right? We're available now in a pay-as-you-go model on our website, in a $50-a-month model, AtomSphere Go, and then also on the marketplace. So we're making the product and the platform more accessible to more people, so they can get going faster, build faster, and go solve these problems. So really, democratizing integration is something that I'm very excited about. Democratizing integration, as well as more air cover, just to let people know that this technology exists. So it's really a marketer's dream.
>> And why they should go Boomi it, right. >> Exactly. >> You guys, it was great to have you on the program. Congratulations on the success, on becoming a verb. That's pretty awesome. I'll look forward to my t-shirt.
>> You got it.
>> All right. For my guests, I'm Lisa Martin. You're watching theCUBE, the global leader in live tech coverage.
SUMMARY :
So in the time, since it's been, since I've seen you, Boomi is a verb. It go. And it needs to be connected in a way that's secure, reliable performance. That's the most honest that you're going to get, but having a customer create And adding what? Thank you customers a day. Is that right? During the Pandemic, the momentum is incredible. Then you're on your growth from a usage perspective. And talk to me about when you're talking with customers, intelligence to continue automating those things faster. And that was one of the themes that we were actually talking about this morning during our kickoff that you're hearing is every company is There are real-world problems that need to be solved. Talk to me about what it's like to be part of that powerful AWS and all the cool stuff they bring to the table. One of the things also that I felt in the last day and a half, since everything really kicked off yesterday was And they have needs to be able to integrate quickly One of the things too, that's interesting is, So they just get to keep taking advantage of that. If they are talk to me about some of the things that you've helped them facilitate, because there was that obviously that time where And then being able to connect that data right. And that probably was a major instrument in the survival of a lot of businesses in And banking. It is now is something that you do, you have t-shirts yet, by the way, We heard about that in the keynotes And that's the opportunity is to really unlock the value of all this data and identify new is adding intelligence, artificial intelligence, and machine learning to your automation to make you And we're our customers in terms of, of embracing hyper automation. automation and, and kind of explaining to them, you're doing this already. And the executives are now saying, well with this problem, can we now go Boomi?
I can't believe the amount of change that you've seen, not just the last 22 months, And new opportunities to get deeper and broader into the market. I'll look forward to my t-shirt. I'm Lisa Martin.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Ed Macosky | PERSON | 0.99+ |
Darren Anthony | PERSON | 0.99+ |
Yaron Haviv | PERSON | 0.99+ |
Mandy Dolly | PERSON | 0.99+ |
Mandy Dhaliwal | PERSON | 0.99+ |
David Richards | PERSON | 0.99+ |
Suzi Jewett | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
HP | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
2.9 times | QUANTITY | 0.99+ |
Darren | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Suzi | PERSON | 0.99+ |
Silicon Angle Media | ORGANIZATION | 0.99+ |
RenDisco | ORGANIZATION | 0.99+ |
2009 | DATE | 0.99+ |
Suzie Jewitt | PERSON | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
2022 | DATE | 0.99+ |
Yahoo | ORGANIZATION | 0.99+ |
Lisa | PERSON | 0.99+ |
2008 | DATE | 0.99+ |
AKS | ORGANIZATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
500 terabytes | QUANTITY | 0.99+ |
60% | QUANTITY | 0.99+ |
2021 | DATE | 0.99+ |
Hadoop | TITLE | 0.99+ |
1,000 camera | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
18,000 customers | QUANTITY | 0.99+ |
five | QUANTITY | 0.99+ |
Amsterdam | LOCATION | 0.99+ |
2030 | DATE | 0.99+ |
One | QUANTITY | 0.99+ |
HIPAA | TITLE | 0.99+ |
tomorrow | DATE | 0.99+ |
2026 | DATE | 0.99+ |
Yaron | PERSON | 0.99+ |
two days | QUANTITY | 0.99+ |
Europe | LOCATION | 0.99+ |
First | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
Telco | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
Ed Walsh and Thomas Hazel, ChaosSearch
>> Welcome to theCUBE, I am Dave Vellante. And today we're going to explore the ebb and flow of data as it travels into the cloud and the data lake. The concept of data lakes was alluring when it was first coined last decade by CTO James Dixon. Rather than be limited to highly structured and curated data that lives in a relational database in the form of an expensive and rigid data warehouse or a data mart, a data lake is formed by flowing data from a variety of sources into a scalable repository, like, say, an S3 bucket, that anyone can access, dive into, extract water, a.k.a. data, from that lake, and analyze data that's much more fine-grained and less expensive to store at scale. The problem became that organizations started to dump everything into their data lakes with no schema on write, no metadata, no context, just shoving it into the data lake and figuring out what's valuable at some point down the road. Kind of reminds you of your attic, right? Except this is an attic in the cloud, so it's too big to clean out over a weekend. Well look, it's 2021 and we should be solving this problem by now. A lot of folks are working on this, but often the solutions add other complexities for technology pros. So to understand this better, we're going to enlist the help of ChaosSearch CEO Ed Walsh, and Thomas Hazel, the CTO and Founder of ChaosSearch. We're also going to speak with Kevin Miller, who's the Vice President and General Manager of S3 at Amazon Web Services, and of course they manage the largest and deepest data lakes on the planet. And we'll hear from a customer to get their perspective on this problem and how to go about solving it, but let's get started. Ed, Thomas, great to see you. Thanks for coming on theCUBE. >> Likewise. >> Face to face, it's really good to be here. >> It is nice face to face. >> It's great. >> So, Ed, let me start with you. We've been talking about data lakes in the cloud forever.
Why is it still so difficult to extract value from those data lakes?
>> Good question. I mean, data analytics at scale has always been a challenge, right? So, we're making some incremental changes. As you mentioned, we need to see some step-function changes. But in fact, it's the reason ChaosSearch was really founded. If you look at it, it's the same challenge around a data warehouse or a data lake. Really it's not just flowing the data in, it's how to get insights out. So it kind of falls into a couple of areas, but the business side will always complain, and it's kind of uniform across everything in data lakes, everything in data warehousing. They'll say, "Hey, listen, I typically have to deal with a centralized team to do that data prep, because it's data scientists and DBAs". Most of the time they're a centralized group, sometimes they're in business units, but most of the time, because they're scarce resources, they're together. And then it takes a lot of time. It's arduous, it's complicated, it's a rigid process to deal with the team, hard to add new data, but also it's very hard to share data, and there's no way to do governance without locking it down. And of course they'd like to be more self-serve. So you hear that from the business side constantly. Now, underneath, there are some real technology issues, like that we haven't really changed the way we're doing data prep since the two thousands, right? So if you look at it, it falls into two big areas. It's one, how to do data prep. How do you take, a request comes in from a business unit, I want to do X, Y, Z with this data, I want to use this type of tool set to do the following. Someone has to be smart about how to put that data in the right schema, you mentioned. You have to put it in the right format that the tool sets can analyze that data, before you do anything. And then the second thing, I'll come back to that 'cause that's the biggest challenge.
But the second challenge is how these different data lakes and data warehouses are now persisting data, and the complexity of managing that data and also the cost of computing it. And I'll go through that. But basically the biggest thing is actually getting it from raw data, so the rigidness and complexity the business sides complain about is literally that someone has to do this ETL process, extract, transform, load. A request comes in, I need so much data put together in this type of way. They're literally physically duplicating data and putting it together in a schema. They're stitching together almost a data puddle for all these different requests. And what happens is anytime they have to do that, someone has to do it, and it's very skilled resources that are scarce in the enterprise, right? So it's DBAs and data scientists. And then when they want new data, you give them a data set, and they're always saying, what can I add to this data? Now that I've seen the reports, I want to add this data, more fresh. And the same process has to happen. This takes about 60% to 80% of the data scientists' and DBAs' time to do this work. It's kind of well-documented. And this is what actually stops the process. That's what is rigid. They have to be rigid, because there's a process around that. That's the biggest challenge of doing this. And it takes an enterprise weeks or months. I always say three weeks or three months, and no one challenges me on that. It also takes the same skill set of people that you want to drive digital transformation, data warehousing initiatives, modernization, being data-driven, all these data scientists and DBAs they don't have enough of. So this is not only hurting you getting insights out of your data lakes and warehouses, this resource constraint is hurting you actually getting those initiatives done. >> So that smallest atomic unit is that team, that's a super specialized team, right? >> Right. >> Yeah. Okay.
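The extract-transform-load loop Ed describes can be pictured in a few lines of code. This is a toy sketch, not ChaosSearch's or any real pipeline's code, and all data and names are hypothetical:

```python
# Toy illustration of the ETL bottleneck: every new business request
# extracts the same raw data, transforms it into a request-specific
# schema, and loads (physically duplicates) it into its own "puddle".

RAW_EVENTS = [
    {"ts": "2021-11-30T10:00:00", "user": "alice", "status": 200, "bytes": 512},
    {"ts": "2021-11-30T10:00:01", "user": "bob", "status": 500, "bytes": 128},
    {"ts": "2021-11-30T10:00:02", "user": "alice", "status": 404, "bytes": 64},
]

def etl_for_request(raw, fields):
    """Extract raw records, transform to the requested schema, and load
    the duplicated rows into a new data puddle for this one request."""
    return [{f: rec[f] for f in fields} for rec in raw]

# Request 1: an errors dashboard wants user + status.
errors_puddle = etl_for_request(RAW_EVENTS, ["user", "status"])

# Request 2: capacity planning wants ts + bytes -- the same raw data is
# re-read and copied again, which is why each new request repeats the work.
capacity_puddle = etl_for_request(RAW_EVENTS, ["ts", "bytes"])

print(len(errors_puddle), len(capacity_puddle))  # each puddle copies every row
```

The point of the sketch is the duplication: each request produces another full copy of the rows, and a skilled person has to define each schema, which is the 60-to-80% time sink Ed cites.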
So you guys talk about activating the data lake.
>> Yep.
>> For analytics. What's unique about that? What problems are you all solving? You know, when you guys created this magic sauce.
>> No, and basically, there's a lot of things. I highlighted the biggest one, how to do the data prep, but also how you're persisting and using the data. But in the end, there's a lot of challenges in how to get analytics at scale. And this is really where Thomas and I founded the team to go after this. But I'll try to say it simply. What we're doing, I'll try to compare and contrast what we do compared to what you do with maybe an Elastic cluster or a BI cluster. And if you look at it, what we do is, you simply put your data in S3, don't move it, don't transform it. In fact, we're against data movement. What we do is we literally point at that data, and we index that data and make it available in a data representation that you can give virtual views of to end-users. And those virtual views are available immediately over petabytes of data, and it actually gets presented to the end-user as an open API. So if you're an Elasticsearch user, you can use all your Elasticsearch tools on this view. If you're a SQL user, Tableau, Looker, all the different tools, same thing with machine learning next year. So what we do is we make it very simple. Simply put it there, it's already there already, point us at it. We do the hard work of indexing and making it available, and then we publish it in the open API, and your users can use exactly what they do today. So, dramatically, I'll give you a before and after. So let's say you're doing Elasticsearch, you're doing logging analytics at scale. They're landing their data in S3, and then they're ETL-ing it, physically duplicating and moving data, and typically deleting a lot of data, to get it in a format that Elasticsearch can use. They're persisting it up in a data layer called Lucene.
It's physically sitting in memory, CPU, SSDs, and it's not one of them, it's a bunch of those. In the cloud, you have to set them up, because they're persisting on EC2. They're stood up 7 by 24, not a very cost-effective way to do cloud computing. What we do in comparison to that is literally point at the same S3. In fact, you can run us in complete parallel while the data's being ETL'd out; we're just one more use case, read-only, or allow you to get that data and make these virtual views. So we run in complete parallel, but what happens is we just give a virtual view to the end users. We don't need this persistence layer, this extra cost layer, this extra time, cost and complexity of doing that. So when you look at what happens in Elastic, they have a constraint, a trade-off of how much you can keep and how much you can afford to keep. And also it becomes unstable at times, because you have to build out a schema. It's on a server, and the more the schema scales out, guess what? You have to add more servers, very expensive. They're up seven by 24. And also they become brittle. You lose one node, the whole thing has to be put back together. We have none of that cost and complexity. You literally keep whatever you want, and whatever you want to keep in S3 is the single persistence, very cost-effective. And what we are able to do is, on cost, we save 50 to 80%. Why? We don't go with the old paradigm of set it up on servers, spin them up for persistence and keep them up 7 by 24. We're literally asking, at query time, what you want to do. We bring up the right compute resources, and then we release those resources after the query's done. So we can do some queries that they can't imagine at scale, but we're able to do the exact same query at 50 to 80% savings. And they don't have to do any of the ETL of moving that data or managing that layer of persistence, which is not only expensive, it becomes brittle. And then, I'll be quick.
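The "index in place, publish virtual views" idea Ed contrasts with re-ETL-ing can be sketched the same way. This is a hedged illustration only; ChaosSearch's actual index representation is its own, and this just shows the schema-on-read pattern of serving many views from one copy of the raw data:

```python
import json

# Toy sketch: raw records stay in one place (standing in for S3), an
# index is built over them once, and each consumer gets a read-only
# "virtual view" -- the raw data is never duplicated or moved.

RAW = [
    '{"level": "ERROR", "svc": "auth", "ms": 120}',
    '{"level": "INFO", "svc": "auth", "ms": 15}',
    '{"level": "ERROR", "svc": "billing", "ms": 340}',
]

# Index once: map (field, value) pairs to row numbers, a crude inverted index.
index = {}
for rownum, line in enumerate(RAW):
    for field, value in json.loads(line).items():
        index.setdefault((field, value), []).append(rownum)

def virtual_view(field, value):
    """A read-only view: resolve row numbers via the index and parse
    matching rows on demand, without copying the underlying data."""
    return [json.loads(RAW[r]) for r in index.get((field, value), [])]

errors = virtual_view("level", "ERROR")
print([e["svc"] for e in errors])  # -> ['auth', 'billing']
```

Any number of views (errors by service, latency by level, and so on) resolve against the same single copy, which is the contrast with the Lucene-style persistence tier Ed describes, where each consumable shape means another materialized, server-resident copy.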
Once you go to BI, it's the same challenge, but in BI systems the requests are constantly coming from a business unit down to the centralized data team: give me this flavor of data, I want to use this analytic tool on that data set. So they have to do all this pipelining. They're constantly saying, okay, I'll give you this data and this data; I'm duplicating that data, moving it, stitching it together. And the minute you want more data, they do the same process all over again. We completely eliminate that. >> And those requests queue up. Thomas, Ed had me at "you don't have to move the data." That's kind of the exciting piece here, isn't it? >> Absolutely. I think, you know, the data lake philosophy has always been solid, right? The problem is we had that Hadoop hangover, right? We were using that platform in a few too many ways. I always believed in the data lake philosophy; when James Dixon coined the term, I said, that's it. However, HDFS wasn't really a service. Cloud object storage is a service: the elasticity, the security, the durability, all those benefits are really why we founded on cloud object storage as our first move. >> So Thomas, you were talking about being able to shut off, essentially, the compute so you don't have to keep paying for it. But there are other vendors out there doing something similar, separating compute from storage, which they're famous for. And you have Databricks out there doing their lakehouse thing. Do you compete with those? How do you participate, and how do you differentiate? >> Well, you know, you've heard these terms: data lake, warehouse, now lakehouse. What everybody wants is simple in, easy in. However, the problem with data lakes was the complexity of getting value out. And I said, what if, what if you could have the easy in and the value out?
So if you look at, say, Snowflake as a warehousing solution, you have to do all that prep and data movement to get into that system, and it's rigid, static. Now, Databricks, the lakehouse, has exactly the same thing. Sure, they have a data lake philosophy, but their data ingestion is not data lake philosophy. So I said, what if we had that simple in, with a unique architecture and index technology that makes it virtually accessible, publishable, dynamically, at petabyte scale? So our service connects to the customer's cloud storage, streams the data in, sets up what we call a live indexing stream, and then, through our data refinery, publishes views that can be consumed via the Elastic API, using Kibana or Grafana, or as SQL tables with Looker or Tableau. So we get the benefits of both sides: schema-on-read flexibility with schema-on-write performance. If you can do that, that's the true promise of a data lake. Again, nothing against Hadoop, but schema-on-read with all that complexity of software made for a bit of a data swamp. >> Well, you've got to start somewhere, okay. So we've got to give them a good prompt, but everybody I talk to has this big bunch of Spark clusters and is now saying, all right, this doesn't scale, we're stuck. And so, you know, I'm a big fan of Zhamak Dehghani and her concept of the data mesh, and it's early days. But if you fast-forward to the end of the decade, what do you see as the critical components of this notion people call data mesh, of the analytics stack? You're a visionary, Thomas; how do you see this playing out over the next decade? >> I love her thought leadership. To be honest, our core principles were her core principles, 5, 6, 7 years ago. This idea of decentralization, data as a product, self-serve, and federated computational governance: all of that was our core principle. The trick is, how do you enable that mesh philosophy?
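The schema-on-read side of that trade-off can be sketched in a few lines: raw records land untouched, and a "view" is just a projection the consumer chooses at query time rather than a structure imposed at ingest. This is an illustrative sketch of the pattern, not ChaosSearch's actual indexing or refinery API; the log fields are invented:

```python
import json

# Raw log events land in object storage untouched (the schema-on-read side).
raw_events = [
    '{"ts": "2023-01-01T00:00:00Z", "level": "ERROR", "msg": "disk full"}',
    '{"ts": "2023-01-01T00:01:00Z", "level": "INFO", "msg": "retry ok"}',
]

def virtual_view(records, fields):
    """Apply a schema at query time instead of at ingest time."""
    rows = []
    for rec in records:
        doc = json.loads(rec)
        rows.append({f: doc.get(f) for f in fields})
    return rows

# The "view" is a projection decided by the consumer, not by the pipeline.
errors = [r for r in virtual_view(raw_events, ["level", "msg"])
          if r["level"] == "ERROR"]
```

Nothing was transformed or deleted on the way in, so a different consumer can ask for a different projection of the same raw records without re-running any pipeline.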
I can say we're mesh-ready, meaning we can participate in a way that very few products can. If there are gates on getting data into your system, the ETL, the schema management... My argument with the data mesh is that producers and consumers should have the same rights. I want the consumers, the people, to choose how they want to consume that data, just as the producer chooses how to publish it. And I can say our data refinery is that answer. You know, shoot, I'd love to open up a standard, right? Where we can really talk about producers and consumers and the rights each has. But I think she's right on the philosophy. I think as products mature in this cloud, in these data lake capabilities, the trick is those gates. If you have to structure up front, if you have to set up those pipelines, the chance of getting your data into a mesh is the weeks and months that Ed was mentioning. >> Well, I think you're right. I think the problem with data mesh today is the lack of standards. You know, when you draw the conceptual diagrams, you've got a lot of lollipops, which are APIs, but they're all unique primitives. So there aren't standards by which, to your point, the consumer can take the data the way he or she wants it and build their own data products without having to tap people on the shoulder to ask, how can I use this? Where does the data live? And being able to add their own data. >> You're exactly right. So say I'm an organization generating data, and I can continuously stream it into a lake. Then with the ChaosSearch service, the data is discoverable and configurable by the consumer. Let's say you want to go to the corner store: I want to make a certain meal tonight, so I want to pick and choose what I want, how I want it. Imagine if the data mesh truly worked that way: the producer of information offering, you know, all the things you can buy at a grocery store, and you choosing what you want to make for dinner.
And if it's static, if you have to call up your producer to make a change, was it really a data mesh-enabled service? I would argue not. >> Ed, bring us home. >> Well, maybe one more thing on this. >> Please, yeah. >> Because some of this is us talking about 2031, but largely these principles are what we have in production today, right? So even the self-service, where you can actually put a business context on top of a data lake, we do that today. We talked about how we get rid of the physical ETL, which is 80% of the work, but the last 20% is done by this refinery, where you can do virtual views, with role-based access, and do all the transformation you need and make it available. And that's available as a role-based access service to your end users, actual analysts. You don't have to be a data scientist or DBA. In the hands of a data scientist or DBA it's powerful, but the fact of the matter is, you don't have to be one. All of our employees, regardless of seniority, whether they're in finance or in sales, actually go through and learn how to do this. And they can come up with their own views, which is one of the promises of data lakes: the business units want to do it themselves. But more importantly, because they have the context of what they're trying to do, instead of queuing up a very specific request that takes weeks, they're able to do it themselves. >> And if I don't have to put it in different data stores and ETL it, I can do things in real time or near real time. And that's game-changing, and something we haven't been able to do, ever. >> And then maybe just to wrap it up: listen, 8 years ago, Thomas and his group of founders came up with the concept. How do you actually get after analytics at scale and solve the real problems? And it's not one thing. It's not just getting to S3. It's all these different things.
And what we have in market today is the ability to literally just stream it to S3. What we do is automate the process of getting the data into a representation that you can now share and augment, and then we publish the open API, so you can actually use the tools you want. The first use case is log analytics: hey, it's easy to just stream your logs in, and we give you Elasticsearch-type services. Same thing with SQL, and you'll see mainstream machine learning next year. So listen, I think we have data lake, you know, 3.0 now, and we're just stretching our legs right now and having fun. >> Well, you started with log analytics, but I really do believe in this concept of building data products and data services, because people want to sell them, want to monetize them, and being able to do that quickly and easily, so others can consume them, is the future. So guys, thanks so much for coming on the program. Really appreciate it.
Chris McNabb & Ed Macosky, Boomi | Hyperautomation & The Future of Connectivity
(energetic music) >> Hello, welcome to theCUBE's coverage of Boomi's Out of This World event. I'm John Furrier, host of theCUBE. We've got two great guests here, Chris McNabb, CEO of Boomi, and Ed Macosky, SVP and Head of Products, talking about hyper automation and the future of connectivity. Gentlemen, thank you for coming on theCUBE, great to see you. >> John, it is great to see you again as well. Looking forward to the next in-person one. >> I miss the in-person events. You guys have had great events and a lot of action happening. Love the big news of going out in your own direction: big financing, change of control, all that good stuff happening, the industry's growing. Chris, this is a big move. You know, the industry is changing. Can you give us some context on what's going on in automation and connectivity? Because iPaaS, which you guys have pioneered, has been a big part of cloud and cloud scale, and now we're seeing next-generation things happening: data, automation, edge, modern application development, all happening. Set some context, what's going on? >> Yeah John, listen, it's a great time to be in our space at this point in time. Our customers, at the end of the day, are looking to create what we announced at last year's event, called Integrated Experiences, which is the combination of user engagement, awesome connectivity, and making sure high-quality data goes through that experience, providing 21st-century experiences. And we're right at the heart of that work. Our platform really drives all the services that are needed there. But what our customers really need, and what we're here at Out of This World to focus on today, is making sure we have the world's best connectivity capabilities, and the process automation and constituent engagement to really let them do what they want to do, where they want to do it. >> So a lot of big moves happening, what's the story? Take us through the story.
I mean, you guys have a transaction with big financing, setting up this intelligent connectivity and automation approach. Take us through the story, what happened? >> Yeah. So, you know, the Boomi business was sold outside of Dell, and that deal closed. We are now owned by two top-tier private equity firms, FP and TPG. That sale is completed, and now we are ready to unleash the Boomi business on this market. I think it's a great transaction for Dell, and it's a great transaction for FP and TPG, but most specifically, it's really a world-class transaction for the Boomi business, the Boomi customer base, as well as the Boomi employees. So I really look at this as a win-win-win, and it sets us up for really going after this. >> Yeah, and there's a huge wave coming, and you're seeing the big wave coming. It's just like, no need to debate it. It's here. It's cloud 2.0, whatever you want to call it; it's scale. IT has completely figured out that it's not only replatforming to the cloud, you've got to be in the cloud refactoring. This is driving the innovation. And this is really where I see you guys leading. So share with me, what is hyper automation? What does that actually mean? >> So what hyper automation really is, is intelligent connectivity and automation. Our customers have been doing this. It's very specifically about taking workflows, taking automation within the business, which has been around for a long time anyway, and adding AI and ML to it. So, as you continue to automate your business, you're getting more and more steam, and you get more and more productivity out of the (mumbles) organization, or productivity from the (mumbles). >> So Chris, tell us more about this hyper automation, because you guys have a large install base. Take us through some of the numbers of the customer base, and where the dots are connecting as they look at the new IT landscape as it transforms. >> Yeah, John, great question.
You know, when I talk to as many of our 18,000 customers worldwide as I can get to, what they are saying very clearly is that their IT landscape is getting more complicated, more distributed, more siloed, and it has more data. And as they work through that problem, what they're trying to accomplish is to engage their constituents in a 21st-century way, however they want, whether it be mobile web, portals, chatbots, or old-fashioned telephones. And doing that across that complicated landscape is extraordinarily difficult. So that's the pervasive problem that Boomi is purpose-built to help solve. Our customers sometimes start out with just great connectivity; hyper automation is where the real value comes in. That's where your constituents see a complete difference in how they interoperate with (mumbles). >> So, first of all, I love the term hyper automation because it reminds me of hyper scale, which, you know, look at the Amazons and the cloud players. That kind of game has evolved. I mean, the old joke is, what inning are we in, right? And to use a baseball metaphor, I think it's a doubleheader, and game one is won by the cloud. Right? So Amazon wins game one; game two is all about data. You guys, this is core to Boomi, and I want to get your thoughts on this, because data is the competitive advantage. If you look at the pandemic and the stories we're reporting on, this reinvention specifically will be a big story. Refactoring in the cloud is a big strategic effort, not just replatforming; refactoring in the cloud. So this is really where you guys are, I think, skating to where the puck is. Am I getting it right? Can you just share that vision? >> Yeah, John. From a vision perspective, I think the pandemic has really accelerated people's expectations. You know, we need to be more nimble, more flexible.
And because they have a fair amount in the cloud, they have to understand what the next tier is, what the next-generation offerings are that we put together, tie together, and connect. That means not only connecting systems, apps, databases, and clouds; you're connecting people, processes, and devices. So we're going to have a great story here at Out of This World about how we connect a bio-centric vest, to a video system, to a network monitoring hub, to protect officers' safety in Amsterdam in real time. We can deploy officers to a location all automatically: all decisions automatic, all locations, cameras (mumbles), all automatically. And that's only possible when we combine the next-generation technology that Boomi provides with the next-generation capabilities of the other providers in that solution. >> Ed, before we get to the product announcements for the event, we'll get your reaction to that. I see in the cloud you can refactor; you've got data, you've got latency issues. These all kind of go away when you start thinking about integrating it all together. What's your reaction to refactoring as the next step? >> Yeah. My reaction is, exactly what Chris said. As our customers are moving to the cloud, they're not choosing just one cloud anymore. It's multi-cloud, it's multidimensional (mumbles): you've got multi-cloud, you've got hybrid cloud, you have edge devices, et cetera. And our technology just naturally puts us in the space to do that. Based on what we see with our customers, we've actually connected over 189,000 different devices, application endpoints, data endpoints, et cetera, to people, and we're seeing growth of 44% year on year. So we're seeing that explosion in helping customers, and we just want to accelerate that and help them react to these changes as quickly as they possibly can. And a lot of it doesn't require, you know, massive uplift projects.
We've been lucky enough to be visionaries with our deployment technology, being able to embrace this new environment that's coming up, and we're right at the forefront of this (mumbles). >> Yeah. I love the intelligence angle, I love hyper automation. Okay, let's get into the product announcements of the Out of This World event. What are some of the announcements? Share with us the key highlights. >> Yeah. So first and foremost, we've announced a vision and our tactics. I talked about the 189,000 applications, devices, data endpoints, et cetera, that our customers are connecting today. They're moving very, very rapidly with that, and it's no longer about named connections and having fixed connectors to applications; you need to be able to react intelligently, pick the next endpoint, connect very quickly, and bring it into your ecosystem. So we've got this vision for a connectivity service we're working on that will basically normalize connectivity across all of the applications plugging into Boomi's iPaaS ecosystem and allow customers to get up and running very quickly. I'm really excited about that. The other thing we announced is Boomi Event Streams. We've been on this EDA journey, Event-Driven Architecture, for the last couple of years, embracing an open ecosystem. But we found that in order to go faster for our customers, it's very, very important that we bring this into Boomi's iPaaS platform. Our partnerships in this area are still very important for us, but our customers are demanding, "hey, bring us into your platform," and we need to move faster, and our new Boomi Event Streams will allow them to do that. We also recently announced the Boomi Discover Catalog. This is an ongoing vision for us.
We're building it up into a marketplace where customers and partners can all participate, whether it's inside a customer's ecosystem, or partners, or Boomi, et cetera, offering quick onboarding solutions for their customers. We will learn intelligently as people use these solutions, to help customers onboard, build, and connect to these systems faster. So that's how it all comes together for us in a hyper automation scenario. The last thing, too, is that we are working on RPA as last-mile connectivity. Where RPA starts today, you know, gone are going to be the days of having RPA at the desktop, where you have to have someone manually run it. Our runtime technology extends to desktops anyway, so we are going to bring RPA technology into the iPaaS platform as we move forward here, so that our customers can enjoy the benefits of that as well. >> That's really interesting. I was going to ask about the event streams, but I love this RPA angle. Tell me more about how that impacts things; I think it's pretty big. What's the impact when you bring robotic process automation, RPA, into iPaaS? What's the impact for the customer? >> The impact for the customer is that we believe customers can really enjoy true cloud when it comes to RPA technology. Today, most RPA technologies, like I said, are deployed at a desktop, and they're manually run by some folks. That helps speed up the business user and adds some value. But our technology will bring it to the cloud, and allow the connectivity of what a robotic process automation solution is doing to tap into the iPaaS ecosystem and extend and connect that data up into the cloud, or even other operating systems that the customer (mumbles). >> Okay. So on the event streams you guys announced: obviously that's part of the Event-Driven Architecture you've been part of.
What is it, and why is it important for customers? Can you just take a minute to explain why event streams and why event-driven approaches are important? >> Because customers need access to the data in real time. There are two reasons it's very important to customers. One is that Event-Driven Architectures are on the rise: in order to truly scale an environment, if you're talking tens of millions of transactions, you need an Event-Driven Architecture in place to manage that state, so you don't have message loss or any of those types of things. So it's important that we continue to invest as our customers scale up their environments with us. The other reason it's very important to bring it into our platform is that our customers enjoy the luxury of an integrated experience as they build, you know, intelligent connectivity and automation solutions within our platform. Asking a customer to go work with a third-party technology, versus enjoying it in an integrated experience, is why we want to bring it in and have them get their (mumbles) much faster. >> I really think you guys are onto something, because it's a partnership world. Ecosystems are now everywhere. There are ecosystems because everything's a platform now; that's evolving from tools to platforms, and it's not one platform that rules the world. This is the benefit of how the cloud is emerging, almost a whole other set of cloud capabilities. I love this vision, and you start to see that, and you guys did talk about this thing called the connectivity marketplace. What is that? Is that a place where people are sharing instead of partnering? I know a lot of partners are connected with each other and want to have it all automated. How does this all play in? Can you just quickly explain that?
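To make the event-driven idea above concrete, here is a minimal in-memory sketch of a topic-based stream: producers publish events, and a consumer reads them in order without the two ever calling each other directly, which is what lets each side scale independently. This is a toy illustration of the pattern, not Boomi Event Streams' actual API; the topic and event fields are invented:

```python
from collections import defaultdict, deque

class EventStream:
    """Toy topic-based stream: producers publish, consumers read in order."""
    def __init__(self):
        self.topics = defaultdict(deque)

    def publish(self, topic, event):
        # Producer appends and returns immediately; no consumer is called.
        self.topics[topic].append(event)

    def consume(self, topic):
        # Consumer drains events in the order they were published.
        while self.topics[topic]:
            yield self.topics[topic].popleft()

stream = EventStream()
stream.publish("orders", {"id": 1, "status": "created"})
stream.publish("orders", {"id": 1, "status": "shipped"})

handled = [e["status"] for e in stream.consume("orders")]
```

A production system adds durability, acknowledgements, and consumer groups on top of this shape, but the decoupling shown here is the core of why event-driven designs tolerate tens of millions of transactions without losing state.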
>> Yeah, so in the last year we launched an open source community around connectors and that sort of thing; we invested pretty heavily in our SDK. We see quite a big uptake in the ecosystem of people building specific connectors as well as solutions. And our partners were very excited about partnering with us on go-to-markets and those sorts of things, so they can offer solutions to their customers on a marketplace. So we are reacting to the popular demand we have from our partners and customers, who say, hey, we'd love to participate in this marketplace, we'd love to be able to work with you and publish the solutions we're delivering to customers. So we're fulfilling that mission on behalf of our customers and partners. >> You know, Chris, when you look at the cloud-native ecosystem at the high level, you're seeing open source driving a big part of it, and large enterprises, large customers, moving to that next level of modern application development. They're partnering, right? They'll outsource and partner for some components, maybe edge components, maybe bring someone else in over here, have a supplier; everything's connected now in the cloud, AKA DevOps meets, you know, business logic. So this seems to be validated. How do you see this evolving? How does this iPaaS kind of environment just become the environment? It seems to me that's what's happening. What's your reaction to that trend? >> I think as iPaaS evolves, we've extended the breadth of our iPaaS dramatically. We're not just an integration platform; we take the broadest definition of the word integration, I guess I'll say it that way. You're integrating people. Connecting people is just as important as connecting cloud applications. So, you know, that's part one in terms of the vision. Two is going to be the importance of speed and productivity.
It's critically important that people can figure out how to connect quickly, because endpoints are exploding. You have to connect in a fraction of the time it ever took, and hand-coding is just not the way that works. You have to abstract it and make it simpler: low-code, no-code environments, configuration-based environments, making it simpler for more people outside of IT to actually use the solutions. So that's where these platforms become much more pervasive in the enterprise; they solve a much bigger problem, and they solve it at speed. So, you know, the vision is to continue to accelerate that. When we got started here, things used to take months and months; it came down to weeks, it came down to days, and now it's hours. We're looking at seconds to define connectivity: an easy button to get connected and get working. That's our vision for intelligent connectivity. >> Okay, so we're talking about hyper automation in a future context. That's this segment: what is the future of connectivity? Take me through that. How does that evolve? I can see a marketplace, I can see an ecosystem, I see people connecting with partners and applications and data. What is the future of connectivity? >> The vision, right, for connectivity, and we talk about our connectivity as a service, is that you have to think about connectivity instead of connectors: not a fixed thing that talks to one application. What we look at is, you should be able to point at an endpoint, pick a cloud app, any cloud application. You have an API.
I should be able to automatically, programmatically, and dynamically, any time I want, go interrogate that API, browse it at the push of a button, and have established connectivity. In the amount of time it's taken me to explain it, you should almost be able to work through it and be connected to and talking to that endpoint. We're going to bring that kind of connectivity, that dynamically generated, automatic connectivity, into our platform; that's the vision. >> And from a product standpoint, this should be literally plug and play, so to speak, an old term, but really seamless: automate and play, just connect. >> Yes, absolutely. And while Chris was talking, I was thinking about a customer, to remain unnamed, from one of the interviews coming up at Out of This World. The customer was describing to us the capabilities we already have today, where he, a CTO, was able to get an integration up and running before his team was able to write the requirements for the integration. So those are the types of things we're looking to continue to add to. And we're also, you know, not asking our customers to make a choice. You can scale up and scale down. It's very important for our customers to realize that whether the problem is really big or really small, our platform is there to get it done fast and in a secure way. >> I see a lot of people integrating in the cloud, with each other, with themselves, with other apps, seeing huge benefits while still working on premises across multiple environments. So this kind of new operating model is evolving; some people call it refactoring, whatever term you want to use. It's a change in value creation; it creates new value. So as you guys go out, Chris, take us through your vision on next steps. Okay, you're going to be independent. You've got the financing behind you. Dell got a nice deal. You guys are going forward. What's next for Boomi?
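The "interrogate an endpoint and establish connectivity" idea above can be sketched as metadata-driven code generation: given a machine-readable description of an API's operations, build the callable client on the fly instead of hand-writing a connector. The catalog format and operation names below are invented for illustration, and the sketch only formats request lines rather than issuing real HTTP calls; it is not Boomi's actual connectivity service:

```python
# Hypothetical operation catalog, as might be discovered by interrogating
# an API's published description (e.g. something OpenAPI-like).
CATALOG = {
    "get_invoice": {"method": "GET", "path": "/invoices/{id}"},
    "list_users": {"method": "GET", "path": "/users"},
}

def build_connector(catalog):
    """Generate one callable per discovered operation, no hand-written client."""
    def make_op(name, spec):
        def op(**params):
            url = spec["path"].format(**params)
            return f'{spec["method"]} {url}'  # real code would issue the request
        op.__name__ = name
        return op
    return {name: make_op(name, spec) for name, spec in catalog.items()}

api = build_connector(CATALOG)
request_line = api["get_invoice"](id=42)  # "GET /invoices/42"
```

The point of the sketch is that adding an endpoint changes only the discovered metadata, not any code, which is what makes "connectivity in seconds" plausible compared with writing and shipping a fixed connector per application.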
>> Well, listen John, you know, we couldn't be more excited to have the opportunity to truly unleash this business on the market, and, you know, our employees are super excited and our customers are going to benefit. Our customers are going to get a lot more product innovation; we already put out 11 releases a year, with literally a hundred different features going into the product, and we're looking to double down on that and really accelerate our path toward the things we were talking about today. Engagement with our customers gets much better: doubling down on customer success, support people, people in the field, engaging our customers in so many different ways. When we partner with our customers, we care about their overall success, and this investment really gives us so many avenues to double down on making sure their journey with us, and their journey toward their success as a business, is one we can help them get to. >> You guys have a lot of trajectory, experience, and knowledge in this industry. It's really a great position to be in. And as you guys take on this next wave: Chris McNabb, CEO of Boomi, Ed Macosky, SVP and Head of Products, thanks for coming on theCUBE. This is theCUBE's coverage of Boomi's Out of This World. I'm John Furrier, your host. Thanks for watching. (upbeat music)
Ed Walsh and Thomas Hazel, ChaosSearch | JSON
>> Hi everybody, this is Dave Vellante. Welcome to this Cube conversation with Thomas Hazel, the founder and CTO of ChaosSearch. I'm also joined by Ed Walsh, who's the CEO. Thomas, good to see you. >> Great to be here. >> First of all, explain JSON. >> JSON is a powerful data representation, a data source. But let's just say that when we try to drive value out of it, it gets complicated. ChaosSearch activates customers' data lakes: customers stream their JSON data to the cloud stores that we activate. Now, the trick is the complexity of a JSON data structure. You can have all this complexity of representation, and here's the problem: putting that representation into an Elasticsearch database or a relational database is very problematic. So what people choose to do is pick and choose what they want, and/or they just store it as a blob. And so I said, what if we create a new index technology that could store the full representation, but dynamically, in what we call our data refinery, publish access to all the permutations you may want? Because if you do a full flattening of that JSON, one row theoretically could be put into a million rows, and the relational data sort of explodes. >> But then it gets really expensive. And yet everybody says they have JSON support; every database vendor that I talk to, it's a big announcement: we now support JSON. What's the deal? >> Exactly. So you take your relational database, with all those relational constructs, and you have a proprietary JSON API to pick and choose. So instead of picking and choosing up front, now you're picking and choosing on the back end, where you really want the power of relational analysis of that JSON data. And that's where Chaos comes in: we expand those data streams, and we do it in a relational way. So all the tooling you've built to know and love, now you have access to it.
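Thomas's point about flattening is easy to demonstrate. Below is a minimal Python sketch (illustrative only, not ChaosSearch code, and it only handles top-level arrays): fully flattening a JSON document into relational rows yields the cross-product of its arrays, so row counts multiply quickly.

```python
import itertools

def flatten(doc):
    # Fully flatten one JSON-style document into relational rows:
    # every combination of array elements becomes its own row, so the
    # row count is the product of the array lengths.
    arrays = {k: v for k, v in doc.items() if isinstance(v, list)}
    scalars = {k: v for k, v in doc.items() if not isinstance(v, list)}
    keys = list(arrays)
    rows = []
    for combo in itertools.product(*(arrays[k] for k in keys)):
        row = dict(scalars)
        row.update(zip(keys, combo))
        rows.append(row)
    return rows

doc = {
    "user": "alice",                            # scalar: repeated per row
    "tags": ["web", "mobile", "api"],           # 3 elements
    "events": [f"e{i}" for i in range(10)],     # 10 elements
    "ips": [f"10.0.0.{i}" for i in range(10)],  # 10 elements
}
rows = flatten(doc)
print(len(rows))  # 3 * 10 * 10 = 300 rows from a single document
```

Add one more nested array of 30 elements and the same document flattens to 9,000 rows, which is why vendors either truncate ("pick and choose") or fall back to storing the document as a blob.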
So if you're doing proprietary APIs on JSON data, you're not using Looker, you're not using Tableau. You're doing some type of proprietary tooling on the back end. >> Okay, so you're saying all the tools that you've trained everybody on, you can't really use them; you've got to build some custom stuff. Okay, so maybe bring that home in terms of the money. Why do the suits care about this stuff? >> The reason this is so important is, think about anything cloud native: Kubernetes, your different applications. What you're doing in Mongo is all JSON. It's very powerful but painful, because if you're not keeping the data, what the data scientists are doing is just leveling it down; they're saying, I'm going to only keep the first four things. So think about it: it's Kubernetes, it's your app logs. They're trying to figure out, for Black Friday, what happened. It's literally saying, hey, every minute they'll cut a new log, and you're able to say, listen, these are the users that were in the system for an hour, and here are the different things they did. The fact of the matter is, if you cut it off, you lose all that fidelity, all that data, so it's really important to have it. So if you're trying to figure out what happened for security, what happened for performance, or if you're the VP of product or growth figuring out how to cross-sell things, you need to know what everyone's doing. If you're not handling JSON natively, like we're doing, it keeps on expanding. On Black Friday, all of a sudden the logs get huge; the next day they're not. But it's really powerful data that you need to harness for business value. It's what's going to drive growth. It's what's going to drive the digital transformation. So without the technology, you're kind of blind, and to be honest, you don't even know, because a data scientist has kind of deleted the data on you.
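To make the earlier point concrete: "JSON support" in a relational database usually means a path-extraction API. A small sketch using SQLite's JSON functions as a stand-in for any vendor's proprietary JSON API (this assumes a SQLite build with the JSON1 functions, which recent Python releases ship with):

```python
import sqlite3
import json

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (doc TEXT)")  # whole document stored as a blob

events = [
    {"user": "alice", "status": 200, "extra": {"region": "us-east-1"}},
    {"user": "bob", "status": 500, "extra": {"region": "eu-west-1"}},
]
conn.executemany("INSERT INTO logs VALUES (?)",
                 [(json.dumps(e),) for e in events])

# The "picking and choosing" happens at query time, through a JSON path
# syntax specific to the database rather than ordinary relational columns,
# which is why generic BI tools can't browse these fields on their own.
rows = conn.execute(
    "SELECT json_extract(doc, '$.user'), json_extract(doc, '$.status') "
    "FROM logs WHERE json_extract(doc, '$.status') >= 500"
).fetchall()
print(rows)
```

Every field you did not explicitly extract stays locked inside the blob, which is the fidelity trade-off the conversation keeps returning to.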
So this is big for the business and digital transformation, but it was also such a pain that the data scientists and DBAs were forced to just basically keep it simple, so it didn't blow up their system. We allow them to keep it simple, but yes, >> Both are powerful. It reminds me of going on vacation with your video camera: somebody breaks into your house, you go back to look and see who it was, and the data's gone. The video's gone, because you weren't able to save it. It's too >> Expensive. Well, it's funny, this is the first data source that's driving the design of the database, because of all the value. We should be designing the database around the information it stores, not the structure and how it's been organized. And so our viewpoint is, you get to choose your structure yet contain all that content. So if a vendor >> Says, and I'm the customer here, hey, we've got JSON support, what questions should I ask to really peel the onion? >> Well, particularly: is it relational access to that data? Now you could say, oh, I ETL the JSON into it. But chances are, given the explosion of JSON permutations, one row to a million, they're not doing the full representation. So from our viewpoint, you're either doing blob-type access through proprietary JSON APIs, or you're picking and choosing. Those are the choices; that is the market's thinking. However, what if you could take all the permutations and design your schema based on how you want to consume it, versus how you can store it? And that's a big difference with us. >> So I should be asking: how do I consume this data? Do you ETL it in? How much data explosion is going to occur once I do this? And you're saying, for ChaosSearch, the answer to those questions... >> The answer is, again, our philosophy: simply stream your data into your cloud object storage, your data lake, and with our index technology and our data refinery,
You get to create views dynamically, instantly, whether it's a terabyte or a petabyte, and describe how you want your data to be consumed, in a relational way or an Elasticsearch way; both are consumable through our data refinery. >> For us, the refinery gives you the view. So what happens if someone wants a different view, wants to unpack different columns or different matrices? You're able to do that in a virtual view, and it's available immediately, over petabytes of data. You don't have that episode where you come back, look at the video camera, and there's no data left. >> We do appreciate the time and the explanation on really understanding JSON. Thank you. All right, and thank you for watching this Cube conversation. This is Dave Vellante. We'll see you next time.
Ed Walsh and Thomas Hazel V1
>> Welcome to theCUBE, I'm Dave Vellante. Today we're going to explore the ebb and flow of data as it travels into the cloud and the data lake. The concept of data lakes was alluring when it was first coined last decade by CTO James Dixon. Rather than being limited to the highly structured and curated data that lives in a relational database, in the form of an expensive and rigid data warehouse or data mart, a data lake is formed by flowing data from a variety of sources into a scalable repository, like, say, an S3 bucket, that anyone can access and dive into. They can extract water, er, data, from that lake and analyze data that's much more fine-grained and less expensive to store at scale. The problem became that organizations started to dump everything into their data lakes with no schema, right? No metadata, no context; just shove it into the data lake and figure out what's valuable later. At some point down the road, it kind of reminds you of your attic, right? Except this is an attic in the cloud, so it's too big to clean out over a weekend. Well, look, it's 2021, and we should be solving this problem by now. A lot of folks are working on it, but often the solutions add other complexities for technology pros. So to understand this better, we're going to enlist the help of ChaosSearch CEO Ed Walsh and Thomas Hazel, the CTO and founder of ChaosSearch. We're also going to speak with Kevin Miller, who's the vice president and general manager of S3 at Amazon Web Services, and of course they manage the largest and deepest data lakes on the planet. And we'll hear from a customer to get their perspective on this problem and how to go about solving it. But let's get started. Ed, Thomas, great to see you. Thanks for coming on theCUBE. >> Likewise. It's really good to be in this nice space. >> Great. So let me start with you, Ed. We've been talking about data lakes in the cloud forever. Why is it still so difficult to extract value from that data? >> Good question.
I mean, data analytics at scale has always been a challenge, right? We're making some incremental changes, but as you mentioned, we need to see some step-function changes; in fact, that's the reason ChaosSearch was really founded. If you look at it, it's the same challenge around a data warehouse or a data lake: really, it's not just flowing the data in, it's how to get insights out. So it falls into a couple of areas. The business side will always complain, and it's kind of uniform across data lakes and everything we're covering. They'll say, hey, listen, I typically have to deal with a centralized team to do that data prep, because it's data scientists and DBAs. Most of the time they're a centralized group, sometimes they're in business units, but most of the time they're centralized, because they're scarce resources. And then it takes a lot of time. It's arduous, it's complicated, it's a rigid process. The deal with the team makes it hard to add new data, and it's very hard to share data; there's no way to do governance without locking it down. And of course, they wish it were more self-service. So that's what you hear from the business side constantly. Now, underneath, there are some real technology issues, like the fact that we haven't really changed the way we do data prep since the two thousands, right? So if you look at it, it falls into two big areas. One is data prep: how do you take a request that comes in from a business unit, I want to do X, Y, Z with this data, I want to use these tool sets to do the following? Someone has to be smart about how to put that data in the right schema. As you mentioned, you have to put it in the right format so the tool sets can analyze that data before you do anything. And then secondly, and I'll come back to that, because that's the biggest challenge:
But the second challenge is how these different data lakes and data warehouses are also persisting data, the complexity of managing that data, and also the cost of the compute; I'll go through that. But basically, the biggest thing is actually getting from raw data, past the rigidness and complexity, to what the business side can use. Literally, someone has to do this ETL process: extract, transform, load. They take a data request that comes in, I need this much data shaped this way, and they're literally, physically duplicating data and putting it together in a schema. They're stitching together almost a data puddle for all these different requests. >> And what happens is, any time they have to do that, someone has to do it, and those are very skilled resources that are scarce in the enterprise, right? It's the DBAs and data scientists. And then, when they want new data, you give them a data set, and they're always saying, can I add this data? Now that I've seen the reports, I want to add this data, or fresher data. And the same process has to happen. This takes about 60 to 80% of the data scientists' and DBAs' time; it's well documented. And this is what actually stops the process. That's what is rigid. They have to be rigid, because there's a process around it. That's the biggest challenge to doing this, and in the enterprise it takes weeks or months; I always say three weeks to three months, and no one challenges me on that. It also takes the same skill set of people you want driving digital transformation, data warehousing initiatives, modernization, being data driven, all those data scientists and DBAs you don't have enough of. So this is not only hurting you getting insights out of your data lake; this resource constraint is also hurting you actually getting smarter. >> The atomic unit is that team, that super-specialized team. >> Right. Right. >> Yeah. Okay.
So you guys talk about activating the data lake. >> Yep, sure. >> For analytics. What's unique about that? What problems are you solving? You know, when you guys created this magic sauce. >> Yeah, basically there are a lot of things. I highlighted the biggest one, how to do the data prep, but there's also how you're persisting and using the data. In the end, there are a lot of challenges in how to get analytics at scale, and this is really why Thomas founded the team to go after this. But I'll try to say it simply: what are we doing? I'll compare and contrast what we do with what you'd do with maybe an Elastic cluster or a BI cluster. If you look at it, what we do is simply put your data in S3. Don't move it, don't transform it; in fact, we're against data movement. What we do is literally point at that data, index it, and make it available in a data representation from which you can give virtual views to end users. >> And those virtual views are available immediately, over petabytes of data, and it actually gets presented to the end user as an open API. So if you're an Elasticsearch user, you can use all your Elasticsearch tools on this view. If you're a SQL user: Tableau, Looker, all the different tools. Same thing with machine learning next year. So what we do is make it very simple: simply put it there, and it's already there, actually. Point us at it; we do the hard work of indexing and making it available, and then we publish the open APIs, so your users can use exactly what they use today. That's dramatic, so I'll give you a before and after. Let's say you're doing Elasticsearch, doing logging analytics at scale. They're landing their data in S3, and then they're physically duplicating and moving data, and typically deleting a lot of data, to get it in a format Elasticsearch can use.
It's physically sitting in memories, CPU, uh, uh, SSDs. And it's not one of them. It's a bunch of those. They in the cloud, you have to set them up because they're persisting ECC. They stand up semi by 24, not a very cost-effective way to the cloud, uh, cloud computing. What we do in comparison to that is literally pointing it at the same S3. In fact, you can run a complete parallel, the data necessary. It's being ETL. That we're just one more use case read only, or allow you to get that data and make this virtual views. So we run a complete parallel, but what happens is we just give a virtual view to the end users. We don't need this persistence layer, this extra cost layer, this extra, um, uh, time cost and complexity of doing that. >>So what happens is when you look at what happens in elastic, they have a constraint, a trade-off of how much you can keep and how much you can afford to keep. And also it becomes unstable at time because you have to build out a schema. It's on a server, the more the schema scales out, guess what you have to add more servers, very expensive. They're up seven by 24. And also they become brittle. As you lose one node. The whole thing has to be put together. We have none of that cost and complexity. We literally go from to keep whatever you want, whatever you want to keep an S3, a single persistence, very cost effective. And what we do is, um, costs. We save 50 to 80% why we don't go with the old paradigm of sit it up on servers, spin them up for persistence and keep them up. >>Somebody 24, we're literally asking her cluster, what do you want to cut? We bring up the right compute resources. And then we release those sources after the query done. So we can do some queries that they can't imagine at scale, but we're able to do the exact same query at 50 to 80% savings. And they don't have to do any of the toil of moving that data or managing that layer of persistence, which is not only expensive. It becomes brittle. 
And then it becomes an I'll be quick. Once you go to BI, it's the same challenge, but the BI systems, the requests are constant coming at from a business unit down to the centralized data team. Give me this flavor of debt. I want to use this piece of, you know, this analytic tool in that desk set. So they have to do all this pipeline. They're constantly saying, okay, I'll give you this data, this data I'm duplicating that data. I'm moving in stitching together. And then the minute you want more data, they do the same process all over. We completely eliminate that. >>The questions queue up, Thomas, it had me, you don't have to move the data. That's, that's kind of the >>Writing piece here. Isn't it? I absolutely, no. I think, you know, the daylight philosophy has always been solid, right? The problem is we had that who do hang over, right? Where let's say we were using that platform, little, too many variety of ways. And so I always believed in daily philosophy when James came and coined that I'm like, that's it. However, HTFS that wasn't really a service cloud. Oddish storage is a service that the, the last society, the security and the durability, all that benefits are really why we founded, uh, Oncotype storage as a first move. >>So it was talking Thomas about, you know, being able to shut off essentially the compute and you have to keep paying for it, but there's other vendors out there and stuff like that. Something similar as separating, compute from storage that they're famous for that. And, and, and yet Databricks out there doing their lake house thing. Do you compete with those? How do you participate and how do you differentiate? >>I know you've heard this term data lakes, warehouse now, lake house. And so what everybody wants is simple in easy N however, the problem with data lakes was complexity of out driving value. And I said, what if, what if you have the easy end and the value out? 
So if you look at, say, Snowflake as a warehousing solution, you have all that prep and data movement to get into that system, and it's rigid, static. Now Databricks, now that lakehouse, has the exact same thing. Sure, they have a data lake philosophy, but their data ingestion is not data lake philosophy. So I said, what if we had that simple in, with a unique architecture and index technology, making it virtually accessible and publishable, dynamically, at petabyte scale? And so our service connects to the customer's cloud storage, streams the data in, sets up what we call a live indexing stream, and then you go to our data refinery and publish views that can be consumed through the Elasticsearch API, with Kibana or Grafana, or as SQL tables, with Looker or, say, Tableau. And so we're getting the benefits of both sides: schema-on-read flexibility with schema-on-write performance. And if you can do that, that's the true promise of a data lake. You know, again, nothing against Hadoop, but schema-on-read, with all that complexity of software, was what made a data lake a little data swamp. >> You've got to start somewhere, okay, and we've got to give Hadoop props. But everybody I talk to has got this big bunch of Spark clusters now, saying, all right, this doesn't scale, we're stuck. And so, you know, I'm a big fan of her concept of the data mesh, and it's early days. But if you fast-forward to the end of the decade, what do you see as the critical components of this notion, people call it data mesh, and you've got the analytics stack. You're a visionary, Thomas; how do you see this thing playing out? >> I love her thought leadership. To be honest, our core principles were her core principles, you know, five, six, seven years ago. So this idea of decentralization, data as a product, self-serve, and federated computational governance, all of that was our core principle.
The trick is how you enable that mesh philosophy. I could say we're mesh-ready, meaning that we can participate in a way that very few products can. If there are gates on data getting into your system, the ETL, the schema management... my argument with the data mesh is that producers and consumers should have the same rights. I want the consumer to choose how they want to consume that data, just as the producer chooses how to publish it, and I can say our data refinery is that answer. You know, shoot, I'd love to open up a standard, right, where we can really talk about producers and consumers and the rights each has. But I think she's right on the philosophy. I think as products mature in the cloud, in these data lake capabilities, the trick is those gates. If you have to structure up front, it gates those pipelines, and the chance of getting your data into a mesh is the weeks and months we were mentioning. >> Well, I think you're right. I think the problem with data mesh today is the lack of standards. When you draw the conceptual diagrams, you've got a lot of lollipops, which are APIs, but they're all unique primitives. So there aren't standards by which, to your point, the consumer can take the data the way he or she wants it and build their own data products, without having to tap people on the shoulder to say, how can I use this? Where does the data live? And being able to add their own. >> You're exactly right. So in an organization, generally, data will be streamed to a lake, and then with the ChaosSearch service the data is discoverable and configurable by the consumer. Let's say you want to go to the corner store: I want to make a certain meal tonight, and I want to pick and choose what I want, how I want it.
Imagine if the data mesh truly could have that producer of information, with all the things you can buy at a grocery store, and what you want to make for dinner. If it's static, if you have to call up your producer to make a change, was it really a data-mesh-enabled service? I would argue not. >> Bring us home. >> Well, maybe one more thing on this, because some of this is us talking about 2031, but largely these principles are what we have in production today, right? So even the self-service, where you can actually put business context on top of a data lake, we do that today. We talked about getting rid of the physical ETL, which is 80% of the work; the last 20% is done by this refinery, where you can create virtual views, with the right RBAC, role-based access control, and do all the transformation needed and make it available. And you can actually offer that as a role-based access service to your end users, actual analysts, who don't have to be data scientists or DBAs. In the hands of a data scientist or DBA it's powerful, but the fact of the matter is you can give it to all of your employees, regardless of seniority. If they're in finance or in sales, they can actually go through and learn how to do this, so you don't have to be in IT. And part of that is they can come up with their own view, which is one of the things about data lakes that business units want to do for themselves. More importantly, because they have the context of what they're trying to do, instead of queuing up a very specific request that takes weeks, they're able to do it themselves, and to find out that
And it's not one thing it's not just getting S3, it's all these different things. And what we have in market today is the ability to literally just simply stream it to S3 by the way, simply do what we do is automate the process of getting the data in a representation that you can now share an augment. And then we publish open API. So can actually use a tool as you want first use case log analytics, Hey, it's easy to just stream your logs in and we give you elastic search puppet services, same thing that with CQL, you'll see mainstream machine learning next year. So listen, I think we have the data lake, you know, 3.0 now, and we're just stretching our legs run off >>Well, and you have to say it log analytics. But if I really do believe in this concept of building data products and data services, because I want to sell them, I want to monetize them and being able to do that quickly and easily, so that can consume them as the future. So guys, thanks so much for coming on the program. Really appreciate it. All right. In a moment, Kevin Miller of Amazon web services joins me. You're watching the cube, your leader in high tech coverage.
Ed Naim & Anthony Lye | AWS Storage Day 2021
(upbeat music) >> Welcome back to AWS Storage Day. This is the Cube's continuous coverage. My name is Dave Vellante, and we're going to talk about file storage. 80% of the world's data is in unstructured storage, and most of that is in file format. Devs want infrastructure as code. They want to be able to provision and manage storage through an API, and they want that cloud agility. They want to be able to scale up, scale down, pay by the drink. And the big news of Storage Day was really the deep partnership between AWS and NetApp. And with me to talk about that are Ed Naim, who's the general manager of Amazon FSx, and Anthony Lye, executive vice president and GM of public cloud at NetApp. Two Cube alums. Great to see you guys again. Thanks for coming on. >> Thanks for having us. >> So Ed, let me start with you. You launched FSx in 2018 at re:Invent. How is it being used today? >> Well, we've talked about FSx on the Cube before, Dave, but let me start by recapping that FSx makes it easy to launch and run fully managed, feature-rich, high performance file storage in the cloud. And we built FSx from the ground up really to have the reliability, the scalability you were talking about, and the simplicity to support a really wide range of workloads and applications. And with FSx, customers choose the file system that powers their file storage, with full access to the file system's feature sets, performance profiles, and data management capabilities. And so since re:Invent 2018, when we launched the service, we've offered two file system choices for customers. So the first was Windows File Server, and that's really storage built on top of Windows Server, designed as a really simple solution for Windows applications that require shared storage. And then Lustre, which is an open source file system that's the world's most popular high-performance file system. And the Amazon FSx model has really resonated strongly with customers for a few reasons.
So first, for customers who currently manage network attached storage, or NAS, on premises, it's such an easy path to move their applications and their application data to the cloud. FSx works and feels like the NAS appliances that they're used to, but added to all of that are the benefits of a fully managed cloud service. And second, for builders developing modern new apps, it helps them deliver fast, consistent experiences for Windows and Linux in a simple and an agile way. And then third, for research scientists, its storage performance and its capabilities for dealing with data at scale really make it a no-brainer storage solution. And so as a result, the service is being used for a pretty wide spectrum of applications and workloads across industries. So I'll give you a couple of examples. So there's this class of what we call common enterprise IT use cases. So think of things like end user file shares, corporate IT applications, content management systems, highly available database deployments. And then there's a variety of common line of business and vertical workloads that are running on FSx as well. So financial services, there's a lot of modeling and analytics workloads; life sciences, a lot of genomics analysis; media and entertainment, rendering and transcoding and visual effects; automotive, we have a lot of electronic control unit simulations and object detection; semiconductor, a lot of EDA, electronic design automation. And then oil and gas, seismic data processing, a pretty common workload on FSx. And then there's a class of really ultra high performance workloads that are running on FSx as well. Think of things like big data analytics. So SAS Grid is a common application. A lot of machine learning model training, and then a lot of what people would consider traditional or classic high performance computing, or HPC. >> Great. Thank you for that. Just a quick follow-up if I may, and I want to bring Anthony into the conversation.
So why NetApp? This is not a Barney deal; there was real elbow grease going into this, not just a "you know, I love you, you love me, we do a press release." But why NetApp? Why ONTAP? Why now? (momentary silence) Ed, that was to you. >> Was that a question for Anthony? >> No, for you, Ed. And then I want to bring Anthony in. >> Oh, sure. Sorry. Okay. Sure. Yeah, I mean, Dave, it really stemmed from both companies realizing a combined offering would be highly valuable to, and impactful for, customers. In reality, Amazon and NetApp started collaborating on the service probably about two years ago. And we really had a joint vision that we wanted to provide AWS customers with the full power of ONTAP. The complete ONTAP, with every capability and with ONTAP's full performance, but fully managed and offered as a full-blown AWS native service. So what that would mean is that customers get all of ONTAP's benefits along with the simplicity, the agility, the scalability, the security, and the reliability of an AWS service. >> Great. Thank you. So Anthony, I have watched NetApp reinvent itself: it started in workstations, I saw you go into the enterprise, I saw you lean into virtualization, and you told me, at least two years ago, maybe three, "Dave, we are going all in on the cloud. We're going to lead this next chapter." And so I want you to bring in your perspective. You're reinventing NetApp yet again. You know, what are your thoughts? >> Well, you know, NetApp and AWS have had a very long relationship. I think it probably dates back about nine years now. And what we really wanted to do at NetApp was give the most important constituent of all an experience that helped them progress their business. So with ONTAP, the industry's leading shared storage platform, we wanted to make sure that in AWS it was as good as it was on premise. We love the idea of giving customers this wonderful concept of symmetry.
You know, ONTAP runs the biggest applications in the largest enterprises on the planet. And we wanted to give not just those customers an opportunity to embrace the Amazon cloud, but we also wanted to extend the capabilities of ONTAP, through FSx, to a new customer audience. Maybe those smaller companies that didn't really purchase on premise infrastructure, people that were born in the cloud. And of course, this gives us a great opportunity to present a fully managed ONTAP, within the FSx platform, to a lot of non NetApp customers, to our competitors' customers, Dave, that frankly haven't done the same as we've done. And I think we are the benefactors of it, and we're in turn passing that innovation, that transformation, on to the customers and the partners. >> You know, one of the key aspects here is that it's a managed service. I don't think that could be, you know, overstated. And the other is the cloud nativeness of this. Anthony, you mentioned our marketplace is great, but this is some serious engineering going on here. So Ed, maybe start with the perspective of a managed service. I mean, what does that mean? The whole ball of wax? >> Yeah. I mean, what it means to a customer is they go into the AWS console, or they go to the AWS SDK or the AWS CLI, and they are easily able to provision a resource, a file system, and it automatically will get built for them. And there's nothing that they need to do at that point; they get an endpoint that they have access to the file system from, and that's it. We handle patching, we handle all of the provisioning, we handle any hardware replacements that might need to happen along the way. Everything is fully managed. So the customer really can focus not on managing their file system, but on doing all of the other things that they want to do and that they need to do. >> So.
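Ed's description of provisioning, hitting the console, SDK, or CLI and getting an endpoint back, can be sketched with the AWS SDK for Python. This is only an illustrative sketch: the subnet ID is a placeholder, and a real call would need valid account resources and credentials.

```python
# Sketch of provisioning an Amazon FSx file system programmatically.
# The subnet ID below is a placeholder, not a real resource.

def build_fsx_request(fs_type: str, capacity_gib: int, subnet_id: str) -> dict:
    """Assemble the kwargs for boto3's fsx create_file_system call."""
    if fs_type not in ("WINDOWS", "LUSTRE", "ONTAP"):
        raise ValueError(f"unsupported file system type: {fs_type}")
    return {
        "FileSystemType": fs_type,
        "StorageCapacity": capacity_gib,
        "SubnetIds": [subnet_id],
    }

request = build_fsx_request("LUSTRE", 1200, "subnet-0123456789abcdef0")

# In a real environment you would hand this to the service and read back
# the endpoint (DNS name) of the fully managed file system:
#   import boto3
#   fsx = boto3.client("fsx")
#   response = fsx.create_file_system(**request)
#   dns_name = response["FileSystem"]["DNSName"]
```

Note that each FSx file system type also takes its own configuration block (for example, an ONTAP deployment needs more than these three fields); the sketch keeps only the parameters common to all types.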
So Anthony, in a way you're disrupting yourself, which is kind of what you told me a couple of years ago. You're not afraid to do that, because if we don't do it, somebody else is going to do it. You're used to the old days: you're selling a box and you say, we'll see you next time, you know, in three or four years. So from your customer's standpoint, what's their reaction to this notion of a managed service, and what does it mean to NetApp? >> Well, so I think the most important thing it does is it gives them investment protection. The wonderful thing about what we've built with Amazon in the FSx profile is it's a complete ONTAP. And so one ONTAP cluster on premise can immediately see and connect to an ONTAP environment under FSx. We can then establish various different connectivities. We can use SnapMirror technologies for disaster recovery. We can use efficient data transfer for things like dev/test and backup. Of course, the wonderful thing that we've done, where we've gone above and beyond what anybody else has done, is we want to make sure that the actual primary application itself, one that was sort of built on NAS in an on-premise environment, SAP and Oracle, et cetera, as Ed said, that we can move those over and have the confidence to run the application with no changes in an Amazon environment. So what we've really done, I think, for customers, the NetApp customers, the non NetApp customers, is we've given them an enterprise grade shared storage platform that's as good in the Amazon cloud as it was in an on-premise data center. And that's something that's very unique to us. >> Can we talk a little bit more about those use cases? You know, both of you. What are you seeing as some of the more interesting ones that you can share? Ed, maybe you can start.
The customer discussions that we've, we've been in have really highlighted four cases, four use cases the customers are telling us they'll use a service for. So maybe I'll cover two and maybe Anthony can cover the other two. So, the first is application migrations. And customers are increasingly looking to move their applications to AWS. And a lot of those are applications work with file storage today. And so we're talking about applications like SAP. We're talking about relational databases like SQL server and Oracle. We're talking about vertical applications like Epic and the healthcare space. As another example, lots of media entertainment, rendering, and transcoding, and visual effects workload. workflows require Windows, Linux, and Mac iOS access to the same set of data. And what application administrators really want is they want the easy button. They want fully featured file storage that has the same capabilities, the same performance that their applications are used to. Has extremely high availability and durability, and it can easily enable them to meet compliance and security needs with a robust set of data protection and security capabilities. And I'll give you an example, Accenture, for example, has told us that a key obstacle their clients face when migrating to the cloud is potentially re-architecting their applications to adopt new technologies. And they expect that Amazon FSX for NetApp ONTAP will significantly accelerate their customers migrations to the cloud. Then a second one is storage migrations. So storage admins are increasingly looking to extend their on-premise storage to the cloud. And why they want to do that is they want to be more agile and they want to be responsive to growing data sets and growing workload needs. They want to last to capacity. They want the ability to spin up and spin down. They want easy disaster recovery across geographically isolated regions. They want the ability to change performance levels at any time. 
So all of this goodness that they get from the cloud is what they want. And more and more of them are also looking to make their company's data accessible to cloud services for analytics and processing. So services like ECS and EKS and WorkSpaces and AppStream and VMware Cloud and SageMaker, and orchestration services like ParallelCluster and AWS Batch. But at the same time that they want all these cloud benefits, they have established data management workflows, and they've built processes and they've built automation leveraging APIs and capabilities of on-prem NAS appliances. It's really tough for them to just start from scratch with that stuff. So this offering provides them the best of both worlds. They get the benefits of the cloud with the NAS data management capabilities that they're used to. >> Right. >> Ed: So Anthony, maybe, do you want to talk about the other two? >> Well, so, you know, first and foremost, you heard from Ed earlier on the FSx sort of construct and how successful it's been. And one of the real reasons it's been so successful is it takes advantage of all of the latest storage technologies, compute technologies, networking technologies. What's great is all of that's hidden from the user. What FSx does is it delivers a service. And what that means for an ONTAP customer is you're going to have ONTAP with an SLA and an SLM. You're going to have hundreds of thousands of IOPS available to you and sub-millisecond latencies. What's also really important is that the design for FSx for NetApp ONTAP was really to provide consistency on the NetApp API and to provide full access to ONTAP from the Amazon console, the Amazon SDK, or the Amazon CLI. So in this case, you've got this wonderful benefit of all of the sort of 29 years of innovation of NetApp, combined with all the innovation of AWS, all presented consistently to a customer.
What Ed said, which I'm particularly excited about, is customers will see this just as they see any other AWS service. So if they want to use ONTAP in combination with some incremental compute resources, maybe with their own encryption keys, maybe with directory services, or maybe with other services like SageMaker, all of those things are immediately exposed to Amazon FSx for NetApp ONTAP. We do some really intelligent things just in the storage layer. So, for example, we do intelligent tiering, so the customer is constantly getting sort of the best TCO. What that means is we're using Amazon's S3 storage as a tiered service, so that we can move cold data off of the primary file system to give the customer the optimal capacity and the optimal throughput, while maintaining the integrity of the file system. It's the same with backup. It's the same with disaster recovery, whether we're operating in a hybrid AWS cloud, or we're operating in an AWS region, or across regions. >> Well, thank you. I think this announcement is a big deal for a number of reasons. First of all, it's the largest market. Like you said, you're the gold standard; I'll give you that, Anthony, because you guys earned it. And so it's a large market, but previously you always had to make trade-offs. Either I could do file in the cloud but not get the rich functionality that, you know, NetApp's mature stack brings, or, you know, you could have wrapped your stack in a Kubernetes container and thrown it into the cloud and hosted it there. But now that it's a managed service, and presumably underneath you're taking advantage, as I say, my inference is there's some serious engineering going on here. You're taking advantage of some of the cloud native capabilities.
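Anthony's intelligent tiering idea, keeping hot data on the primary file system while cold data ages out to S3, can be illustrated with a toy policy. This is not NetApp's actual algorithm, just a sketch of the access-recency heuristic such tiering typically relies on, with an invented 31-day cooling period:

```python
# Toy illustration of access-recency tiering: hot data stays on the primary
# file system, cold data ages out to S3. Not NetApp's real algorithm.

COOLING_PERIOD_S = 31 * 24 * 3600  # assume "cold" means untouched for ~31 days

def choose_tier(last_access_ts: float, now: float) -> str:
    """Return 's3' for cold data, 'primary' for recently accessed data."""
    return "s3" if now - last_access_ts > COOLING_PERIOD_S else "primary"

# Example: classify a few files by their last-access timestamps.
now = 1_700_000_000.0
files = {"hot.db": now - 3_600, "cold.log": now - 90 * 24 * 3600}
placement = {name: choose_tier(ts, now) for name, ts in files.items()}
```

The point of the sketch is the design choice Anthony describes: the tiering decision lives in the storage layer, so the application keeps one file system view while capacity and cost are optimized underneath it.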
Yeah, maybe it's the different, you know, EC2 instance types, but also being able to bring in... we're entering a new data era with machine intelligence and other capabilities that we really didn't have access to last decade. So I want to close with, you know, giving you guys the last word. Maybe each of you could give me your thoughts on how you see this partnership in the future, particularly from a customer standpoint. Ed, maybe you could start, and then Anthony, you can bring us home. >> Yeah, well, Anthony and I and our teams have gotten to know each other really well in ideating around what this experience would be and then building the product. And we have this common vision that it is something that's going to really move the needle for customers: providing the full ONTAP experience with the power of a native AWS service. So we're really excited. We're in this for the long haul together. We've partnered on everything from engineering, to product management, to support. The full thing. This is a co-owned effort, a joint effort backed by both companies. And we have, I think, a pretty remarkable product on day one, one that I think is going to delight customers. And we have a really rich roadmap that we're going to be building together over the years. So I'm excited about getting this into customers' hands. >> Great, thank you. Anthony, bring us home. >> Well, you know, it's one of those sorts of rare chances where you get to do something with Amazon that no one's ever done. You know, we're sort of sitting on the inside, we are a peer of theirs, and we're able to develop at very high speeds in combination with them, releasing continuously to the customer base. So what you're going to see here is rapid innovation. You're going to see a whole host of new services. Services that NetApp develops, services that Amazon develops.
And then the whole ecosystem is going to have access to this, whether they're historically built on the NetApp APIs or increasingly built on the AWS APIs. I think you're going to see orchestrations. I think you're going to see the capabilities expand the overall opportunity for AWS to bring enterprise applications over. For me personally, Dave, you know, I've demonstrated yet again to the NetApp customer base how much we care about them and their future. Selfishly, you know, I'm looking forward to telling the story to my competitors' customer base, because they haven't done it. So, you know, I think we've been bold. I think we've been committed. As you said, three and a half years ago I promised you that we were going to do everything we possibly could. You know, people always ask, what's the real benefit of this? And at the end of the day, customers and partners will be the real winners. This innovation, this sort of as-a-service model, I think is going to expand our market and allow our customers to do more with Amazon than they could before. It's one of those rare cases, Dave, where I think one plus one equals about seven, really. >> I love the vision, and I'm excited to see the execution. Ed and Anthony, thanks so much for coming back in the Cube. Congratulations on getting to this point, and good luck. >> Anthony and Ed: Thank you. >> All right. And thank you for watching, everybody. This is Dave Vellante for the Cube's continuous coverage of AWS Storage Day. Keep it right there. (upbeat music)
SUMMARY :
The big news of Storage Day was the deep partnership between AWS and NetApp. Ed recapped how FSx delivers fully managed, feature-rich file storage in the cloud, and how the two companies' joint vision, the complete ONTAP offered as a native AWS service, came together. Anthony explained that NetApp wanted ONTAP in the Amazon cloud to be as good as it is in the largest enterprises on the planet, giving customers investment protection while extending ONTAP to born-in-the-cloud users. The two walked through the main use cases, application and storage migrations among them, along with intelligent tiering to S3, and closed on a jointly engineered roadmap backed by both companies.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave | PERSON | 0.99+ |
Anthony | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Anthony Lye | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Ed | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Ed Naim | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
NetApp | ORGANIZATION | 0.99+ |
29 years | QUANTITY | 0.99+ |
FSX | TITLE | 0.99+ |
Barney | ORGANIZATION | 0.99+ |
ONTAP | TITLE | 0.99+ |
one | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
80% | QUANTITY | 0.99+ |
both companies | QUANTITY | 0.99+ |
NetApp | TITLE | 0.99+ |
four years | QUANTITY | 0.99+ |
Linux | TITLE | 0.99+ |
Windows | TITLE | 0.99+ |
MSX | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
Ed Boyajian, EDB | Postgres Vision 2021
(upbeat music) >> From around the globe, it's the CUBE, with digital coverage of Postgres Vision 2021. Brought to you by EDB. >> Hello everyone, this is Dave Vellante for the CUBE. We're covering Postgres Vision 2021, the Virtual CUBE edition. Welcome to our conversation with Ed Boyajian, the CEO of EnterpriseDB. We're going to talk about what's happening in open source and database and the future of tech. Ed, welcome. >> Hi Dave, good to be here. >> Hey, several years ago at the Postgres Vision event you put forth the premise that the industry was approaching a threshold moment, and digital transformation was the linchpin of that shift. Now, Ed, while you were correct, and I have no doubt the audience agreed, most people went back to their offices after that event and returned to the hyper-focus of their day-to-day jobs. Yeah, maybe a few accelerated their digital initiatives, but generally, pre COVID, we moved at a pretty incremental pace. And then the big bang hit, and if you weren't a digital business, you were out of business. That single event created the most rapid change we've ever seen in the tech industry; by far, nothing really compares. So, the question is, why is Postgres specifically, and EDB generally, the right fit for this new world? >> Yeah, look, I think a couple of things are happening, Dave. Right alongside the bigger picture of digital transformation, we are seeing the database market in transformation. And I think the things that are driving that shift are the things that are resulting in the success of Postgres and the success of EDB. I think first and foremost, we're seeing a dramatic re-platforming.
And just like we saw in the world of Linux, where I was at Red Hat during that shift, where people were moving from Unix-based systems to x86 systems, we're seeing that similar re-platforming happening, whether that's from traditional infrastructures to cloud-based infrastructures or container-based infrastructures. It's a great opportunity for databases to be changed out, and Postgres wins in that context because it's so easily deployed anywhere. I think the second thing that's changing is we're seeing a broad expansion of developers across the enterprise. They don't just live in IT anymore. And I think as developers take on more power and control, they're just defining the agenda. It's another place where Postgres shines. It's been a priority of EDB's to make Postgres easier, and that's coming to life. And I think the last Stack Overflow developer survey suggested, I think they surveyed 65,000 developers, that the second most loved and the second most used database by developers is Postgres. And so I think there again, Postgres shines in a moment of change. And then I think the third is kind of obvious. It's always an elephant in the room, no pun intended, but it's this relentless nagging burden of the expense of the incumbent proprietary databases, and the need, and we especially saw this in COVID, to more dramatically change that economic equation. Here again, Postgres shines. >> You know, I want to ask you, I'm going to jump ahead to the future for a second, because you're talking about the re-platforming, and with your Red Hat chops I kind of want to pick your brain on this, because you're right.
You saw that with Red Hat, and you're kind of seeing it again when you think about OpenShift and where it's going. My question is related to re-platforming around new types of workloads, new processing models at the edge. I mean, you've seen an explosion of processing power, GPUs, NPUs, accelerators, DSPs, and it appears that this is happening at a very low cost. I'm inferring that you're saying Postgres can take advantage of that trend as well, that broader re-platforming trend to the edge. Is that correct? >> It is. And I think, you know, this has been one of the most interesting things with Postgres. Now, I've been here almost 13 years, so if you put that in some perspective, I've watched and participated in leading transformation in the category. You know, we've been squarely focused on Postgres, so we've got 300 engineers who worry about making Postgres better. And as you look across that landscape over time, not only has Postgres gotten more performant and more scalable, it's also proven to be the right database choice in the world of not just legacy migrations but new application development. And I think that Stack Overflow developer survey is a good indicator of how developers feel about Postgres. But, you know, over that timeframe, I think if you went back to 2008 when I joined EDB, Postgres was considered a really good general purpose database. And today I think Postgres is a great general purpose database. General purpose isn't sexy in the market, broadly speaking, but Postgres' capability across workloads in every area is really robust. And let me just spend a second on it. We look at our customer base as deploying in what we think of as systems of record, which are the traditional ERP-type apps, you know, where there's a single source of truth. You might think of ERP apps there.
We look at our customers deploying in systems of engagement, and those are apps that you might think of in the context of social media style apps or websites that are backed by a database. And the third area is systems of analytics, where you would typically think of data warehouse style applications; interestingly, Postgres performs well there. And our customers report using us across that whole landscape of application areas. And I think that is one of Postgres' hidden superpowers, that ability to reach into each area of requirement on the workload side. >> Yeah. And as I was alluding to before, that itself is evolving as you now inject AI into the equation, AI inferencing. And it's just very exciting times ahead. You know, the database, 20 years ago, was kind of boring. Now it's just exploding. I want to come back to that notion of Postgres and maybe talk about other database models. I mean, you've mentioned that you've evolved from this, you know, system of record. You can take systems of engagement, unstructured data, JSON, et cetera. So how should we think about Postgres in relation to other databases, and specifically other business models of companies that provide database services? Why is Postgres attractive? Where is it winning? >> Yeah, I think a couple of places. So, first and foremost, at its core, Postgres is a SQL relational database, a transactional, ACID-compliant SQL relational database. And that is inherently a strength of Postgres, but it's also a multi-model database, which means we handle a lot of other database requirements, whether that's geospatial, or JSON for documents, or time series, things like that. And so Postgres' extensibility is one of its inherent strengths, and that's kind of been built in from the beginning of Postgres.
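Ed's multi-model point, relational, document, geospatial, and time-series needs served by one database, is easy to see in a single table definition. The DDL below is illustrative only (the geometry column assumes the PostGIS extension is installed; JSONB and TIMESTAMPTZ are core Postgres); it is assembled as a string here so it can be inspected without a live server:

```python
# Illustrative multi-model Postgres table. The geometry type assumes the
# PostGIS extension; JSONB and TIMESTAMPTZ are core Postgres.
MULTIMODEL_DDL = """
CREATE TABLE sensor_events (
    id          BIGSERIAL PRIMARY KEY,       -- relational
    payload     JSONB NOT NULL,              -- document
    location    geometry(Point, 4326),       -- geospatial (PostGIS)
    observed_at TIMESTAMPTZ NOT NULL         -- time series
);
CREATE INDEX ON sensor_events USING GIN (payload);
"""

def models_covered(ddl: str) -> set:
    """Crude check of which data models a DDL snippet touches."""
    markers = {
        "relational": "PRIMARY KEY",
        "document": "JSONB",
        "geospatial": "geometry(",
        "time series": "TIMESTAMPTZ",
    }
    return {name for name, token in markers.items() if token in ddl}
```

One table like this, queried with ordinary SQL plus JSONB operators, is the "database that's able to do more" that the conversation keeps returning to.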
So not surprisingly, people use Postgres across a number of workloads, because at the end of the day there's still value in having a database that's able to do more. There are a lot of important specialty databases, and I think they will remain important specialty databases, but Postgres thrives in its ability to cross over in that way. And I think that is, you know, one of the key differentiators in how we've seen the market and the business develop, and that's the breadth of workloads that Postgres succeeds in. But our growth, if you kind of look at it across vectors, we see growth happening in a few dimensions. First, we see growth happening in new applications; about half of our customers who come to us today are new Postgres users deploying us on new applications. The second area is migrating away from some existing legacy incumbent, often Oracle, not always. The third area of growth we see is in cloud, where Postgres is deployed very prolifically, both on the traditional cloud platforms like EC2 and also in the database as a service environment. And then the fourth area of growth we're seeing now is around container deployment, Kubernetes deployment. >> Well, you mean Oracle's prominent because it's just a big install base, and it's expensive, and people, you know, they've got to look at that. I mean, it's funny. I do a lot of TCO work, and mostly, you know, usually TCO is about labor costs; when it comes to Oracle it's about license costs and maintenance costs. And so to the extent that you can reduce that, at least for a portion of your estate, you're going to drop right to the bottom line. But I want to ask you about the kind of spectrum that you think about in the prevailing models for database. You've got, on the one hand, the right tool for the right job approach. You know, it might be 10 or 12 data stores in the cloud.
On the other hand, you've got kind of a converged approach. You know, Oracle is going that direction, and clearly Postgres, with its open source innovation, is going that direction. And it seems to me that at scale the latter is the more cost-effective model. How do you think about that? >> Well, you know, I think at the end of the day you kind of have to look at it, and the business side of my brain looks at that as an addressable market question, right? And you heard me talk about three broad categories of workloads, and, you know, people define workloads in different buckets, but that's how we do it. But if you look at just the system of record and the system of engagement market, I think that's what would be traditionally viewed as the database market. And there, let's just say for the sake of argument, that's a 45 to $50 billion market. The third, the systems of analysis, that market's an $18 billion market. And, you know, as we talk about that, all in, it's still between a 60 and $70 billion market. And I think what happens is there's so much heat and light poured on the valuation multiples of some of the specialty players that the market gets confused. But the reality is, our customers don't get confused. I mean, if you look at those specialty players in that $48 billion market, add up Mongo, Redis, Cockroach, Neo, all of those. I mean, hugely valued companies, all unicorn companies, but combined they add up to a billion bucks. Don't get me wrong, that's important revenue and meaningful in the workloads they support, but it doesn't define the full transformation of this category. Look at the systems of analysis again, another great market example. I mean, if you add up the consolidation of the Hadoop vendors, add in there Snowflake, you're still talking, you know, $1.5 billion in revenue in an $18 billion market.
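Ed's back-of-the-envelope market math, roughly $45-50B for systems of record and engagement plus an $18B analytics market, with the specialty players summing to a couple of billion in revenue, can be captured in a few lines. The figures are the approximate ones cited in the conversation, not precise market research:

```python
# Back-of-the-envelope market sizing using the rough figures from the
# conversation; these are conversational approximations, not research data.
systems_of_record_engagement = 47.5e9  # midpoint of the "45 to $50 billion"
systems_of_analysis = 18e9             # the analytics market Ed cites

total_market = systems_of_record_engagement + systems_of_analysis  # ~$65.5B

specialty_revenue = 1e9        # Mongo, Redis, Cockroach, Neo combined, per Ed
analytics_specialists = 1.5e9  # Hadoop vendors plus Snowflake, per Ed

# Share of the total market the specialty players' revenue represents.
specialty_share = (specialty_revenue + analytics_specialists) / total_market
```

The arithmetic makes his point concrete: the specialty players' combined revenue is only a few percent of the $60-70B addressable market, which is why he argues the transformation is still in its early innings.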
So while those are all important technologies, the question is, in this transformation move, has the database market fully transformed yet? And my view is, no, it hasn't; we're in the first, maybe second, inning of a $65 billion transformation. And I think this is where Postgres will ultimately shine. I think this is how Postgres wins, because at the end of the day the nature of the workloads fits with Postgres, and the future tech that we're building in Postgres will serve that broader set of needs, I think, more effectively. >> Well, and I love these TAM expansion discussions, because I think you're right on. And I think it comes back to the data. We all talk about the data growth, the data explosion, and we see the IDC numbers. Well, you ain't seen nothing yet. And data, by its very nature, is distributed. That's why I get so excited about these new platform models. And I want to tie it back to developers and open source, because to me, that is the linchpin of innovation in the next decade. It has been, I would even say, for the last decade; we've seen it, but it's gaining momentum. So in thinking about innovation, and specifically Postgres in open source, you know, what can you share with us in terms of how we should think about your advantage, and again, where people are glomming on to, leaning in to, that advantage? >> Yeah. So, I mean, I think you bring up a really important topic. For us as a company, Postgres, we think, is an incredibly powerful community. And when you step away from it, again, now you remember, I told you I was at Red Hat before, now here at EDB, and there's a common thread that runs through those two experiences. In both experiences the companies are attached and prominent alongside a strong, independent open-source community. And I think the notion of an independent community is really important to understand around Postgres. There are hundreds and thousands of people contributing to Postgres.
Now, EDB plays a big role in that: about, you know, approaching a third of the contributions in the last release, release 13 of Postgres, came from EDB. Now, you might look at that and say, gee, that sounds like a lot, but if you step away from it, you know, even at about 30%, most of the contributions come from a universe around EDB, and that's inherently healthy for the community's ability to innovate and accelerate. And I think that while we play a strong role there, you can imagine that having, and there are other great companies that are contributing to Postgres, I think having those companies participating and contributing gets the best ideas to the front in innovation. So I think the inherent nature of the Postgres community makes it strong and healthy. I mean, and then contrast that to some of the other prominent high-value open-source companies. The companies and the communities are intimately intertwined. They're one and the same. They're actually not independent open-source communities. And I think therein lies one of the inherent weaknesses in those. But Postgres thrives because, you know, we bring all those ideas from EDB, we bring a commercial contingent with us, and all the things we emphasize and focus on in growing Postgres, whether that's in the areas of scalability, manageability, all hot topics, of course security, all of those areas, and then, you know, performance, as always. All of those areas are informed to us by enterprise customers deploying Postgres at scale. And I think that's the heart of what makes a successful independent project. >> The combinatorial powers of that ecosystem, they're multiplicative as opposed to the resources of one. I want to talk about Postgres Vision 2021, sort of set that up a little bit. The theme this year is 'The Future is You'. What do you mean by that?
>> So, if you think about what we just said, the database category is in transformation. And we know that many of the people who are interested in Postgres are early in their journey, they're early in their experience. And so we want to focus this year's Postgres Vision on them, with the understanding we have as a company who's been committed to Postgres as long as we have, and with the understanding we have of the technology and best practices, we want to share that view, those insights, with those who are coming to Postgres, some for the first time, some who are experienced. >> Postgres Vision 21 is June 22nd and 23rd. Go to enterprisedb.com and register. The CUBE's going to be there. We hope you will be too. Ed, thanks for coming to the CUBE and previewing the event. >> Thanks, Dave. >> And thank you. We'll see you at Vision 21. (upbeat music)
Ed Boyajian, CEO, EDB
>> From around the globe, it's the CUBE, with digital coverage of Postgres Vision 2021, brought to you by EnterpriseDB. >> Hello everyone. This is Dave Vellante for the CUBE. We're covering Postgres Vision 2021, the virtual CUBE edition. Welcome to our conversation with Ed Boyajian, the CEO of EnterpriseDB, and we're going to talk about what's happening in open source and database and the future of tech. Ed, welcome. >> Hi Dave, good to be here. >> Hey, several years ago, at a Postgres Vision event, you put forth the premise that the industry was approaching a threshold moment, that digital transformation was the linchpin of that shift. Well, Ed, you were correct, and I have no doubt the audience agreed. Most people went back to their offices after that event and returned to the hyper focus of their day-to-day jobs. Maybe a few accelerated their digital initiatives, but generally, pre-Covid, we moved at a pretty incremental pace. And then the big bang hit, and if you weren't a digital business, you were out of business. So that single event created the most rapid change that we've ever seen in the tech industry, by far; nothing really compares. So the question is, why is Postgres specifically, and EDB generally, the right fit for this new world? >> Yeah, I think, look, a couple of things are happening, Dave, right along the bigger picture of digital transformation. We are seeing the database market in transformation, and I think the things that are driving that shift are the things that are resulting in the success of Postgres and the success of EDB. I think first and foremost, we're seeing a dramatic re-platforming. And just like we saw in the world of Linux, where I was at Red Hat during that shift, where people were moving from UNIX-based systems to x86 systems, we're seeing that similar re-platforming happening.
Whether that's from traditional infrastructures to cloud-based infrastructures or container-based infrastructures, it's a great opportunity for databases to be changed out. Postgres wins in that context because it's so easily deployed anywhere. I think the second thing that's changing is we're seeing a broad expansion of developers across the enterprise, so they don't just live in IT anymore. And I think as developers take on more power and control, they're defining the agenda, and it's another place where Postgres shines. It's been a priority of EDB's to make Postgres easier, and that's coming to life. And I think the last Stack Overflow Developer Survey, I think they surveyed 65,000 developers, had Postgres as the second most loved and the second most used database by developers. And so I think, there again, Postgres shines in a moment of change. And then I think the third is kind of obvious. It's always an elephant in the room, no pun intended, but it's this relentless, nagging burden of the expenses of the incumbent proprietary databases, and the need, and we especially saw this in Covid, to start to more dramatically change that economic equation. Here again, Postgres shines. >> You know, I want to ask you, I'm going to jump ahead to the future for a second, because you're talking about the re-platforming, and with your Red Hat chops, I kind of want to pick your brain on this, because you're right, you saw it with Red Hat, and you're kind of seeing it again when you think about OpenShift and where it's going. My question is related to re-platforming around new types of workloads, new processing models at the edge. I mean, you're seeing an explosion of processing power, GPUs, NPUs, accelerators, DSPs, and it appears that this is happening at a very low cost. I'm inferring that you're saying Postgres can take advantage of that trend as well, that broader re-platforming trend to the edge. Is that correct? >> It is.
And I think, you know, this has been one of the most interesting things with Postgres. Now, I've been here almost 13 years, so if you put that in some perspective, I've watched and participated in leading transformation in the category. You know, we've been squarely focused on Postgres, so we've got 300 engineers who worry about making Postgres better. And as you look across that landscape of time, not only has Postgres gotten more performant and more scalable, it's also proven to be the right database choice in the world of not just legacy migrations, but new application development. And I think that Stack Overflow developer survey is a good indicator of how developers feel about Postgres. But, you know, over that time frame, I think if you went back to 2008, when I joined EDB, Postgres was considered a really good general-purpose database. And today, I think Postgres is a great general-purpose database. General purpose isn't sexy in the market, broadly speaking, but Postgres' capabilities across workloads in every area are really robust. Let me just spend a second on it. We look at our customer base as deploying in what we think of as systems of record, which are the traditional ERP-type apps, you know, where there's a single source of truth; you might think of the ERP apps there. We look at our customers deploying in systems of engagement, and those are apps that you might think of in the context of social-media-style apps, or websites that are backed by a database. And in the third area, systems of analytics, you would typically think of data-warehouse-style applications. Interestingly, Postgres performs well, and our customers report using us, across that whole landscape of application areas. And I think that is one of Postgres' hidden superpowers: that ability to reach into each area of requirement on the workload side.
>> And as you were alluding to before, that itself is evolving as you now inject AI into the equation, AI inferencing, and it's just very exciting times ahead. There's no, you know, 20 years ago the database was kind of boring. Now it's just exploding. I want to come back to that notion of Postgres and maybe talk about other database models. I mean, you mentioned that you've evolved from this, you know, system of record; you can take a system of engagement, unstructured data, et cetera, JSON. So how should we think about Postgres in relation to other databases, and specifically other business models of companies that provide database services? Why is Postgres attractive? Where is it winning? >> Yeah, I think a couple of places. So first and foremost, Postgres, you know, at its core, Postgres is a SQL relational database, an ACID-compliant SQL relational database, and that is inherently a strength of Postgres. But it's also a multi-model database, which means we handle a lot of other, you know, database requirements, whether that's geospatial, or JSON for documents, or time series, things like that. And so Postgres' extensibility is one of its inherent strengths, and that's kind of been built in from the beginning of Postgres. So, not surprisingly, people use Postgres across a number of workloads, because at the end of the day, there's still value in having a database that's able to do more. There are a lot of important specialty databases, and I think they will remain important specialty databases, but Postgres thrives in its ability to cross over in that way. And I think that is, you know, one of the key differentiators in how we've seen the market and the business develop, and that's the breadth of workloads that Postgres succeeds in. But our growth, if you kind of look at it across vectors, we see growth happening, you know, in a few dimensions.
First, we see growth happening in new applications. About half of our customers that come to us today as new Postgres users are deploying us on new applications. The others, our second area, are migrating away from some existing legacy incumbents, often Oracle, not always. The third area of growth we see is in cloud, where Postgres is deployed very prolifically, both in the traditional cloud platforms, like EC2, but then again also in the database-as-a-service environment. And then the fourth area of growth we're seeing now is around container deployment, Kubernetes deployment. >> Well, you mentioned Oracle. Oracle's prominent because it's just a big installed base, and it's expensive, and people, you know, they've got to look at them. It's funny, I do a lot of TCO work, and mostly, you know, usually TCO is about labor costs. When it comes to Oracle, it's about license costs and maintenance costs. And so to the extent that you can reduce that, at least for a portion of your estate, you're going to drop right to the bottom line. But I want to ask you about that spectrum that you think about of the prevailing models for database. On the one hand, you've got the right-tool-for-the-right-job approach; it might be 10 or 12 data stores in the cloud. On the other hand, you've got, you know, kind of a converged approach. Oracle's going that direction clearly; Postgres, with its open source innovation, is going that direction. And it seems to me that at scale, the latter is a more cost-effective model. How do you think about that? >> Well, you know, I think at the end of the day, you kind of have to look at it. I mean, the business side of my brain looks at that as an addressable market question, right? And you've heard me talk about three broad categories of workloads, and, you know, people define workloads in different buckets, but that's how we do it.
But if you look at just the systems of record and the systems of engagement market, I think that's what would be traditionally viewed as the database market. And there, that's, you know, let's just say for the sake of argument, a $45 to $50 billion market. The systems of analysis, that market's an $18 billion market. And, you know, as we talk about that, so all in, it's still between a $60 and $70 billion market. And I think what happens is there's so much heat and light poured on the valuation multiples of some of the specialty players that the market gets confused. But the reality is our customers don't get confused. I mean, if you look at those specialty players, take that $48 billion market. I mean, add up Mongo, Redis, Cockroach, Neo, all of those. I mean, hugely valued companies, all unicorn companies, but combined they add up to a billion bucks. Don't get me wrong, that's important revenue and meaningful in the workloads they support, but it doesn't define the full transformation of this category. Look at the systems of analysis, again, another great market example. I mean, if you add up the consolidation of the Hadoop vendors, add in there Snowflake, you're still talking, you know, $1.5 billion in revenue in an $18 billion market. So while those are all important technologies, the question is, in this transformation move, did the database market fully transform yet? And my view is, no, it didn't. We're in the first, maybe second inning of a $65 billion transformation. And I think this is where Postgres will ultimately shine. I think this is how Postgres wins, because at the end of the day, the nature of the workloads fits with Postgres, and the future tech that we're building in Postgres will serve that broader set of needs, I think, more effectively. >> Well, and I love these TAM expansion discussions, because I think you're right on. And I think it comes back to the data, and we all talk about the data growth, the data explosion; we see the IDC numbers, and you ain't seen nothing yet.
And so data, by its very nature, is distributed. That's why I get so excited about these new platform models, and I want to tie it back to developers and open source, because to me, that is the linchpin of innovation in the next decade. It has been, I would even say, for the last decade, we've seen it, but it's gaining momentum. So in thinking about innovation, and specifically Postgres and open source, you know, what can you share with us in terms of how we should think about your advantage, and again, where people are glomming on, leaning in to that advantage? >> Yeah, so, I mean, I think you bring up a really important topic for us as a company. Postgres, we think, is an incredibly powerful community, and when you step away from it, again, remember, I told you I was at Red Hat before, now here at EDB, and there's a common thread that runs through those two experiences. In both experiences, the companies are attached and prominent alongside a strong, independent open source community, and I think the notion of an independent community is really important to understand around Postgres. There are hundreds and thousands of people contributing to Postgres now. EDB plays a big role in that: approaching a third of the contributions in the last release, release 13 of Postgres, came from EDB. You might look at that and say, gee, that sounds like a lot, but if you step away from it, you know, even at about 30% of those contributions, most of the contributions come from a universe around EDB, and that's inherently healthy for the community's ability to innovate and accelerate. And I think that while we play a strong role there, you can imagine that having, and there are other great companies that are contributing to Postgres, I think having those companies participating and contributing gets the best ideas to the front in innovation. So I think the inherent nature of the Postgres community makes it strong and healthy.
And then contrast that to some of the other prominent high-value open source companies: the companies and the communities are intimately intertwined. They're one and the same. They're actually not independent open source communities. And I think therein lies one of the inherent weaknesses in those. But Postgres thrives because, you know, we bring all those ideas from EDB, we bring a commercial contingent with us, and all the things we emphasize and focus on in growing Postgres, whether that's in the areas of scalability, manageability, all hot topics, of course security, all of those areas, and then, you know, performance, as always. All of those areas are informed to us by enterprise customers deploying Postgres at scale. And I think that's the heart of what makes a successful independent project. >> Yeah, the combinatorial powers of that ecosystem, they're multiplicative as opposed to the resources of one. I want to talk about Postgres Vision 2021, sort of set that up a little bit. The theme this year is 'The Future is You'. What do you mean by that? >> So, if you think about what we just said, the database category is in transformation, and we know that many of the people who are interested in Postgres are early in their journey, they're early in their experience. And so we want to focus this year's Postgres Vision on them, with the understanding we have as a company who's been committed to Postgres as long as we have, and with the understanding we have of the technology and best practices, we want to share that view, those insights, with those who are coming to Postgres, some for the first time, some who are experienced. >> Postgres Vision 21 is June 22nd and 23rd. Go to enterprisedb.com and register. The CUBE is going to be there. We hope you will be too. Ed, thanks for coming to the CUBE and previewing the event. >> Thanks, Dave. >> Thank you. We'll see you at Vision 21. (upbeat music)
Ed Walsh, ChaosSearch | CUBE Conversation May 2021
>> So-called big data promised to usher in a new era of innovation, where companies competed on the basis of insights and agile decision making. There's little question that social media giants, search leaders, and e-commerce companies benefited. They had the engineering shops and the execution capabilities to take troves of data and turn them into piles of money. But many organizations were not as successful. They invested heavily in data architectures, tooling, and hyper-specialized experts to build out their data pipelines. Yet they still struggle today to truly realize the value. The data in their lakes is plentiful, but actionable insights aren't so much. ChaosSearch is a cloud-based startup that wants to change this dynamic with a new approach designed to simplify and accelerate time to insights and dramatically lower cost. And with us to discuss his company and its vision for the future is CUBE alum Ed Walsh. Ed, great to see you. Thanks for coming back in the CUBE.
Is that a natural act? And then we do the hard work. And the key thing is to get one unified delic but it's a multi mode model access so we expose api like the elastic search aPI So you can do things like search or using cabana do log analytics but you can also do things like sequel, use Tableau looker or bring relational concepts into cabana. Things like joins in the data back end. But it allows you also to machine learning which is early next year. But what you get is that with that because of a data lake philosophy, we're not making new transformations without all the data movement. People typically land data in S. Three and we're on the shoulders of giants with us three. Um There's not a better more cost effective platform. More resilient. There's not a better queuing system out there and it's gonna cost curve that you can't beat. But basically so people store a lot of data in S. Three. Um But what their um But basically what you have to do is you E. T. L. Out to other locations. What we do is allow you to literally keep it in place. We index in place. We write our hot index to rewrite index, allow you to go after that but published an open aPI S. But what we avoid is the GTL process. So what our index does is look at the data and does full scheme of discovery normalization, were able to give sample sets. And then the refinery allows you to advance transformations using code. Think about using sequel or using rejects to change that data pull the dead apartheid things but use role based access to give that to the end user. But it's in a format that their tools understand cabana will use the elasticsearch ap or using elasticsearch calls but also sequel and go directly after data by doing that. You get a data lake but you haven't had to take the three weeks to three months to transform your data. Everyone else makes you. And you talk about the failure. The idea that Alex was put your data there in a very scalable resilient environment. Don't do transformation. 
It was too hard to structure for databases and data warehouses, so put it there, and we'll show you how to get value out of it. Largely, that went undelivered. But we're that last mile. We do exactly that. Just put it in S3, and we activate it, activate it with APIs, so the tools your analysts use today, or what they want to use in the future, just work. That is what's so powerful. So basically, we're on the shoulders of giants with S3: put it there, and we light it up, and that's really the last mile. But it's this multi-model, and it's also this lack of transformation. We can do all the transformation, but it's all done virtually and available immediately. You're not doing extended ETL projects with big teams moving around a lot of data in the enterprise. In fact, most times they land it in S3, and they move it somewhere, and they move it again. What we're saying is, now just leave it in place; we'll index it and make it available.
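The "index in place, transform virtually" idea can be illustrated with a toy sketch. To be clear, this is a conceptual illustration of the pattern being described, not ChaosSearch's actual engine or API; the data, names, and functions here are made up:

```python
# Toy illustration of indexing data "in place": build an inverted index over raw
# log lines without moving or rewriting the source data, then apply a virtual
# (query-time) transformation instead of a physical ETL step.

raw_logs = [  # imagine these objects sitting untouched in an S3 bucket
    "2021-05-01 ERROR payment timeout user=42",
    "2021-05-01 INFO  login ok user=7",
    "2021-05-02 ERROR disk full host=web-3",
]

# "Index in place": map each token to the record IDs that contain it.
index = {}
for doc_id, line in enumerate(raw_logs):
    for token in line.split():
        index.setdefault(token.lower(), set()).add(doc_id)

def search(token):
    """Look up records via the index; the raw data is read, never rewritten."""
    return [raw_logs[i] for i in sorted(index.get(token.lower(), set()))]

def error_view():
    """A 'virtual transformation': reshape matching records at query time."""
    return [{"date": l.split()[0], "message": " ".join(l.split()[2:])}
            for l in search("error")]

print(search("error"))
print(error_view())
```

The point of the sketch: the raw objects are never copied or rewritten; a compact index is added alongside them, and any "transformation" (here, `error_view`) is just a function applied at query time, so a new view costs minutes rather than an ETL project.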
Um, so maybe double click on that a little bit and provide a little bit more details to your, your vision there and your philosophy. >>So if you could put things that data can get after it with your own tools on elastic or search, of course you do that. If you don't have to go through that. But everyone thinks it's a status quo. Everyone is using, you know, everyone has to put it in some sort of schema in a database before they can get access to what everyone does. They move it some place to do it. Now. They're using 1970s and maybe 1980s technology. And they're saying, I'm gonna put it in this database, it works on the cloud and you can go after it. But you have to do all the same pain of transformation, which is what takes human. We use time, cost and complexity. It takes time to do that to do a transformation for an user. It takes a lot of time. But it also takes a teams time to do it with dBS and data scientists to do exactly that. And it's not one thing going on. So it takes three weeks to three months in enterprise. It's a cost complexity. But all these pipelines for every data request, you're trying to give them their own data set. It ends up being data puddles all over this. It might be in your data lake, but it's all separated. Hard to govern. Hard to manage. What we do is we stop that. What we do is we index in place. Your dad is already necessary. Typically retailing it out. You can continue doing that. We really are just one more use of the data. We do read only access. We do not change that data and you give us a place in. You're going to write our index. It's a full rewrite index. Once we did that that allows you with the refinery to make that we just we activate that data. It will immediately fully index was performant from cabana. So you no longer have to take your data and move it and do a pipeline into elasticsearch which becomes kind of brittle at scale. You have the scale of S. Three but use the exact same tools you do today. 
And what we find for like log analytics is it's a slightly different use case for large analytics or value prop than Be I or what we're doing with private companies but the logs were saving clients 50 to 80% on the hard dollars a day in the month. They're going from very limited data sets to unlimited data sets. Whatever they want to keep an S. Three and glacier. But also they're getting away from the brittle data layer which is the loosen environment which any of the data layers hold you back because it takes time to put it there. But more importantly It becomes brittle at scale where you don't have any of that scale issue when using S. three. Is your dad like. So what what >>are the big use cases Ed you mentioned log analytics? Maybe you can talk about that. And are there any others that are sort of forming in the marketplace? Any patterns that you see >>Because of the multi model we can do a lot of different use cases but we always work with clients on high R. O. I use cases why the Big Bang theory of Due dad like and put everything in it. It's just proven not to work right? So what we're focusing first use cases, log analytics, why as by way with everything had a tipping point, right? People were buying model, save money here, invested here. It went quickly to no, no we're going cloud native and we have to and then on top of it it was how do we efficiently innovate? So they got the tipping point happens, everyone's going cloud native. Once you go cloud native, the amount of machine generated data that you have that comes from the environment dramatically. It just explodes. You're not managing hundreds or thousands or maybe 10,000 endpoints, you're dealing with millions or billions and also you need this insight to get inside out. So logs become one of the things you can't keep up with it. 
I think I mentioned we went to a group of end users, only about 60 enterprise clients, and we asked them: what's your capture rate on logs, and what do you want it to be? 78% said, listen, we want 80 to 100% of our logs captured. That would be the ideal: not everything, but most of it. Then we asked the same group what they're actually doing, and 82% had less than 50%. They just can't keep up with it, and everything, including Elastic and Splunk, makes them work harder at the process to narrow down and keep less and less data. Why? Because those platforms can't handle the scale. We just say: land it there, don't transform it, and we'll make it all available to you. So for log analytics, especially with cloud native, you need this type of technology, and you need to stop; it feels so good when you stop hitting your head against the wall, right? That ETL process at this type of scale just doesn't work. So that's exactly what we're delivering. The second use case is using the Elastic API but also using SQL to go after the same data representation, and when we come out with machine learning, you'll also be able to do anomaly detection on that same data representation. So for the log analytics use case, for SRE and DevOps setups, it's a huge value prop. Now, the same platform, because it has SQL exposed, can do what we call agile BI. Think about Looker, Tableau, Power BI, Metabase, all these tool sets that people want to use, with business users coming back to the centralized team every single week asking for new data sets. Each one has to be set up as a data set; the team has to do an ETL process to give access to that data. With us, because the data just lands in the bucket, if you have access to it, with role-based access, I can literally get you access with your tool set, say Tableau or Looker.
These different data sets are available literally in five minutes, and now you're off and running. If you want a new data set, we give you another virtual view and you're off and running again, but with full governance. In BI you've traditionally had either self-service or centralized. Self-service is kind of out of control, but you can move fast; the centralized team says, it takes me months, but at least I'm in control. We let you do both: fully governed, but self-service. >> Right, I've got to have Looker, I've got to have Excel. And that's the trade-off on each of the pieces of the triangle, right? >> And they make it easy: just plug in a data source and you're done. But the problem is you have to ETL the data source, and that's what takes three weeks to three months in an enterprise. We do it virtually, in five minutes. So now the third use case is kind of a combination of the two. You know the beers-and-diapers story. Think about the early days of Teradata, where they looked at a business's sell-through data in a large relational environment. They crunched all those numbers and figured out that by locating certain products together they sold more of them, and the example everyone talked about was beers and diapers: put them together and you sell more of both. Why? Because, for anyone who has kids, in the afternoon you pick up diapers, and you might want to grab a beer if you're home with the kids. That analogy is 30 years old. Now, what's the shelf space for a product company? It's the website, and the data coming from it is the app logs. And you're not capturing them, because you can't in these environments; or you're capturing the data, but everyone's telling you to do an ETL process to keep less of it.
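The beers-and-diapers result Ed references is classic market-basket analysis: count how often pairs of items co-occur across transactions. A minimal sketch, with made-up baskets, looks like this; real retail analysis adds support and confidence thresholds, but the counting core is the same.

```python
from collections import Counter
from itertools import combinations

# Hypothetical transaction data; each basket is one checkout.
baskets = [
    {"beer", "diapers", "chips"},
    {"diapers", "beer"},
    {"milk", "bread"},
    {"beer", "diapers", "milk"},
    {"bread", "chips"},
]

# Count every unordered item pair that appears together in a basket.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# The most frequent pair is the "put these on the same shelf" signal.
top_pair, count = pair_counts.most_common(1)[0]
print(top_pair, count)
```

The same counting works whether the "transactions" are store baskets in a warehouse or user sessions reconstructed from app logs, which is exactly the shelf-space-to-website shift being described.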
You've got to be selective, you've got to be very specific, because otherwise it's going to kill your budget. You can't do that with Elastic or Splunk; you've got to keep less data, and you don't even know what questions you're going to ask. With us, bring all the app logs and just land them in S3 or Glacier. It really is standing on the shoulders of giants: there's not a better platform for cost, security, resilience, or throughput, and when you think about what you can stream into it, it's the best queuing platform I've ever seen in the industry. Just land it there. It's also very cost-effective, and we compress the data. By doing that, you can now match it up with a relatively small amount of relational data, and you have the beers-and-diapers data for the digital shelf. Our top users always start with one use case, then they use that feature and that feature: hey, we just did new pricing, and it's affecting these clients and those clients in this way. We can get at that. But you need that data, and people aren't able to capture it with the current platforms. A data lake, as long as you can make it available hot, is the way to do it, and that's what we're doing. And we're unique in that; other people make you ETL it and put it into a 1970s and 1980s data format called a schema. We avoid that, because we basically make S3 a hot analytic environment. >> Okay, I want to land on that for a second, because I think sometimes people get confused. I know I do sometimes with ChaosSearch; I sometimes don't know where to put you. Observability seems to be a hot space, and of course log analytics is part of that. BI, agile BI you called it, but there are players like Elasticsearch, there's Starburst, there's Datadog, Databricks, Dremio, Snowflake. Where do you fit, what's the category, and how do you differentiate from players like that? >> Yeah.
So we went about it fundamentally differently than everyone else. Six years ago, Thomas Hazel and his band of merry men and women designed it from scratch. They purpose-built a way to make S3 a hot analytic environment with open APIs, and by doing that they kind of changed the game: we deliver on the true promise, just put it there and I'll give you access to it. No one else does that. Everyone else makes you move the data and put it into a schema of some format to get to it. So if you look at Elasticsearch, why are we going after it? It just happens to be an easy target, because logs are overwhelming people. Once you go cloud native, you can't afford to put it all into Lucene, the ELK stack; the L is for Lucene, its inverted index. Start small, great. But once you grow, it's not one server, it's five servers, fifteen servers; you lose a server and you're down for three days because you have to rebuild the whole thing. It becomes brittle at scale, and expensive. So you trade off: I'm going to keep less, either in retention or in data. We're not Elastic under the covers, but we allow you to fully index the data in S3 and access it directly through a Kibana interface or an OpenSearch interface. The API... >> And it's all API out? It's just an API. >> It's open APIs. And by doing that you've avoided a whole bunch of time, cost, and complexity: your team's time to do it, but also the time to results, the delays, the cost. It's crazy. We're saving 50 to 80% in hard dollars while giving you unlimited retention, where you were dramatically limited before us. And as a managed service, you no longer have to manage that kind of clunky environment. When it starts small it's great, but at scale it's a terrible environment to manage. That's why you end up with not one Elasticsearch cluster but dozens; I just talked to someone yesterday who had 125 Elasticsearch clusters because of the scale.
So anyway, that's where we sit relative to Elastic: if you're using Elastic at scale and you're having problems with the trade-off of cost, time, and scale, we become a natural fit, and you don't change what your end users do. >> So people hearing this will go, wow, that sounds so simple, why doesn't everybody do this? The reason is it's not easy. You said Tom and his merry band; this is really hardcore tech, and it's not trivial what you've built. Let's talk about your secret sauce. >> Yeah. So it is patented technology. If you look at our component architecture, a large part of the value add, 90% of it, is actually S3; I've got to give S3 full kudos, they built the platform and we're standing on the shoulders of giants. But what we did was purpose-build a way to make object storage a hot analytic database. So we have an index, like a database. Then you bring a refinery to do all the advanced types of transformation, but all done virtually, because we're not changing the source of record, we're changing the virtual views. And then a fabric allows you to manage it all and be fully elastic. So when we have big queries, because we have multiple clients with multiple use cases, each with multiple petabytes, we're spinning up 1,800 different nodes for a particular environment. And even with all that, we're saving them 58%. But it's really the patented technology that does this. It took us six years, by the way; that's what it takes to come up with this. I knew the founder, I've known Thomas Hazel for a while, and his first thing was to figure out the math, and the math worked out. It's deep tech, it's hard tech. But the key thing is we've been in market now for two years, with multiple use cases in production at scale. Now it's about roadmap, and we're adding APIs; we have the Elasticsearch API as the natural proof point.
Now, adding SQL opens up new markets. The idea is that we believe we deliver on the true promise of data lakes, and the promise of data lakes was: put it there, don't focus on transforming it, it's just too hard, and I'll still get insights out. That's exactly what we do, and we're the only ones who do it; everyone else makes you ETL it someplace. That's the innovation of the index and the refinery: indexing in place and virtual views in place, at scale. And then the open APIs; to be honest, I think that's a game changer. Give me an open API and let me go after it. I don't know what tool I'm going to use next week. Every time we go into an account, they're not a Looker shop or a Tableau shop or a QuickSight shop; they're all of them, and they're just trying to keep up with the business. And then the ability to have role-based access, where you can say, hey, get them their own bucket, give them their own refinery, and as long as they have access to the data they can do their own manipulation. It ends up being... >> Just... >> That's the true promise of data lakes. Once we come out with machine learning next year, you'll rip through the same APIs, and the way we've structured the data, the matrices, is a natural fit for things like TensorFlow and PyTorch. But that's next year, just because it's a different persona; the underlying architecture has been built. What we're doing is taking it one use case at a time. We work with our clients and say, it's not a big bang: let's nail a use case that works well, great ROI, great business value for a particular business unit, and then move to the next. And that's how I think it's going to go. That's what Gartner talks about, and if you think about what really got successful in data in the past, that's exactly it: it wasn't the big bang, it was, let's go nail it for particular users.
And that's what we're doing now. Because it's multi-model there are a bunch of different use cases, but even then we're focusing on the core things that are really hard to do in relational-only environments. >> Yeah, I can see why, because you and I have talked about the API economy forever, and you've been in the storage world so long you know what a nightmare it is to move data. We've got to jump, but I want to ask you, and I want to be clear on this: you are cloud native. I talked to Frank Slootman maybe a year ago and asked him about on-prem, and he said, no, we're never doing the halfway house, we are cloud all the way. >> I think... >> I think you have a similar answer. What's your plan on hybrid? >> There's nothing about the technology that means we can't, but we are 100% cloud native, only in the public cloud. We believe that's the trend line, everyone agrees with us, and we're sticking there; that's where the opportunity is. And if you're going to run analytics, there's nothing better than the public cloud, like Amazon, where we are 100% cloud native. We love S3, and what better place to put this than right next to S3; we just let you light it up. And I guess I'll add the commercial: you can buy it through Amazon Marketplace, and we love that business model with Amazon. >> That's great. Ed, thanks so much for coming back in theCUBE and participating in the startup showcase. Love having you, and best of luck. Really exciting. >> Hey, thanks again, appreciate it. >> All right, thank you for watching, everybody. This is Dave Vellante for theCUBE. Keep it right there.
Ed Lynch | AI-Powered Business Automation (IBM Think 2021)
(bright music) >> Announcer: From around the globe, it's "theCUBE" with digital coverage of IBM Think 2021 brought to you by IBM. >> Welcome back to "theCUBE" coverage of IBM Think 2021. I'm John Furrier, host of "theCUBE". We're here with Ed Lynch, vice president of IBM Business Automation. The topic here is AI-powered business automation, as he leads the Business Automation offering management team driving the automation platform across multicloud, with built-in AI and low-code tools. Ed, thanks for joining me on "theCUBE" today. >> Thank you John. Thanks for having me. >> So, automation is really the focus of this event. If you peel back all the announcements: data, process, transformation, innovation at scale, it all kind of points to automation. How has the past year changed the automation market? >> It's been a fascinating ride. Fascinating ride, more than just the COVID part, but some interesting observations as we look back over the year. I call this BC and AD: before COVID, and AD, not Anno Domini, but "Anno Domus," meaning year of the house, living in the house. The thing that we really learned is that clients are engaging differently with, let's say, the companies that they work with. They're engaging digitally. Not a big surprise. You look at all of the big digital brands. You look at the way that we engage. We buy things from home. We don't go to the store anymore. We get delivery at home. Work from home, completely different. If you think about what happened on the business side, work from home changed everything. And the real bottom line is, companies that invested ahead of time in automation technology have flourished. The companies that didn't, not so much. So right now we're seeing skyrocketing demand. That's a bonus for us. Skyrocketing demand, and alongside that demand, on the supply side, we're seeing competition.
More competition in the automation space. And I believe any company that's got more than two guys in a garage or a basement is entering the automation space. So it's a fun time. It's a really fun time to be in this space. >> Great validation on the market. Great call-out there on the whole competition thing. Because you really look at this competition from, you know, two guys in a garage, or an early-stage startup, but the valuations are an indicator. It's a hot market. Most of those startups have massive valuations. Even the pre-IPO ones have enormous valuations. This is a tell sign. Process automation and digital supply chains, value chains; business is being rewritten with software, right? So, you know, there's an underlying hybrid cloud kind of model that's been standardized. Now you have all these things on top, a thousand flowers blooming, more apps and more apps; you're going to have large subsystems like CRM, but you're going to have apps everywhere. Everything's an app now. So this means things have to be re-automated. >> Yeah. >> What's your advice for companies trying to figure this out? >> So my advice is start small. One of the big temptations is to jump in and say, God almighty, we've got this perfect opportunity for rejiggering, rebuilding the entire company from scratch. That's a definition of insanity. You don't want to do that. What you want to do is start small and then prove it out. The second big thing is to make sure that you start with the data. Just like any good management system, you have to start with the facts. You have to discover what's going on. You have to decide which piece you're going to focus on. And then you have to act. And acting leads to optimization. Optimization lets you say: I'm looking at a dashboard, I'm making progress, or I'm heading in the wrong direction, stop.
Those kinds of things. So start small, start with the data, and make sure that you line up your allies. This is a culture change: you have to have your CEO lined up at the top and you have to have buy-in from the bottom. If any of those pieces are missing, you're asking for trouble. >> Can you share an example of a customer of yours that's using intelligent automation? Take me through that process, and what were the drivers behind it? >> Yeah, sure. A good example: there's a client of ours in Morocco. It's not a big country, but it's a very interesting story. The company is called CDG Prevoyance; it's a French name, obviously, that was my French accent. They are a company that does pension benefits. Think of this as putting money away for the future: in the US you have 401(k)s, in Canada we have RRSPs. And the company that you're putting money into has to manage your account along with millions of other accounts. And this is where CDG started. It was an extremely paper-based business: the forms that you had to fill out, the way that you engaged with CDG, was a very form-based, document-based thing. The onboarding time to actually enter a new account for a new employee looking to get their pension plan set up was weeks. With automation, they changed from being paper-based to being electronic. They changed the workflow associated with gathering information and getting onboarded. They onboard now in minutes, as opposed to weeks. This is an example of the kind of thing. And if you go back to the first question that you asked: companies change. The companies that you engage with digitally are the ones that give you that kind of experience, where you don't have to crawl through broken glass in order to engage with them. That's what CDG did.
And they managed to really wring some of the human labor out of that onboarding process. >> Great, great stuff. You know, this Mayflower is an exciting story. I've been checking it out; it's using this decisioning together with you guys, with automation. Can you tell me about that? >> Mayflower is really exciting. This is one of those things that just jazzes me, because I think to myself, how the heck did they do that? So the Mayflower is a boat, a sailing vessel like any other sailing vessel. It's 15 meters long, it's powered entirely by solar, and it's making a voyage from Plymouth, England, to Plymouth, Massachusetts, the landing place where the pilgrims landed. And this whole voyage is going to be done without human interaction; it's all going to be piloted by the machine. So you think about autonomous vehicles; this whole story of an autonomous vehicle piloting across the ocean is way different than piloting a car down a highway. >> So this is an autonomous ship, then. >> This is an autonomous ship, exactly. Nobody is piloting this thing; it's all piloted by software, and the software is, interestingly, my business automation software. It has all these sensors that allow it to say, oh, there's a boat over there, steer clear of the boat. But more importantly, when you come into the harbor you have to negotiate the marks, you have to steer in the lanes. It's different from steering a car: with a car you might have a dashed line here and a white line there, and you steer the car down the middle. Very easy. Steering a boat, that's really hard. Steering a boat in the middle of the ocean, when you've got monstrous waves and you've got potential this and potential that; this thing is really exciting. I find this whole data and AI decisioning fascinating. >> Dave, Dave Vellante is going to love this next question I'm going to ask you.
He's my co-host on theCUBE. You always talk about data lakes; how about a data ocean? Now we have a data ocean out here. I've always used the ocean metaphor because it's so much more dynamic, but here the data literally is the ocean. You've got to factor in conditions that are going to be completely dynamic: wave height, countermeasures on navigation. All this is being done. So how does it all work? Is it all driven by data scenarios? >> It's all driven by data, and it starts with the sensors. You have a vision sensor that tells you what it sees: it sees boats, it sees marks, it sees big waves coming. It's all powered by weather data, so there's a weather feed. But the sailing-across-the-ocean part you don't really have to worry about, other than when a boat or a whale comes; you steer clear of it, fine. That part is relatively easy. When you come close to shore, then you have to make decisions about where to go, and the decisions are all informed by data. So you gather all this data, you run machine learning algorithms against it, you run a decision-priorities mechanism, and then you have to confer with the rules. What are the rules of navigation? I don't know if you're a sailor, but the rules of navigation on the open sea are actually really simple to understand: the vessel on your right has the priority, and if you're overtaking, you have to steer clear, all those kinds of things. In a harbor it's way different. And so you have to be able to demonstrate to the government that you have an open decision-making mechanism to steer around the marks. The government wants to know that you can do that; otherwise they say, stay out of my harbor. Very interesting. >> It actually is. It actually encapsulates a lot of business challenges too. You have a lot of data mashing up going on. I mean, you've got navigation, what's under the water.
What's on top of the water. You've got weather data over the top; it's good for IBM to own The Weather Company, that probably helps a lot. Then you've got policies, and policy-based decision-making. It sounds like a data center and multicloud opportunity. >> It is exactly. That's why I love this opportunity: it spans the whole range, from being a business problem to being an experimental problem. Because the way these engineers built this thing, they're looking for research; they're looking to really press the edge of where AI, machine learning, and decisioning come together with ocean research, because what they're doing is ocean research. They're looking for water temperature and whales, that kind of stuff. >> Unmanned vehicles, unmanned drones, that's another big thing we're seeing. This brings up a point about leaders in the industry, and I know we don't have a lot of time. I want to get back to the announcement that you guys made a while back, but I want to stay on this point real quick, if you can just comment. Business leaders that are curious about automation really are the ones that have to invent this. Think about the autonomous ship, and on top of it the autonomous business. Here at theCUBE we have a studio; what about autonomous studio work? The notion of automation: if you're not thinking about it, you can't do it. What's your advice to people? >> So I think the advice is to look for areas of opportunity. Be discrete about it: choose the specific thing that you want to go after. In the Mayflower case, what they were doing was looking for a way to navigate in the harbor. Out in the open, you've got this big wide ocean and you can go wherever you want to. Navigating in the harbor is much trickier.
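The decision stack Ed describes, sensor input checked against prioritized navigation rules with the highest-priority match winning, can be caricatured in a few lines. The situations, rule set, and actions below are invented for illustration; the real COLREGs and IBM's implementation are far richer.

```python
# Caricature of a prioritized rule mechanism for steering decisions.
# Lower priority number = more urgent rule. All names are hypothetical.
RULES = [
    (0, lambda s: s["obstacle_ahead"], "steer_clear"),
    (1, lambda s: s["in_harbor"] and s["off_lane"], "return_to_lane"),
    (2, lambda s: s["overtaking"], "give_way"),
    (3, lambda s: True, "hold_course"),  # default: nothing to react to
]

def decide(situation):
    """Return the action of the highest-priority rule that matches."""
    for _priority, condition, action in sorted(RULES, key=lambda r: r[0]):
        if condition(situation):
            return action

open_sea = {"obstacle_ahead": False, "in_harbor": False,
            "off_lane": False, "overtaking": False}
print(decide(open_sea))                              # default action
print(decide({**open_sea, "obstacle_ahead": True}))  # urgent rule wins
```

One virtue of this shape, and plausibly why a regulator could be shown it, is that every decision traces back to a named rule rather than an opaque model output.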
And so what they did was apply very specific pieces of technology to that specific problem. That's the advice I would give to a business: don't look to turn everything upside down, that's craziness; you're in business for a reason. What you want to do is pick a specific thing to go after and fix that, then pick adjacent things and fix those. And eventually it gets to the point where you have straight-through processing, which is where everybody wants to get. >> I can imagine great opportunities for you guys and your team. Congratulations on all that work, because there's certainly more to do; I can see so much happening as you build out the stack and acquire companies. You know, last month you announced the acquisition of the process mining company myInvenio. What does that announcement mean for IBM and AI-powered automation? Because you also have business deals with others in the industry. Take us through what this acquisition means for IBM. >> Sure. First, just the facts: myInvenio is a business, a company based in Italy, and they do what's called process mining. Process mining is a tool that does what I was just talking about: it allows you to identify places where you have weakness in your workflows. Big macro workflows like procure-to-pay, the ability to go all the way from buying something to paying for it; companies spend oodles of money on procure-to-pay, as an example. But inevitably there are humans in that process, and humans mean there are ways to become more efficient. You could change a person's job, you could change a person's profile; all of that is what this tool is about. This tool is an excellent addition to our automation portfolio: it allows clients to understand where the weaknesses are, and then we can apply specific automations to fix those weaknesses.
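Process-mining tools like myInvenio work from event logs, and a foundational step is a directly-follows count: for each case, say one purchase order in procure-to-pay, count which activity follows which. Bottlenecks and rework loops show up as unexpected or heavy edges. The event log below is invented for illustration, not myInvenio's actual data model.

```python
from collections import Counter

# Hypothetical procure-to-pay event log: (case_id, activity), in time order.
events = [
    ("po-1", "create_po"), ("po-1", "approve"),
    ("po-1", "receive_goods"), ("po-1", "pay"),
    ("po-2", "create_po"), ("po-2", "approve"),
    ("po-2", "approve"),  # rework: the same order was approved twice
    ("po-2", "receive_goods"), ("po-2", "pay"),
]

# Group activities per case, preserving order.
cases = {}
for case_id, activity in events:
    cases.setdefault(case_id, []).append(activity)

# Directly-follows counts: edge (a, b) means b happened right after a.
edges = Counter()
for trace in cases.values():
    for a, b in zip(trace, trace[1:]):
        edges[(a, b)] += 1

print(edges[("approve", "approve")])  # the rework loop stands out as its own edge
```

Real tools layer timing, conformance checking, and visualization on top, but this counting step is what turns a raw log into a process map you can inspect for weakness.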
That's what myInvenio means to us. It puts us in a position of having a complete set of technologies that matches up with Gartner's hyperautomation market architecture, and that gives us a very powerful advantage in the marketplace. So I'm very, very happy about this acquisition. >> Ed, thanks for coming on theCUBE. Really appreciate it. Final word: I'd love to have you spend the last minute just talking about IBM's commitment to open, and also to integrating with other companies. Take a minute to explain that. >> Yeah, sure. The open part is something that we've understood for a very, very long time. One of the jobs that I had a long time ago was open source, bringing open source into IBM, and I'm a very strong proponent of open source. Open means no barriers to entry and no barriers to substitution, and what it means is you have a fair fight. We all have proprietary technology, we all have intellectual property, sure. But if you have an open base, then what that gives you is the ability to interoperate with other people, other competitors, frankly. To me that's goodness for the client, because at the end of the day the client doesn't get locked in. That's the thing that they are really looking for: they want the flexibility to move, they want the flexibility to put the best technology in place. So we are strong proponents of open. >> All right. Ed Lynch, vice president of IBM Business Automation. AI-powered business automation is coming: autonomous vehicles, autonomous ships, autonomous business. Everything's going automation soon; we're going to have the autonomous cube. And so, Ed, thanks for coming on theCUBE. I really appreciate it. >> Okay, John. Thank you. >> Okay. Cube coverage of IBM Think 2021, virtual launch. I'm John Furrier, your host of theCUBE. Thanks for watching. (bright music)
SUMMARY :
brought to you by IBM. as he leads the team, the focus of this event. You look at all of the big digital brands. in the garage or you know, that you have to have your Can you share an example Like the forms that you had to fill out. with you guys with automation. So you think about autonomous vehicles. You steer the car to come that are going to be completely dynamic, the sensor, you have a vision sensor It's good to own the Because the way that these, the announcement that you the point where you have Because you guys also have It allows you to identify I'd love to get you spend the last minute to put the best, you know We're going to have the autonomous cube. Thanks for watching.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Ed | PERSON | 0.99+ |
Ed Lynch | PERSON | 0.99+ |
Dave Alonzo | PERSON | 0.99+ |
England | LOCATION | 0.99+ |
Plymouth | LOCATION | 0.99+ |
Morocco | LOCATION | 0.99+ |
Canada | LOCATION | 0.99+ |
Italy | LOCATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
CDG | ORGANIZATION | 0.99+ |
US | LOCATION | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
IBM Business Automation | ORGANIZATION | 0.99+ |
first question | QUANTITY | 0.99+ |
two guys | QUANTITY | 0.99+ |
First | QUANTITY | 0.99+ |
Second | QUANTITY | 0.99+ |
millions | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
CDG Prevoyance | ORGANIZATION | 0.99+ |
last month | DATE | 0.99+ |
two white lines | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
more than two guys | QUANTITY | 0.96+ |
BC | LOCATION | 0.96+ |
One | QUANTITY | 0.93+ |
myInvenio | ORGANIZATION | 0.92+ |
Business Automation | ORGANIZATION | 0.92+ |
401ks | QUANTITY | 0.91+ |
15 meters long | QUANTITY | 0.86+ |
Think 2021 | COMMERCIAL_ITEM | 0.85+ |
theCUBE | TITLE | 0.82+ |
theCUBE | ORGANIZATION | 0.79+ |
past year | DATE | 0.76+ |
Anno | COMMERCIAL_ITEM | 0.76+ |
thousand flowers | QUANTITY | 0.75+ |
Anno Domini | COMMERCIAL_ITEM | 0.7+ |
French | OTHER | 0.7+ |
Mayflower | LOCATION | 0.69+ |
COVID | OTHER | 0.63+ |
Mayflower | COMMERCIAL_ITEM | 0.54+ |
IBM25 | COMMERCIAL_ITEM | 0.53+ |
French | LOCATION | 0.53+ |
Anno Damum | COMMERCIAL_ITEM | 0.43+ |
COVID | TITLE | 0.41+ |
AD | OTHER | 0.29+ |
Ed Lynch, IBM | IBM Think 2021
>> Announcer: From around the globe, it's theCUBE with digital coverage of IBM Think 2021, brought to you by IBM. >> Welcome back to "theCUBE" coverage of IBM Think 2021. I'm John Furrier, host of "theCUBE". We're here with Ed Lynch, vice president of IBM Business Automation. The topic here is AI-powered business automation, as he leads the Business Automation offering management team driving the automation platform across hybrid multicloud with built-in AI and low-code tools. Ed, thanks for joining me on "theCUBE" today. >> Thank you, John. Thanks for having me. >> So, automation is really the focus of this event. If you peel back all the announcements, whether it's data, process, transformation, innovation, or scale, it all kind of points to automation. How has the past year changed the automation market? >> It's been a fascinating ride. A fascinating ride, more than just the COVID part, but with some interesting observations as we look back over the year. I've called this the AD and BC year: BC, before COVID, and AD, not Anno Domini, but Anno Domus, meaning year of the house, living in the house. The thing that we really learned is that clients are engaging differently with, let's say, the companies that they work with. They're engaging digitally. Not a big surprise. You look at all of the big digital brands. You look at the way that we engage. We buy things from home. We don't go to the store anymore. We get delivery at home. Work from home is completely different. If you think about what happened on the business side, work from home changed everything. And the real bottom line is that companies that invested ahead of time in automation technology have flourished. The companies that didn't are not so flourishing. So right now we're seeing skyrocketing demand. That's a bonus for us. Skyrocketing demand, and alongside that demand, on the supply side, we're seeing competition. More competition in the automation space. 
And I believe any company that's got more than two guys in a garage or in the back of a basement is entering the automation space. So it's a fun time. It's a really fun time to be in this space. >> Great validation on the market, and a great call-out there on the whole competition thing. 'Cause you really look at this competition coming from, you know, two guys in a garage or an early-stage startup, but the valuations are an indicator. It's a hot market. Most of those startups have massive valuations; even the pre-IPO ones have enormous valuations. This is a tell sign that process automation and digital supply chains, value chains... business is being rewritten with software, right? So, you know, there's an underlying hybrid cloud kind of model that's been standardized. Now you have all these things on top, a thousand flowers blooming of apps, if you will, more apps and more apps, less of the kind of big CRM-like... you're going to have large subsystems, but you're going to have apps everywhere. Everything's an app now. So this means things have to be re-automated. >> Yeah. >> What's your advice for companies trying to figure this out? >> So my advice is start small. One of the big temptations is to jump in and say, God almighty, we've got this perfect opportunity for rejiggering, rebuilding the entire company from scratch. That's the definition of insanity. You don't want to do that. What you want to do is start small, and then prove it out. The second big thing is to make sure that you start with the data. Just like any good management system, you have to start with the facts. You have to discover what's going on. You have to decide which piece you're going to focus on. And then you have to act. And then acting leads to optimization. Optimization allows you to say, I'm looking at a dashboard, I'm making progress, or I'm heading in the wrong direction, stop. Those kinds of things. 
So start small, start with the data, and make sure that you line up your allies. This is a culture change, so you have to have your CEO lined up from the top and you have to have buy-in from the bottom. If any of those pieces are missing, you're asking for trouble. >> Can you share an example of a customer of yours that's using intelligent automation? Take me through that process, and what the drivers are behind it. >> Yeah, sure. A good example: there's a client of ours in Morocco. It's not a big country, but it's a very interesting story. The company is called CDG Prevoyance. This is a French company, obviously; that was my French accent. They are a company that does pension benefits. So think of this as putting money away: in the US you have 401(k)s; in Canada we have RSPs. You're putting money away for the future, and the company that you're putting money into has to manage your account along with millions of other accounts. And this is where CDG started. It was a very paper-based business. Extremely paper-based: the forms that you had to fill out, the way that you engaged with CDG, it was a very form-based, document-based thing. The onboarding time to actually enter a new account for a new employee looking to get their pension plan set up was weeks. With automation, they changed from being a paper-based operation to an electronic one. They changed the workflow associated with gathering information and getting people onboarded. They onboard now in minutes, as opposed to weeks. This is an example of the kind of thing. Now, if you go back to the first question that you asked, those companies changed. The companies that you engage with digitally are the ones that give you that kind of experience, where you don't have to crawl through broken glass in order to engage with them. That's what CDG did. 
And they managed to really wring some of the human labor out of that onboarding process. >> Great, great stuff. You know, this Mayflower is an exciting story. I've been checking it out, using this decisioning together with you guys, with automation. Can you tell me about that? >> The Mayflower is really exciting. This is one of those things that just jazzes me. It jazzes me because I think to myself, how the heck did they do that? So the Mayflower is a boat, a sailing vessel like any other sailing vessel. It's 15 meters long. It's powered entirely by solar. It's making a voyage from England to Plymouth, the landing place, you know, where the pilgrims landed. And this whole voyage is going to be done without human interaction. It's all going to be powered by the machine. So you think about autonomous vehicles; this whole story of autonomous vehicles piloting across the ocean is way different than piloting a car down a highway. >> So this is an autonomous ship, then. >> This is an autonomous ship, exactly. So think of it as: there's nobody piloting this thing. It's all piloted by software. The software is my business's software, interestingly. It has all these sensors that allow it to say, oh, there's a boat over there, steer clear of the boat. But more importantly, when you come to the harbor, you have to negotiate the marks. You have to, you know, steer in the lanes. It's different from steering a car: you steer a car between the two white lines. You know, you might have a dashed line here and a white line here, and you steer the car to come in the middle. Very easy. Steering a boat, that's really hard. Steering a boat in the middle of the ocean when you've got monstrous waves and you've got, you know, potential this, potential that. This thing is really exciting. I find this whole data, AI, decisioning thing fascinating. >> Dave, Dave Vellante is going to love this next question I'm going to ask you. 
He's my co-host of theCUBE. You always talk about data lakes; how about a data ocean? Now we have a data ocean out here. I've always used the ocean metaphor, so much more dynamic, but here, literally, the data is the ocean. You've got to factor in conditions that are going to be completely dynamic: wave height, countermeasures on navigation. All this is being done. How does it all work? I mean, is it all driven by data scenarios? I mean... >> So it's all driven... it starts with the sensors. You have a vision sensor that tells you what it sees. So it sees boats, and it sees marks. It sees big waves coming. It's all powered by weather data, so there is a weather feed. But more importantly, for the sailing-across-the-ocean part, you don't have to worry other than when, you know, a boat comes or a whale comes; you steer clear of it, fine. That part's relatively easy. When you come close to the shore, then you have to make decisions about where to go. And the decisions are all informed by data. So you gather all this data, you run machine learning algorithms against the data, you run a decision-priorities mechanism, and then you have to confer with the rules. Like, what are the rules of navigation? I don't know if you're a sailor, but the rules of navigation on the open sea are actually really simple to understand, because, you know, the person on the left has the priority; if you're overtaking, you have to steer clear. All those kinds of things. In a harbor it's way different. And so you have to be able to demonstrate to the government that you have open decisions, an open decision-making mechanism, to steer around the marks. The government wants to know that you can do that. Otherwise they say, stay out of my harbor. Very interesting. >> It actually is. It actually encapsulates a lot of business challenges, too. You have a lot of data mashing up going on. I mean, you've got navigation, what's under the water. 
What's on top of the water. You've got weather data over the top. It's good to own the weather company, for IBM. That probably helps a lot. Then you've got policies, you know? And policy-based decision-making. It sounds like a data center and multicloud opportunity. >> It is, exactly. That's why I love this opportunity, because it's almost the complete span, from being a business problem to being an experiment problem. Because the way that these guys, these engineers, built this thing, they're looking for research. They're looking for the ability to really press that edge where AI and, you know, machine learning and decisioning come together with ocean research, because what they're doing is oceanographic research. They're looking for water temperature and whales and that kind of stuff. >> Unmanned vehicles, unmanned drones, that's another big thing we're seeing from this. This brings up the point I see about leaders in the industry, and I know we don't have a lot of time. I want to get back to the announcement that you guys made a while back, but I want to stay on this point real quick, if you can just comment. Business leaders that are curious about automation are really the ones that have to invent this. Think about the autonomous ship on top of the autonomous business. I mean, here at theCUBE, we have a studio. What about autonomous studio work? So the notion of automation: if you're not thinking about it, you can't do it. What's your advice to people? >> So I think the advice is that you look for areas of opportunity. Be discrete: just choose the thing that you want to go after. In the Mayflower case, what they were doing was looking for a way to navigate in the harbor. On the open ocean, you've got this big wide ocean; you can go wherever you want to. Navigating in the harbor is much trickier. 
And so what they did was apply technology, very specific pieces of technology, to that specific problem. That's the advice that I would give to a business. Don't look to turn everything upside down; that's craziness. Like, you're in business for a reason. What you want to do is pick a specific thing to go after, and go and fix that. Then pick adjacent things and go fix those. And eventually it gets to the point where you have straight-through processing, which is where everybody wants to get. >> I can imagine great opportunities for you guys and your team. Congratulations on all that work, 'cause there's certainly more to do. I can see so much happening as you guys are building out the stack and acquiring companies. You know, last month you guys announced the acquisition of the process mining company myInvenio. What does that announcement mean for IBM and AI-powered automation? Because you guys also have business deals with others in the industry. Take us through what this acquisition means for IBM. >> Sure. So first, just to get the facts: myInvenio is a business, a company that's based in Italy. They do what's called process mining. Process mining is a tool that does what I was just talking about. It allows you to identify places where you have weakness in your workflows. Workflows, like big macro workflows, like procure-to-pay: the ability to go all the way from buying something to paying for it. Companies spend oodles of money on procure-to-pay, as an example. But inevitably there are humans in that process, and humans mean that there are ways to become more efficient. You could change a person's job. You can change a person's profile. All of that is what this tool is about. This tool gives us an excellent addition to our portfolio, our automation portfolio, which allows clients to understand where the weaknesses are. And then we can apply specific automations to fix those weaknesses. 
That's what myInvenio means to us. It puts us in a position of having a complete set of technologies that match up with Gartner's hyperautomation market architecture. That gives us a very powerful advantage in the marketplace. So I'm very, very happy about this acquisition. >> Yeah. Ed, thanks for coming on theCUBE. Really appreciate it. Final word: I'd love to have you spend the last minute just talking about IBM's commitment to open, and also integrating with other companies. Take a minute to explain that. >> Yeah, sure. So the open part is something that we've understood for a very, very long time. One of the jobs that I had a long time ago was open source, and bringing open source into IBM. I'm a very strong proponent of open source. Open means no barriers to entry, no barriers to substitution. And what it means is you have a fair fight. We all have proprietary technology. We all have intellectual property. Sure. But if you have an open base, then what that gives you is the ability to interoperate with other people, other, you know, other competitors, frankly. That, to me, is goodness for the client, because at the end of the day, the client doesn't get locked in. That's the thing that they are really looking for. They want to have the flexibility to move. They want to have the flexibility to put the best, you know, best technology in place. So we are strong proponents of open. >> All right. Ed Lynch, vice president of IBM Business Automation. AI-powered business automation is coming. Autonomous vehicles, autonomous ships, autonomous business. Everything's going automation soon. We're going to have the autonomous CUBE. And so, Ed, thanks for coming on theCUBE. I really appreciate it. >> Okay, John. Thank you. >> Okay. CUBE coverage of IBM Think 2021, virtual launch. I'm John Furrier, your host of theCUBE. Thanks for watching. (bright music)
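Process mining of the kind Ed describes boils down to replaying timestamped event logs and finding the slow hand-offs in a workflow. The sketch below is a hedged illustration only, a toy written for this transcript and not myInvenio's actual algorithm: it mines a hypothetical procure-to-pay log for its worst bottleneck.

```python
from collections import defaultdict
from datetime import datetime

def find_bottleneck(event_log):
    """Given events as (case_id, activity, iso_timestamp) tuples, return
    the activity-to-activity hand-off with the longest average wait."""
    # Group events by case and sort each case's events chronologically.
    cases = defaultdict(list)
    for case_id, activity, ts in event_log:
        cases[case_id].append((datetime.fromisoformat(ts), activity))
    waits = defaultdict(list)
    for events in cases.values():
        events.sort()
        for (t0, a0), (t1, a1) in zip(events, events[1:]):
            waits[(a0, a1)].append((t1 - t0).total_seconds())
    # Average the waits per hand-off and return the slowest one.
    avg = {k: sum(v) / len(v) for k, v in waits.items()}
    return max(avg, key=avg.get)

log = [
    ("po-1", "purchase requested", "2021-05-01T09:00"),
    ("po-1", "approved",           "2021-05-01T10:00"),
    ("po-1", "paid",               "2021-05-06T10:00"),  # 5-day wait
    ("po-2", "purchase requested", "2021-05-02T09:00"),
    ("po-2", "approved",           "2021-05-02T11:00"),
    ("po-2", "paid",               "2021-05-08T11:00"),  # 6-day wait
]
print(find_bottleneck(log))  # → ('approved', 'paid')
```

Real process mining tools go much further, reconstructing full process graphs and conformance-checking them against reference models, but aggregating per-case transition times like this is the common core.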
SUMMARY :
John Furrier interviews Ed Lynch, vice president of IBM Business Automation, about how the pandemic accelerated demand for AI-powered automation, his advice to start small and start with the data, customer examples such as CDG Prevoyance's pension onboarding in Morocco, the autonomous Mayflower ship's data-driven navigation, the myInvenio process mining acquisition, and IBM's commitment to open technology.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Ed | PERSON | 0.99+ |
Dave Alonzo | PERSON | 0.99+ |
Ed Lynch | PERSON | 0.99+ |
John | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Morocco | LOCATION | 0.99+ |
Plymouth | LOCATION | 0.99+ |
England | LOCATION | 0.99+ |
Canada | LOCATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Italy | LOCATION | 0.99+ |
US | LOCATION | 0.99+ |
CDG | ORGANIZATION | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
two guys | QUANTITY | 0.99+ |
first question | QUANTITY | 0.99+ |
millions | QUANTITY | 0.99+ |
Second | QUANTITY | 0.99+ |
First | QUANTITY | 0.99+ |
IBM Business Automation | ORGANIZATION | 0.99+ |
two white lines | QUANTITY | 0.99+ |
CDG Prevoyance | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
last month | DATE | 0.98+ |
one | QUANTITY | 0.97+ |
more than two guys | QUANTITY | 0.97+ |
One | QUANTITY | 0.96+ |
Business Automation | ORGANIZATION | 0.95+ |
BC | LOCATION | 0.95+ |
Think 2021 | COMMERCIAL_ITEM | 0.94+ |
thousand flowers | QUANTITY | 0.87+ |
401ks | QUANTITY | 0.87+ |
past year | DATE | 0.87+ |
theCUBE | TITLE | 0.82+ |
myInvenio | ORGANIZATION | 0.81+ |
French | OTHER | 0.81+ |
15 meters long | QUANTITY | 0.8+ |
theCUBE | ORGANIZATION | 0.8+ |
IBM | COMMERCIAL_ITEM | 0.73+ |
Anno Domini | COMMERCIAL_ITEM | 0.73+ |
Anno | COMMERCIAL_ITEM | 0.7+ |
COVID | OTHER | 0.67+ |
French | LOCATION | 0.64+ |
Anno Domuo | COMMERCIAL_ITEM | 0.53+ |
Business | ORGANIZATION | 0.49+ |
COVID | TITLE | 0.44+ |
Mayflower | LOCATION | 0.41+ |
AD | EVENT | 0.37+ |
theCUBE | EVENT | 0.33+ |
Ed Macosky, Boomi | AWS re:Invent 2020
>> From around the globe, it's theCUBE with digital coverage of AWS re:Invent 2020, sponsored by Intel, AWS, and our community partners. >> Welcome to theCUBE's coverage of AWS re:Invent 2020, the virtual version. I'm Lisa Martin, here with a guest from Boomi. Please welcome Ed Macosky, its head of product. Ed, nice to see you today. >> Nice to see you, Lisa. >> So here we are in a very socially distant world. But I know a lot about Boomi, and Boomi is really all about connecting people with what they want now. So talk to me, before we dig into kind of what's going on with AWS: what's the landscape been like for Boomi in this year that has had so much change? >> So things have been going really well for us business-wise. I think, you know, as we've come through this pandemic, or as we continue to work through the pandemic, we're seeing a lot of our customers accelerating their migration to the cloud, accelerating their modernization journeys. Um, in fact, we've seen a 30% uptick in usage on our platform, you know, in the last several months, as people just continue to double down on automating, integrating their systems, working through integrated experiences. To really, like you said, put data in the hands of the users, the data that they're looking for, and the workflows that they're looking to automate. They're accomplishing that on our platform. So things have been good. >> That's good in a year of such uncertainty. So as we kind of look at, you know, you talked about it, we've been talking about it for months now, this acceleration of the digital journey that COVID is really catalyzing. Let's get specific: from an integrated experience perspective, I think we're all, as consumers, even more demanding of an integrated experience now more than ever. How are you working with customers to help them achieve that? >> Sure. So the way we look at the world, through our lenses, data readiness, connectivity, and user engagement are critical pieces of a cloud modernization or a cloud migration journey. So, just like in life, people make connections early on, and as they work through life, they leverage those connections to make advancements, that sort of thing. I did an interview actually a couple of weeks ago with an A-list celebrity, where he gave us a bunch of feedback around connectivity, where he talked about how, early on in his life, he made connections that provided him value later in his career. We think of the same thing for a business, right? If you think about, as a business, your customers, your employees, your end users, it's important to take your most strategic asset, which is your data, and put that to work for you and make connections with those users, employees, partners, etcetera. So we look at those as integrated experiences, right, and we offer a platform that, in a low-code way, allows the business to make those connections with users in those integrated experiences. >> Love to know who the A-list celebrity was, but I won't ask you to divulge that information. Because when we look at that, you know, nowadays, we've had this massive shift in the last eight months or so, where, I think, as consumers, everything's been on demand for a while; we're used to getting what we want. And in the business world there was a big shift in trying to figure things out: companies, well-known companies, you know, filing for Chapter 11 and trying to figure out, how do we pivot? Not just once, but it's a series of pivots, right? So talk to me about, from an integrated experiences perspective, any customers that you think in particular really highlight what Boomi is doing there, to allow these customers to have a connected, integrated experience while you're helping those customers modernize and transform their businesses. >> Yeah, I mean, I could talk to a couple of examples where, you know, when the pandemic hit and the COVID situation hit, I think the world saw there were a lot of mom-and-pop shops downtown, on Main Street, where they were trying to collect information from their governments and industries, and they were trying to really relay that information out to, um, their customers and users. And most of those small businesses weren't IT-enabled in any way, shape, or form, and we tried to figure out, what can we do as a business to help solve some of these challenges? And through our Boomi for Good initiative, we put out a solution called Answers On Demand that we gave out for free, and within, I believe it was two weeks, we had over 2,500, you know, customers from all different shops around the country that registered and basically were able to themselves stand up a frequently-asked-questions site within their web page, with chatbots that they embedded in the web page in a low-code way. That was one example. Another, an enterprise example, is you think of things like, hey, a new employee starts, and typically they can walk in the first day, people hand them forms, they walk around, they meet with different departments: how do I get myself onboarded to this organization? Well, in the world today, everybody expects things to be on their mobile. They expect things to be done immediately, and they're not going to go to 10 different apps in order to onboard themselves, to go get swag or sign themselves up for their payroll, etcetera. That's a classic integrated experiences use case that we help with, where it's, hey, we can help with integrating those systems on the back end and provide an integrated experience to your new employees that come on board, so they can walk through and be up and running within your company very quickly, in a remote way. 
So we offer all the tooling that businesses can customize. Those make them look like they're, you know, they're color schemes of their business. So on and so forth create custom work flows all again in a low code way because we focus on time to value. It's about getting something done very quickly versus along I t projects That's going to take, you know, 23 years. >>Yeah, I remember. I think it was booming world last year where Chris, your CEO, was talking about, uh, the on boarding experience when he started at Bumi and how massively transformed that is. But to your point right now, there's so many things that we don't have time for. And so when there's obstacles in our way or processes or more convoluted, it just makes everything you know, not function well together or allow customers really maximize their investments in particular technologies. I wanted to get your take on Speaking of maximizing investments, How does booming help have you worked with partner with AWS to help your customers maximize their investments in AWS is technology and services. Sure >>so So we you know, we built our platform first and foremost on top of the AWS platform. So we sit there natively and we take advantage of all of a W s S s services. Behind the scene seems to offer secure platform that customers can work in from a loco development environment. From there you can take advantage. You can take your Bumi integrations and you can run them within three a w your own A w s environment if you'd like to. So we've actually launched a ah Bumi Quick start that allows you to Okay, quickly deploy a run time that spends up in the AWS cloud so you can run your workloads there in a secure way. If you've got your own security set up, you can run within that domain versus going within boonies cloud if you'd like. We're also about to release an elastic version of that That's kubernetes base so that you could, you know, scale that up and down and take advantage of your AWS. 
Resource is not in a fixed way. But Maurin, a survivalist type capacity. We also have data catalog and prep capabilities now, which we didn't have last year. But we have We've added these so that you can explore your AWS endpoints. You can explore any business and points that you have and kind of look at what data you have that you can, you know, harvest thio, pull together and and offer that make that available to your customers and users. You can run all of that in your AWS environment as well. We put >>a >>bunch of focus and adventure oven architectures so as a you know, as a classic integration scenario, a lot of people focus on pub sub patterns, those types of things. So we're we released connectivity to event bridge, sqs, etcetera. We also support connectivity to red shift so you can handle data warehousing scenarios. So and a lot of investment in the AWS ecosystem in the last year and a half to two years, and we continue, you know, we're going to continue doing that. We're just kind of at the beginning of that. So >>Bumi has over 12,000 customers ranging from, you know, the big guys, nonprofits like American Cancer Society, etcetera. How do you work with customers as head of product toe help them influence the road back to be able to take in the information that they need to. For example, we wanna we wanna be ableto work with me and really modernized but also maximize or a W s investment. What is that customer feedback loop like? >>Sure, So we've got within booming. We have a customer success team that focuses on all of those customers and different tiers. Verticals, um, you know, different horizontal plays, etcetera. But we have success. People that look out, you know, for our customers meet with them on a regular basis. They bring a lot of that feedback back into product. I'm an executive sponsor for a number of our customers where I meet with them directly to understand the projects, use cases. What are they trying to achieve and take? 
That is input, but but very specifically, we do quarterly webinars for our customers where we get each of our product managers, including myself, do a two hour session where we go through every single detail of here is what we are expecting ourselves that delivered to you as a customer over the next year, and that gives our customers the opportunity to see all those details. We published them online publicly. We then allow them to come back through direct relationships with product or customer success. To request these enhancements. We score them, we go through. We do commit a tely east. 25% of our roadmap to customers specific requests. Um, you know, even the 75% other piece of the road map we're looking at what we feel is the best interest of our customers and what we want to take them in an innovative way. But like I said, the 25% are direct commitment to Hey, customer wants X Y Z feature will put that in the 25% >>That's he, especially right now to be able to be able to. I don't want to be reactive because we often use that as a bad term. But be able to pivot quickly and and take that information in and make the changes needed that will benefit countless others if we go back to integrated experiences, you know, here we are at this virtual aws reinvent. We're so used to being surrounded in Vegas by 45,000 people. But talk to me about how Bhumi is helping AWS customers with their integrated experiences. What are some of the things that you guys are really excited about that you're enabling now? >>So with an integrated experience, you know, again, I go back to the three things that any customer AWS customer specifically need thio think about in order to create an ingrate experience. So data readiness is the first piece. So with a W s, you'll be spinning up a number of the services. You'll be putting data in the cloud so on and so forth. But you need to make sure that that data is of high quality. Um, it's secure. 
It's understood. Something like, you know, 60 to 70% of the data that you have in enterprises is unknown, and we help solve some of those challenges through our catalog and prep tools. So even if you're moving a bunch of your processes and data applications into the cloud, we can help customers with data readiness, making sure it's secure and of high quality. The second piece is pervasive connectivity. So it is about connecting all of your data sources. We do have an open platform. You have all your AWS services that we can help you connect to, to get data from those sources or transfer data to those sources. But we also allow you to extend out into on-prem or other clouds as well. So as much as we love and work with AWS, we do understand that people need to move things into the cloud, out of the cloud, etcetera. You know, we help with all of those connectivity challenges that an organization may face. And then the third is that user engagement piece. You can move data all around all you want, you can understand your data, but unless you're putting it in the hands of the user and allowing them to act on that data in some way, shape, or form... With the tools we have around workflow, and building those in a low-code way, you can do all of this in the unified platform that we have. You don't have to be a hardcore Java developer to get things done. We focus on time to value. You know, we have stories of customers building their first set of integrations or workflows in minutes or a couple of hours, versus some of our competitors, where it takes days, weeks, or months.
>> So from a low-code perspective, something I'm just curious about, that Boomi has kind of been a facilitator of during the last, you know, eight months of things changing and customers suddenly not being able to get into their data centers or on-site: talk to me a little bit, maybe even anecdotally, about some of the things you've heard about the Boomi low-code development platform being a facilitator for people who couldn't get to a data center. >> Yeah, so I mean, all of the development, even before COVID, all low-code development that you did for Boomi was in a web browser. We've always been that way, right? So we have that capability. And then from a runtime perspective, I was talking earlier about how you can run in the AWS cloud, but you can also set your runtime behind a firewall. If it is at a facility, you can put it in, you know, any location around the world. So when the pandemic hit and folks started needing to work remotely, it was kind of a non-event for many of our low-code developers, because they could now access the browser from home and still access all those resources, whether on-site, in AWS, or wherever. Then they were forced to... okay, the rest of the business is saying, we need to make data available, we need to actually put processes in place. And Boomi became an asset to say, wait a minute, it's not just about integration behind the scenes, that's plumbing that nobody sees. Our users started becoming heroes in their business by quickly standing up workflows, because it's low code. Oh, you need to collect information about, you know, in some cases citizen information? I don't know that I can name this government, but citizens used to have to go into a building in order to fill out forms and whatnot. We need to collect data live, how can I do that? Okay, this government now just used Boomi to start posting these workflows on their website in a secure way.
You know, those are just examples. I talked about Answers On Demand before, but we've seen this pivot of user engagement, more out of, you know, bringing middleware and integration out of the shadows of IT into solving real problems, as people are now dispersed around the world at home. >> Solving real problems, and probably helping a lot of businesses not just survive the last few months but thrive as well, as we know some things from this will be permanent. Last question to you: can you give us a sneak peek into some of the solutions and the initiatives that Boomi and AWS are working on together? >> Yes. So I talked a little bit about this before: we are an Advanced Technology Partner, we're a public sector partner, and we run our platform on AWS. So we continue to work on how we can keep expanding and taking advantage of AWS services to make things more scalable and more and more secure. It's always a top priority, given the shift to the cloud, and AWS is helping us with those. We have our Quick Starts that we're working on, again to make it quicker and easier for people to stand up integration workloads in AWS; catalog and prep; and all of the connectivity that we have to things like EventBridge, SQS, Redshift, etcetera. Um, you know, those are all the things we're collaborating on with them. And through the next year, we'll continue to keep focusing on more and more to just make running your Boomi environment in AWS more and more seamless. >> Seamless, I'll take it. Well, thank you so much for sharing what's going on with Boomi and AWS in this virtual event. We appreciate your time. >> Yeah, thank you so much. >> For Ed Macosky, I'm Lisa Martin. You're watching theCUBE's coverage of AWS re:Invent 2020, a virtual edition.
Ed Walsh, ChaosSearch | AWS re:Invent 2020 Partner Network Day
>> Narrator: From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020. Special coverage sponsored by AWS Global Partner Network. >> Hello and welcome to theCUBE Virtual and our coverage of AWS re:Invent 2020, with special coverage of the APN partner experience. We are theCUBE Virtual and I'm your host, Justin Warren. And today I'm joined by Ed Walsh, CEO of ChaosSearch. Ed, welcome to theCUBE. >> Well, thank you for having me, I really appreciate it. >> Now, this is not your first time here on theCUBE. You're a regular here, and I've loved having you back. >> I love the platform, you guys are great. >> So let's start off by just reminding people about what ChaosSearch is and what you do there. >> Sure, the best way to say it is that ChaosSearch helps our clients know better. We don't do that with a special wizard or a widget that you give to your, you know, SecOps teams. What we do is the hard work of giving you a data platform to get insights at scale. And we do that by achieving the promise of data lakes. So what we have is the Chaos data platform. It connects and indexes data in a customer's S3 or Glacier accounts, so inside your data lake, not our data lake, and renders that data fully searchable and available for analysis using your existing tools today, 'cause what we do is index it and publish open APIs, like the Elasticsearch API, and soon SQL. So, to give you an example: based upon those capabilities, we're an ideal replacement for commonly deployed Elasticsearch or ELK Stack deployments, if you're hitting scale issues. So we talk about scalable log analytics, and more and more people are hitting these scale issues. So let's say you're using Elasticsearch ELK or Amazon Elasticsearch, and you're hitting scale issues. What I mean by that is, you can't keep enough retention.
You want longer retention, or it's getting very expensive to keep that retention, or, because of the scale you've hit, you have availability issues, where the cluster is hard to keep up and running or is crashing. That's what we mean by issues at scale. And what we do is simple: because we're publishing the open Elasticsearch API, you use all your tools, but we save you about 80% off your monthly bill. We also give you, and it's an and statement, unlimited retention, as much as you want to keep on S3 or in Glacier. But we also take care of all the hassles and management and the time to manage these clusters, which end up sitting on a database engine called Lucene. We take care of that as a managed service. And probably the biggest thing is, all of this without changing anything your end users are using. We include Kibana, but imagine it's an Elastic API: if you're using the API or Kibana, it's just easy to use the exact same tools you use today, but you get the benefits of a true data lake. In fact, we're now running Elasticsearch on top of S3 natively, if that makes sense. >> Right, and natively is pretty cool. And look, 80% savings is a dramatic number, particularly this year. I think there's a lot of people who are looking to save a few quid. So it'd be very nice to be able to save up to 80%. I am curious as to how you're able to achieve that kind of saving, though. >> Yeah, you won't be the first person to ask me that. So listen, Elastic came around; you know, we had Splunk, and we also have a lot of Splunk clients, but Elastic was a more cost-effective, open source solution to go after that. And what happens is, if it's small, it's actually very cost-effective; the issues come at scale. But underneath, Elastic's tech, the ELK Stack, is a Lucene database, it's database technology. And that sits on servers that are heavy on memory, CPU count, and SSDs.
So you can do that on-prem or even in the cloud; if you do it in Amazon, basically you're spinning up a server and it stays up, it doesn't spin up and spin down. And those clusters are not one server; it's a cluster of those servers. And typically, if you have any scale, you're actually running multiple clusters for different use cases, because you don't dare put it all on one. So our savings come because you no longer need those servers spun up, and you don't need to pay for the Lucene underneath. You can still use Kibana and the API, but literally it's 80% off the bill that you're paying for your service now, and it's hard dollars. We typically see clients save between 70 and 80%. It's up to 80, but it's literally right within a 10% margin; you're saving a lot of money. But more importantly, saving money is a great thing, but now you have one unified data lake that you can use to go across some of the data, or all the data, with the role-based access you can give different people. Like, we've seen people who say, hey, give that person 40 days of this data, but the SecOps team gets to see across all the different logs, you know, all the machine-generated data they have. And we can give you a couple of examples of that and walk you through how people deploy, if you want. >> I'm always keen to hear specific examples of how customers are doing things. And it's nice that you've drawn that comparison there around what cloud is good for and what it isn't. I often like to say that AWS is cheap to fail in, but expensive to succeed in. So when people are actually succeeding with this, and using this broad amount of data, what you're saying there is: with that savings, I've actually got access to a lot more data that I can do things with.
So yeah, if you could walk through a couple of examples of what people are doing with this increased amount of data that they have access to in ChaosSearch: what are some of the things that people are now able to unlock with that data? >> Well, it's always good to go through customer examples, and we can go through them however you might want: Kleiner, Blackboard, Alert Logic, Armor Security, HubSpot. Maybe I'll start with HubSpot, one of our good clients. They were doing some Cloudflare data; that was one of the clusters they were using a lot to search through, and they were looking at it to look for denial of service. And we find everyone, at scale, gets limited. So they were down to five days of retention. Why? Well, it's not that they meant to, but basically they couldn't cost-effectively handle that at their scale. And they were also having scale issues with the environment, how they set up the cluster and sharding. And when they were under denial-of-service attack, with that influx of data (one thing about scale is how fast the data comes at you; another is how much data you have), that's when the cluster would actually go down, believe it or not, right when you need your log analysis tools. So what we did, because they were just using Kibana, was an easy swap. They ran in parallel, because we publish the open API, and we took them from five days to ninety days. They could keep as much as they want, but ninety days for denial of service is what they wanted. And then we saved them over $4 million a year in hard dollars, what they were paying in their environment, really from the savings on the server farm and a little bit on the Elasticsearch stack. But more importantly, they've had no outages since. Now here's the thing, talking about the use case: they also had other clusters, and you find everyone does this.
They don't dare put it all on one cluster, even though these are not one server, they're multiple servers. So that was the Cloudflare use case. The next use case was a 10-terabyte-a-day influx, kept for 90 days, so it's about a petabyte. They brought another use case on, which was NetMon, network monitoring, again with the same scale and retention issues. And what they were able to do is easily roll that on. So that's one data platform, and now they keep adding to it. They have about four different use cases, from what used to be different clusters, brought together. And what are they getting out of it? They're getting more cost-effectiveness, more stability, and freedom. We say we save you a lot of time, cost, and complexity: the time to manage, getting the data in, the complexities around it. And the cost is easy to quantify. But more importantly, now particular teams only need access to their own data, while the SecOps team wants to see across all the data, and it's very easy for them to see across all the data, where before it was impossible. So now they have multiple large use cases streaming at them. And what I love about that particular case is, at one point they were just trying to test our scale. So they started tossing more things at it, right, to see if they could kind of break us. They spiked us up to 30 terabytes a day, when for Elastic even 10 terabytes a day makes things fall over. Now, if you think of what they just did, what we're doing is literally three steps. One: put your data in S3, as fast as you can; don't modify it, just put it there. Two: once it's there, connect to us; you give us read access to those buckets and a place to write the indices. All of that stuff stays in your S3, it never comes out. And three: you set up whether you want to do live, real-time analysis, or go after old data.
We do the rest: we ingest, we normalize the schema, and basically we give you RBAC (role-based access control) and the refinery to give the right people access. So what they did is they basically threw a whole bunch of stuff at it; they were trying to outrun S3. You know, we're on the shoulders of giants. If you think about our platform for clients, what's a better data lake than S3? You're not going to get a better cost curve, right? You're not going to get better parallelism. And for security, it's in your, you know, virtual environment. And also you can keep data in the right location. Blackboard's a good example: they need to keep data in all the different regions, and because it's personal data, you know, GDPR, they've got to keep data in that location. It's easy; we just put compute in each of the different areas they are. But the net-net is, with that architecture on the shoulders of giants, if you think you can outrun us by sheer volume, or you think you can out-store us, that you have so much data that S3 and Glacier can't possibly handle it, then you've got me, but that's the scale we're talking about. So when they spiked our throughput, what they really did was try to outrun S3, and we didn't blink. Now, the next thing is they tossed a bunch of users at us, which just spins up, in our data fabric, different ways to do the indexing to keep up with it; and for the new use cases they go after, everyone gets their own worker nodes, which are all expected to fail in place. So again, they did some of that, but really they were like, you guys handled all the influx. And if you think about it, it's the shoulders of giants, being on top of an Amazon platform, which is amazing. You're not going to get a more cost-effective data lake in the world, and it's continuing to fall in price.
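The three-step onboarding Ed describes (put data in S3, grant read access plus a place to write the indices, then choose live or historical analysis) can be pictured as a bucket policy. The sketch below is illustrative only; the bucket name, index prefix, and role ARN are hypothetical placeholders, not the vendor's actual onboarding artifacts.

```python
import json

# Hypothetical names -- substitute your own bucket and whatever
# role ARN the indexing service gives you.
DATA_BUCKET = "my-log-archive"
INDEX_PREFIX = "chaos-indices/"
READER_ROLE = "arn:aws:iam::111122223333:role/indexer-reader"

def build_access_policy(bucket, index_prefix, role_arn):
    """Bucket policy granting read access to the raw log objects
    and write access only under the index prefix."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {   # read-only access to the raw data ("give us readability")
                "Effect": "Allow",
                "Principal": {"AWS": role_arn},
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [f"arn:aws:s3:::{bucket}",
                             f"arn:aws:s3:::{bucket}/*"],
            },
            {   # "a place to write the indices" -- scoped to one prefix
                "Effect": "Allow",
                "Principal": {"AWS": role_arn},
                "Action": ["s3:PutObject"],
                "Resource": [f"arn:aws:s3:::{bucket}/{index_prefix}*"],
            },
        ],
    }

policy = build_access_policy(DATA_BUCKET, INDEX_PREFIX, READER_ROLE)
print(json.dumps(policy, indent=2))
```

The point of the scoping is the one made in the transcript: the data never leaves your account; the service only reads the buckets and writes its indices back into them.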
And it's a cost curve like no other. But also, all that resiliency, all that security, and the parallelism you can get out of S3 and Glacier; bar none, it's the most scalable environment you can build on. And what we do is a thin layer, a data platform, that makes your data fully searchable and queryable using your own tools. >> Right, and you mentioned there, I mean, you're running in AWS, which has broad experience in doing these sorts of things at scale, but on the operational management side of things, as you mentioned, you actually take that off the hands of customers, so you run it on their behalf. What are some of the mistakes that you see people making in trying to do this themselves, before they've brought it into the ChaosSearch platform? >> Yeah, so either people are just trying their best to build out clusters of Elasticsearch, or they're going to services like Logz.io, Sumo Logic, or Amazon Elasticsearch Service. And those are all basically the same ELK Stack, so they have the exact same limits; they're the same bits. Then we see people trying to say, well, I really want to go to a data lake, I want to get away from these database servers, which have their limits. And then we see a lot of people putting data into environments where, instead of using Elasticsearch, they want to use SQL-type tools, and what they do is put it into Parquet or Presto form. It's a Presto dialect, but they put it into Parquet and structure it. And they go a long way to say, hey, it's in the data lake, but they end up building these little islands inside their data lake. And it's a lot of time to transform the data, to get it into a format that you can go after with your tools. We don't make you do that: just literally put the data there, and then we do the indexing and publish the API.
So right now it's the Elasticsearch API, and in a very short time we'll publish Presto, or the SQL dialect, so you can use the same tools. So we do see people either brute-forcing it, trying their best with a bunch of physical servers; we do see another group that says, you know, I want to go use Athena-type use cases; and there's a whole bunch of different startups saying, I do data lakes or data lakehouses. But what they really do is force you to put things into a structure before you get insight. True data lake economics is literally: just put it there, and use your tools natively to go after it. And that's where we're unique compared to what we see from our competition.
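Since the claim above is that the platform keeps the standard Elasticsearch API rather than inventing a new one, the usual Elasticsearch query DSL applies unchanged. As a hedged sketch of what that looks like (the endpoint and index names are hypothetical, and this only builds the request body a Kibana or curl user would POST to `<endpoint>/<index>/_search`):

```python
import json

# Hypothetical placeholders -- not a real endpoint or index.
ENDPOINT = "https://example-chaossearch-endpoint/elastic"
INDEX = "cloudflare-logs"

def last_n_days_errors(n_days, status_min=500):
    """Standard Elasticsearch query DSL: server errors (status >= 500)
    over the last n days, bucketed per day."""
    return {
        "query": {
            "bool": {
                "filter": [
                    {"range": {"@timestamp": {"gte": f"now-{n_days}d/d"}}},
                    {"range": {"status": {"gte": status_min}}},
                ]
            }
        },
        "aggs": {
            "per_day": {
                "date_histogram": {
                    "field": "@timestamp",
                    "calendar_interval": "day",
                }
            }
        },
        "size": 0,  # aggregation only, no raw hits
    }

body = last_n_days_errors(90)
print(json.dumps(body, indent=2))
```

Nothing here is vendor-specific, which is exactly the value proposition being described: the same body works against a stock Elasticsearch cluster, so existing dashboards and clients need only point at a different endpoint.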
And now they're, they're putting billing and other things into the same environment with us because one is easy it's scale but also freed up their team. No one has enough team to do things. And then the biggest thing is what people do interesting with our product is actually in their own tools. So, you know, we talk about Kibana when we do SQL again we talk about Looker and Tableau and Power BI, you know, the really interesting thing, and we think we did the hard work on the data layer which you can say is, you know I can about all the ways you consolidate the performance. Now, what becomes really interesting is what they're doing at the visibility level, either Kibana or the API or Tableau or Looker. And the key thing for us is we just say, just use the tools you're used to. Now that might be a boring statement, but to me, a great value proposition is not changing what your end users have to use. And they're doing amazing things. They're doing the exact same things they did before. They're just doing it with more data at bigger scale. And also they're able to see across their different machine learning data compared to being limited going at one thing at a time. And that getting the correlation from a unified data lake is really what we, you know we get very excited about. What's most exciting to our clients is they don't have to tell the users they have to use a different tool, which, you know, we'll decide if that's really interesting in this conversation. But again, I always say we didn't build a new algorithm that you going to give the SecOp team or a new pipeline cool widget that going to help the machine learning team which is another API we'll publish. But basically what we do is a hard work of making the data platform scalable, but more importantly give you the APIs that you're used to. So it's the platform that you don't have to change what your end users are doing, which is a... So we're kind of invisible behind the scenes. 
>> Well, that's certainly a pretty strong proposition there, and I'm sure that there's plenty of scope for customers to come and talk to you, because no one's creating any less data. So Ed, thanks for coming on theCUBE. It's always great to see you here. >> No, thank you. >> You've been watching theCUBE Virtual and our coverage of AWS re:Invent 2020, with special coverage of the APN partner experience. Make sure you check out all our coverage online, either on your desktop or mobile on your phone, wherever you are. I've been your host, Justin Warren, and I look forward to seeing you again soon. (soft music)
Ed Walsh | CUBE Conversation, August 2020
>> From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is theCUBE Conversation. >> Hey, everybody, this is Dave Vellante, and welcome to this CXO Series. As you know, I've been running this series discussing major trends with CXOs and how they've navigated through the pandemic. And we've got some good news and some bad news today, and Ed Walsh is here to talk about that. Ed, how you doing? Great to see you. >> Great seeing you, thank you for having me on. I really appreciate it. >> So the bad news is Ed Walsh is leaving IBM as the head of the storage division (indistinct). But the good news is, he's joining a new startup as CEO, and we're going to talk about that. But Ed, always a pleasure to have you. You've had quite a run at IBM, you really have done a great job there. So let's start there, if we can, before we get into the other part of the news. So, give us the update. You're coming off another strong quarter for the storage business. >> I would say, listen, it's bittersweet, honestly, but we're leaving them in a really good position, where they have sustainable growth. So IBM storage is actually in a very good position; I think you're seeing it in the numbers as well. So, yeah, listen, I think the team... I'm very proud of what they were able to pull off. Four years ago, they kind of brought me in: hey, can we get IBM storage back to leadership? They were kind of on their heels, not growing, falling back in market share, you know, kind of a distant third-place finisher. And basically through real innovation that mattered to clients, and that's a big deal, it's the right innovation that matters to the clients, we were able to dramatically grow all four different segments of the portfolio, but also get things like profitability and NPS growing. It really allowed us to go into a sustainable model. And it's really about the team.
You've heard me talk about the team all the time: you get a good team, and they really nail great client experiences, and they merge the right offerings with the go-to-market. And I'll tell you, I'm very proud of what the IBM team put together, and I'm still the number one fan, inside or outside IBM. So it might be bittersweet, but I actually think they're ready for quite some growth. >> You know, Ed, when you came on theCUBE right after you had joined IBM, a lot of people were saying Ed Walsh joined the IBM storage division to sell the division. And I asked you on theCUBE, are you there to sell the division? And you said, no, absolutely not. So it always seemed to me, well, hey, it's a good business, a good cash-flow business, with a big customer base, so why would IBM sell it? It never really made sense to me. >> I think it's integral to what IBM does, I think it plays to their client base in a big way. And under my leadership, we really got more aligned with what the big IBM is doing, right: what we're doing around Red Hat hybrid multi-cloud and what we're doing with AI. And those are big focuses of the storage portfolio. So listen, I think IBM as a company is in a position where they're really innovating and thriving, and really customer-centric. And I think IBM storage is benefiting from that, and vice versa. I think it's a good match. >> So, one of the things I want to bring up before we move on. You had said you were seeing a number, so I want to bring up a chart here. As you know, we've been using a lot of data and sharing data reporting from our partner ETR, Enterprise Technology Research. They do quarterly surveys; they have a very tight methodology, similar to NPS, but it's a net score, we call it, methodology. And every quarter they go out, and what we're showing here is the results from the last three quarters, specific to IBM storage: IBM's net score in storage.
And net score is essentially, we ask people, are you spending more, are you spending less, we subtract the less from the more, and that's the net score. And you can see, when you go back to the October '19 survey, you know, low single digits, and then it dipped in the April survey, which was the height of the pandemic. So this is forward-looking. So at the height of the pandemic, in the lockdown, people were saying, maybe I'm going to hold off on budgets. But now look at the July survey. Huge, huge uptick. And I think this is testament to a couple of things. One is, as you mentioned, the team. But the other is, you guys have done a good job of taking R&D, building a product pipeline, and getting it into the field. And I think that shows up in the numbers. That was really one of the hallmarks of your leadership. >> Yeah, I mean, the innovation. IBM has almost an embarrassment of riches inside. It's how do you get it into the pipeline? We went from typically four, four-and-a-half-year product cycles to a two-year product cycle. So we're able to innovate and bring it to market much quicker. And I think that's what clients are looking for. >> Yeah, so I mean, you brought a startup mentality to the division, and of course now, 'cause you're a startup guy, let's face it, now you're going back to the startup world. So the other part of the news is Ed Walsh is joining ChaosSearch as the CEO. ChaosSearch is a local Boston company, they're focused on log analytics, but more than that, and we're going to talk about it. So first of all, congratulations. And tell us about your decision. Why ChaosSearch, and where are you at there? >> Yeah, listen, if you can tell from the way I describe IBM, I mean, it was a hard decision to leave IBM, but it was a very, very easy decision to go to Chaos, right. So I knew the founder, I knew what he was working on for the last seven years, right.
Last five years as a company, and I was just blown away at their fundamental innovation, and how they're really driving how to get insights at scale from your data lake in the cloud, but also simultaneously slash cost dramatically. And they make it so simple. Simply put your data in your S3, or really cloud object storage. Right now it's Amazon, and they'll go to the rest of the clouds, but just put your data in S3. And what we'll do is we'll index it, give you APIs so you can search it and query it. And it literally brings a way to do at-scale data analytics, and also log analytics, on everything you put into, basically, an S3 bucket. It makes it very simple. And because they're really fundamental, we can go through it, fundamental hard technology at the data layer, but they kept all the APIs. So you're using your normal tools, like the Elasticsearch APIs. You want to use Grafana, you want to use Kibana, or you want to do SQL, or you want to use Looker, Tableau, all those work. And that's a part of it. It's really revolutionary what they're doing as far as the value prop, and we can explain it. But they also made it evolutionary, so it's very easy for clients to move. Just run in parallel, and then they basically turn off what they currently have running. >> So data lakes, really the term became popular during the sort of early big data, Hadoop era. And Hadoop obviously brought a lot of innovation, you know, leave the data where it is, bring the compute to the data. It really launched the big data initiative, but it was very complicated. You had MapReduce, and Elastic MapReduce in the cloud. And it really was a big batch job, where storage was really kind of a second-class citizen, if you will. There wasn't a lot of real-time stuff going on. And then Spark comes in, and still there's this very complicated situation. So it sounds like ChaosSearch is really attacking that problem.
And the first use case it's really going after is log analytics. Explain that a little bit more, please. >> Yeah, so listen, they finally went after it with this, it's called a data lake engine for scalable, we'll say, log analytics, firstly. It was the first use case to go after. But basically, log analytics, everyone does it, and everyone's kind of challenged getting to scale with it, right. If you asked your IT department, are you even challenged with scale, or cost, or retention levels, but also the management overlay of what they're doing on log analytics, or security log analytics, or all this machine data they're collecting, the answer will be, absolutely, it's a nightmare. It starts easy and becomes a big, very costly application for their environments. And because Chaos deals with the real issue, which is the data layer, but keeps the APIs on top, so people easily use the data insights at scale, what they're able to do is very simply run in parallel, and we'll save 80% of your cost, but also get better data retention. 'Cause there's typically a trade-off. Clients basically have this trade-off: it gets really expensive at scale, so I should just retain less. We have clients that went from nine-day retention in security logs to literally four and five days. If they didn't catch it in that time, it was too late. Now what they're able to do is go to our solution, not change their applications, because they're using the same APIs, but literally save 80%, and this is millions and tens of millions of dollars of savings, but also basically get 90-day retention. It's really limitless, whatever you put into your S3 bucket, we're going to give you access to. So that alone shows you that it's literally revolutionary. The CFO wins because they save money. The IT department wins because they don't have to wrestle with this data technology that wasn't really built for this.
It was really built 30 years ago, it wasn't built for this volume and velocity of data coming in. And then the data analytics guys, hey, I keep my tool set, but I get all the retention I want. No one's limiting me anymore. So it's kind of an easy win-win. And it makes it really easy for clients to get this really big benefit, and dramatic cost savings. But also you get the scale, which really means a lot in security logging or anything else. >> So let's dig into that a little bit. So cloud object storage has kind of become the de facto bucket, if you will. Everybody wants it, because it's simple. It's a get/put kind of paradigm. And it's cheap, but it's also got performance issues. So people will throw cash at the problem, they'll have to move data around. So is that the problem that you're solving? Is it a performance problem, is it a cost problem, or both? Explain that a little bit. >> Yeah, so it's all of the above. So basically, everyone building a data lake would like to just put all their data in one very cost-effective, scalable, resilient environment. And that is cloud object storage, or S3, or what every cloud has around, right? You can also do it on-prem. Everyone would love to do that, and then literally get their insights out of it. But they want to go after it with their own tools, whether it's search or SQL, they want to go after it with their own tools. That's the vision everyone wants. But what everyone does now is, and this is where the core special sauce that ChaosSearch provides comes in, we built from the ground up the indexing technology, the database technology, how to actually make your cloud object storage a database. We don't move it somewhere, we don't cache it. You put it inside the bucket, and we literally make the cloud object storage the database. And then around it, we basically built a Chaos fabric that allows you to spin up compute nodes to go at the data in different ways.
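The pattern Ed is describing, indexing the objects where they sit and answering queries without moving the data, can be illustrated with a deliberately tiny toy model: a Python dict stands in for the S3 bucket, and a word-level inverted index stands in for the proprietary Chaos Index. None of this reflects ChaosSearch's actual implementation; it only shows the shape of the idea.

```python
from collections import defaultdict

# A dict stands in for an S3 bucket: key -> object body (e.g. a log line).
bucket = {
    "logs/0001": "GET /checkout 200",
    "logs/0002": "GET /checkout 500",
    "logs/0003": "POST /login 200",
}

# Build an inverted index over the objects in place: nothing is copied into
# a separate database; the index only records which keys contain each term.
index = defaultdict(set)
for key, body in bucket.items():
    for term in body.split():
        index[term].add(key)

def search(term):
    """Answer a search query from the index, reading matching objects on demand."""
    return {key: bucket[key] for key in sorted(index.get(term, ()))}

print(search("/checkout"))  # both checkout requests
print(search("500"))        # only the failed request
```

A real engine builds a far more compact index and pushes the work to ephemeral compute nodes, but the query path, from API call down to objects sitting in the bucket, has this same shape.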
We truly have separated the data from the compute. But also, the beauty of containerization technology, if a worker node goes away, nothing happens. It's not like what you do on-prem, where all of a sudden you have to rebuild clusters. So we fundamentally solved that data layer, but really what was interesting is they just published APIs, you mentioned put and get. So the APIs you're using with cloud object storage are put and get. Imagine we just added to that API your search API from Elastic, or your SQL interface. All we're doing is extending: you put it in the bucket, and we'll extend your ability to get after it. It really is an API company, but it's hard tech putting that data layer together, so you have cost-effectiveness and scale simultaneously. Take, for instance, log analytics. We don't cache, nothing's on SSD, nothing's on local storage, and we're as fast as if you're running Elasticsearch on SSDs. So we've solved the performance and scale issues simultaneously. And that's really the core fundamental technology. >> And you do that with math, with algorithms, with machine learning? What's the secret sauce? >> Yeah, I'll tell you, my founder just has a really interesting way of looking at problems. He really looked at this differently, and went after it in a different way, a really modern way. And the reason it differentiates itself is he built it from the ground up to do this on object storage, where basically everyone else is using 30-year-old technology, right? So even really new up-and-coming companies, they're using Tableau, Looker, or Snowflake could be another example. They're not changing how the data is stored, they always have to move it, ETL it somewhere, to go after it. We avoid all that. In fact, we're probably a pretty good ecosystem player for all those partners as we go forward.
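The economics behind the figures Ed cites (roughly 80% savings while moving from single-digit-day to 90-day retention) can be sanity-checked with back-of-the-envelope arithmetic. Every unit price below is made up for illustration; none of it is actual AWS, Elastic, or ChaosSearch pricing.

```python
# Illustrative, made-up unit costs (not real pricing).
SSD_CLUSTER_PER_GB_MONTH = 2.00   # hot search cluster: compute + replicated SSD
OBJECT_STORE_PER_GB_MONTH = 0.03  # S3-style object storage
QUERY_OVERHEAD = 0.25             # fraction added for on-demand query compute

daily_ingest_gb = 100

def monthly_cost(retention_days, per_gb_month, overhead=0.0):
    retained_gb = daily_ingest_gb * retention_days
    return retained_gb * per_gb_month * (1 + overhead)

hot_9_days = monthly_cost(9, SSD_CLUSTER_PER_GB_MONTH)
obj_90_days = monthly_cost(90, OBJECT_STORE_PER_GB_MONTH, QUERY_OVERHEAD)

print(f"hot cluster, 9-day retention:   ${hot_9_days:,.0f}/month")
print(f"object store, 90-day retention: ${obj_90_days:,.0f}/month")
print(f"savings: {1 - obj_90_days / hot_9_days:.0%}, with 10x the retention")
```

Tweaking the assumed prices moves the exact percentage, but the structural point survives: retained volume times hot-cluster pricing grows far faster than retained volume times object-storage pricing, which is why longer retention and lower cost can coexist.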
>> So you're talking about Tom Hazel, your founder and CTO, and he's brought in the team, and they've been working on this for a while. What's his background? >> He launched his career in telecom, building out God boxes. So he's always been in the database space. On my first day of the job, I can't do justice to his deep technology. There's a really good white paper on our website that does that pretty well. But literally, the patented technology is the Chaos Index, which is a database that makes your object storage the database. And then it's really the Chaos fabric that it puts around it, and the Chaos refinery that gives you virtual views. But that's one solution. And if you look at log analytics, you come in, log in, and you get all the tools you're used to. But underneath the covers, we're just saving about 80% of overall cost, but also giving almost limitless retention. We see people who had reduced the number of logs they're keeping because of cost, and complexity, and scale, down to literally a very small amount, going right back to 90-day retention. You could do longer, but that's what we see most people go to when they come to our service. >> Let's talk about the market. I mean, as a startup person, you always look for large markets. Obviously, you've got to have good tech, a great team, and you want large markets. So the space that you're in, I mean, I would think it started, early days, in kind of decision support. It sort of morphed into the data warehouse, you mentioned ETL, that's kind of part of it. Business intelligence, it's sort of all in there. If you look at the EDW market, it's probably around 18 to 20 billion. A small slice of that is data lakes, maybe a billion or a billion plus. And then you've got this sort of BI layer on top, you mentioned a lot of those. You've got ETL, and you probably get up into the 30 to 35 billion, just sort of off the top of my head and from my historical experience in looking at these markets.
But I have to say, these markets have traditionally failed to live up to the expectations. Things like 360-degree views of the customer, real-time analytics, delivering insights and self-service to the business. Those are promises that these industries made, and they ended up being cumbersome, slow, maybe requiring real experts, requiring a lot of infrastructure. The cloud is changing that. Is that right? Is that the way to look at the market that you're going after? You're a player inside of that very large TAM. >> Yeah, I think we're a key fundamental component underneath that whole ecosystem. And yes, you're seeing us build a full-stack solution for log analytics, because it's a really good way to prove just how game-changing the technology is. But also we're publishing APIs, and it's seamless with how you're using log analytics. The same thing can be applied as we go across to SQL and different BI and analytic types of platforms. So that's exactly how we're looking at the market. And it's those players that are all struggling with the same thing: how do they add more value to clients? It's a big cost game, right? So if I can literally make how you store your data 80% more cost-effective, that's a big deal. Or simultaneously save 80% and give you much longer retention. Those two things are typically a trade-off you have to go through, and we don't have to do that. That's what really makes this kind of the underlying core technology. And really, I look at log analytics as the first application set. If you have any log analytics issues, talk to your teams and find out: scale, cost, management issues. We make it very easy. Just run in parallel, we'll do a POC, and you'll see how easy it is. You can just save 80%, and 80% with better retention is really the value proposition you see at scale, right. >> So this is day zero for you. Give us the hundred-day plan. What do you want to accomplish?
Where are you going to focus your priorities? I mean, obviously, the company's been started, it's well funded, but where are you going to focus in the next 100 days? >> I think it's building out where we take it next. There are a lot of things we could do. The degrees of freedom as far as where we could go with this technology are pretty wide. You're going to see us be the best log analytics company there is. We're getting, really a (mumbling), you saw the announcement, best quarter ever last quarter. And you're seeing this nice as-a-service ramp. You're going to see us go to VPC, so you can do as-a-service with us, but now we can also put this same thing in your own virtual private data center. You're going to see us go to Google, Azure, and also IBM Cloud. And really, clients are driving this. It's not us driving it; you're going to see it's actually the clients. So we'll go into Google because we had a couple of financial institutions saying they're driving us to go do exactly that. So it's really working with our client set and making sure we've got the right roadmap to support what they're trying to do. And then the ecosystem is another play. You know, my core technology is not necessarily competitive with anyone else's. No one else is doing this. They're just kind of, hey, move it here, I'll put it on this, you know, a foundational DB, or they'll put it on a Presto environment. They're not really worried about the bottom-line economics, which is really the value prop, and that's the hard tech and patented technology that we bring to this ecosystem. >> Well, people are definitely worried about their cloud bills. The CFOs are saying, whoa, 'cause it's so easy to spin up instances in the cloud. And so, Ed, it really looks like you're going after a real problem. You've got some great tech behind you. And of course, we love the fact that it's another Boston-based company that you're joining, 'cause we love more Boston-based startups.
Better for us here at the East Coast Cube. So give us your final thoughts. What should we look for? I'm sure we're going to be staying in touch. And congratulations. >> No, hey, thank you for the time. I'm really excited about this. I really just think it's fundamental technology that allows us to get the most out of everything you're doing around analytics in the cloud. And if you look at a data lake model, I think that's our philosophy, and we're going to drive it pretty aggressively. And I think it's a good fundamental innovation for the space, and that's the type of tech that I like. And I think we can also do a lot of partnering across ecosystems to make it work for a lot of different people. So anyway, thank you very much for the time, I appreciate it. >> Yeah, well, thanks for coming on theCUBE, and best of luck. I'm sure we're going to be learning a lot more and hearing a lot more about ChaosSearch and Ed Walsh. This is Dave Vellante. Thank you for watching, everybody, and we'll see you next time on theCUBE. (upbeat music)
Ohad Maislish, Ed Sim & Guy Podjarny | CUBE Conversation, June 2020
>> Narrator: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hi, I'm Stuart Miniman, and welcome to this CUBE Conversation. I'm in our Boston-area studio, and one of the things we always love to do is talk to startups, who are usually on the leading edge of helping customers with new technologies and conquering challenges. And to that point, we have the co-founder and CEO of env0, that is Ohad Maislish, and we've brought along with him two of his investors, one of whom is also an advisor. So sitting next to Ohad, we have Ed Sim, who's the founder and managing partner of Boldstart Ventures, and sitting next to him is Guy Podjarny, who is the founder of Snyk. "So Now You Know" is the acronym for Snyk, if you didn't know that. I'd heard about the company a couple of years before that. And my understanding is, Guy, you're the one that connected Ohad with Ed, who was the first investor. So Guy, we'll talk to Ohad in a second, but how did the conversation start? And what piqued your interest about what is now env0? >> Yeah, I think it started with people. I mean, I think fundamentally, when you think about technology and think about startups, it needs to be an interesting market, it needs to be a good idea, but it really, first and foremost, is about the people. So I've known Ohad from some work that he's done at Snyk earlier on, and was really impressed with his sharpness, his technical chops, and oftentimes the bias for feedback. And then when he presented the idea to me around kind of making Infrastructure as Code easy, and I don't want to steal his thunder talking about it and about kind of engaging with developers for it, it's a thought that really resonated with me, and I think we'll probably dig into it some more. But we live in a world in which more and more activities, more and more decisions, and really more effort is rolled on to developers.
So, there's a constant need for great solutions that make on one hand make it easy for developers to embrace these solutions, on the other hand, still kind of allow the right kind of governance and controls. And I felt like Infrastructure as Code was like a great space for that, where we asked developers to do more, there's a ton of value in developers doing more around controlling these Infrastructure decisions, but it's just too hard today. So, anyways, I kind of liked the skills, I liked the idea. And I pulled in Ed, who I felt was kind of natural to kind of help introduce these experiences with other startups that share a similar philosophy to kind of help make this happen. >> Awesome, thank you Guys. So Ohad, let's let's throw it to you. Give us a little bit about your background, your team, Infrastructure as Code is not a new term. So I guess would love you to kind of weave into it. You know why now? Is it becoming more real in why your solution is positioned to help the enterprise? >> Awesome, first of all, thank you for having me. It's really exciting and again thank you for the opportunity. Regarding your question, so my background is technical. I was maybe still am a geek started University at a young age at the age of 14 in Palo Alto High School. And started my career in non technical roles very early. I have now like 21 years of experience, this is my second startup and third company, as I mentioned, my previous company is services company, provided services for Snyk and we became friends and later on partners, investors, and so on. And, we we've seen huge shift, we call the Infrastructure as Code the third data center revolution. We look at the first one being virtualization about 20 years ago led by VMware and then ZenSourcer. The second obviously, is the public cloud when companies started clicking buttons in order to get those compute resources but now nobody is clicking those buttons anymore. 
And instead writing, maintaining and executing that code, that Infrastructure as Code and as the Guy mentioned, it made it much more relevant for developers to influence the Infrastructure decisions and not just the app decisions. With that many challenges and opportunities around Infrastructure as Code management and automation, and that's where we focus. >> All right, so Ed I'm sure like me, you've seen a number of companies, try to climb this mountain and fall down and crash so I feel like five years ago, I would talk to a company and they say, oh, we're going to help, really help the enterprise enable developers for networking for storage, for security or anything like that. And it was like, oh, okay, good luck with that. And they just kind of crashed and burned or got acquired or did something like that. So, I feel like from our viewpoint we've seen for a long time that growth of developers and how important that is, but that gap between the enterprise and the developers feels like we're getting there. So, it gets similar what I asked Ohad why now, why this group, why the investment from you? >> Yeah, so I'll echo Guy's comment about the people. So, first and foremost, I was fortunate enough to invest in Guy back in his prior company before he started Snyk and then invested in Snyk. And there are lots of elements of env0 that remind me of Snyk the idea, for example, that developers are doing more, and that security is no longer a separate piece of developing, it's now embedded kind of in what developers and teams are doing. And I felt like the opportunity was still there for Infrastructure as Code. How do you make developers more productive, but provide that control plan or governance that's centralized so that environments can easily be reproduced. And the thing that got me so excited, was the idea that Ohad was going to tie kind of cloud costs from a proactive basis versus a reactive basis. 
Meaning that once we know that your environments are up and running, you could actually automatically tag it and tie the environment to the actual application. And to me, tying the business piece to the development piece was a huge, huge opportunity that hasn't been tapped yet. And so there are lots of elements of both Snyk and env0 and we're super excited to be invested in both. >> Alright, so Ohad maybe just step back for a second, give us some of the speeds and feeds we read your blog post 3.3 million dollars of the early investment, how many people you have, what is the stage of the product customer acquisition and the like? >> Sure, so we just launched our public beta and announced the funding couple of months ago led by Boldstart and another VC in Israel named Grove, and then angel investors Guy is the greatest investor among those and so we have some others as well. And now we have like 10 employees nine in Israel, one in New York City, I'm relocating after this all pandemic thing will get better. I'm moving to the Bay Area as soon as possible. That's more or less the status. And as I've mentioned, we just launched our public beta. So we have our first few design partners and early like private beta customers now starting to grow more. >> Yeah, and how would you characterize, what is the relationship between what you're doing in the public clouds. We understand, in the early days, it was like, Oh, well, cloud is going to be easy, it's going to just be enable it, it has been a wonderful tool set for developers. But simple is definitely not, I think anyone would describe the current state of environments. So, help it help us give it a little bit of what you're seeing there. And how you deal with like some very large players in ecosystem. >> Our customers are the same as the cloud vendors customers. The cloud vendors provide great value with the technical aspect with Infrastructure. 
But once you want to manage your organization, you want to empower your developers, you want to shift left some decisions, APM, did shift left for a performance, Snyk is doing great shift left for security. I believe that we are doing similar things to the cost. And you in the cloud vendors are in charge of you being able to do some technical orchestration. But when do you need to tear down those resources? When do you understand that there is a problematic resource or environment and what exactly made it? What is the association, how you can prevent from (mumbles) deployments from even happening at first. So all of those management information and insight ties back to your business logic and processes that's where we fit. >> I think there's actually a lot of analogy if I can chime in, on maybe an ownership aspect that happens in cloud. So we talk about the cloud and oftentimes cloud is interpreted as the technical aspect of it. So the fact that it allows you to do a bunch of things in the clouds and sort of renting someone else's hardware, and then automating a lot of it. But what cloud also does and that definitely represents what we're doing security and I think applies here, is that it moves a lot of things that used to be IT responsibility being a part of the application. So a lot of decisions, including ones really security, and including ones related cost around anywhere from provisioning of servers to, network access, to when you burst out, and to the balancing of business value to the cost involved or the risk involved. Those are no longer done by a central IT organizations, but rather, they're being done by developers day in and day out. 
And so I think that's really where the analogy really works with cloud is, it's not so much, like clearly there's an aspect of that that is the the technical piece of tracking how much does it cost in the on demand surrounding of cloud, but there's a lot of the ownership change, or the fact that the decisions that impact that are done by developers, and they're not yet well equipped to have the insights, to have the tools, to make the right decisions with a press of button. >> Thank you Guy and absolutely, 'cause cloud is just one of the platforms you're living on, you know well from Snyk that integration between what's happening in the platform, where open source fits into it, the various parts of the organization that are there. So, you've got some good background, I'm sure, helps you're an advisor to Ohad there to helps pull through a little bit of some of those challenges. Yeah, I mean, Ed I'd love to hear just in general your viewpoint on how startups are doing at monetizing things in the era of... You've got the massive players like Amazon and Microsoft out there. >> Look, the enterprise pain is higher than ever right now, every fortune 500 is a tech company right now and they need engineers, and they're hiring engineers. In fact, many of the largest fortune 500 have more engineers than some of the tech companies. And developer productivity is number one, front and center. And if you talk to CIOs, we just hosted a panel with the CIO of Guardian Life and the CTO of Priceline. They're all looking at how do I kind of automate my tool chain? How do I get things done faster? How do I do things more scalable? And then how do I coordinate processes amongst teams. As Guy hit upon and Ohad as well, not just security, there's product design being embedded with developers as product management being embedded with developers. There's finance now, FinOps. 
If you're going to spend more and more in the cloud, how do you actually control that proactively before things happen versus after or months after that happens? So I think this is going to be a huge, huge opportunity on the FinOps side. And, the final thing I would say is that winning the hearts and minds of developers to win the enterprise is a tried and trued model, and I think it's going to be even more important as we move forward in the next few years, to be honest with you. >> All right, so Ohad you know I think Ed talked about those hearts and minds of developers absolutely critical. When you look at the tooling landscape out there, the challenge of course, is there's so many tools out there, that there's platform battles, there's developers that find certain things that they love, and then there's, oh, wait, can I have a general purpose solution that can help. You talk about this being the third wave, how does this kind of tie into or potentially replace some of the last generation of automation tools. How do you see yourself getting into the accounts and growing your developer base? >> I think, I have a very simple answer, because, now enterprises have two options. Either they go with productivity self-service, or they go with governance, but they cannot have both. So if it's the smaller or they have less risks, so they go with the productivity and they take those risks, take the extra costs, take that potential damage that can happen. But more we see the case of I cannot allow myself this mess, so I have to block this velocity. I have to block those developers, they cannot just orchestrate cloud resources as they wish they have to open tickets, they have to go through some manual process of approval or we see more and more developers that understand there is a challenge they built in-house env0 of self-service combined with governance solution, and they always struggle doing it well, because it's not their core business. 
So once you see more and more customers making a lot of investment in in-house solutions that do the same thing, it's probably a good idea to do it as a separate product. And there's also the fact that we have visibility across different customers: we can, early on but even more so later, add pattern recognition, notice what makes sense and what is problematic, and give those insights and more business logic back to the customers, which is impossible for them to do if they're isolated on their own cases. So we're providing the same great solution to different companies, allowing them self-service combined with governance, and then additionally adding those smart insights later on. >> Yeah, I think what I love about what he said is that I don't think he even said finance or cost at any point in those answers. So really, like you said, governance; and I think you can swap in the entity that's doing the governance, security or finance, for all of those. And that sounds awfully familiar for Snyk, which really kind of begs the answer to be the same. The reason the env0 approach is promising, and that it would win against competition, is that the competition, or the people that are around, are focused on the governance piece; they're focused on just the entity that is the controlling entity. I like to say that it's actually not about shift left; if you want to choose a direction, it's going to be top to bottom. So it's more about these governance entities, whether security or finance, needing to shift from a controlling mindset that is top-down, this dictatorship of telling you what you should and shouldn't do, to more of a bottom-up element, allowing the teams, the people in the trenches who actually make decisions, to make correct decisions, and in this case, correct decisions from a financial perspective.
And then alongside that, the governing entity needs to switch to being a supportive entity, an enabling entity, and I think that transition will happen across many aspects of software development; definitely anything that requires that type of governance from outside of the development process today is going to change. >> Yeah, to chime in and add to Guy's point, development is so important, it touches every aspect of an organization. So I always think about it as almost a collaborative workflow layer, versus being reliant on one controlling entity. Great developers always want to move fast. But how do you build that collaborative workflow? And I think that Ohad and env0 are providing that for the environment and finance. Guy's doing it for security. And there are lots of other opportunities out there, like privacy as well. And I wouldn't be surprised if finance folks start getting embedded with development at some point, just like security is, or design is, or product management is as well, because that is probably one of the highest costs around right now for many companies, and they're all trying to figure out how to stop the bleeding much earlier. >> Yeah, there's been lots of discussion, of course, as we go beyond DevOps; I think FinOps is in there. Ohad, do you have a favorite term yet from your advisors for how you categorize what you're doing? Any final words on that organizational dynamic, which we know is so often the hard part; the technology can be the easy part, it's getting everybody in the org pulling in the same direction. >> Yeah, I think I'm looking at maybe a physical metaphor, or just an example. If you enter a developer's room, you might see a TV screen there with some APM, Datadog or New Relic metrics; developers care about performance. They know very early if they did something wrong.
And now they see more and more in those dashboards, in the developers' rooms, things like Snyk, to make sure you're not pulling in any bad open source package which has a security vulnerability. What we believe is that right now they don't have the right tools, the right product, to be part of the responsibility for cost, so of course that's like somebody else's problem. In other rooms you have those TVs, those screens, that show what the cost is, and maybe only later on, in a waterfall kind of way, you try to isolate and do root cause analysis on what went wrong. But there is no good reason why those cost graphs shouldn't be in the same rooms, next to the APM and the Snyks, to prevent those issues as early as possible, and maybe to change the discussion and build more trust with the developers, who now seem not to care about the cost because they didn't need to, like 10 years ago, when we used to have what's called CapEx cloud: the VMware or even EC2 instances with predictable pricing. That's old school. Now you have auto-scaling Kubernetes, you have Lambda, those kinds of things where you pay per usage. So the possibility for engineers to know how much their code is about to cost the organization is very challenging now. If we tie from the developer up to the financial operations, we will provide better service and just better business value for our customers. >> Awesome. So the final question I have for you, and Ohad, I'm going to have you go last on this one: you've kind of painted the picture of where things are going to go, so give us what success looks like. Ed, start with you. Give us, out 12 to 24 months, for env0 and this wave, what should we be looking for? >> Success to me would be that every large enterprise has this on their budget as a must-have line item. The market is still early and evolving right now, but I have no doubt in my mind it's going to happen.
And as you hear many large enterprises saying, we were in the second inning of cloud migration, now we're in the fourth, that is what success will be, and I know it's going to happen faster than we all thought. >> I'll take the developer angle to it. I think success is really when developers are delighted, when they feel they're building better software by using env0 and by factoring this aspect of quality into their daily activities. And I think a lot of that comes down to ease of use. I kind of encourage folks to try out env0 and see the cost calculation; it's all about making it easy. So what excites me is really that type of success, where it's so easy that it's embedded into their daily activities, and they're happy; it's not a forced thing. It's something they've accepted and like having as part of their software development process. >> I fully agree with both Ed and Guy, but I want to add, on a personal note, that one of the reasons we started env0 is because we saw developers quitting jobs at some places. And the reason for that was that they weren't given self-service; those developers weren't empowered, they were blocked by DevOps, they needed to open tickets to do trivial things. And this frustration is just a bigger motivation for us to solve. So we want to reduce this frustration. We want developers to be happy and productive, and do what they need to do, and not get blocked by others. So that's, I think, another way to look at it: to make sure that those developers are really making good use of their time, and going home at the end of the day feeling that they did what they were paid for, not that they waited for others to allocate some cloud resources for them. >> All right, well, Ohad, we want to wish you the best, absolutely.
Some of the early things that we've seen: sometimes it's the tools that help. We've been talking, gosh, I remember, for 15, 20 years about breaking down the silos between various parts of the organization, and some of the tools give you different viewpoints into what you're doing, help make some of those connections, and hopefully build some empathy as to what the various pieces are. You really highlighted that there's nothing worse than, I'm not being appreciated for the work I'm doing, or, they don't understand the challenges that I'm going through. So, congratulations on env0. We look forward to following along going forward, and definitely hope to be talking with your customers in the future. Thanks so much. >> Thank you, thank you very much. >> All right, and Guy, really appreciate your perspectives on this, thank you for joining us. >> Thanks for having me. >> All right, be sure to check out theCUBE.net, where you can find all of the events we're doing online these days, and of course there's a huge back catalog of the thousands of interviews that we've done. I'm Stuart Miniman, and thank you for watching theCUBE. (upbeat music)
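(Editor's aside: the pay-per-usage cost math behind Ohad's point, that engineers can't easily tell what auto-scaling or Lambda-style code will cost, can be sketched in a few lines. This is a minimal illustration using assumed, hypothetical rates; it is not actual AWS pricing, which varies by region and changes over time.)

```python
# Sketch: estimating the monthly cost of a pay-per-usage function
# (Lambda-style billing: a per-request charge plus a charge per
# GB-second of compute). Both rates below are assumptions for
# illustration only, not real cloud prices.

GB_SECOND_RATE = 0.0000166667      # assumed $ per GB-second
PER_REQUEST_RATE = 0.20 / 1_000_000  # assumed $ per invocation

def estimate_monthly_cost(invocations: int,
                          avg_duration_s: float,
                          memory_gb: float) -> float:
    """Cost = compute charge (GB-seconds used) + per-request charge."""
    gb_seconds = invocations * avg_duration_s * memory_gb
    return gb_seconds * GB_SECOND_RATE + invocations * PER_REQUEST_RATE

# Example: 1M invocations a month, 200 ms average run, 512 MB memory.
cost = estimate_monthly_cost(1_000_000, 0.2, 0.5)
print(f"${cost:.2f}")  # roughly $1.87 under these assumed rates
```

The point of the sketch is the shape of the formula, not the numbers: because cost scales with invocations and duration, a developer can only predict it by instrumenting usage, which is exactly the visibility gap discussed above.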