Ed Walsh & Thomas Hazel | A New Database Architecture for Supercloud
(bright music) >> Hi, everybody, this is Dave Vellante, welcome back to Supercloud 2. Last August, at the first Supercloud event, we invited the broader community to help further define Supercloud, we assessed its viability, and identified the critical elements and deployment models of the concept. The objectives here at Supercloud too are, first of all, to continue to tighten and test the concept, the second is, we want to get real world input from practitioners on the problems that they're facing and the viability of Supercloud in terms of applying it to their business. So on the program, we got companies like Walmart, Sachs, Western Union, Ionis Pharmaceuticals, NASDAQ, and others. And the third thing that we want to do is we want to drill into the intersection of cloud and data to project what the future looks like in the context of Supercloud. So in this segment, we want to explore the concept of data architectures and what's going to be required for Supercloud. And I'm pleased to welcome one of our Supercloud sponsors, ChaosSearch, Ed Walsh is the CEO of the company, with Thomas Hazel, who's the Founder, CTO, and Chief Scientist. Guys, good to see you again, thanks for coming into our Marlborough studio. >> Always great. >> Great to be here. >> Okay, so there's a little debate, I'm going to put you right in the spot. (Ed chuckling) A little debate going on in the community started by Bob Muglia, a former CEO of Snowflake, and he was at Microsoft for a long time, and he looked at the Supercloud definition, said, "I think you need to tighten it up a little bit." So, here's what he came up with. He said, "A Supercloud is a platform that provides a programmatically consistent set of services hosted on heterogeneous cloud providers." So he's calling it a platform, not an architecture, which was kind of interesting. And so presumably the platform owner is going to be responsible for the architecture, but Dr. Nelu Mihai, who's a computer scientist behind the Cloud of Clouds Project, he chimed in and responded with the following. He said, "Cloud is a programming paradigm supporting the entire lifecycle of applications with data and logic natively distributed. Supercloud is an open architecture that integrates heterogeneous clouds in an agnostic manner." So, Ed, words matter. Is this an architecture or is it a platform? >> Put us on the spot. So, I'm sure you have concepts, I would say it's an architectural or design principle. Listen, I look at Supercloud as a mega trend, just like cloud, just like data analytics. And some companies are using the principle, design principles, to literally get dramatically ahead of everyone else. I mean, things you couldn't possibly do if you didn't use cloud principles, right? So I think it's a Supercloud effect, you're able to do things you're not able to. So I think it's more a design principle, but if you do it right, you get dramatic effect as far as customer value. >> So the conversation that we were having with Muglia, and Tristan Handy of dbt Labs, was, I'll set it up as the following, and, Thomas, would love to get your thoughts, if you have a CRM, think about applications today, it's all about forms and codifying business processes, you type a bunch of stuff into Salesforce, and all the salespeople do it, and this machine generates a forecast. 
What if you have this new type of data app that pulls data from the transaction system, the e-commerce, the supply chain, the partner ecosystem, et cetera, and then, without humans, actually comes up with a plan. That's their vision. And Muglia was saying, in order to do that, you need to rethink data architectures and database architectures specifically, you need to get down to the level of how the data is stored on the disc. What are your thoughts on that? Well, first of all, I'm going to cop out, I think it's actually both. I do think it's a design principle, I think it's not open technology, but open APIs, open access, and you can build a platform on that design principle architecture. Now, I'm a database person, I love solving the database problems. >> I'm waited for you to launch into this. >> Yeah, so I mean, you know, Snowflake is a database, right? It's a distributed database. And we wanted to crack those codes, because, multi-region, multi-cloud, customers wanted access to their data, and their data is in a variety of forms, all these services that you're talked about. And so what I saw as a core principle was cloud object storage, everyone streams their data to cloud object storage. From there we said, well, how about we rethink database architecture, rethink file format, so that we can take each one of these services and bring them together, whether distributively or centrally, such that customers can access and get answers, whether it's operational data, whether it's business data, AKA search, or SQL, complex distributed joins. But we had to rethink the architecture. I like to say we're not a first generation, or a second, we're a third generation distributed database on pure, pure cloud storage, no caching, no SSDs. Why? Because all that availability, the cost of time, is a struggle, and cloud object storage, we think, is the answer. >> So when you're saying no caching, so when I think about how companies are solving some, you know, pretty hairy problems, take MySQL Heatwave, everybody thought Oracle was going to just forget about MySQL, well, they come out with Heatwave. And the way they solve problems, and you see their benchmarks against Amazon, "Oh, we crush everybody," is they put it all in memory. So you said no caching? You're not getting performance through caching? How is that true, and how are you getting performance? >> Well, so five, six years ago, right? When you realize that cloud object storage is going to be everywhere, and it's going to be a core foundational, if you will, fabric, what would you do? Well, a lot of times the second generation say, "We'll take it out of cloud storage, put in SSDs or something, and put into cache." And that adds a lot of time, adds a lot of costs. But I said, what if, what if we could actually make the first read hot, the first read distributed joins and searching? And so what we went out to do was said, we can't cache, because that's adds time, that adds cost. We have to make cloud object storage high performance, like it feels like a caching SSD. That's where our patents are, that's where our technology is, and we've spent many years working towards this. So, to me, if you can crack that code, a lot of these issues we're talking about, multi-region, multicloud, different services, everybody wants to send their data to the data lake, but then they move it out, we said, "Keep it right there." >> You nailed it, the data gravity. 
So, Bob's right, the data's coming in, and you need to get the data from everywhere, but you need an environment that you can deal with all that different schema, all the different type of technology, but also at scale. Bob's right, you cannot use memory or SSDs to cache that, that doesn't scale, it doesn't scale cost effectively. But if you could, and what you did, is you made object storage, S3 first, but object storage, the only persistence by doing that. And then we get performance, we should talk about it, it's literally, you know, hundreds of terabytes of queries, and it's done in seconds, it's done without memory caching. We have concepts of caching, but the only caching, the only persistence, is actually when we're doing caching, we're just keeping another side-eye track of things on the S3 itself. So we're using, actually, the object storage to be a database, which is kind of where Bob was saying, we agree, but that's what you started at, people thought you were crazy. >> And maybe make it live. Don't think of it as archival or temporary space, make it live, real time streaming, operational data. What we do is make it smart, we see the data coming in, we uniquely index it such that you can get your use cases, that are search, observability, security, or backend operational. But we don't have to have this, I dunno, static, fixed, siloed type of architecture technologies that were traditionally built prior to Supercloud thinking. >> And you don't have to move everything, essentially, you can do it wherever the data lands, whatever cloud across the globe, you're able to bring it together, you get the cost effectiveness, because the only persistence is the cheapest storage persistent layer you can buy. But the key thing is you cracked the code. >> We had to crack the code, right? That was the key thing. >> That's where the plans are. >> And then once you do that, then everything else gets easier to scale, your architecture, across regions, across cloud. >> Now, it's a general purpose database, as Bob was saying, but we use that database to solve a particular issue, which is around operational data, right? So, we agree with Bob's. >> Interesting. So this brings me to this concept of data, Jimata Gan is one of our speakers, you know, we talk about data fabric, which is a NetApp, originally NetApp concept, Gartner's kind of co-opted it. But so, the basic concept is, data lives everywhere, whether it's an S3 bucket, or a SQL database, or a data lake, it's just a node on the data mesh. So in your view, how does this fit in with Supercloud? Ed, you've said that you've built, essentially, an enabler for that, for the data mesh, I think you're an enabler for the Supercloud-like principles. This is a big, chewy opportunity, and it requires, you know, a team approach. There's got to be an ecosystem, there's not going to be one Supercloud to rule them all, so where does the ecosystem fit into the discussion, and where do you fit into the ecosystem? >> Right, so we agree completely, there's not one Supercloud in effect, but we use Supercloud principles to build our platform, and then, you know, the ecosystem's going to be built on leveraging what everyone else's secret powers are, right? 
So our power, our superpower, based upon what we built is, we deal with, if you're having any scale, or cost effective scale issues, with data, machine generated data, like business observability or security data, we are your force multiplier, we will take that in singularly, just let it, simply put it in your object storage wherever it sits, and we give you uniform access to that using OpenAPI access, SQL, or you know, Elasticsearch API. So, that's what we do, that's our superpower. So I'll play it into data mesh, that's a perfect, we are a node on a data mesh, but I'll play it in the soup about how, the ecosystem, we see it kind of playing, and we talked about it just in the last couple days, how we see this kind of possibly. Short term, our superpowers, we deal with this data that's coming at these environments, people, customers, building out observability or security environments, or vendors that are selling their own Supercloud, I do observability, the Datadogs of the world, dot dot dot, the Splunks of the world, dot dot dot, and security. So what we do is we fit in naturally. What we do is a cost effective scale, just land it anywhere in the world, we deal with ingest, and it's a cost effective, an order of magnitude, or two or three orders of magnitude more cost effective. Allows them, their customers are asking them to do the impossible, "Give me fast monitoring alerting. I want it snappy, but I want it to keep two years of data, (laughs) and I want it cost effective." It doesn't work. They're good at the fast monitoring alerting, we're good at the long-term retention. And yet there's some gray area between those two, but one to one is actually cheaper, so we would partner. So the first ecosystem plays, who wants to have the ability to, really, all the data's in those same environments, the security observability players, they can literally, just through API, drag our data into their point to grab. We can make it seamless for customers. Right now, we make it helpful to customers. Your Datadog, we make a button, easy go from Datadog to us for logs, save you money. Same thing with Grafana. But you can also look at ecosystem, those same vendors, it used to be a year ago it was, you know, it's all about how can you grow, like it's growth at all costs, now it's about COGS. So literally we can go into an environment, you supply what your customer wants, but we can help with COGS. And one-on-one in a partnership is better than you trying to build on your own. >> Thomas, you were saying you make the first read fast, so you think about Snowflake. Everybody wants to talk about Snowflake and Databricks. So, Snowflake, great, but you got to get the data in there. All right, so that's, can you help with that problem? >> I mean we want simple in, right? And if you have to have structure in, you're not simple. So the idea is that you have a simple in, data lake, schema-on-read type philosophy, but schema-on-write type performance. And so what I wanted to do, what we have done, is have that simple lake, and stream that data real time, and those access points of Search or SQL, to go after whatever business case you need, security observability, warehouse integration. But the key thing is, how do I make that click, click, click answer, and do it quickly? And so what we want to do is, that first read has to be fast. Why? 'Cause then you're going to do all this siloing, layers, complexity. If your first read's not fast, you're at a disadvantage, particularly in cost. 
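To make the access model Ed describes a bit more concrete, here is a minimal sketch of what querying that kind of service might look like from Python, using the standard Elasticsearch client against an Elasticsearch-compatible endpoint. The endpoint URL, credentials, and index name are hypothetical placeholders rather than ChaosSearch specifics, and the exact connection parameters a given service accepts will vary.

```python
from elasticsearch import Elasticsearch

# Hypothetical endpoint: an Elasticsearch-compatible API fronting data that
# still lives in the customer's own object storage (no re-ingest into a cluster).
es = Elasticsearch(
    "https://search.example-analytics.example.com",  # placeholder URL
    basic_auth=("api_user", "api_token"),            # placeholder credentials
)

# A typical operational query: recent failed logins from the indexed log stream.
resp = es.search(
    index="app-logs-view",          # hypothetical index/view name
    query={
        "bool": {
            "must": [
                {"match": {"event": "login_failed"}},
                {"range": {"@timestamp": {"gte": "now-15m"}}},
            ]
        }
    },
    size=100,
)
for hit in resp["hits"]["hits"]:
    print(hit["_source"])
```

The same indexed data could just as well be hit through the SQL access path mentioned above; the point is that both lenses read the single copy of the data sitting in object storage.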
And nobody says I want less data, but everyone has to, whether they say we're going to shorten the window, we're going to use AI to choose, but in a security moment, when you don't have that answer, you're in trouble. And that's why we are this service, this Supercloud service, if you will, providing access, well-known search, well-known SQL type access, that if you just have one access point, you're at a disadvantage. >> We actually talked about Snowflake and BigQuery, and a different platform, Data Bricks. That's kind of where we see the phase two of ecosystem. One is easy, the low-hanging fruit is observability and security firms. But the next one is, what we do, our super power is dealing with this messy data that schema is changing like night and day. Pipelines are tough, and it's changing all the time, but you want these things fast, and it's big data around the world. That's the next point, just use us alongside, or inside, one of their platforms, and now we get the best of both worlds. Our superpower is keeping this messy data as a streaming, okay, not a batch thing, allow you to do that. So, that's the second one. And then to be honest, the third one, which plays you to Supercloud, it also plays perfectly in the data mesh, is if you really go to the ultimate thing, what we have done is made object storage, S3, GCS, and blob storage, we made it a database. Put, get, complex query with big joins. You know, so back to your original thing, and Muglia teed it up perfectly, we've done that. Now imagine if that's an ecosystem, who would want that? If it's, again, it's uniform available across all the regions, across all the clouds, and it's right next to where you are building a service, or a client's trying, that's where the ecosystem, I think people are going to use Superclouds for their superpowers. We're really good at this, allows that short term. I think the Snowflakes and the Data Bricks are the medium term, you know? And then I think eventually gets to, hey, listen if you can make object storage fast, you can just go after it with simple SQL queries, or elastic. Who would want that? I think that's where people are going to leverage it. It's not going to be one Supercloud, and we leverage the super clouds. >> Our viewpoint is smart object storage can be programmable, and so we agree with Bob, but we're not saying do it here, do it here. This core, fundamental layer across regions, across clouds, that everyone has? Simple in. Right now, it's hard to get data in for access for analysis. So we said, simply, we'll automate the entire process, give you API access across regions, across clouds. And again, how do you do a distributed join that's fast? How do you do a distributed join that doesn't cost you an arm or a leg? And how do you do it at scale? And that's where we've been focused. >> So prior, the cloud object store was a niche. >> Yeah. >> S3 obviously changed that. How standard is, essentially, object store across the different cloud platforms? Is that a problem for you? Is that an easy thing to solve? >> Well, let's talk about it. I mean we've fundamentally, yeah we've extracted it, but fundamentally, cloud object storage, put, get, and list. That's why it's so scalable, 'cause it doesn't have all these other components. That complexity is where we have moved up, and provide direct analytical API access. So because of its simplicity, and costs, and security, and reliability, it can scale naturally. 
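As a rough illustration of the put, get, and list primitives Thomas is pointing at, here is a minimal sketch using boto3, the AWS SDK for Python. The bucket and key names are made up, and a real system would layer indexing and analytical APIs on top of these three calls rather than reading objects back one by one.

```python
import boto3

# The entire persistence contract of cloud object storage is three operations:
# put an object, get an object, list objects under a prefix.
s3 = boto3.client("s3")

BUCKET = "example-telemetry-bucket"  # hypothetical bucket name

# put: land a raw log record (or batch) in the bucket
s3.put_object(
    Bucket=BUCKET,
    Key="logs/2023/01/20/app-01.jsonl",
    Body=b'{"level": "INFO", "msg": "user login", "user": "alice"}\n',
)

# list: enumerate everything under a prefix (a day of logs, for example)
listing = s3.list_objects_v2(Bucket=BUCKET, Prefix="logs/2023/01/20/")
keys = [obj["Key"] for obj in listing.get("Contents", [])]

# get: pull an object back down for analysis
for key in keys:
    body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
    print(key, len(body), "bytes")
```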
I mean, really, distributed object storage is easy, it's put-get anywhere, now what we've done is we put a layer of intelligence, you know, call it smart object storage, where access is simple. So whether it's multi-region, do a query across, or multicloud, do a query across, or hunting, searching. >> We've had clients doing Amazon and Google, we have some Azure, but we see Amazon and Google more, and it's a consistent service across all of them. Just literally put your data in the bucket of choice, or folder of choice, click a couple buttons, literally click that to say "that's hot," and after that, it's hot, you can see it. But we're not moving data, the data gravity issue, that's the other. That it's already natively flowing to these pools of object storage across different regions and clouds. We don't move it, we index it right there, we're spinning up stateless compute, back to the Supercloud concept. But now that allows us to do all these other things, right? >> And it's no longer just cheap and deep object storage. Right? >> Yeah, we make it the same, like you have an analytic platform regardless of where you're at, you don't have to worry about that. Yeah, we deal with that, we deal with a stateless compute coming up -- >> And make it programmable. Be able to say, "I want this bucket to provide these answers." Right, that's really the hope, the vision. And the complexity to build the entire stack, and then connect them together, we said, the fabric is cloud storage, we just provide the intelligence on top. >> Let's bring it back to the customers, and one of the things we're exploring in Supercloud too is, you know, is Supercloud a solution looking for a problem? Is a multicloud really a problem? I mean, you hear, you know, a lot of the vendor marketing says, "Oh, it's a disaster, because it's all different across the clouds." And I talked to a lot of customers even as part of Supercloud too, they're like, "Well, I solved that problem by just going mono cloud." Well, but then you're not able to take advantage of a lot of the capabilities and the primitives that, you know, like Google's data, or you like Microsoft's simplicity, their RPA, whatever it is. So what are customers telling you, what are their near term problems that they're trying to solve today, and how are they thinking about the future? >> Listen, it's a real problem. I think it started, I think this is a a mega trend, just like cloud. Just, cloud data, and I always add, analytics, are the mega trends. If you're looking at those, if you're not considering using the Supercloud principles, in other words, leveraging what I have, abstracting it out, and getting the most out of that, and then build value on top, I think you're not going to be able to keep up, In fact, no way you're going to keep up with this data volume. It's a geometric challenge, and you're trying to do linear things. So clients aren't necessarily asking, hey, for Supercloud, but they're really saying, I need to have a better mechanism to simplify this and get value across it, and how do you abstract that out to do that? And that's where they're obviously, our conversations are more amazed what we're able to do, and what they're able to do with our platform, because if you think of what we've done, the S3, or GCS, or object storage, is they can't imagine the ingest, they can't imagine how easy, time to glass, one minute, no matter where it lands in the world, querying this in seconds for hundreds of terabytes squared. 
People are amazed, but that's kind of, so they're not asking for that, but they are amazed. And then when you start talking on it, if you're an enterprise person, you're building a big cloud data platform, or doing data or analytics, if you're not trying to leverage the public clouds, and somehow leverage all of them, and then build on top, then I think you're missing it. So they might not be asking for it, but they're doing it. >> And they're looking for a lens, you mentioned all these different services, how do I bring those together quickly? You know, our viewpoint, our service, is I have all these streams of data, create a lens where they want to go after it via search, go after via SQL, bring them together instantly, no e-tailing out, no define this table, put into this database. We said, let's have a service that creates a lens across all these streams, and then make those connections. I want to take my CRM with my Google AdWords, and maybe my Salesforce, how do I do analysis? Maybe I want to hunt first, maybe I want to join, maybe I want to add another stream to it. And so our viewpoint is, it's so natural to get into these lake platforms and then provide lenses to get that access. >> And they don't want it separate, they don't want something different here, and different there. They want it basically -- >> So this is our industry, right? If something new comes out, remember virtualization came out, "Oh my God, this is so great, it's going to solve all these problems." And all of a sudden it just got to be this big, more complex thing. Same thing with cloud, you know? It started out with S3, and then EC2, and now hundreds and hundreds of different services. So, it's a complex matter for a lot of people, and this creates problems for customers, especially when you got divisions that are using different clouds, and you're saying that the solution, or a solution for the part of the problem, is to really allow the data to stay in place on S3, use that standard, super simple, but then give it what, Ed, you've called superpower a couple of times, to make it fast, make it inexpensive, and allow you to do that across clouds. >> Yeah, yeah. >> I'll give you guys the last word on that. >> No, listen, I think, we think Supercloud allows you to do a lot more. And for us, data, everyone says more data, more problems, more budget issue, everyone knows more data is better, and we show you how to do it cost effectively at scale. And we couldn't have done it without the design principles of we're leveraging the Supercloud to get capabilities, and because we use super, just the object storage, we're able to get these capabilities of ingest, scale, cost effectiveness, and then we built on top of this. In the end, a database is a data platform that allows you to go after everything distributed, and to get one platform for analytics, no matter where it lands, that's where we think the Supercloud concepts are perfect, that's where our clients are seeing it, and we're kind of excited about it. >> Yeah a third generation database, Supercloud database, however we want to phrase it, and make it simple, but provide the value, and make it instant. >> Guys, thanks so much for coming into the studio today, I really thank you for your support of theCUBE, and theCUBE community, it allows us to provide events like this and free content. I really appreciate it. >> Oh, thank you. >> Thank you. >> All right, this is Dave Vellante for John Furrier in theCUBE community, thanks for being with us today. 
You're watching Supercloud 2, keep it right there for more thought provoking discussions around the future of cloud and data. (bright music)
SUMMARY :
Dave Vellante hosts ChaosSearch CEO Ed Walsh and founder, CTO, and Chief Scientist Thomas Hazel at Supercloud 2. They debate whether Supercloud is a platform or an architecture, with Ed calling it a design principle that, done right, delivers outsized customer value. Thomas describes ChaosSearch as a third-generation distributed database built directly on cloud object storage, with no caching or SSDs, that indexes data where it lands and exposes it through search (Elasticsearch API) and SQL. Ed positions the company as a cost-effective, long-retention complement to observability and security platforms such as Datadog, Splunk, and Grafana, a node on the data mesh, and an enabler of Supercloud principles: keep the data in S3, GCS, or blob storage across regions and clouds, make that storage smart and programmable, and query it in place.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Walmart | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
NASDAQ | ORGANIZATION | 0.99+ |
Bob Muglia | PERSON | 0.99+ |
Thomas | PERSON | 0.99+ |
Thomas Hazel | PERSON | 0.99+ |
Ionis Pharmaceuticals | ORGANIZATION | 0.99+ |
Western Union | ORGANIZATION | 0.99+ |
Ed Walsh | PERSON | 0.99+ |
Bob | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Nelu Mihai | PERSON | 0.99+ |
Sachs | ORGANIZATION | 0.99+ |
Tristan Handy | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
two years | QUANTITY | 0.99+ |
Supercloud 2 | TITLE | 0.99+ |
first | QUANTITY | 0.99+ |
Last August | DATE | 0.99+ |
three | QUANTITY | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Snowflake | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
dbt Labs | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Ed | PERSON | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
Jimata Gan | PERSON | 0.99+ |
third one | QUANTITY | 0.99+ |
one minute | QUANTITY | 0.99+ |
second | QUANTITY | 0.99+ |
first generation | QUANTITY | 0.99+ |
third generation | QUANTITY | 0.99+ |
Grafana | ORGANIZATION | 0.99+ |
second generation | QUANTITY | 0.99+ |
second one | QUANTITY | 0.99+ |
hundreds of terabytes | QUANTITY | 0.98+ |
SQL | TITLE | 0.98+ |
five | DATE | 0.98+ |
one | QUANTITY | 0.98+ |
Databricks | ORGANIZATION | 0.98+ |
a year ago | DATE | 0.98+ |
ChaosSearch | ORGANIZATION | 0.98+ |
Muglia | PERSON | 0.98+ |
MySQL | TITLE | 0.98+ |
both worlds | QUANTITY | 0.98+ |
third thing | QUANTITY | 0.97+ |
Marlborough | LOCATION | 0.97+ |
theCUBE | ORGANIZATION | 0.97+ |
today | DATE | 0.97+ |
Supercloud | ORGANIZATION | 0.97+ |
Elasticsearch | TITLE | 0.96+ |
NetApp | TITLE | 0.96+ |
Datadog | ORGANIZATION | 0.96+ |
One | QUANTITY | 0.96+ |
EC2 | TITLE | 0.96+ |
each one | QUANTITY | 0.96+ |
S3 | TITLE | 0.96+ |
one platform | QUANTITY | 0.95+ |
Supercloud 2 | EVENT | 0.95+ |
first read | QUANTITY | 0.95+ |
six years ago | DATE | 0.95+ |
Muddu Sudhakar, Aisera | Supercloud22
(upbeat music) >> Welcome back everyone to Supercloud22, I'm John Furrier, host of theCUBE here in Palo Alto. For this next ecosystem's segment we have Muddu Sudhakar, who is the co-founder and CEO of Aisera, a friend of theCUBE, Cube alumni, serial entrepreneur, multiple exits, been on multiple times with great commentary. Muddu, thank you for coming on, and supporting our- >> Also thank you for having me, John. >> Yeah, thank you. Great handshake there, I love to do it. One, I wanted you here because, two reasons, one is, congratulations on your new funding. >> Thank you. >> For $90 million, Series D funding. >> Series D funding. >> So, huge validation in this market. >> It is. >> You have been experienced software so, it's a real testament to your team. But also, you're kind of in the Supercloud vortex. This new wave that Supercloud is part of is, I call it the pretext to what's coming with multi-clouds. It is the next level. >> I see. >> Structural change and we have been reporting on it, Dave and I, and we are being challenged. So, we decided to open it up. >> Very good, I would love it. >> And have a conversation rather than waiting eight months to prove that we are right. Which, we are right, but that is a long story. >> You're always right. (both laughs) >> What do you think of Supercloud, that's going on? What is the big trend? Because its public cloud is great, so there is no conflict there. >> Right. >> It's got great business, it's integrated, IaaS, to SaaS, PaaS, all in the beginning, or the middle. All that is called good. Now you have on-premise high rate cloud. >> Right. >> Edge is right around the corner. Exploding in new capabilities. So, complexity is still here. >> That's right, I think, you nailed it. We talk about hybrid cloud, and multi cloud. Supercloud is kind of elevates the message even better. Because you still have to leave for some of our clouds, public clouds. There will be some of our clouds, still running on the Edge. That's where, the Edge cloud comes in. Some will still be on-prem. So, the Supercloud as a concept is beyond hybrid and multi cloud. To me, I will run some of our cloud on Amazon. Some could be on Aisera, some could be running only on Edge, right? >> Mm hm >> And we still have, what we call remote executors. Some leaders of service now. You have, what we call the mid-server, is what I think it was called. Where you put in a small code and run it. >> Yeah. >> So, I think all those things will be running on-prem environment and VMware cloud, et cetera. >> And if you look back at, I think it has been five years now, maybe four or five years since Andy Jassy at reInvent announced Outposts. Think that was the moment in time that Dave and I took this pause back and said "Okay, that's Amazon." who listens to their customers. Acknowledging Hybrid. >> Right. >> Then we saw the rise of Snowflakes, the Databricks, specialty clouds. You start to see people who are building on top of AWS. But at MongoDB, it is a database, now they are a full blown, large scale data platform. These companies took advantage of the public cloud to build, as Jerry Chen calls it, "Castles in the cloud." >> Right. >> That seems to be happening in all areas. What do you think about that? >> Right, so what is driving the cloud? To me, we talk about machine learning in AI, right? Versus clouded options. We used to call it lift and shift. The outposts and lift and shift. Initially this was to get the data into the cloud. 
I think if you see, the vendor that I like the most, is, I'm not picking any favorite but, Microsoft Azure, they're thinking like your Supercloud, right? Amazon is other things, but Azure is a lot more because they run on-prem. They are also on Azure CloudFront, Amazon CloudFront. So I think, Azure and Amazon are doing a lot more in the area of Supercloud. What is really helping is the machine learning environment, needs Superclouds. Because I will be running some on the Edge, some compute, some will be running on the public cloud, some could be running on my data center. So, I think the Supercloud is really suited for AI and automation really well. >> Yeah, it is a good point about Microsoft, too. And I think Microsoft's existing install base saved Azure. >> Okay. >> They brought Office 365, Sequel Server, cause their customers weren't leaving Microsoft. They had the productivity thing nailed down as well as the ability to catch up >> That's right. >> To AWS. So, natural extension to on-premise with Microsoft. >> I think... >> Tell us- >> Your Supercloud is what Microsoft did. Right? Azure. If you think of, like, they had an Office 365, their SharePoint, their Dynamics, taking all of those properties, running on the Azure. And still giving the migration path into a data center. Is Supercloud. So, the early days Supercloud came from Azure. >> Well, that's a good point, we will certainly debate that. I will also say that Snowflake built on AWS. >> That's right. >> Okay, and became a super powerhouse with the data business. As did Databricks. >> That's right. >> Then went to Azure >> That's right. >> So, you're seeing kind of the Playbook. >> Right. >> Go fast on Cloud Native, the native cloud. Get that fly wheel going, then get going, somewhere else. >> It is, and to that point I think you and me are talking, right? If you are to start at one cloud and go to another cloud, the amount of work as a vendor for us to use for implement. Today, like we use all three clouds, including the Gov Cloud. It's a lot of work. So, what will happen, the next toolkit we use? Even services like Elastic. People will not, the word commoditize, is not the word, but people will create an abstraction layer, even for S3. >> Explain that, explain that in detail. So, elastic? What do you mean by that? >> Yeah, so what that means is today, Elasticsearch, if you do an Elasticsearch on Amazon, if I go to Azure, I don't want enter another Elasticsearch layer. Ideally I want us to write an abstracted search layer. So, that when I move my services into a different cloud I don't want to re-compute and re-calculate everything. That's a lot of work. Particularly once you have a production customer, if I were to shift the workloads, even to the point of infrastructure, take S3, if I read infrastructure to S3 and tomorrow I go to Azure. Azure will have its own objects store. I don't want to re-validate that. So what will happen is digital component, Kubernetes is already there, we want storage, we want network layer, we want VPM services, elastic as well as all fundamental stuff, including MongoDB, should be abstracted to run. On the Superclouds. >> Okay, well that is a little bit of a unicorn fantasy. But let's break that down. >> Sure. >> Do you think that's possible? >> It is. Because I think, if I am on MongoDB, I should be able to give a horizontal layer to MongoDB that is optimized for all three of them. I don't want MongoDB. >> First of all, everyone will buy that. >> Sure. >> I'm skeptical that that's possible. 
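A minimal sketch of the kind of abstraction layer Muddu is describing, written here in Python as one possible shape rather than any existing product: application code talks to a single object-store interface, and whether the bytes land in S3 or Azure Blob Storage becomes a configuration detail. The class names, bucket, and container values are hypothetical; the underlying calls are standard boto3 and azure-storage-blob.

```python
import boto3
from azure.storage.blob import BlobServiceClient


class ObjectStore:
    """One write/read interface, multiple clouds underneath (sketch only)."""

    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...


class S3Store(ObjectStore):
    def __init__(self, bucket: str):
        self.s3 = boto3.client("s3")
        self.bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self.s3.put_object(Bucket=self.bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self.s3.get_object(Bucket=self.bucket, Key=key)["Body"].read()


class AzureBlobStore(ObjectStore):
    def __init__(self, connection_string: str, container: str):
        self.service = BlobServiceClient.from_connection_string(connection_string)
        self.container = container

    def put(self, key: str, data: bytes) -> None:
        self.service.get_blob_client(self.container, key).upload_blob(data, overwrite=True)

    def get(self, key: str) -> bytes:
        return self.service.get_blob_client(self.container, key).download_blob().readall()


def build_store(cloud: str) -> ObjectStore:
    # The application only ever sees ObjectStore; moving clouds means changing
    # this one factory, not re-validating every call site.
    if cloud == "aws":
        return S3Store(bucket="example-bucket")  # hypothetical
    if cloud == "azure":
        return AzureBlobStore("<connection-string>", "example-container")  # hypothetical
    raise ValueError(f"unsupported cloud: {cloud}")
```

The same idea extends to the search, network, and database layers he lists: keep one interface for the application, swap the backend per cloud.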
Given where we are at right now. So, you're saying that a vendor will provide an abstraction layer. >> No, I'm saying that either MongoDB itself will do it, or a third party layer will come as a service which will abstract all this layer so that we will write to an API layer. >> So what are you guys doing? How do you handle multiple clouds? You guys are taking that burden on, because it makes sense, you should build the abstraction layer. Not rely on a third party vendor right? >> We are doing it because there is no third party available to offer it. But if you offer a third party tomorrow, I will use that as a Supercloud service. >> If they're 100% reliable? >> That's right. That's exactly it. >> They have to do the work. >> They have to do the work because if today I am doing it because no one else is offering it- >> Okay so what people might not know is that you are an angel investor as well as an entrepreneur who's been very successful, so you're rich, you have a lot of money. If I were a startup and I said, Muddu, I want to build this abstraction layer. What would be the funding advice that you would give me as an entrepreneur? As a company to do that? >> I would do it like an Apigee that Google acquired, you should create an Apigee-like layer, for infrastructure upfront services, I think that is a very good option. >> And you think that is viable? >> It is very much viable. >> Would that be part of Supercloud architecture, in your opinion? >> It is. Right? And that will abstract all the clouds to some level. Like it is like Kubernetes abstracts, so that if I am running on Kubernetes I can transfer to any cloud. >> Yeah >> But that should go from compute into other infrastructures. >> It seems to me, Muddu, and I want to get your thoughts about this whole Supercloud de facto standard opportunity. It feels like we are waiting for a moment where there is some sort of de facto unification, whether it is in the abstraction layer, or a standards body. There is no W3C here going on. I mean, W3C was for web consortium, for world wide web. The Supercloud seems to be having the same impact the web had. Transformative, disruptive, re-factoring business operations. Is there a standardized body or an opportunity for a de facto? Like Kubernetes was a great example of a unification around something for orchestration. Is there a better version in the Supercloud model where we need a standard? >> Yes and no. The reason is because by the time you come to standard, it takes time, look at what happened. First, we started with VMs, then became Docker and Containers, then we came to Kubernetes. So it goes through a journey. I think the next few years will be spent on Supercloud: let's make customers happy, let's make enough services going, and then the standards will come. Standards will be almost 2-3 years later. So I don't think standards should happen right now. Right now, all we need is, we need enough startups to create the super layer abstraction, with the goal in mind of AI automation. The reason, AI is because AI needs to be able to run that. Automated because running a workflow is, I can either run a workflow in the cloud services, I can run it on-prem, I can run it on a database, so you have two good applications, take AI and automation with Supercloud and make enough noise on that, make enough applications, then the standards will come. >> On this project we have been with Supercloud these past days, we have heard a lot of people talking. The theme is that developers are okay, they are doing great. 
Open source is booming. >> Yes >> Cloud Native's got major traction. Developers are going fast and they love it, shifting left, all these great things. They're putting a lot of data, DevOps and the security teams, they're the ones who are leveling up. We are hearing a lot of conversations around how they can be faster. What is your view on this as relative to that Supercloud nirvana getting there? How are DevOps and security teams leveling up to devs? >> A couple of things. I think that in the world of DevSecOps and security ops. The reason security is important, right? Given what is going on, but you don't need to do security the manual way. I think that whole new operation that you and me talked about, AI ops should happen. Where the AI ops is for service operation, for performance, for incident or for security. Nobody thinks of AI security. So, the DevOps people should think more world of AI ops, so that I can predict, prevent things before they happen. Then the security will be much better. So AI ops with Supercloud will probably be that nirvana. But that is what should happen. >> In the AI side of things, what you guys are doing, what are you learning, on scale, relative to data? Is there, you said machine learning needs data, it needs scale operation. What's your view on the automation piece of all this? >> I think to me, the data is the single, underrated, unsung kind of hero in the whole machine learning. Everyone talks about AI and machine learning algorithms. Algorithms are as important, but even more important is data. Lack of data I can't do algorithms. So my advice to customers is don't lose your data. That is why I see, Frank, my old boss, setting everything up into the data cloud, in Snowflake. Data is so important, store the data, analyze the data. Data is the new AI. You and me talk so many times- >> Yeah >> It's underrated, people are not anticipating how important it is. But the data is coming from logs, events, whether there is knowledge documents, any data in any form. I think keep the data, analyze the data, data patterns, and then things like SuperCloud can really take advantage of that. >> So, in the Supercloud equation one of the things that has come up is that the native clouds do great. Their IaaS to SaaS is interactions that solve a lot of problems. There is integration that is good. >> Right. >> Now when you go off cloud, you get regions, get latency issues- >> Right >> You have more complexity. So what's the trade off in the Supercloud journey, if you had to guess? And just thinking out loud here, what would be some of the architectural trade offs of how you do it, what's the sequence? What's the order of operations to get Superclouding going? >> Yeah, very good questions here. I think once you start going from the public cloud, the clouds there scale to lets say, even a regional data center onto an Edge, latency will kick in. The lack of computer function will kick in. So there I think everything should become asynchronous, right? You will run the application in a limited environment. You should anticipate for small memories, small compute, long latencies, but still following should happen. So some operations should become the old-school following, like, it's like the email. I send an email, it's an asynchronous thing, I made a sponsor, I think most of message passing should go back to the old-school architectures They should become asynchronous where thing can rely. 
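A toy sketch of the asynchronous, store-and-forward pattern Muddu is pointing at for the edge: the producer never blocks on the slow link back to the cloud, it just queues events locally and a consumer drains them when capacity allows. This is plain asyncio, not any particular edge product, and the numbers are arbitrary.

```python
import asyncio
import random


async def edge_sensor(queue: asyncio.Queue) -> None:
    # The edge never blocks on the cloud; it enqueues and moves on.
    for reading in range(10):
        await queue.put({"sensor": "edge-01", "value": reading})
        await asyncio.sleep(0.1)  # pretend work between readings


async def uploader(queue: asyncio.Queue) -> None:
    # Drains the queue toward the cloud when latency and bandwidth allow;
    # a real system would batch, retry, and persist locally across restarts.
    while True:
        event = await queue.get()
        await asyncio.sleep(random.uniform(0.05, 0.3))  # simulated slow link
        print("uploaded", event)
        queue.task_done()


async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue(maxsize=100)
    consumer = asyncio.create_task(uploader(queue))
    await edge_sensor(queue)
    await queue.join()   # wait until everything has been flushed upstream
    consumer.cancel()


asyncio.run(main())
```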
I think, as long as algorithms can take that into Edge, I think that Superclouds can really bridge between the public cloud and the edge. >> Muddu, thanks for coming, we really appreciate your insights here. You've always been a great friend, great commentator. If you weren't the CEO and a famous angel investor, we would certainly love to have you as a theCUBE analyst, here on theCUBE. >> I am always available for you. (John laughs) >> When you retire, you can come back. Final point, we've got time left. We'll give you a chance to talk about the company. I'm really intrigued by the success of your ninety million dollar financing round because we are in a climate where people aren't getting those kinds of investments. It's usually down-rounds. >> Okay >> 409A adjustments, people are struggling. You got an up-round and you got a big number. Why the success? What is going on with the company? Why are you guys getting such great validation? Goldman Sachs, Thoma Bravo, Zoom, these are big names, these are the next gen winners. >> It is. >> Why are they picking you? Why are they investing in you? >> I think it is not one thing, it is many things. First of all, I think it is a four-year journey for us where we are right now. So, the company started late 2017. It is getting the right customers, partners, employees, team members. So a lot of hard work went in. So a lot of thanks to the Aisera community for where we are. Why customers and where we are? Look, fundamentally there is a problem to solve. Like, what Aisera is trying to solve is can we automate customer service? Whether internal employees, external customer support. Do it for IT, HR, sales, marketing, all the way to ops. Like you talk about DevSecOps, I don't want thousands of tune ups for ops. If I can make that job better, >> Yeah >> I want to, any job I want to automate. I call it, elevate the human, right? >> Yeah. >> And that's the reason- >> 'Cause you're saying people have to learn specialty tools, and there are consequences to that. >> Right, and to me, people should focus on more important tasks and use AI as a tool to automate those things right? It's like thinking of offering Apple Siri or Alexa as a service, that is how we are trying to offer customer service, like, right? And if it can do that consistently, and reduce costs, cost is a big reason why customers like us a lot, we have eliminated the cost in this down economy, I will amplify our message even more, right? I am going to take a bite out of their expense. Whether it is tool expense, it's on resources. Second is user productivity. And finally, experience. People want experience. >> Final question, folks out there, first of all, what do you think about Supercloud? And if someone asks you what is this Supercloud thing? How would you answer? >> Supercloud, is, to me, beyond multi cloud and hybrid cloud. It is to bridge applications that are built in Supercloud so they can run on all clouds seamlessly. You don't need to compile them, re-clear them. Supercloud is one place to build, develop, and deploy. >> Great, Muddu. Thank you for coming on. Supercloud22 here breaking it down with the ecosystem commentary, we have a lot of people coming to the small group of experts in our network, bringing you in open conversation around the future of cloud computing and applications globally. And again, it is all about the next generation cloud. This is theCUBE, thanks for watching. (upbeat music)
SUMMARY :
John Furrier talks with Aisera co-founder and CEO Muddu Sudhakar, fresh off a $90 million Series D, about Supercloud. Muddu sees Supercloud as going beyond hybrid and multicloud: workloads will run on public clouds, at the edge, and on-prem, and machine learning and automation are what drive the need for it. He predicts abstraction layers will emerge above services like object storage, search, and databases so vendors don't have to re-implement for every cloud, argues standards will follow two to three years after working products, and urges DevOps and security teams to adopt AIOps and to treat data as the underrated foundation of machine learning. He closes on Aisera's mission of automating customer service and employee support cost-effectively, built on Supercloud principles.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave | PERSON | 0.99+ |
Frank | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Aisera | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
$90 million | QUANTITY | 0.99+ |
Muddu Sudhakar | PERSON | 0.99+ |
100% | QUANTITY | 0.99+ |
Jerry Chen | PERSON | 0.99+ |
four-year | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Goldman Sachs | ORGANIZATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Muddu | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
five years | QUANTITY | 0.99+ |
eight months | QUANTITY | 0.99+ |
late 2017 | DATE | 0.99+ |
tomorrow | DATE | 0.99+ |
four | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
two reasons | QUANTITY | 0.99+ |
Second | QUANTITY | 0.99+ |
Today | DATE | 0.99+ |
Elasticsearch | TITLE | 0.99+ |
First | QUANTITY | 0.99+ |
MongoDB | TITLE | 0.99+ |
Cube | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
W3C | ORGANIZATION | 0.99+ |
S3 | TITLE | 0.98+ |
One | QUANTITY | 0.98+ |
Office 365 | TITLE | 0.98+ |
Supercloud | ORGANIZATION | 0.98+ |
Elastic | TITLE | 0.98+ |
Databricks | ORGANIZATION | 0.98+ |
Aisera | PERSON | 0.98+ |
theCUBE | ORGANIZATION | 0.98+ |
two good applications | QUANTITY | 0.98+ |
ninety million dollar | QUANTITY | 0.97+ |
thousands | QUANTITY | 0.96+ |
409 adjustments | QUANTITY | 0.96+ |
Dynamics | TITLE | 0.96+ |
single | QUANTITY | 0.96+ |
three | QUANTITY | 0.95+ |
Azure | TITLE | 0.95+ |
SharePoint | TITLE | 0.94+ |
Gov Cloud | TITLE | 0.94+ |
Edge | TITLE | 0.94+ |
Kubernetes | TITLE | 0.94+ |
Zoom | ORGANIZATION | 0.94+ |
one thing | QUANTITY | 0.93+ |
SuperCloud | ORGANIZATION | 0.93+ |
one cloud | QUANTITY | 0.91+ |
Clint Sharp, Cribl | Cube Conversation
(upbeat music) >> Hello, welcome to this CUBE conversation I'm John Furrier your host here in theCUBE in Palo Alto, California, featuring Cribl a hot startup taking over the enterprise when it comes to data pipelining, and we have a CUBE alumni who's the co-founder and CEO, Clint Sharp. Clint, great to see you again, you've been on theCUBE, you were on in 2013, great to see you, congratulations on the company that you co-founded, and leading as the chief executive officer over $200 million in funding, doing this really strong in the enterprise, congratulations, thanks for joining us. >> Hey, thanks John it's really great to be back. >> You know, remember our first conversation the big data wave coming in, Hadoop World 2010, now the cloud comes in, and really the cloud native really takes data to a whole nother level. You're seeing the old data architectures being replaced with cloud scale. So the data landscape is interesting. You know, Data as Code you're hearing that term, data engineering teams are out there, data is everywhere, it's now part of how developers and companies are getting value whether it's real time, or coming out of data lakes, data is more pervasive than ever. Observability is a hot area, there's a zillion companies doing it, what are you guys doing? Where do you fit in the data landscape? >> Yeah, so what I say is that Cribl and our products solve the problem for our customers of the fundamental tension between data growth and budget. And so if you look at IDC's data, data's growing at a 25% CAGR, you're going to have two and a half times the amount of data in five years that you have today, and I talk to a lot of CIOs, I talk to a lot of CISOs, and the thing that I hear repeatedly is my budget is not growing at a 25% CAGR so fundamentally, how do I resolve this tension? We sell very specifically into the observability and security markets, we sell to technology professionals who are operating, you know, observability and security platforms like Splunk, or Elasticsearch, or Datadog, Exabeam, like these types of platforms they're moving, protocols like syslog, they're moving, they have lots of agents deployed on every endpoint and they're trying to figure out how to get the right data to the right place, and fundamentally you know, control cost. And we do that through our product called Stream which is what we call an observability pipeline. It allows you to take all this data, manipulate it in the stream and get it to the right place and fundamentally be able to connect all those things that maybe weren't originally intended to be connected. >> So I want to get into that new architecture if you don't mind, but let me first ask you on the problem space that you're in. So cloud native obviously instrumentating, instrumenting everything is a key thing. You mentioned data got all these tools, is the problem that there's been a sprawl of things being instrumented and they have to bring it together, or it's too costly to run all these point solutions and get it to work? What's the problem space that you're in? >> So I think customers have always been forced to make trade-offs John. So the, hey I have volumes and volumes and volumes of data that's relevant to securing my enterprise, that's relevant to observing and understanding the behavior of my applications but there's never been an approach that allows me to really onboard all of that data. 
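To ground the observability pipeline idea, here is a deliberately generic Python sketch of what one pipeline stage does: drop noisy events, trim bulky fields, and tag each event with where it should be routed. This is not Cribl Stream's actual configuration or API (Stream pipelines are built inside the product), just an illustration of the shaping-and-routing step; the field names and routing rules are invented.

```python
import json

NOISY_LEVELS = {"DEBUG", "TRACE"}            # assumption: little analytic value per byte
DROP_FIELDS = {"raw_headers", "k8s_labels"}  # hypothetical bulky fields


def process(event: dict) -> dict | None:
    """One pipeline stage: drop noise, trim waste, tag the event with destinations."""
    if event.get("level") in NOISY_LEVELS:
        return None                          # filtered out before it ever hits an index
    slim = {k: v for k, v in event.items() if k not in DROP_FIELDS}
    # Route a full-fidelity copy to cheap object storage; only security-relevant
    # events also go to the expensive analysis tier.
    slim["_route"] = ["object_storage"]
    if event.get("sourcetype") == "auth":
        slim["_route"].append("siem")
    return slim


raw = '{"level": "INFO", "sourcetype": "auth", "msg": "failed login", "raw_headers": "x"}'
print(process(json.loads(raw)))
```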
And so where we're coming at is giving them the tools to be able to, you know, filter out noise and waste, to be able to, you know, aggregate this high fidelity telemetry data. There's a lot of growing changes, you talk about cloud native, but digital transformation, you know, the pandemic itself and remote work, all these are driving significantly greater data volumes, and vendors unsurprisingly haven't really been all that aligned to giving customers the tools in order to reshape that data, to filter out noise and waste because, you know, for many of them they're incentivized to get as much data into their platform as possible, whether that's aligned to the customer's interests or not. And so we saw an opportunity to come out and fundamentally as a customers-first company give them the tools that they need, in order to take back control of their data. >> I remember those conversations even going back six years ago, the whole cloud scale, horizontally scalable applications, you're starting to see data now being stuck in the silos, now to have high, good data you have to be observable, which means you got to be addressable. So you now have to have a horizontal data plane if you will. But then you get to the question of, okay, what data do I need at the right time? So is the Data as Code, data engineering discipline changing what new architectures are needed? What changes in the mind of the customer once they realize that they need this new way to pipe data and route data around, or make it available for certain applications? What are the key new changes? >> Yeah, so I think one of the things that we've been seeing in addition to the advent of the observability pipeline that allows you to connect all the things, is also the advent of an observability lake as well. Which is allowing people to store massively greater quantities of data, and also different types of data. So data that might not traditionally fit into a data warehouse, or might not traditionally fit into a data lake architecture, things like deployment artifacts, or things like packet captures. These are binary types of data that, you know, it's not designed to work in a database but yet they want to be able to ask questions like, hey, during the Log4Shell vulnerability, which of all my deployment artifacts actually had Log4j in it in an affected version? These are hard questions to answer in today's enterprise. Or they might need to go back to full fidelity packet capture data to try to understand, you know, a malicious actor's movement throughout the enterprise. 
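As a sketch of the Log4Shell-style question Clint mentions, suppose deployment artifacts had been landed in an object-store lake with the bundled jar names preserved in the object keys. A scan for affected log4j-core versions might then look like the following; the bucket name and key layout are hypothetical, and the version matching is deliberately simplified (CVE-2021-44228 affected log4j-core 2.x releases prior to 2.15.0).

```python
import re
import boto3

s3 = boto3.client("s3")
BUCKET = "example-artifact-lake"   # hypothetical bucket holding deployment artifacts

# Simplified check: flag any log4j-core 2.x below 2.15 found in the key name.
# (A real audit would also inspect archive contents and pre-2.0 naming.)
PATTERN = re.compile(r"log4j-core-2\.(\d+)\.(\d+)")

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix="artifacts/"):
    for obj in page.get("Contents", []):
        m = PATTERN.search(obj["Key"])
        if m and int(m.group(1)) < 15:
            print("affected artifact:", obj["Key"])
```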
And we're not seeing, you know, we're seeing vendors who have great log indexing engines, and great time series databases, but really what people are looking for is the ability to store massive quantities of data, five times, 10 times more data than they're storing today, and they're doing that in places like AWSS3, or in Azure Blob Storage, and we're just now starting to see the advent of technologies we can help them query that data, and technologies that are generally more specifically focused at the type of persona that we sell to which is a security professional, or an IT professional who's trying to understand the behaviors of their applications, and we also find that, you know, general-purpose data processing technologies are great for the enterprise, but they're not working for the people who are running the enterprise, and that's why you're starting to see the concepts like observability pipelines and observability lakes emerge, because they're targeted at these people who have a very unique set of problems that are not being solved by the general-purpose data processing engines. >> It's interesting as you see the evolution of more data volume, more data gravity, then you have these specialty things that need to be engineered for the business. So sounds like observability lake and pipelining of the data, the data pipelining, or stream you call it, these are new things that they bolt into the architecture, right? Because they have business reasons to do it. What's driving that? Sounds like security is one of them. Are there others that are driving this behavior? >> Yeah, I mean it's the need to be able to observe applications and observe end-user behavior at a fine-grain detail. So, I mean I often use examples of like bank teller applications, or perhaps, you know, the app that you're using to, you know, I'm going to be flying in a couple of days. I'll be using their app to understand whether my flight's on time. Am I getting a good experience in that particular application? Answering the question of is Clint getting a good experience requires massive quantities of data, and your application and your service, you know, I'm going to sit there and look at, you know, American Airlines which I'm flying on Thursday, I'm going to be judging them based on off of my experience. I don't care what the average user's experience is I care what my experience is. And if I call them up and I say, hey, and especially for the enterprise usually this is much more for, you know, in-house applications and things like that. They call up their IT department and say, hey, this application is not working well, I don't know what's going on with it, and they can't answer the question of what was my individual experience, they're living with, you know, data that they can afford to store today. And so I think that's why you're starting to see the advent of these new architectures is because digital is so absolutely critical to every company's customer experience, that they're needing to be able to answer questions about an individual user's experience which requires significantly greater volumes of data, and because of significantly greater volumes of data, that requires entirely new approaches to aggregating that data, bringing the data in, and storing that data. >> Talk to me about enabling customer choice when it comes around controlling their data. You mentioned that before we came on camera that you guys are known for choice. How do you enable customer choice and control over their data? 
>> So I think one of the biggest problems I've seen in the industry over the last couple of decades is that vendors come to customers with hugely valuable products that make their lives better, but it also requires them to maintain a relationship with that vendor in order to be able to continue to ask questions of that data. And so customers don't get a lot of optionality in these relationships. They sign multi-year agreements, and then when they want to go try out another vendor, or add new technologies into their stack, they're often left with a choice of, well, do I roll out another agent, do I go touch 10,000 computers, or 100,000 computers, in order to onboard this data? And what we have been able to offer them is the ability to reuse their existing deployed footprints of agents and their existing data collection technologies, to be able to use multiple tools and use the right tool for the right job, and really give them that choice. And not only give them the choice once, but with concepts like the observability lake and replay, they can go back in time and say, you know what? I want to rehydrate all this data into a new tool. I'm no longer locked in to the way one vendor stores this, I can store this data in open formats, and that's one of the coolest things about the observability lake concept: customers are no longer locked in to any particular vendor, the data is stored in open formats, and that gives them the choice to go back later and choose any vendor, because they may want to do some AI or ML on that type of data and do some model training. They may want to forward that data to a new cloud data warehouse, or try a different vendor for log search or a different vendor for time series data. And we're really giving them the choice and the tools to do that in a way which was simply not possible before. >> You know, you're bringing up a point that's a big part of the upcoming AWS startup series Data as Code. The data engineering role has become so important, and the word engineering is a key word in that, but there's not a lot of them, right? So how many data engineers are there on the planet? Hopefully more will come in from these great programs in computer science, but you've got to engineer something, and you're talking about developing on data, you're talking about doing replays and rehydrating, this is developing. So Data as Code is now a reality. How do you see Data as Code evolving from your perspective? Because it implies DevOps. Infrastructure as Code was DevOps; if Data as Code, then you've got DataOps. AIOps has been around for a while. What is Data as Code? And what does that mean to you, Clint? >> I think for our customers it means a number of after-effects that maybe they have not yet been considering. One you mentioned, which is that it's hard to acquire that talent. I think it is also increasingly critical that people who were working in jobs that used to be purely operational are now being forced to learn, you know, developer-centric tooling, things like Git, things like CI/CD pipelines. And that means that there's a lot of education that's going to have to happen, because the vast majority of the people who have been doing things the old way for the last 10 to 20 years, you know, they're going to have to get retrained and retooled.
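Replay, as Clint describes it earlier in this answer, amounts to reading the open-format objects back out of the lake and feeding them to whichever tool you want to try next. A minimal sketch, assuming the gzipped NDJSON layout from the previous example and a hypothetical destination endpoint:

```python
import gzip
import json

import boto3
import requests

BUCKET = "observability-lake"
NEW_TOOL_URL = "https://new-analytics.example.internal/bulk"  # hypothetical destination


def replay(prefix):
    """Rehydrate archived events under a key prefix into a new tool."""
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=prefix):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
            events = [json.loads(line) for line in gzip.decompress(body).splitlines() if line]
            # Because the data sits in an open format, any destination that accepts
            # JSON can consume it; there is no vendor-specific decode step.
            requests.post(NEW_TOOL_URL, json=events, timeout=30)


if __name__ == "__main__":
    replay("raw/")
```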
And I think that one is that's a huge opportunity for people who have that skillset, and I think that they will find that their compensation will be directly correlated to their ability to have those types of skills, but it also represents a massive opportunity for people who can catch this wave and find themselves in a place where they're going to have a significantly better career and more options available to them. >> Yeah and I've been thinking about what you just said about your customer environment having all these different things like Datadog and other agents. Those people that rolled those out can still work there, they don't have to rip and replace and then get new training on the new multiyear enterprise service agreement that some other vendor will sell them. You come in and it sounds like you're saying, hey, stay as you are, use Cribl, we'll have some data engineering capabilities for you, is that right? Is that? >> Yup, you got it. And I think one of the things that's a little bit different about our product and our market John, from kind of general-purpose data processing is for our users they often, they're often responsible for many tools and data engineering is not their full-time job, it's actually something they just need to do now, and so we've really built tool that's designed for your average security professional, your average IT professional, yes, we can utilize the same kind of DataOps techniques that you've been talking about, CI/CD pipelines, GITOps, that sort of stuff, but you don't have to, and if you're really just already familiar with administering a Datadog or a Splunk, you can get started with our product really easily, and it is designed to be able to be approachable to anybody with that type of skillset. >> It's interesting you, when you're talking you've remind me of the big wave that was coming, it's still here, shift left meant security from the beginning. What do you do with data shift up, right, down? Like what do you, what does that mean? Because what you're getting at here is that if you're a developer, you have to deal with data but you don't have to be a data engineer but you can be, right? So we're getting in this new world. Security had that same problem. Had to wait for that group to do things, creating tension on the CI/CD pipelining, so the developers who are building apps had to wait. Now you got shift left, what is data, what's the equivalent of the data version of shift left? >> Yeah so we're actually doing this right now. We just announced a new product a week ago called Cribl Edge. And this is enabling us to move processing of this data rather than doing it centrally in the stream to actually push this processing out to the edge, and to utilize a lot of unused capacity that you're already paying AWS, or paying Azure for, or maybe in your own data center, and utilize that capacity to do the processing rather than having to centralize and aggregate all of this data. So I think we're going to see a really interesting, and left from our side is towards the origination point rather than anything else, and that allows us to really unlock a lot of unused capacity and continue to drive the kind of cost down to make more data addressable back to the original thing we talked about the tension between data growth, if we want to offer more capacity to people, if we want to be able to answer more questions, we need to be able to cost-effectively query a lot more data. >> You guys had great success in the enterprise with what you got going on. 
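One way to picture the edge idea is pre-aggregating at the origination point so that only compact summaries travel over the wire. The sketch below is illustrative only and is not Cribl Edge itself; the input path and roll-up keys are assumptions, and it uses nothing beyond the standard library so it could run on spare capacity at the source.

```python
import json
from collections import Counter
from pathlib import Path

RAW_LOG = Path("/var/log/app/events.ndjson")  # assumed local NDJSON log on the edge node


def rollup():
    """Collapse raw events into per-(service, status) counts before shipping anything."""
    counts = Counter()
    with RAW_LOG.open() as fh:
        for line in fh:
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip partial lines rather than shipping them
            counts[(event.get("service", "unknown"), event.get("status", "unknown"))] += 1

    # The summary is what gets forwarded centrally; the raw lines stay (or age out) locally.
    return [
        {"service": service, "status": status, "count": n}
        for (service, status), n in counts.items()
    ]


if __name__ == "__main__":
    print(json.dumps(rollup(), indent=2))
```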
Obviously the funding is just the scoreboard for that. You got good growth, what are the use cases, or what's the customer look like that's working for you where you're winning, or maybe said differently what pain points are out there the customer might be feeling right now that Cribl could fit in and solve? How would you describe that ideal persona, or environment, or problem, that the customer may have that they say, man, Cribl's a perfect fit? >> Yeah, this is a person who's working on tooling. So they administer a Splunk, or an Elastic, or a Datadog, they may be in a network operations center, a security operation center, they are struggling to get data into their tools, they're always at capacity, their tools always at the redline, they really wish they could do more for the business. They're kind of tired of being this department of no where everybody comes to them and says, "hey, can I get this data in?" And they're like, "I wish, but you know, we're all out of capacity, and you know, we have, we wish we could help you but we frankly can't right now." We help them by routing that data to multiple locations, we help them control costs by eliminating noise and waste, and we've been very successful at that in, you know, logos, like, you know, like a Shutterfly, or a, blanking on names, but we've been very successful in the enterprise, that's not great, and we continue to be successful with major logos inside of government, inside of banking, telco, et cetera. >> So basically it used to be the old hyperscalers, the ones with the data full problem, now everyone's got the, they're full of data and they got to really expand capacity and have more agility and more engineering around contributions of the business sounds like that's what you guys are solving. >> Yup and hopefully we help them do a little bit more with less. And I think that's a key problem for our enterprises, is that there's always a limit on the number of human resources that they have available at their disposal, which is why we try to make the software as easy to use as possible, and make it as widely applicable to those IT and security professionals who are, you know, kind of your run-of-the-mill tools administrator, our product is very approachable for them. >> Clint great to see you on theCUBE here, thanks for coming on. Quick plug for the company, you guys looking for hiring, what's going on? Give a quick update, take 30 seconds to give a plug. >> Yeah, absolutely. We are absolutely hiring cribl.io/jobs, we need people in every function from sales, to marketing, to engineering, to back office, GNA, HR, et cetera. So please check out our job site. If you are interested it in learning more you can go to cribl.io. We've got some great online sandboxes there which will help you educate yourself on the product, our documentation is freely available, you can sign up for up to a terabyte a day on our cloud, go to cribl.cloud and sign up free today. The product's easily accessible, and if you'd like to speak with us we'd love to have you in our community, and you can join the community from cribl.io as well. >> All right, Clint Sharp co-founder and CEO of Cribl, thanks for coming to theCUBE. Great to see you, I'm John Furrier your host thanks for watching. (upbeat music)
Breaking Analysis: Cyber Stocks Caught in the Storm While Private Firms Keep Rising
>> From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> The pandemic precipitated what is shaping up to be a permanent shift in cybersecurity spending patterns. As a direct result of hybrid work, CSOs have vested heavily in endpoint security, identity access management, cloud security, and further hardening the network beyond the headquarters. We've reported on this extensively in this Breaking Analysis series. Moreover, the need to build security into applications from the start rather than bolting protection on as an afterthought has led to vastly high heightened awareness around DevSecOps. Finally, attacking security as a data problem with automation and AI is fueling new innovations in cyber products and services and startups. Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, we present our quarterly findings in the security industry, and share the latest ETR survey data on the spending momentum and market movers. Let's start with the most recent news in cybersecurity. Nary a week goes by without more concerning news. The latest focus in the headlines is, of course, Russia's relentless cyber attacks on critical infrastructure in the Ukraine, including banking, government websites, weaponizing information. The hacker group, BlackByte, put a double whammy on the San Francisco 49ers, meaning they exfiltrated data and they encrypted the organization's files as part of its ransomware attack. Then there's the best Super Bowl ad last Sunday, the Coinbase floating QR code. Did you catch that? As people rushed to scan the code and participate in the Coinbase Bitcoin giveaway, it highlights yet another exposure, meaning we're always told not to click on links that we don't trust or we've never seen, but so many people activated this random QR code on their smartphones that it crashed Coinbase's website. What does that tell you? In other news, Securonix raised a billion dollars. They did this raise on top of Lacework's massive $1.3 billion raise last November. Both of these companies are attacking security with data automation and APIs that can engage machine intelligence. Securonix, specifically in the announcement, mentioned the uptake from MSSPs, managed security service providers, something we've talked about in this series. And that's a trend that we see as increasingly gaining traction as customers are just drawing in and drowning in security incidents. Peter McKay's company, Snyk, acquired Fugue, a company focused on making sure security policies are consistent throughout the software development life cycle. It's a really an example of a developer-defined security approach where policy can be checked at the dev, deployment, and production phases to ensure the same policies are in place at all stages, including monitoring at runtime. Fugue, according to Crunchbase, had raised $85 million to date. In some other company news, Cisco was rumored to be acquiring Splunk for not much more than Splunk is worth today. And the talks reportedly broke down. This would be a major move in security by Cisco and underscores the pressure to consolidate. Cisco would get an extremely strong customer base and through efficiencies could improve Splunk's profitability, but it seems like the premium Cisco was willing to pay was not enough to entice board to act. Splunk board, that is. Datadog blew away its earnings, and the stock was up 12%. 
It's pulled back now, thanks to Putin, but it's one of those companies that is disrupting Splunk. Datadog is less than half the size of Splunk, revenue-wise, but its valuation is more than 2 1/2 times greater. Finally, Elastic, another Splunk disruptor, settled its trademark dispute with AWS, and now AWS will now stop using the name Elasticsearch. All right, let's take a high level look at how cyber companies have performed in the stock market over time. Here's a graph of the Cyber ETF, and you can see the March 1st crosshairs of 2020 signifying the start of the lockdown. The trajectory of cybersecurity stocks is shown by the orange and blue lines, and it surely has steepened post March of 2020. And, of course, it's been down with the market lately, but the run up, as you can see, was substantial and eclipsed the trajectory of the previous cycles over the last couple of years, owing much of the momentum to the spending dynamics that we talked about at our open. Let's now drill into some of the names that we've been following over the last few years and take a look at the firm level. This chart shows some data that we've been tracking since before the pandemic. The top rows show the S&P 500 and the NASDAQ prices, and the bottom rows show specific stocks. The first column is the index price or the market cap of the company just before the pandemic, then the same data one year later. Then the next column shows the peak value during the pandemic, and then the current value. Then it shows in the next column where it is today, in percentage terms, i.e., how far has it pulled back from the peak, then the delta from pre-pandemic, in other words, how much did the issue earn or lose during the pandemic for investors? We then compare the pre-pandemic revenue multiple using a trailing 12-month revenue metric. Sorry, that's what we used. It's easy to get. (laughs) And that's the revenue multiple compared to the August in 2020, when multiples were really high, and where they are today, and then a recent quarterly growth rate guide based on the last earnings report. That's the last column. Okay, so I'm throwing a lot of data at you here, but what does it tell us? First, the S&P and the NAS are well up from pre-pandemic levels, yet they're off 9% and 15%, respectively, from their peaks today. That was earlier on Friday morning. Now let's look at the names more closely. Splunk has been struggling. It definitely had a tailwind from the pandemic as all boats seem to rise, but its execution has been lacking. It's now 30% off from its pre-pandemic levels. (groans) And it's multiple is compressing, and perhaps Cisco thought it could pick up the company for a discount. Now let's talk about Palo Alto Networks. We had reported on some of the challenges the company faced moving into a cloud-friendly model. that was before the pandemic. And we talked about the divergence between Palo Alto's stock price and the valuations relative to Fortinet, and we said at the time, we fully expected Palo Alto to rebound, and that's exactly what happened. It rode the tailwinds of the last two years. It's up over 100% from its pre-COVID levels, and its revenue multiple is expanding, owing to the nice growth rates. Now Fortinet had been doing well coming into the pandemic. In fact, we said it was executing on a cloud strategy better than Palo Alto Networks, hence that divergence in valuations at the time. So it didn't get as much of a boost from the pandemic. 
Didn't get that momentum at first, but the company's been executing very well. And as you can see, with 155% increase in valuation since just before the pandemic, it's going more than okay for Fortinet. Now, Okta is a name that we've really followed closely, the identity access management specialist that rocketed. But since it's Auth0 acquisition, it's pulled back. Investors are concerned about its guidance and its profitability. And several analyst have downgraded their price targets on Okta. We still really like the company. The Auth0 acquisition gives Okta a developer vector, and we think the company is going hard after market presence and is willing to sacrifice short-term profitability. We actually like that posture. It's very Frank Slupin-like. This company spends a lot of money on R&D and go-to-market. The question is, does Okta have inherent profitability? The company, as they say, spends a ton in some really key areas but it looks to us like it's going to establish a footprint. It's guiding revenue CAGR in the mid-30s over the mid to long-term and near term should beat that benchmark handily. But you can see the red highlights on Okta. And even though Okta is up 59% from its pre-pandemic levels, it's far behind its peers shown in the chart, especially CrowdStrike and Zscaler, the latter being somewhat less impacted by the pullback in stocks recently, of course, due to the fears of inflation and interest rates, and, of course, Russian invasion escalation. But these high flyers, they were bound to pull back. The question is can they maintain their category leadership? And for the most part, we think they can. All right, let's get into some of the ETR data. Here's our favorite XY view with net score, or spending momentum on the Y-axis, and market share or pervasiveness in the data center on the horizontal axis. That red 40% line, that indicates a highly elevated spending level. And the chart inserts to the right, that shows how the data is plotted with net score and shared N in each of the columns by each company. Okay, so this is an eye chart, but there really are three main takeaways. One is that it's a crowded market. And this shows only the companies ETR captures in its survey. We filtered on those that had more than 50 mentions. So there's others in the ETR survey that we're not showing here, and there are many more out there which don't get reported in the spending data in the ETR survey. Secondly, there are a lot of companies above the 40% mark, and plenty with respectable net scores just below. Third, check out SentinelOne, Elastic, Tanium, Datadog, Netskope, and Darktrace. Each has under 100 N's but we're watching these companies closely. They're popping up in the survey, and they're catching our attention, especially SentinelOne, post-IPO. So we wanted to pare this back a bit and filter the data some more. So let's look at companies with more than 100 mentions in the same chart. It gets a little cleaner this picture, but it's still crowded. Auth0 leads everyone in net score. Okta is also up there, so that's very positive sign since they had just acquired Auth0. CrowdStrike SalePoint, Cyberark, CloudFlare, and Zscaler are all right up there as well. And then there's the bigger security companies. Palo Alto Network, very impressive because it's well above the 40% mark, and it has a big presence in the survey, and, of course, in the market. And Microsoft as well. They're such a big whale. They skew the data for everybody else to kind of mess up these charts. 
And the position of Cisco and Splunk make for an interesting combination. They get both decent net scores, not above the 40% line but they got a good presence in the survey as well. Thinking about the acquisition, Al Shugart was the CEO of of Seagate, and founder. Brilliant Silicon valley icon and engineer. Great business person. I was asking him one time, hey, you thinking about buying this company or that company? And of course, he's not going to tell me who he's thinking about buying or acquiring. He said, let me just tell you this. If you want to know what I'm thinking, ask yourself if it were free, would you take it? And he said the answer's not always obviously yes, because acquisitions can be messy and disruptive. In the case of Cisco and Splunk, I think the answer would be a definitive yes It would expand Cisco's portfolio and make it the leader in security, with an opportunity to bring greater operating leverage to Splunk. Cisco's just got to pay more if it wants that asset. It's got to pay more than the supposed $20 billion offer that it made. It's going to have to get kind of probably north of 23 billion. I pinged my ETR colleague, Erik Bradley, on this, and he generally agreed. He's very close to the security space. He said, Splunk isn't growing the customer base but the customers are sticky. I totally agree. Cisco could roll Splunk into its security suite. Splunk is the leader in that space, security information and event management, and Cisco really is missing that piece of the pie. All right, let's filter the data even more and look at some of the companies that have moved in the survey over the past year and a half. We'll go back here to July 2020. Same two-dimensional chart. And we're isolating here Auth0, Okta, SalePoint CrowdStrike, Zscaler, Cyberark, Fortinet, and Cisco. No Microsoft. That cleans up the chart. Okay, why these firms? Because they've made some major moves to the right, and some even up since last July. And that's what this next chart shows. Here's the data from the January 2022 survey. The arrow start points show the position that we just showed you earlier in July 2020, and all these players have made major moves to the right. How come? Well, it's likely a combination of strong execution, and the fact that security is on the radar of every CEO, CIO, of course, CSOs, business heads, boards of directors. Everyone is thinking about security. The market momentum is there, especially for the leaders. And it's quite tremendous. All right, let's now look at what's become a bit of a tradition with Breaking Analysis, and look at the firms that have earned four stars. Four-star firms are leaders in the ETR survey that demonstrate both a large presence, that's that X-axis that we showed you, and elevated spending momentum. Now in this chart, we filter the N's. Has to be greater than 100. And we isolate on those companies. So more than 100 responses in the survey. On the left-hand side of the chart, we sort by net score or spending velocity. On the right-hand side, we sort by shared N's or presence in the dataset. We show the top 20 for each of the categories. And the red line shows the top 10 cutoffs. Companies that show up in the top 10 for both spending momentum and presence in the data set earn four stars. If they show up in one, and make the top 10 in one, and make the top 20 in the other, they get two stars. And we've added a one-star category as honorable mention for those companies that make the top 20 in both categories. 
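The star methodology is simple enough to state in code. The scoring rule below comes straight from the description above; the vendor lists are placeholder orderings for illustration, not the actual ETR survey results.

```python
def star_rating(vendor, by_net_score, by_shared_n):
    """Top 10 in both rankings earns 4 stars, top 10 in one plus top 20 in the
    other earns 2 stars, top 20 in both earns 1 star (honorable mention)."""
    if vendor not in by_net_score or vendor not in by_shared_n:
        return 0
    ns = by_net_score.index(vendor)
    sn = by_shared_n.index(vendor)
    if ns < 10 and sn < 10:
        return 4
    if (ns < 10 and sn < 20) or (sn < 10 and ns < 20):
        return 2
    if ns < 20 and sn < 20:
        return 1
    return 0


# Placeholder orderings only; swap in the survey's net score and shared N rankings.
by_net_score = ["VendorA", "VendorB", "VendorC"] + [f"Vendor{i}" for i in range(4, 26)]
by_shared_n = ["VendorB", "VendorA", "VendorD"] + [f"Vendor{i}" for i in range(4, 26)]

print(star_rating("VendorA", by_net_score, by_shared_n))  # prints 4
```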
Microsoft, Palo Alto Networks, CrowdStrike, and Okta make the four-star grade. Okta makes it even without Auth0, which has the number one net score in this data set with 115 shared N to boot. So you can add that to Okta. The weighted average would pull Okta's net score to just above Cyberark's into fourth place. And its shared N would bump Okta up to third place on the right-hand side of the chart Cisco, Splunk, Proofpoint, KnowBe4, Zscaler, and Cyberark get two stars. And then you can see the honorable mentions with one star. Now thinking about a Cisco, Splunk combination. You'd get an entity with a net score in the mid-20s. Yeah, not too bad, definitely respectable. But they'd be number one on the right-hand side of this chart, with the largest market presence in the survey by far. Okay, let's wrap. The trends around hybrid work, cloud migration and the attacker escalation that continue to drive cybersecurity momentum and they're going to do so indefinitely. And we've got some bullet points here that you're seeing private companies, (laughs) they're picking up gobs of money, which really speaks to the fact that there's no silver bullet in this market. It's complex, chaotic, and cash-rich. This idea of MSSPs on the rise is going to continue, we think. About half the mid-size and large organization in the US don't have a SecOps, a security operation center, and outsourcing to one that can be tapped on a consumption basis, cloud-like, as a service just makes sense to us. We see the momentum that companies that we've highlighted over the many quarters of Breaking Analysis are forming. They're forming a strong base in the market. They're going for market share and footprint, and they're focusing on growth, at bringing in new talent. They have good balance sheets and strong management teams and we think they'll be leading companies in the future, Zscaler, CrowdStrike, Okta, SentinelOne, Cyberark, SalePoint, over time, joining the ranks of billion dollar cyber firms, when I say billion dollar, billion dollar revenue like Palo Alto Networks, Fortinet, and Splunk, if it doesn't get acquired. These independent firms that really focus on security. Which underscores the pressure and consolidation and M&A in the whole space. It's almost assured with the fragmentation of companies and so many new entrants fighting for escape velocity that this market is going to continue with robust M&A and consolidation. Okay, that's it for today. Thanks to my colleague, Stephanie Chan, who helped research this week's topics, and Alex Myerson on the production team. He also manages the Breaking Analysis podcast. Kristen Martin and Cheryl Knight, who get the word out. Thank you to all. Remember these episodes are all available as podcasts wherever you listen. All you do is search Breaking Analysis podcast. Check out ETR's website at etr.ai. We also publish a full report every week on wikibon.com and siliconangle.com. You can email me at david.vellante@siliconangle.com. @dvellante is my DM. Comment on our LinkedIn posts. This is Dave Vellante for theCUBE Insights powered by ETR. Have a great week. Be safe, be well, and we'll see you next time. (upbeat music)
Mark Hill, Digital River and Dave Vellante with closing thoughts
(upbeat music) >> Dave Vellante: Okay. We're back with Mark Hill. who's the Director of IT Operations at Digital River. Mark. Welcome to the cube. Good to see you. Thanks for having me. I really appreciate it. >> Hey, tell us a little bit more about Digital River, people know you as a, a payment platform, you've got marketing expertise. How do you differentiate from other e-commerce platforms? >> Well, I don't think people realize it, but Digital River was founded about 27 years ago. Primarily as a one-stop shop for e-commerce right? And so we offered site development, hosting, order management, fraud, expert controls, tax, um, physical and digital fulfillment, as well as multilingual customer service, advanced reporting and email marketing campaigns, right? So it was really just kind of a broad base for e-commerce. People could just go there. Didn't have to worry about anything. What we found over time as e-commerce has matured, we've really pivoted to a more focused API offering, specializing in just our global seller services. And to us that means payment, fraud, tax, and compliance management. So our, our global footprint allows companies to outsource that risk management and expand their markets internationally, um very quickly. And with low cost of entry. >> Yeah. It's an awesome business. And, you know, to your point, you were founded way before there was such a thing as the modern cloud, and yet you're a cloud native business. >> Yeah. >> Which I think talks to the fact that, that incumbents can evolve. They can reinvent themselves from a technology perspective. I wonder if you could first paint a picture of, of how you use the cloud, you use AWS, you know, I'm sure you got S3 in there. Maybe we could talk about that a little bit. >> Yeah, exactly. So when I think of a cloud native business, you kind of go back to the history. Well, 27 years ago, there wasn't a cloud, right? There wasn't any public infrastructure. It was, we basically stood our own data center up in a warehouse. And so over our history, we've managed our own infrastructure and collocated data centers over time through acquisitions and just how things worked. You know those over 10 data centers globally. for us it was expensive, well from a software hardware perspective, as well as, you know, getting the operational teams and expertise up to up to speed too. So, and it was really difficult to maintain and ultimately not core to our business, right? Nowhere in our mission statement, does it say that we're our goal is to manage data centers? So, so about five years ago, we started the journey from our hosted into AWS. It was a hundred percent lift it and shift plan, and we were able to bleed that migration a little over two years, right. Amazon really just fit for us. It was a natural, a natural place for us to land and they made it really easy here for us to not to say it wasn't difficult, but, but once in the public cloud, we really adopted a cloud first vision. Meaning that we'll not only consume their infrastructure as the service, but we'll also purposely evaluate and migrate to software as a service. So I come from a database background. So an example would be migrating from self deployed and managed relational databases over to AWS RDS, relational database service. You know, you're able to utilize the backups, the standby and the patching tools. Automagically, you know, with a click of the button. And that's pretty cool. 
And so we moved away from the time consuming operational tasks and really put our resources into revenue and generating new products, you know, like pivoting to an API offering. I always like to say that we stopped being busy and started being productive. >> Ha ha. I love that. >> That's really what the cloud has done for us. >> Is that what you mean by cloud native? I mean, being able to take advantage of those primitives and native APIs. So what does that mean for your business? >> Yeah, exactly. I think, well, the first step for us was just to consume the infrastructure, right, but now we're looking at the targeted services that they have in there too. So, you know, we have our data stream of services. So log analytics, for example, we used to put it locally on the machine. Now we're just dumping it into an S3 bucket and we're using Kinesis to consume that data, put it in Elastic and go from there. And none of those services are managed by Digital River. We're just utilizing the capabilities that AWS has there. >> And as an e-commerce player, a retail company, were you ever concerned about moving to AWS as a possible competitor, or did you look at other clouds? What can you tell us about that? >> Yeah. And so I think e-commerce has really matured, right? And so we got squeezed out by the Amazons of the world. It's just not something that we were doing, but we had a really good area of expertise with our global seller services. So we evaluated Microsoft, we evaluated AWS as well as Google. And, you know, back when we did that, Microsoft was Windows-based, Google was just coming into the picture and really didn't fit for what we were doing, but Amazon was just a natural fit. So we made a business decision, right? It was financially really the best decision for us. And so we didn't really put our feelings into it, right? We just had to move forward, and it's better than where we were at. And we've been delighted, actually. >> Yeah. It makes sense. Best cloud, best tech. >> Yeah. >> Yeah. I want to talk about ChaosSearch. A lot of people describe it as a data lake for log analytics. Do you agree with that? You know, what does that even mean? >> Well, from our perspective, the self-managed solutions were costly and difficult to maintain, you know, we had older versions of self-deployed Splunk, other things like that, too. So over time, we made a conscious decision to limit our data retention to generally seven days, but in a lot of cases it was zero. We just couldn't consume that log data because of the cost, intimidating in itself, and because of this limit, you know, we've lost important data points used for incident triage, problem management, trending, and other things too. So ChaosSearch has offered us a manageable and cost-effective opportunity to store months, or even years, of data that we can use for operations, as well as trending and automation. And really the big thing that we're pushing into is an event-driven architecture so that we can proactively manage our services. >> Yeah. You mentioned Elastic, and I know I've talked to people who use the ELK Stack. They say there's this exponential growth in the amount of data, so you have to cut it off at whatever, I think you said seven days or less, you're saying you're not finding that with ChaosSearch? >> Yeah. Yeah, exactly. And that was one of the huge benefits here too.
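As a rough illustration of the "dump it into S3, consume it with Kinesis, put it in Elastic" flow Mark mentions, here is a minimal single-shard consumer. The stream name, index name, and endpoint are assumptions, and a production consumer would use the Kinesis Client Library or enhanced fan-out rather than polling one shard like this.

```python
import json
import time

import boto3
import requests

STREAM = "log-events"                            # assumed Kinesis stream name
ES_URL = "https://search.example.internal:9200"  # assumed Elasticsearch-compatible endpoint
INDEX = "app-logs"


def consume_one_shard():
    """Poll a single shard from TRIM_HORIZON and index each record as a JSON document."""
    kinesis = boto3.client("kinesis")
    shard_id = kinesis.list_shards(StreamName=STREAM)["Shards"][0]["ShardId"]
    iterator = kinesis.get_shard_iterator(
        StreamName=STREAM, ShardId=shard_id, ShardIteratorType="TRIM_HORIZON"
    )["ShardIterator"]

    while iterator:
        out = kinesis.get_records(ShardIterator=iterator, Limit=500)
        for record in out["Records"]:
            doc = json.loads(record["Data"])     # boto3 returns the payload as bytes
            requests.post(f"{ES_URL}/{INDEX}/_doc", json=doc, timeout=10)
        iterator = out.get("NextShardIterator")
        time.sleep(1)                            # simple throttle between polls


if __name__ == "__main__":
    consume_one_shard()
```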
So, you know, we were losing out if there was a lower priority incident, for example, and people didn't get to it until eight, nine days later. Well, all the breadcrumbs are gone. So it was really just kind of a best guess or the incident really wasn't resolved. We didn't find a root cause. >> Yeah. Like my video camera down there. My, you know, my other house, somebody breaks in and I don't find out for, for two weeks and then the video's gone. That kind of same thing. >> Yep So, so, so how do you, can you give us some more detail on how you use your data lake and ChaosSearch specifically? >> Yeah, yeah. Yep. And, and so there's, there's many different areas, but what we found is we were able to easily consolidate data from multiple regions, into a single pane of glass to our customers. So internally and externally, you know, it relieves us of that operational support for the data extract transformation load process, right? It offered us also a seamless transition for the users, who were familiar with ElasticSearch, right? It wasn't, it wasn't difficult to move over. And so all these are a lot of selling points, benefits. And, and so now that we have all this data that we're able to, to capture and utilize, it gives us an opportunity to use machine learning, predictive analysis. And like I said, you know, driving to an event driven architecture. >> Okay. >> So that's, that's really what it's offered. And it's, it's been a huge benefit. >> So you're saying that you can speak the language of Elastic. You don't have to move the data out of an S3 bucket and you can scale more easily. Is that right? >> Yeah, yeah, absolutely. And, so for us, just because we're running in multiple regions to drive more high availability, having that data available from multiple regions in a single pane of glass or a single way to utilize it, is a huge benefit as well. Just, you know, not to mention actually having the data. >> What was the initial catalyst to sort of rethink what you were doing with log analytics? Was it cost? Was it flexibility? Scale? >> There was, I think all of those went into it. One of the main drivers. So, so last year we had a huge project, so we have our ELK Stack and it's probably from a decade ago, right? And, you know, a version point oh two or something, you know, anyways, it's a very old, and we went through a whole project to get that upgraded and migrated over. And it was just, we found it impossible internally to do, right? And so this was a method for us to get out of that business, to get rid of the security risks, the support risk, and have a way for people to easily migrate over. And it was just a nightmare here, consolidating the data across regions. And so that was, that was a huge thing, but yeah, it was also been the cost, right? It was, we were finding it cheaper to use ChaosSearch and have more data available versus what we're doing currently in AWS. >> Got it. I wonder if you could, you could share maybe any stories that you have or examples that, that underscore the impact that this approach to analytics is having on your business, maybe your team's everyday activities, any, any metrics you can provide or even just anecdotal information. >> Yeah. Yeah. And, and I think, you know, one coming from an Oracle background here, so Digital River historically has been an Oracle shop, right? And we've been developing a reporting and analytics environment on Oracle and that's complicated and expensive, right? 
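Because the interface stays Elasticsearch-familiar, as Mark notes, a query against the S3-resident data can look like a query an ELK user already writes. The endpoint, index pattern, and field names below are illustrative assumptions, not Digital River's actual configuration; the look-back window is the kind of 30-day question a seven-day retention limit used to rule out.

```python
import requests

SEARCH_URL = "https://search.example.internal"  # assumed Elasticsearch-compatible endpoint
INDEX = "app-logs-*"

# Standard Elasticsearch query DSL: errors for one service over the last 30 days, bucketed per day.
query = {
    "size": 0,
    "query": {
        "bool": {
            "filter": [
                {"term": {"service": "checkout"}},
                {"term": {"level": "ERROR"}},
                {"range": {"@timestamp": {"gte": "now-30d/d"}}},
            ]
        }
    },
    "aggs": {
        "per_day": {"date_histogram": {"field": "@timestamp", "calendar_interval": "day"}}
    },
}

resp = requests.post(f"{SEARCH_URL}/{INDEX}/_search", json=query, timeout=30)
for bucket in resp.json()["aggregations"]["per_day"]["buckets"]:
    print(bucket["key_as_string"], bucket["doc_count"])
```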
We had to use advance features in Oracle, like partitioning materialized views, and bring in other supporting software like Informatica, Hyperion, Sbase, right? And all of these required our large team with a wide set of expertise into these separate focus areas, right? And the amount of data that we were pushing at the ChaosSearch would simply have overwhelmed this legacy method for data analysis than a relational database, right? In that dimension, the human toll of, of the stress of supporting that Oracle environment, meant that a 24 by seven by 365 environment, you know, which requires little or no downtime. So, just that alone, it's a huge thing. So it's allowed us to break away from Oracle, it's allowed us to use new technologies that make sense to solve business solutions. >> I, you know, ChaosSearch is really interesting company to me. I'm sure like me, you see a lot of startups, I'm sure they're knocking on your door every day. And I always like to say, okay, where are they going after? Are they going after a big market? How are they getting product market fit? And it seems like ChaosSearch has really looked at, hard at log analytics and kind of maybe disrupting the ELK Stack. But I see, you know, other potential use cases, you know, beyond analyzing logs. I wonder if you agree, are there other use cases that you see in your future? >> Yeah, exactly. So I think there's, one area would be Splunk, for example, we have that here too. So we use Splunk versus, you know, flat file analysis or other ways to, to capture that data just because from a PCI perspective, it needs to be secured for our compliance and certification, right? So ChaosSearch allows us to do that. There's different types of authentication. Um, really a hodgepodge of authentication that we used in our old environment, but ChaosSearch has a more easily usable one, One that we could set up, one that can really segregate the data and allow us to satisfy our PCR requirements too. So, but Splunk, but I think really deprecating all of our ElasticSearch environments are homegrown ones, but then also taking a hard look at what we're doing with relational databases, right? 27 years ago, there was only relational databases; Oracle and Sequel Server. So we we've been logging into those types of databases and that's not, cost-effective, it's not supportable. And so really getting away from that and putting the data where it belongs and that was easily accessible in a secure environment and allowing us to, to push our business forward. >> Yep. When you say, where the data belongs, right? It sounds like you're putting it in the bit bucket, S3, leaving it there, because it's the the most cost-effective way to do it and then sort of adding value on top of it. That's, what's interesting about ChaosSearch to me. >> Yeah, exactly. Yup. Yup. Versus the high priced storage, you know, that you have to use for a relational database, you know, and not to mention that the standbys, the backups. So, you know, you're duplicating, triplicating all this data too in an expensive manner, so yeah. Yeah. >> Yeah. Copy. Create. Moving data around and it gets expensive. It's funny when you say about databases, it's true. But database used to be such a boring market. Now it's exploded. Then you had the whole no Sequel movement and Sequel, Sequel became the killer app. You know, it's like full circle, right? >> Yeah, exactly. >> Well, anyway, good stuff, Mark, really, really appreciate you coming on the Cube and, and sharing your perspectives. 
We'd love to have you back in the future. >> Oh yeah, no problem. Thanks for having me. I really appreciate it. (upbeat music) >> Okay. So that's a wrap. You know, we're seeing a new era in data and analytics. For example, we're moving from a world where data lives in a cloud object store and needs to be extracted, moved into a new data store, transformed, cleansed, structured into a schema, and then analyzed. This cumbersome and expensive process is being revolutionized by companies like ChaosSearch that leave the data in place and then interact with it in a multi-lingual fashion with tooling, that's familiar to analytic pros. You know, I see a lot of potential for this technology beyond just login analytics use cases, but that's a good place to start. You know, really, if I project out into the future, we see a trend of the global data mesh, really taking hold where a data warehouse or data hub or a data lake or an S3 bucket is just a discoverable node on that mesh. And that's governed by an automated computational processes. And I do see ChaosSearch as an enabler of this vision, you know, but for now, if you're struggling to scale with existing tools or you're forced to limit your attention because data is exploding at too rapid a pace, you might want to check these guys out. You can schedule a demo just by clicking the button on the site to do that. Or stop by the ChaosSearch booth at AWS Reinvent. The Cube is going to also be there. We'll have two sets, a hundred guests. I'm Dave Volante. You're watching the Cube, your leader in high-tech coverage.
Steven Mih, Ahana and Sachin Nayyar, Securonix | AWS Startup Showcase
>> Voiceover: From theCUBE's Studios in Palo Alto in Boston, connecting with thought leaders all around the world, this is theCUBE Conversation. >> Welcome back to theCUBE's coverage of the AWS Startup Showcase. Next Big Thing in AI, Security and Life Sciences featuring Ahana for the AI Trek. I'm your host, John Furrier. Today, we're joined by two great guests, Steven Mih, Ahana CEO, and Sachin Nayyar, Securonix CEO. Gentlemen, thanks for coming on theCUBE. We're talking about the Next-Gen technologies on AI, Open Data Lakes, et cetera. Thanks for coming on. >> Thanks for having us, John. >> Thanks, John. >> What a great line up here. >> Sachin: Thanks, Steven. >> Great, great stuff. Sachin, let's get in and talk about your company, Securonix. What do you guys do? Take us through, I know you've got a slide to help us through this, I want to introduce your stuff first then jump in with Steven. >> Absolutely. Thanks again, Steven. Ahana team for having us on the show. So Securonix, we started the company in 2010. We are the leader in security analytics and response capability for the cybermarket. So basically, this is a category of solutions called SIEM, Security Incident and Event Management. We are the quadrant leaders in Gartner, we now have about 500 customers today and have been plugging away since 2010. Started the company just really focused on analytics using machine learning and an advanced analytics to really find the needle in the haystack, then moved from there to needle in the needle stack using more algorithms, analysis of analysis. And then kind of, I evolved the company to run on cloud and become sort of the biggest security data lake on cloud and provide all the analytics to help companies with their insider threat, cyber threat, cloud solutions, application threats, emerging internally and externally, and then response and have a great partnership with Ahana as well as with AWS. So looking forward to this session, thank you. >> Awesome. I can't wait to hear the news on that Next-Gen SIEM leadership. Steven, Ahana, talk about what's going on with you guys, give us the update, a lot of stuff happening. >> Yeah. Great to be here and thanks for that such, and we appreciate the partnership as well with both Securonix and AWS. Ahana is the open source company based on PrestoDB, which is a project that came out of Facebook and is widely used, one of the fastest growing projects in data analytics today. And we make a managed service for Presto easily on AWS, all cloud native. And we'll be talking about that more during the show. Really excited to be here. We believe in open source. We believe in all the challenges of having data in the cloud and making it easy to use. So thanks for having us again. >> And looking forward to digging into that managed service and why that's been so successful. Looking forward to that. Let's get into the Securonix Next-Gen SIEM leadership first. Let's share the journey towards what you guys are doing here. As the Open Data Lakes on AWS has been a hot topic, the success of data in the cloud, no doubt is on everyone's mind especially with the edge coming. It's just, I mean, just incredible growth. Take us through Sachin, what do you guys got going on? >> Absolutely. Thanks, John. We are hearing about cyber threats every day. No question about it. So in the past, what was happening is companies, what we have done as enterprise is put all of our eggs in the basket of solutions that were evaluating the network data. 
With cloud, obviously there is no more network data. Now we have moved into focusing on EDR, right thing to do on endpoint detection. But with that, we also need security analytics across on-premise and cloud. And your other solutions like your OT, IOT, your mobile, bringing it all together into a security data lake and then running purpose built analytics on top of that, and then having a response so we can prevent some of these things from happening or detect them in real time versus innovating for hours or weeks and months, which is is obviously too late. So with some of the recent events happening around colonial and others, we all know cybersecurity is on top of everybody's mind. First and foremost, I also want to. >> Steven: (indistinct) slide one and that's all based off on top of the data lake, right? >> Sachin: Yes, absolutely. Absolutely. So before we go into on Securonix, I also want to congratulate everything going on with the new cyber initiatives with our government and just really excited to see some of the things that the government is also doing in this space to bring, to have stronger regulation and bring together the government and the private sector. From a Securonix perspective, today, we have one third of the fortune 500 companies using our technology. In addition, there are hundreds of small and medium sized companies that rely on Securonix for their cyber protection. So what we do is, again, we are running the solution on cloud, and that is very important. It is not just important for hosting, but in the space of cybersecurity, you need to have a solution, which is not, so where we can update the threat models and we can use the intelligence or the Intel that we gather from our customers, partners, and industry experts and roll it out to our customers within seconds and minutes, because the game is real time in cybersecurity. And that you can only do in cloud where you have the complete telemetry and access to these environments. When we go on-premise traditionally, what you will see is customers are even thinking about pushing the threat models through their standard Dev test life cycle management, and which is just completely defeating the purpose. So in any event, Securonix on the cloud brings together all the data, then runs purpose-built analytics on it. Helps you find very few, we are today pulling in several million events per second from our customers, and we provide just a very small handful of events and reduce the false positives so that people can focus on them. Their security command center can focus on that and then configure response actions on top of that. So we can take action for known issues and have intelligence in all the layers. So that's kind of what the Securonix is focused on. >> Steven, he just brought up, probably the most important story in technology right now. That's ransomware more than, first of all, cybersecurity in general, but ransomware, he mentioned some of the government efforts. Some are saying that the ransomware marketplace is bigger than some governments, nation state governments. There's a business model behind it. It's highly active. It's dominating the scene and it's a real threat. This is the new world we're living in, cloud creates the refactoring capabilities. We're hearing that story here with Securonix. How does Presto and Securonix work together? Because I'm connecting the dots here in real time. I think you're going to go there. So take us through because this is like the most important topic happening. >> Yeah. 
So as Sachin said, there's all this data that needs to go into the cloud, and it's all moving to the cloud. And there are massive amounts of data, hundreds of terabytes, petabytes of data, moving into the data lakes, and that's the S3-based data lakes, which are the easiest, cheapest, commodified place to put all this data. But in order to deliver the results that Sachin's company is driving, which is intelligence on when there's a ransomware possibility, you need to have analytics on them. And so Presto is the open source project that is an open source SQL query engine for data lakes and other data sources. It was created by Facebook and is now part of the Linux Foundation, under something called the Presto Foundation. And it was built to replace the complicated Hadoop stack, in order to then drive analytics with lightning-fast queries on large, large sets of data. And so Presto fits in with this Open Data Lake analytics movement, which has made Presto one of the fastest growing projects out there. >> What is an Open Data Lake? Real quick, for the audience who wants to learn what it means. Does it mean it's open source in the Linux Foundation, or open meaning it's open to multiple applications? What does that even mean? >> Yeah. Open Data Lake analytics means that, first of all, your data lake has open formats. So it is made up of, say, formats like ORC or Parquet. And these are formats that any engine can be used against. That's really great, instead of having locked-in data types. Data lakes can have all different types of data. They can have unstructured, semi-structured data. It's not just the structured data, which is typically in your data warehouses. There's a lot more data going into the Open Data Lake. And then, based on what workload you're looking to get benefit from, the insights come from that, and actually slide two covers this pictorially. If you look on the left here on slide two, the Open Data Lake is where all the data is pooling. And Presto is the layer in between that and the insights, which are driven by the visualization, reporting, dashboarding, BI tools, or applications like in Securonix's case. And so analytics are now being driven by every company, not just the security industry, but every industry out there, retail, e-commerce, you name it. There's healthcare, financials, all are looking at driving more analytics for their SaaSified applications, as well as for their own internal analysts, data scientists, and folks that are trying to be more data-driven. >> All right. Let's talk about the relationship now, with where Presto fits in with Securonix, because I get the open data layer. I see value in that. I also get what we're talking about with the cloud and being faster with the datasets. So how do Sachin's Securonix and Ahana fit in together? >> Yeah. Great question. So I'll tell you, we have two customers, I'll give you an example. We have two Fortune 10 customers. One has moved most of their operations to the cloud, and another customer is in the process, early stage. The amount of data that we are getting from the customer who's moved fully to the cloud is 20 times, 20 times more than the customer who's in the early stages of moving to the cloud. That is because of the ability to add this level of telemetry in the cloud, in this case it happens to be AWS, Office 365, Salesforce, and several other providers across several other cloud technologies.
But the level of logging that we are able to get the telemetry is unbelievable. So what it does is it allows us to analyze more, protect the customers better, protect them in real time, but there is a cost and scale factor to that. So like I said, when you are trying to pull in billions of events per day from a customer billions of events per day, what the customers are looking for is all of that data goes in, all of data gets enriched so that it makes sense to a normal analyst and all of that data is available for search, sometimes 90 days, sometimes 12 months. And then all of that data is available to be brought back into a searchable format for up to seven years. So think about the amount of data we are dealing with here and we have to provide a solution for this problem at a price that is affordable to the customer and that a medium-sized company as well as a large organization can afford. So after a lot of our analysis on this and again, Securonix is focused on cyber, bringing in the data, analyzing it, so after a lot of our analysis, we zeroed in on S3 as the core bucket where this data needs to be stored because the price point, the reliability, and all the other functions available on top of that. And with that, with S3, we've created a great partnership with AWS as well as with Snowflake that is providing this, from a data lake perspective, a bigger data lake, enterprise data lake perspective. So now for us to be able to provide customers the ability to search that data. So data comes in, we are enriching it. We are putting it in S3 in real time. Now, this is where Presto comes in. In our research, Presto came out as the best search engine to sit on top of S3. The engine is supported by companies like Facebook and Uber, and it is open source. So open source, like you asked the question. So for companies like us, we cannot depend on a very small technology company to offer mission critical capabilities because what if that company gets acquired, et cetera. In the case of open source, we are able to adopt it. We know there is a community behind it and it will be kind of available for us to use and we will be able to contribute in it for the longterm. Number two, from an open source perspective, we have a strong belief that customers own their own data. Traditionally, like Steven used the word locked in, it's a key term, customers have been locked in into proprietary formats in the past and those days are over. You should be, you own the data and you should be able to use it with us and with other systems of choice. So now you get into a data search engine like Presto, which scales independently of the storage. And then when we start looking at Presto, we came across Ahana. So for every open source system, you definitely need a sort of a for-profit company that invests in the community and then that takes the community forward. Because without a company like this, the community will die. So we are very excited about the partnership with Presto and Ahana. And Ahana provides us the ability to take Presto and cloudify it, or make the cloud operations work plus be our conduit to the Ahana community. Help us speed up certain items on the roadmap, help our team contribute to the community as well. And then you have to take a solution like Presto, you have to put it in the cloud, you have to make it scale, you have to put it on Kubernetes. Standard thing that you need to do in today's world to offer it as sort of a micro service into our architecture. 
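To make the architecture Sachin just walked through a bit more concrete, here is a minimal, illustrative sketch in Python of what querying that S3-resident, Parquet-formatted data through a Presto endpoint (such as one managed by Ahana Cloud) can look like. The coordinator host, catalog, schema, table, and column names are assumptions for illustration only, not details taken from Securonix's or Ahana's actual deployment.

```python
# Illustrative sketch: SQL over Parquet data in S3 via a Presto coordinator.
# All endpoint/table/column names below are assumed for the example.
import prestodb  # pip install presto-python-client

conn = prestodb.dbapi.connect(
    host="presto-coordinator.example.internal",  # assumed Ahana/Presto endpoint
    port=8080,
    user="analyst",
    catalog="hive",      # Hive connector pointing at the S3 data lake
    schema="security",
)

cur = conn.cursor()
cur.execute(
    """
    SELECT user_name, count(*) AS failed_logins
    FROM auth_events                      -- hypothetical Parquet table on S3
    WHERE event_type = 'AUTH_FAILURE'
      AND event_time > now() - INTERVAL '1' DAY
    GROUP BY user_name
    ORDER BY failed_logins DESC
    LIMIT 20
    """
)
for user_name, failed_logins in cur.fetchall():
    print(user_name, failed_logins)
```

Because Presto separates the compute tier from the S3 storage, the same SQL keeps working as the underlying Parquet data set grows; only the query cluster is resized.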
So in all of those areas, that's where our partnership is with Ahana and Presto and S3 and we think, this is the search solution for the future. And with something like this, very soon, we will be able to offer our customers 12 months of data, searchable at extremely fast speeds at very reasonable price points and you will own your own data. So it has very significant business benefits for our customers with the technology partnership that we have set up here. So very excited about this. >> Sachin, it's very inspiring, a couple things there. One, decentralize on your own data, having a democratized, that piece is killer. Open source, great point. >> Absolutely. >> Company goes out of business, you don't want to lose the source code or get acquired or whatever. That's a key enabler. And then three, a fast managed service that has a commercial backing behind it. So, a great, and by the way, Snowflake wasn't around a couple of years ago. So like, so this is what we're talking about. This is the cloud scale. Steven, take us home with this point because this is what innovation looks like. Could you share why it's working? What's some of the things that people could walk away with and learn from as the new architecture for the new NextGen cloud is here, so this is a big part of and share how this works? >> That's right. As you heard from Sachin, every company is becoming data-driven and analytics are central to their business. There's more data and it needs to be analyzed at lower cost without the locked in and people want that flexibility. And so a slide three talks about what Ahana cloud for Presto does. It's the best Presto out of the box. It gives you very easy to use for your operations team. So it can be one or two people just managing this and they can get up to speed very quickly in 30 minutes, be up and running. And that jump starts their movement into an Open Data Lake analytics architecture. That architecture is going to be, it is the one that is at Facebook, Uber, Twitter, other large web scale, internet scale companies. And with the amount of data that's occurring, that's now becoming the standard architecture for everyone else in the future. And so just to wrap, we're really excited about making that easy, giving an open source solution because the open source data stack based off of data lake analytics is really happening. >> I got to ask you, you've seen many waves on the industry. Certainly, you've been through the big data waves, Steven. Sachin, you're on the cutting edge and just the cutting edge billions of signals from one client alone is pretty amazing scale and refactoring that value proposition is super important. What's different from 10 years ago when the Hadoop, you mentioned Hadoop earlier, which is RIP, obviously the cloud killed it. We all know that. Everyone kind of knows that. But like, what's different now? I mean, skeptics might say, I don't believe you, but it's just crazy. There's no way it works. S3 costs way too much. Why is this now so much more of an attractive proposition? What do you say the naysayers out there? With Steve, we'll start with you and then Sachin, I want you to like weigh in too. >> Yeah. Well, if you think about the Hadoop era and if you look at slide three, it was a very complicated system that was done mainly on-prem. 
And you'd have to go and set up a big data team and a rack and stack a bunch of servers and then try to put all this stuff together and candidly, the results and the outcomes of that were very hard to get unless you had the best possible teams and invested a lot of money in this. What you saw in this slide was that, that right hand side which shows the stack. Now you have a separate compute, which is based off of Intel based instances in the cloud. We run the best in that and they're part of the Presto foundation. And that's now data lakes. Now the distributed compute engines are the ones that have become very much easier. So the big difference in what I see is no longer called big data. It's just called data analytics because it's now become commodified as being easy and the bar is much, much lower, so everyone can get the benefit of this across industries, across organizations. I mean, that's good for the world, reduces the security threats, the ransomware, in the case of Securonix and Sachin here. But every company can benefit from this. >> Sachin, this is really as an example in my mind and you can comment too on if you'd believe or not, but replatform with the cloud, that's a no brainer. People do that. They did it. But the value is refactoring in the cloud. It's thinking differently with the assets you have and making sure you're using the right pieces. I mean, there's no brainer, you know it's good. If it costs more money to stand up something than to like get value out of something that's operating at scale, much easier equation. What's your thoughts on this? Go back 10 years and where we are now, what's different? I mean, replatforming, refactoring, all kinds of happening. What's your take on all this? >> Agreed, John. So we have been in business now for about 10 to 11 years. And when we started my hair was all black. Okay. >> John: You're so silly. >> Okay. So this, everything has happened here is the transition from Hadoop to cloud. Okay. This is what the result has been. So people can see it for themselves. So when we started off with deep partnerships with the Hadoop providers and again, Hadoop is the foundation, which has now become EMR and everything else that AWS and other companies have picked up. But when you start with some basic premise, first, the racking and stacking of hardware, companies having to project their entire data volume upfront, bringing the servers and have 50, 100, 500 servers sitting in their data centers. And then when there are spikes in data, or like I said, as you move to the cloud, your data volume will increase between five to 20x and projecting for that. And then think about the agility that it will take you three to six months to bring in new servers and then bring them into the architecture. So big issue. Number two big issue is that the backend of that was built for HDFS. So Hadoop in my mind was built to ingest large amounts of data in batches and then perform some spark jobs on it, some analytics. But we are talking in security about real time, high velocity, high variety data, which has to be available in real time. It wasn't built for that, to be honest. 
So what was happening is, again, even if you look at the Hadoop companies today, as they have defined their next generation, they have moved from HDFS to a cloud-based platform capability and have discarded the traditional HDFS architecture, because it just wasn't scaling, wasn't searching fast enough for hundreds of analysts at the same time. And then obviously the servers, et cetera, weren't working. Then, when we worked with the Hadoop companies, they were always two to three versions behind for the individual services that they had brought together. And again, when you're talking about this kind of volume, you always need to be on the cutting edge of the technologies underneath that. So even while we were working with them, we had to support our own versions of Kafka, Solr, ZooKeeper, et cetera, to really bring it together and provide our customers this capability. So now that we have moved to the cloud with solutions like EMR behind us, AWS has invested in solutions like EMR to make them scalable, to have scale-up and then scale-out, which traditional Hadoop did not provide because they missed the cloud wave. And then on top of that, again, rather than throwing data into that traditional, older HDFS format, we are now taking the same format, the Parquet format that it supports, putting it in S3 and making it available, and using all the capabilities, like you said, the refactoring of that is critical. Rather than having servers and redundancies on-prem, with S3 we get built-in redundancy. We get built-in lifecycle management, a high degree of confidence in data reliability. And then we get all this innovation from groups like Presto and companies like Ahana sitting on top of that S3. And the last item I would say is, in the cloud we are now able to have multiple resilient options on our side. So for example, with us, we still have some premium searching going on with solutions like Solr and Elasticsearch, then you have Presto and Ahana providing the majority of our searching, but we still have Athena as a backup in case something goes down in the architecture. Our queries will spin back up to Athena, the AWS service based on Presto, and customers will still get served. So all of these options, and Athena doesn't cost us anything if we don't use it, but all of these options are not available on-prem. So in my mind, it's a whole new world we are living in. It is a world where we have now made it possible for companies, for enterprises, to even think about having true security data lakes, which are useful, and having real-time analytics. From my perspective, I wouldn't even sign up today for a large enterprise that wants to build a data lake on-prem, because I know that is going to be a very difficult project to make successful. So we've come a long way, and there are several details around this that we've kind of endured through the process, but we're very excited where we are today. >> Well, we'll certainly follow up with theCUBE on all your endeavors. Quickly on Ahana, why them, why their solution? In your words, what would be the advice you'd give me if I'm like, okay, I'm looking at this, why do I want to use it, and what's your experience? >> Right. So it's the standard SQL query engine for data lake analytics. More and more people have more data, want to have something that's based on open source, based on open formats, gives you that flexibility, pay as you go.
You only pay for what you use. And so it proved to be the best option for Securonix to create a self-service system that has all the speed and performance and scalability that they need, which is based off of the innovation from large companies like Facebook, Uber, Twitter. They've all invested heavily. We contribute to the open source project. It's a vibrant community. We encourage people to join the community, and even Securonix will be having engineers that are contributing to the project as well. I think, is that right, Sachin? Maybe you could share a little bit about your thoughts on being part of the community. >> Yeah. So also, why we chose Ahana, like John said. The first reason is, you see Steven is always smiling. Okay. >> That's for sure. >> That is very important. I mean, jokes apart, you need a great partner. You need a great partner. You need a partner with a great attitude, because this is not a sprint, this is a marathon. So the Ahana founders, Steven, the whole team, they're world-class, they're world-class. The depth that the CTO has, his experience, the depth that Dipti has, who's running the cloud solution. These guys are world-class. They are very involved in the community. We evaluated them from a community perspective. They are very involved. They have the depth to really commercialize an open source solution without making it too commercial. The right balance, where the founding companies like Facebook and Uber, and hopefully Securonix in the future as we contribute more and more, will have our say, and they act like the right stewards in this journey and contribute as well. And then they have chosen the right niche: rather than taking portions of the product and making it proprietary, they have put in the effort towards the cloud infrastructure, making that product available easily on the cloud. So I think it's sort of a no-brainer from our side. Once we chose Presto, Ahana was the no-brainer, and the partnership so far has been very exciting, and I'm looking forward to great things together. >> Likewise, Sachin, thanks so much for that. And we've found your team to be world-class as well, working together, and we look forward to working in the community also, in the Presto Foundation. So thanks for that. >> Guys, great partnership. Great insight, and really, this is a great example of cloud scale, cloud value proposition as it unlocks new benefits. Open source, managed services, refactoring the opportunities to create more value. Steven, Sachin, thank you so much for sharing your story here on open data lakes. 'Cause open always wins in my mind. This is theCUBE, we're always open, and we're showcasing all the hot startups coming out of the AWS ecosystem for the AWS Startup Showcase. I'm John Furrier, your host. Thanks for watching. (bright music)
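Sachin's point about resiliency, with Presto and Ahana serving most queries and Athena as a standby over the same S3 data, suggests a simple failover pattern. The sketch below is an editor's illustration of that idea, not Securonix's code: the Presto endpoint, database, table, and S3 results bucket are all assumed names, and it presumes the Presto (Hive) and Athena (Glue) catalogs expose the same table definitions.

```python
# Illustrative failover: try the Presto tier first, fall back to Athena,
# which reads the same Parquet files in S3. Names below are assumptions.
import boto3
import prestodb  # pip install presto-python-client

SQL = "SELECT count(*) FROM auth_events WHERE event_type = 'AUTH_FAILURE'"

def query_presto(sql: str):
    conn = prestodb.dbapi.connect(
        host="presto-coordinator.example.internal", port=8080,
        user="analyst", catalog="hive", schema="security",
    )
    cur = conn.cursor()
    cur.execute(sql)
    return cur.fetchall()

def query_athena(sql: str) -> str:
    # Same SQL, different engine; only the routing changes.
    athena = boto3.client("athena")
    resp = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "security"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    return resp["QueryExecutionId"]  # caller would poll get_query_execution()

try:
    print("served by Presto:", query_presto(SQL))
except Exception as exc:  # in practice, catch connection/timeout errors only
    print("Presto tier unavailable, falling back to Athena:", exc)
    print("Athena query started:", query_athena(SQL))
```

The design choice this reflects is that when the open table format lives in S3, swapping query engines is a routing decision rather than a data migration.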
Ed Walsh, ChaosSearch | CUBE Conversation May 2021
>> So-called big data promised to usher in a new era of innovation, where companies competed on the basis of insights and agile decision making. There's little question that social media giants, search leaders and e-commerce companies benefited. They had the engineering shops and the execution capabilities to take troves of data and turn them into piles of money. But many organizations were not as successful. They invested heavily in data architectures, tooling and hyper-specialized experts to build out their data pipelines. Yet they still struggle today to truly realize that vision. The data in their lakes is plentiful, but actionable insights aren't so much. ChaosSearch is a cloud-based startup that wants to change this dynamic with a new approach designed to simplify and accelerate time to insights and dramatically lower cost, and with us to discuss his company and its vision for the future is CUBE alum Ed Walsh. Ed, great to see you. Thanks for coming back in theCUBE. >> I always love to be here. Thank you very much. It's always a warm welcome. Thank you. >> Alright, so give us the update. You guys have had some big funding rounds, you're making real progress on the tech, taking it to market. What's new with ChaosSearch? >> Sure. Actually, a lot of good, exciting things are happening. In fact, just this month we announced some pretty exciting things. We unveiled what we consider the industry's first multi-model data lake platform, where we allow you to take your data in S3. In fact, if you want to show the image you can, but basically we allow you to put your data in S3, and then what we do is we activate that data. What we do is a full index of the data and make it available through open APIs. And the key thing about that is it allows your end users to use the tools they're using today. So simply put your data in your cloud object storage, think Amazon S3 and Glacier, think of all the different data. It's a natural act. And then we do the hard work. And the key thing is you get one unified data lake, but it's multi-model access, so we expose APIs like the Elasticsearch API, so you can do things like search, or using Kibana do log analytics, but you can also do things like SQL, use Tableau or Looker, or bring relational concepts into Kibana, things like joins in the data backend. But it also allows you to do machine learning, which is coming early next year. What you get with that, because of a data lake philosophy, is we're not making you do new transformations and all the data movement. People typically land data in S3, and we're on the shoulders of giants with S3. There's not a better, more cost-effective platform, more resilient. There's not a better queuing system out there, and it's got a cost curve that you can't beat. So people store a lot of data in S3. But basically what you have to do today is ETL it out to other locations. What we do is allow you to literally keep it in place. We index in place. We write our hot index, a full rewrite index, and allow you to go after that through published open APIs. What we avoid is the ETL process. So what our index does is look at the data and do full schema discovery and normalization; we're able to give sample sets. And then the refinery allows you to do advanced transformations using code, think about using SQL or using regex, to change that data, pull the data apart, do things, but use role-based access to give that to the end user. But it's in a format that their tools understand. Kibana will use the Elasticsearch API, or Elasticsearch calls, but also SQL, and go directly after the data. By doing that, you get a data lake, but you haven't had to take the three weeks to three months to transform your data. Everyone else makes you. And you talk about the failure: the idea of the data lake was, put your data there in a very scalable, resilient environment. Don't do transformation, it was too hard to structure it for databases and data warehouses, so put it there and we'll show you how to get value out. Largely undelivered. But we're that last mile. We do exactly that. Just put it in S3 and we activate it, and activate it with the APIs that the tools of your analysts use today, or what they want to use in the future. That is what's so powerful. So basically we're on the shoulders of giants with S3, put it there and we light it up, and that's really the last mile. But it's this multi-model access, and it's also this lack of transformation. We can do all the transformations, but it's all done virtually and available immediately. You're not doing extended ETL projects with big teams moving around a lot of data in the enterprise. In fact, most of the time they land it in S3, and they move it somewhere, and they move it again. What we're saying is, now just leave it in place, we'll index it and make it available. >> So the reason that's interesting, S3 was the original object storage cloud. It was a cheap bucket. Okay. But it's become much more than that. When you talk to customers, it's like, hey, I have all this data in S3. I want to do something with it. I want to apply machine intelligence. I want to search it. I want to do all these things, but you're right, I have to move it oftentimes to do that. So that's a huge value. Now, are you available in the AWS marketplace yet? >> You know, in fact that was the other announcement to talk about. Our solution is now available in the AWS marketplace, which is great for clients because they can burn down their credits with Amazon. >> Yeah, that's super great news there. Now let's talk a little bit more about data lakes. You know, the old tongue-in-cheek joke was data lakes become data swamps. You sort of know, see, no schema on read, right? Oh great, I can put everything into the lake, and then it's like, okay, what? So maybe double click on that a little bit and provide a little more detail on your vision there and your philosophy. >> So if you could put things there and get after that data with your own tools, Elasticsearch or SQL, of course you'd do that, if you didn't have to go through all the rest. But everyone thinks that's the status quo. Everyone has to put it in some sort of schema in a database before they can get access to it, so that's what everyone does. They move it someplace to do it. Now, they're using 1970s and maybe 1980s technology. And they're saying, I'm going to put it in this database, it works on the cloud, and you can go after it. But you have to do all the same pain of transformation, which is what takes, we use the terms time, cost and complexity. It takes time to do that transformation for an end user. It takes a lot of time. But it also takes a team's time to do it, with DBAs and data scientists, to do exactly that. And it's not one thing going on. So it takes three weeks to three months in an enterprise. It's a cost and complexity. But also, with all these pipelines for every data request, you're trying to give them their own data set. It ends up being data puddles all over the place. It might be in your data lake, but it's all separated. Hard to govern. Hard to manage. What we do is we stop that. What we do is we index in place. Your data is already in S3. Typically you're ETLing it out; you can continue doing that. We really are just one more use of the data. We do read-only access. We do not change that data, and you give us a place where we write our index. It's a full rewrite index. Once we do that, that allows you, with the refinery, to, well, we just activate that data. It will immediately be fully indexed and performant from Kibana. So you no longer have to take your data and move it and do a pipeline into Elasticsearch, which becomes kind of brittle at scale. You have the scale of S3 but use the exact same tools you do today. And what we find for log analytics, it's a slightly different use case or value prop than BI or what we're doing with private companies, but on the logs we're saving clients 50 to 80% on hard dollars, day in and day out. They're going from very limited data sets to unlimited data sets, whatever they want to keep in S3 and Glacier. But also they're getting away from the brittle data layer, which is the Lucene environment. Any of those data layers hold you back, because it takes time to put it there, but more importantly, it becomes brittle at scale, where you don't have any of that scale issue when using S3 as your data lake. >> So what are the big use cases, Ed? You mentioned log analytics, maybe you can talk about that. And are there any others that are sort of forming in the marketplace? Any patterns that you see? >> Because of the multi-model approach, we can do a lot of different use cases, but we always work with clients on high-ROI use cases. The big bang theory of "do a data lake and put everything in it" has just proven not to work, right? So the first use case we're focusing on is log analytics. Why? As with everything, there was a tipping point, right? People were in a buying mode: save money here, invest here. It went quickly to, no, no, we're going cloud native and we have to, and then on top of it, it was, how do we efficiently innovate? So the tipping point happens, everyone's going cloud native. Once you go cloud native, the amount of machine-generated data that comes from the environment just explodes. You're not managing hundreds or thousands or maybe 10,000 endpoints, you're dealing with millions or billions, and you also need the insights out of it. So logs become one of the things you can't keep up with. I think I mentioned we went to a group of end users, it was only 60 enterprise clients, but we asked them, what's your capture rate on logs, and what do you want it to be? Actually 78% said, listen, we want to capture 80 to 100% of our logs. That would be the ideal, not everything, but we need most of it. And then the same group, what are you doing? Well, 82% had less than 50%. They just can't keep up with it, and everything, including Elastic and Splunk, they work harder through the process to narrow and keep less and less data. Why? Because they can't handle the scale. We just say land it there, don't transform, we'll make it all available to you. So for log analytics, especially with cloud native, you need this type of technology, and you need to stop, it's like, it feels so good when you stop hitting your head against the wall, right? This ETL process at this type of scale just doesn't work. So that's exactly what we're delivering. The second use case is using the Elastic API but also using SQL to go after the same data representation. And when we come out with machine learning, you can also do anomaly detection on the same data representation. So for the log analytics use case, the SRE and DevOps setups, it's a huge value prop. Now, the same platform, because it has SQL exposed, you can do what we use the term "agile BI" for. People are using, think about, Looker or Tableau, Power BI, Metabase, think of all these toolsets that people want to give to and use in the business, or they're coming back to the centralized team every single week asking for new datasets. And those have to be set up like a data set, they have to do an ETL process to give access to that data. Whereas, because of the way it's just landed in the bucket, if you have access to that, with role-based access I can literally get you access to that with your toolset, let's say Tableau or Looker, these different data sets, literally in five minutes, and now you're off and running. And if you want a new dataset, we give you another virtual view and you're off and running, but with full governance. So the way it used to be in BI, you either had self-service or centralized. Self-service is kind of out of control, but we can move fast; and the centralized team is, it takes me months, but at least I'm in control. We allow you to do both: fully governed but self-service. >> Right, I've got to have Looker, I've got to have Excel, all right, and that's the trade-off on each of the pieces of the triangle, right? >> And they make it easy: we'll just put in a data source and you're done. But the problem is you have to ETL the data source. And that's what takes the three weeks to three months in an enterprise, and we do it virtually in five minutes. So now the third use case is actually kind of a combination of the two. You love the beers-and-diapers stories. Think about the early days of Teradata, where they looked at sales-out data for the business. They were able to look at all the sales-out data in a large relational environment, crunch all the numbers, and figure out that by placing products in different locations in the store, they'd sell more of certain things, and they came up with an analogy everyone talked about: beers and diapers. If you put them together, you sell more of both. Why? Because in the afternoon, for anyone that has kids, you pick up diapers and you might want to grab a beer to head home with the kids. But that analogy is 30 years old. Now, what's the shelf space for a company? It's the website, and it's the data coming from there. It's actually the app logs, and you're not capturing them, because you can't in these environments, or if you are capturing the data, everyone's telling you, you've got to do an ETL process and keep less data. You've got to select, you've got to be very specific, because it's going to kill your budget. You can't do that with Elastic or Splunk, you've got to keep less data, and you don't even know what the questions are going to be. With us, bring all the app logs, just land them in S3 or Glacier, which is, again, really shoulders of giants, right? There's not a better platform for cost effectiveness, security, resilience, or throughput. And think about what you can stream to it; it's the best queuing platform I've ever seen in the industry. Just land it there. And it's also very cost effective. We also compress the data. By doing that, you now match that up with an actually relatively small amount of relational data, and you have that beers-and-diapers kind of insight. And it's like this: users start with that use case, and our top users always start with this one, then they use that feature and that feature. Hey, we just did new pricing, is it affecting these clients and those clients? By doing this, we get that. But you need that data, and people aren't able to capture it with the current platforms. A data lake, as long as you can make it available hot, is a way to do it. And that's what we're doing. But we're unique in that. Other people are making you ETL it and put it in a 1970s and 1980s data format called a schema. And we avoided that because we basically make S3 a hot analytic environment. >> So okay, I want to land on that for a second, because I think sometimes people get confused. I know I do sometimes with ChaosSearch, it's like sometimes I don't know where to put you. I'm like, okay, observability, that seems to be a hot space, and of course log analytics is part of that. BI, agile BI you called it. But there's players like Elasticsearch, there's Starburst, there's Datadog, Databricks, Dremio, Snowflake. I mean, where do you fit, what's the category, and how do you differentiate from players like that? >> Yeah. So we went about it fundamentally differently than everyone else. Six years ago, Thomas Hazel and his band of merry men and women came up and designed it from scratch. They purpose-built it to make S3 a hot analytic environment with open APIs. By doing that, they kind of changed the game, so we deliver upon the true promise: just put it there and I'll give you access to it. No one else does that. Everyone else makes you move the data and put it in a schema of some format to get to it. And if you look at Elasticsearch, why are we going after that? It just happens to be that logs are overwhelming everyone. Once you go cloud native, you can't afford to put it all in Lucene, the ELK stack; Lucene is its inverted index. Start small, great. But once you grow, it's now not one server, it's five servers, 15 servers; you lose a server, you're down for three days because you have to rebuild the whole thing. It becomes brittle at scale, and expensive. So you trade off: I'm going to keep less, either in retention or in data. So basically, we have no Elastic under the covers, but we allow you to fully index the data in S3, and you can access it directly through a Kibana interface or an OpenSearch interface API. >> So it's just APIs. >> It's open APIs. And by doing that, you've avoided a whole bunch of time, cost, complexity, your team's time to do it, but also the time to results, the delays and the cost of doing all that. It's crazy. We're saving 50 to 80% in hard dollars while giving you unlimited retention, where you were dramatically limited before us. And as a managed service, you don't have to manage that kind of clunky environment. When it starts small, it's great; once at scale, that's a terrible environment to manage. That's why you end up with not one Elasticsearch cluster, but dozens. I just talked to someone yesterday who had 125 Elasticsearch clusters because of the scale. So anyway, if you're using Elastic at scale and you're having problems with the trade-off of cost, time and scale, we become a natural fit, and you don't change what your end users do. >> So the thing is, people hear this and they'll go, wow, that sounds so simple. Why doesn't everybody do this? The reason is, it's not easy. You said Tom and his merry band. This is really hardcore tech. It's not trivial, what you've built. Let's talk about your secret sauce. >> Yeah. So it is patented technology. If you look at our component architecture, a large part, 90% of the value add, is actually S3. I've got to give S3 full kudos. They built a platform, and we're on the shoulders of giants. But what we did is we purpose-built it to make object storage a hot analytic database. So we have an index, like a database. And then, on the data, we bring a refinery to be able to do all the advanced types of transformation, but all virtually done, because we're not changing the source of record, we're changing the virtual views. And then a fabric allows you to manage it and be fully elastic. So if we have big queries, because we have multiple clients with multiple use cases, each with multiple petabytes, we're spinning up 1,800 different nodes for a particular environment. And even with all that, we're saving them 58%. But it's really the patented technology to do this. It took us six years, by the way; that's what it takes to come up with this. When I came upon it, I knew the founder, I've known Tom, Thomas Hazel, for a while, and his first thing was, he figured out the math, and the math worked out. It's deep tech, it's hard tech. But the key thing about it is we've been in market now for two years, multiple use cases in production at scale. Now what we do is roadmap: we're adding APIs. So now we have the Elasticsearch API as a natural proof point. Now adding SQL opens up new markets. But the idea is, we believe we deliver on the true promise of data lakes, and the promise of data lakes was: put it there, don't focus on transforming, it's just too hard, and I'll get insights out. And that's exactly what we do. But we're the only ones that do that; everyone else makes you ETL it to other places. And that's the innovation of the index and the refinery that allows the indexing in place and gives virtual views in place, at scale. And then the open APIs, to be honest, I think that's the game. Give me an open API and let me go after it. I don't know what tool I'm going to use next week. Every time we go into an account, they're not a Looker shop or a Tableau shop or a QuickSight shop, they're all of them, and they're just trying to keep up with the businesses. And then the ability to have role-based access, where you can actually say, hey, get them their own bucket, give them their own refinery. As long as they have access to the data, they can do their own manipulation. It ends up being, >> Just, >> that's the true promise of data lakes. Once we come out with machine learning next year, now you're going to rip through that same data, and the way we structured the data matrices is a natural fit for things like TensorFlow and PyTorch. But that's going to be next year, just because it's a different persona. But the underlying architecture has been built. What we're doing is taking it a use case at a time. So we work with our clients and say, it's not a big bang. Let's nail a use case that works well, great ROI, great business value for a particular business unit, and let's move to the next. And that's how I think it's going to be, really. If you think about what Gartner talks about, if you think about what really got successful in data warehouses in the past, that's exactly it. It wasn't the big bang, it was, let's go and nail it for particular users. And that's what we're doing now. Because it's multi-model, there's a bunch of different use cases, but even then, we're focusing on these core things that are really hard to do with other, relational-only environments. >> Yeah, I can see why, because, you know, you and I have talked about the API economy forever, and you've been in the storage world so long, you know what a nightmare it is to move data. We've got to jump, but I want to ask you, I want to be clear on this. So you are cloud native. I talked to Frank Slootman maybe a year ago, and I asked him about on-prem, and he's like, no, we're never doing the halfway house. We are cloud all the way. >> I think, >> I think you have a similar answer. What's your plan on hybrid? >> Okay. There's nothing about the technology that we couldn't do it, but we are 100% cloud native, only in the public cloud. We believe that's the trend line. Everyone agrees with us, and we're sticking there. That's where the opportunity is. And if you want to run analytics, there's nothing better than the public cloud, like Amazon, and we were built 100% cloud native. We love S3, and what would be a better place to put this? Put it next to S3 and we just let you light it up. And then, I guess, if I'm going to add the commercial, you can buy it through the Amazon Marketplace, and we love that business model with Amazon. >> It's great. Ed, thanks so much for coming back in theCUBE and participating in the Startup Showcase. Love having you, and best of luck. Really exciting. >> Hey, thanks again, appreciate it. >> All right, thank you for watching, everybody. This is Dave Vellante for theCUBE. Keep it right there.
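As an editor's illustration of the multi-model access Ed describes above, the sketch below shows what going after indexed S3 data through an Elasticsearch-compatible API can look like from Python, the same kind of request Kibana issues. The endpoint URL, index/view name, field names, and the basic-auth credentials are placeholders assumed for the example, not ChaosSearch's documented values.

```python
# Illustrative sketch: a standard Elasticsearch-style search against an
# Elasticsearch-compatible endpoint exposed over indexed S3 log data.
# Endpoint, view name, fields and credentials below are assumptions.
import requests

SEARCH_URL = "https://search.example-endpoint.com/app-logs-view/_search"

query = {
    "size": 0,
    "query": {
        "bool": {
            "filter": [
                {"term": {"level": "ERROR"}},
                {"range": {"@timestamp": {"gte": "now-24h"}}},
            ]
        }
    },
    # standard Elasticsearch terms aggregation: error count per service
    "aggs": {"by_service": {"terms": {"field": "service.keyword", "size": 10}}},
}

resp = requests.post(
    SEARCH_URL, json=query, auth=("access_key", "secret_key"), timeout=30
)
resp.raise_for_status()
for bucket in resp.json()["aggregations"]["by_service"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```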
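The "just land it in S3 or Glacier" pattern Ed keeps coming back to is, on the producer side, little more than writing compressed batches of raw events to a bucket. Here is a minimal, assumed sketch of that step; the bucket name and key layout are made up for illustration, and a production pipeline would more likely stream through something like Kinesis Data Firehose.

```python
# Illustrative sketch: batch raw application log events, gzip them, and land
# them in S3 so downstream indexing can happen in place. Names are assumed.
import datetime
import gzip
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "example-app-logs"  # assumed bucket name

def land_log_batch(events):
    now = datetime.datetime.now(datetime.timezone.utc)
    key = f"raw/{now:%Y/%m/%d/%H}/batch-{now:%M%S}.ndjson.gz"
    body = gzip.compress(
        "\n".join(json.dumps(e, separators=(",", ":")) for e in events).encode()
    )
    s3.put_object(
        Bucket=BUCKET, Key=key, Body=body,
        ContentType="application/x-ndjson", ContentEncoding="gzip",
    )
    return key

print(land_log_batch([{"level": "ERROR", "service": "checkout", "msg": "timeout"}]))
```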
Bob Wise, AWS & Peder Ulander, AWS | Red Hat Summit 2021 Virtual Experience
(smart gentle music) >> Hey, welcome back everyone to theCUBE's coverage of Red Hat Summit 2021 virtual. I'm John Furrier, host of theCUBE, got two great guests here from AWS, Bob Wise, General Manager of Kubernetes for Amazon Web Services and Peder Ulander, Head of product marketing for the enterprise developer and open-source at AWS. Gentlemen, you guys are the core leaders in the AWS open-source initiatives. Thanks for joining us on theCUBE here for Red Hat Summit. >> Thanks for having us, John. >> Good to be here. >> So the innovation that's come from people building on top of the cloud has just been amazing. You guys, props to Amazon Web Services for constantly adding more and raising the bar on more services every year. You guys do that, and now public cloud has become so popular, and so important that now Hybrid has pushed the Edge. You got outpost with Amazon you see everyone following suit. It's pretty much clear vote of confidence from the customers that, Hybrid is the operating model of the future. And that really is about the Edge. So I want to chat with you about the open-source intersection there, so let's get into it. So we're here at Red Hat Summit. So Red Hat's an open-source company and timing is great for them. Now, part of IBM you guys have had a relationship with Red Hat for some time. Can you tell us about the partnership and how it's working together? >> Yeah, absolutely. Why don't I take that one? AWS and Red Hat have been strategic partners since, shoot, I think it's 2008 or so in the early days of AWS, when engaging with customers, we wanted to ensure that AWS was the best place for enterprises to run their Red Hat workloads. And this is super important when you think about, what Red Hat has accomplished with RHEL in the enterprise, it's running SAP, it's running Oracle's, it's running all different types of core business applications, as well as a lot of the new things that customers are innovating. And so having that relationship to ensure that not only did it work on AWS, but it actually scaled we had integration of services, we had the performance, the price all of the things that were so critical to customers was critical from day one. And we continue to evolve this relationship over time. As you see us coming into Red Hat Summit this year. >> Well, again, to the hard news here also the new service Red Hat OpenShift servers on AWS known as ROSA, the A for Amazon Red Hat OpenShift, A for Amazon Web Services, a clever acronym but really it's on AWS. What exactly is this service? What does it do? And who is it designed for? >> Well, I'll let me jump in on this one. Maybe let's start with the why? Why ROSA? Customers love using OpenShift, but they also want to use AWS. They want the best of both. So they want their peanut butter and their chocolate together in a single confection. A lot of those customers have deployed AWS, have deployed OpenShift on AWS. They want managed service simplified supply chain. We want to be able to streamline moving on premises, OpenShift workloads to AWS, naturally want good integration with AWS services. So as to the, what? Our new service jointly operated is supported by Red Hat and AWS to provide a fully managed to OpenShifts on AWS. So again, like lot of customers have been running OpenShift on AWS before this time, but of course they were managing it themselves typically. And so now they get a fully managed option with also simplified supply chain. Single support channels, single billing. 
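One way to read Bob's "fully managed OpenShift on AWS" point is that, from the practitioner's side, the tooling does not change. As a small, assumed illustration (not an official ROSA example), once the managed cluster exists and your kubeconfig context points at it, the standard Kubernetes client libraries work as-is:

```python
# Illustrative sketch: talking to a managed ROSA/OpenShift cluster with the
# ordinary Kubernetes Python client, assuming the kubeconfig has already been
# set up (for example via an `oc login` obtained out of band).
from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()  # picks up the ROSA cluster's kubeconfig context

core = client.CoreV1Api()
for node in core.list_node().items:
    labels = node.metadata.labels or {}
    print(node.metadata.name, labels.get("topology.kubernetes.io/zone", "?"))
```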
>> You know, were talking before we came on camera about the acronym on AWS and people build on the clouds kind of like it's no big deal to say that, but I know it means something. I want to explain, you guys to explain this on because I know I've been scolded saying things on theCUBE that were kind of misspoken because it's easy to say, Oh yeah, I built that app. We built all this stuff on theCUBE was on AWS, but it's not on AWS. It means something from a designation standpoint what does on AWS mean? 'Cause this is OpenShift servers on AWS, we see this other companies have their products on AWS. This is specific designation. Can you share, please. >> John, when you see the branding of something like Red Hat on AWS, what that basically signals to our customers is that this is joint engineering work. This is the top of the strategic partners where we actually do a lot of joint engineering and work to make sure that we're driving the right integrations and the right experience, make sure that these things are accessible and discoverable in our console. They're treated effectively as a first-class service inside of the AWS ecosystem. So it's, there's not many of the on's, if you will. You think about SAP on VMware cloud, on AWS, and now Red Hat OpenShift on AWS, it really is that signal that helps give customers the confidence of tested, tried, trued, supported and validated service on top of AWS. And we think that's significantly better than anything else. It's easy to run an image on a VM and stuffed it into a cloud service to make it available, but customers want better, customer want tighter experiences. They want to be able to take advantage of all the great things that we have from a scale availability and performance perspective. And that's really what we're pushing towards. >> Yeah. I've seen examples specifically where when partners work with Amazon at that level of joint engineering, deeper partnerships. The results were pretty significant on the business side. So congratulations to you guys working with OpenShift and Red Hat, that's real testament to their product. But I got to ask you guys, pull the Amazon playbook out and challenge you guys, or just, create a new some commentary around the process of working backwards. Every time I talked to Andy Jassy, he always says, we work backwards from the customer and we get the requirements, and we're listening to customers. Okay, great. He loves that, he loves to say that it's true. I know that I've seen that. What is the customer work backwards document look like here? What is the, what was the need and what made this become such an important part of AWS? What was the, and then what are they saying now, now that the products out there? >> Well, OpenShift has a very wide footprint as does AWS. Some working backwards documents kind of write themselves, because now the customer demand is so strong that there's just no avoiding it. Now, it really just becomes about making sure you have a good plan so it becomes much more operational at that point. ROSA's definitely one of those services. We had so much demand and as a result, no surprise that we're getting a lot of enthusiasm for customers because so many of them asked us for it. (crosstalk) >> What's been the reaction in asking demand. That's kind of got the sense of that, but okay. So there's demand now, what's the what's the use cases? What are customers saying? What's the reaction been? 
A lot of the use cases are these hybrid kinds of use cases, where a customer has a big OpenShift footprint. What we see from a lot of these customers is a strong demand for consistency in order to reduce IT sprawl. What they really want to do is have the smallest number of the simplest environments they can. And so customers that have standardized on OpenShift really want to be able to standardize OpenShift both in their on-premises environment and on AWS, and get managed service options, just to remove the undifferentiated heavy lifting. >> Hey, what's your take on the product marketing side of this, where you've got open-source becoming very enterprise specific? Red Hat's been there for a very long time. I've been a user of Red Hat since the beginning and following them, and Linux, obviously, is where that's come from. But what features specifically jump out in this offering that customers are resonating around? What's the vibe here? >> John, you kind of alluded to it early on, which is, I don't know that I'd necessarily call it hybrid, but the reality is our customers have environments that are on premises, in the cloud, and all the way out to the Edge. Today, when you think of a lot of solutions and services, it's a fractured experience that they have between those three locations. And one of our biggest commitments to our customers is to make things super simple, remove the complexity, do all of the hard work, which means customers are looking for a consistent experience, environment, and tooling that spans data center to cloud to Edge. And that's probably the biggest core asset here for customers who might have standardized on OpenShift in the data centers. They come to the cloud, they want to continue to leverage those skills. I think probably an interesting one, as we headed down this path: we all know Delta Airlines. Delta is a great example of a joint customer who has been doing stuff inside of AWS for a long time. They've been standardizing on Red Hat for a long time, and bringing this together just gave them that simple extension to take their investment in Red Hat OpenShift and leverage their experience, and again, the scale and performance of what AWS brings them. >> Next question, what's next for Red Hat OpenShift on AWS in your work with Red Hat? Where does this go next? What's the big to-do item, what do you guys see as the vision? >> I'm glad you mentioned open-source collaboration at the start there. Worth pointing out is that AWS works on the Kubernetes project upstream, as do the Red Hat teams. So one of the ways that we collaborate with the Red Hat team is in open-source. One of those projects is a new project called ACK, AWS Controllers for Kubernetes, and this is a Kubernetes-friendly way for our customers to use an API to manage AWS services. So that's one of the things that we're looking forward to as that goes GA, rolling out into both ROSA and onto our other services. >> Awesome. I got to ask you guys this while you're here, because it's very rare to get two luminaries within AWS on the open-source side. This has been a huge build-out over the many, many years for AWS, and some people really don't understand the position. So take a minute to clarify the position of AWS on open-source. You guys are very active in a lot of projects. You mentioned upstream with Kubernetes and other areas. I've had many conversations with Adrian Cockcroft on this, as well as others within AWS.
Huge proponents of web services — I mean, you go back to the original Amazon, Jeff Barr was saying 15 years ago some of those APIs are still in play here. APIs back 15 years ago, that was kind of not mainstream at that time. So open standards really made Amazon Web Services successful, and you guys are continuing it, but the modern era is very enterprise-like, and you see a lot of legacy, you're seeing a lot more operations that are going to be driven by open technologies that you guys are investing in. Take a minute to explain what AWS is doing, what you guys care about, and your mission. >> Yeah, well, why don't I start, and then we'll kick it over to Bob, 'cause I think Bob can also talk about some of the key contribution sides. The best way to think about it is in three different pillars. So let's start with the first one, which is around ensuring that our customers' favorite open-source projects run best on AWS. Since 2006, we've been helping our customers operationalize their open-source investments and really achieve that scale, and focus more on how they use and innovate on the products versus how they set up and run them. And for myself, having been in open-source since the late 90s, the biggest opportunity, yet challenge, was the access to the technology — but it still required you as a customer to learn how to set up, configure, operationalize, support, and sustain it. AWS removes that heavy lifting, and again, back to that earlier point, from the beginning of AWS we helped customers scale and implement their Apache services, their database services, all of these different types of open-source projects, to make them really work exceptionally well on AWS — and back to that point, to make sure that AWS was the best place for their open-source projects. I think the second thing that we do, and you're seeing that today with what we're doing with ROSA and Red Hat, is we partner with open-source leaders — from Red Hat to Redis and Confluent, to a number of different players out there, Grafana and Prometheus, to even foundations like the LF and the CNCF. We partner with these leaders to ensure that we're working together to grow the overall experience and the overall pie, if you will. And this kind of gets into that point you were making, John: with the old world legacy proprietary stuff, there's a huge chance for refresh and new opportunity and rethinking, or modernization if you will, as you come into the cloud, and having the expertise and the partnerships with these key players, as enterprises move in, is so crucial. And then the third piece I'd like to talk about that's important to our open-source strategy is really around contribution. We have a number of projects that we've delivered ourselves. I think the two most recent ones that really come top of mind for me are what we did with Babelfish, as well as with OpenSearch. So contributing and driving a true open-source project that helps our customers take advantage of things like a proprietary-to-open-source SQL conversion tool, or what we're doing to make OpenSearch the primary open platform for our customers. But it's not just about those services, it's also collaborating with key industry initiatives. Bob's at the forefront of that with what we're doing with the CNCF around things like Kubernetes and Prometheus, et cetera. Bob, you want to jump in on some of that?
>> Sure, I think the one thing I would add here is that customers love using those open-source projects. One of the challenges with them frequently is security, and this is job zero at AWS. So a lot of the collaboration work we do, a lot of the work that we do on upstream projects, goes specifically around security-oriented things, because that is what customers expect when they come to get a managed service at AWS. Some of those efforts are somewhat unsung, because you generally do more work and less talk in security-oriented things, but across AWS, that's always a key contribution focus for us. >> Good way to call out security, too. I think that's being built into everything now — it's an operating model. People call it shift-left, day-two operations, whatever you want to call it. You've got this nice formation going between the under-the-hood programmability of the infrastructure at scale, and then you have the modern application development, which is just beginning — programmable DevSecOps. It's funny, Bob, I'd love to get your take on this, because I remember in the 80s, during the Unix generation, I used to peddle software under the table — like, here's a copy, just don't tell anyone. People in the younger generation don't get the fact that it wasn't always open. And so now you have open, and you have this idea of an enterprise that's going to take a systems management, systems view. So you've got engineering and computer science kind of coming together, this SRE middle layer — you're hearing that as kind of a new discipline. So DevOps kind of has won. I mean, we kind of knew this for many, many years — I said this in 2013 on theCUBE, actually at re:Invent, and I just recently shared that clip. But okay, now you've got SecOps, DevSecOps. So now you have an era where it's systems thinking, and open-source is driving all of that. So can you share your perspective, because this is kind of where the puck is going — it's an open, open world, and that's going to have to be open and scalable. How do open-source and you guys take it to the next level, to give that same scale and reliability? What's your vision? >> The key here is really around automation, and to see what we're seeing, you could look at Kubernetes. Kubernetes is essentially a robot — the early design of it was built around robotics principles. So it's a giant software robot, and the world has changed. If you just look at the influx of all kinds of automation, not just in the DevOps world but in all industries, you see a similar kind of trend. And so the world of the IT operations person is changing — from doing the work the robot now does, and replacing that work with the robot, to managing large numbers of robots. And in this case, the robots are a little early and a little hard to talk to, so you end up using languages like YAML and other things, but it turns out robots still just do what you tell them to do. And so one of the things you have to do is be really, really careful, because robots will go and do whatever it is you ask them to do. On the other hand, they're really, really good at doing that. So in the security area, the research points to the largest single source of security issues being people making manual mistakes. And a lot of people are still a little bit terrified if human beings aren't touching things on the way to production. In AWS, we're terrified if humans are touching it.
And that is a super hard chasm to cross, and open-source projects are really playing a big role in what's really an IT-wide migration to a whole new set of, not just tools, but organizational approaches. >> What's your reaction to that? Because we're talking about essentially software concepts — if you write bad code, the code will execute what you wrote, assuming it compiles, like in the old days. Now, if you're going to scale large-scale operations that have dynamic capabilities — services being initiated and terminated, torn down and spun up — you need the automation, but if you really don't design it right, you could be screwed. This is a huge deal. >> This is one reason why we've put so much effort into GitOps, which you can think of as a more narrowly defined subset of the DevOps world, with a specific set of principles around using simplified declarative approaches, along with robots that converge the system to the desired state. And when you get into large distributed systems, you end up needing to take those kinds of approaches to get it to work at scale. Otherwise you have problems. >> Yeah, just adding to that — and it's funny, you said DevOps has won. I actually think DevOps has won, but DevOps hasn't changed (indistinct). Bob, you were right, the reality is it was founded back, what, quite a while ago, and it was more around CI/CD in the enterprise and the closed data center. And it was one of those where automation and runbooks addressed the fact that every pair of hands between service request and service delivery created an issue. So that growth and that mental model of moving from waterfall, to agile, to DevOps — you built it, you run it — type of model, I think is really, really important. But as it comes out into the cloud, you no longer have those controls of the data center, and you actually have infinite scale. So back to your point, you've got to get this right. You have to architect correctly, you have to make sure that your code is good, you have to make sure that you have full visibility. This is where it gets really interesting at AWS, and some of the things that we're tying in. So whether we're talking about GitOps, like what Bob just went through, or what you brought up with DevSecOps, you also have things like AIOps. And so looking at how we take our machine learning tools to really implement the appropriate types of code reviews, to assessing your infrastructure or your choices against well-architected principles and providing automated remediation, is key. Adding to that is observability: developers, especially in a highly distributed environment, need to have better understanding, fidelity, and touchpoints of what's going on with their application as it runs in production. And so the work we have in observability around the Grafana and Prometheus projects only accelerates that whole concept of continuous monitoring and continuous observability. And then really adding to that, I think it was last month we introduced our Fault Injection Simulator, a chaos engineering tool that, again, takes advantage of all of this automation and machine learning to really help our developers, our customers, operate at scale, and make sure that when they are releasing code, they're releasing code that is not just great in a small sense — it works on my laptop — but works great in a highly distributed, massively scaled environment around the globe.
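The fault-injection idea mentioned above can be sketched generically. This is a minimal illustration of the chaos-engineering pattern, not the AWS Fault Injection Simulator API — the `fetch_inventory` function, the error rate, and the latency bounds are all hypothetical stand-ins for a real downstream call.

```python
import functools
import random
import time


def inject_faults(error_rate: float = 0.1, max_extra_latency_s: float = 0.5):
    """Wrap a call so that, during a test, it sometimes slows down or fails.

    A toy version of the chaos-engineering idea: rehearse failure on purpose
    so the surrounding system (retries, timeouts, alerts) gets exercised.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            time.sleep(random.uniform(0.0, max_extra_latency_s))  # injected latency
            if random.random() < error_rate:
                raise RuntimeError(f"injected fault in {fn.__name__}")
            return fn(*args, **kwargs)
        return wrapper
    return decorator


@inject_faults(error_rate=0.2, max_extra_latency_s=0.05)
def fetch_inventory(sku: str) -> int:
    # Stand-in for a call to a downstream service.
    return 42


if __name__ == "__main__":
    ok = failed = 0
    for _ in range(20):
        try:
            fetch_inventory("ABC-123")
            ok += 1
        except RuntimeError:
            failed += 1
    print(f"{ok} calls succeeded, {failed} hit injected faults")
```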
>> You know, this is one of the things that impresses me about Red Hat this year, and I've said this before at all the events I've covered with them: they get the cloud-scale piece, and I think their relationship with you guys shows that. I think DevOps has won, but it's the gift that keeps giving in open-source, because what you have here is no longer a conversation about moving to the cloud — the cloud has become the operating model. So the conversation shifts to the much more complicated enterprise, and/or the intelligent Edge, and whether it's industrial or human or whatever, you've got a data problem. So that's about a programmability issue at scale. So what's interesting is that Red Hat is on that bandwagon. It's an operating system — I mean, basically it's a distributed computing paradigm, essentially à la the AWS concept of a cloud. Now it goes to the Edge, it's just distributed services via open-source. So what's your reaction to that? >> Yeah, it's back to the original point, John, where I said any CIO is thinking about their IT environment from data center to cloud to Edge, and the more consistency, automation, and tools that are at their disposal to enable that, the better. I think you started to talk about infrastructure as code — it's now almost everything as code. And that starts with the operating system, obviously. And that's why this is so critical, that we're partnering with companies like Red Hat on our vision and their vision, because they align to where our customers are ultimately going. Bob, you want to add to that? >> Bob: No, I think you said it. >> John: You guys are crushing it. Bob, one quick question for you while I've got you here. You mentioned GitOps — I've heard this before, and I kind of understand it. Can you just quickly define, from your perspective, what is GitOps? >> Sure, well, GitOps is really taking — as I said before, it's a kind of narrowed version of DevOps. Sure, it's infrastructure as code. Sure, you're doing things incrementally. But the GitOps principle goes back to, what are the best practices for managing large numbers of robots? And in this case, it's around this idea of declarative intent. So instead of having systems that reach into production and change things, what you do is set up the defined, declared state of the system that you want, and then leave the robots to constantly work to converge the state there. That seems kind of nebulous, so let me give you a really concrete example from Kubernetes — by the way, the entire Kubernetes system design is based on this. You say, I want five pods running in production, and that's running my application. So what Kubernetes does is it sits there and it constantly checks: oh, I'm supposed to have five pods — do I have five? Well, what happens if the machine running one of those pods goes away? Now suddenly it checks and says, oh, I'm supposed to have five pods, but there are four pods. What action do I take to get the system back to the desired state? So you don't have a system reaching out and checking externally to Kubernetes — you let Kubernetes do the heavy lifting there. And so it goes through a loop of, oh, I need to start a new pod, and then it converges the system state back to running five pods. So it's really taking that kind of declarative intent, combined with constant convergence loops, to run production at scale. >> That's awesome.
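To make that declarative-intent-plus-convergence-loop idea concrete, here is a rough Python sketch of the pattern Bob describes — a toy in-memory "cluster" rather than the real Kubernetes control plane, with the replica count, pod names, and simulated node failure all made up for illustration.

```python
import random
import time

DESIRED_REPLICAS = 5  # the declared state: "I want five pods running"


def observe(cluster: set) -> int:
    """Look at the actual state of the system."""
    return len(cluster)


def converge(cluster: set, desired: int) -> None:
    """Take whatever actions close the gap between actual and declared state."""
    actual = observe(cluster)
    while len(cluster) < desired:
        new_pod = f"pod-{random.randrange(10_000)}"
        cluster.add(new_pod)
        print(f"  actual={actual} < desired={desired}: starting {new_pod}")
    while len(cluster) > desired:
        victim = cluster.pop()
        print(f"  actual={actual} > desired={desired}: stopping {victim}")


def reconcile(cluster: set, desired: int, iterations: int = 3) -> None:
    """The convergence loop: nothing reaches into production imperatively;
    each pass just nudges actual state back toward the declared state."""
    for _ in range(iterations):
        if cluster and random.random() < 0.5:  # simulate a node failure
            lost = cluster.pop()
            print(f"(node failure: lost {lost})")
        converge(cluster, desired)
        time.sleep(0.1)


if __name__ == "__main__":
    running_pods = {f"pod-{i}" for i in range(DESIRED_REPLICAS)}
    reconcile(running_pods, DESIRED_REPLICAS)
```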
We could do a whole segment on state and the stateless future, but we don't have time, so I do want to summarize real quick. We're here at Red Hat Summit 2021, and the big news is Red Hat OpenShift on AWS. Bob and Peder, tell us quickly, in summary: why AWS, why Red Hat, why better together? Give the quick overview — Bob, we'll start with you. >> Bob, you want to kick us off? >> I'm going to repeat: peanut butter and chocolate. Customers love OpenShift, they love managed services. They want simplified operations, a simplified supply chain. So you get the best of both worlds — you get the OpenShift that you want, fully managed on AWS, where you get all of the security and scale. >> Yeah, I can't add much to that, other than saying Red Hat is a powerhouse, obviously, in data centers — it is the operating system of the data center. Bringing together the best in the cloud with the best in the data center is such a huge benefit to our customers, because back to your point, John, our customers are thinking about what they're doing from data center to cloud to Edge, and bringing the best of those pieces together in a seamless solution is so, so critical. And that's why AWS — (indistinct) >> Thanks for coming on, I really appreciate it. I just want to give you guys a plug for being humble — the work you've done in the CNCF and standards bodies is well, well known, and I'm getting the word out. Congratulations on the commitment to open-source, and I really appreciate the community. Thank you, thank you for your time. >> Thanks, John. >> Okay, Cube coverage here, covering Red Hat Summit 2021. I'm John Furrier, host of theCUBE. Thanks for watching. (smart gentle music)
SUMMARY :
In this Red Hat Summit 2021 segment, John Furrier talks with AWS's Bob and Peder about Red Hat OpenShift Service on AWS (ROSA): what the "on AWS" designation signals about joint engineering, the customer demand and hybrid use cases behind the service, AWS's three-pillar open-source strategy and upstream collaboration on projects like ACK, and how GitOps, declarative intent, and convergence loops make automation work at cloud scale.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Amazon | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
Jeff Barr | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
John Ferry | PERSON | 0.99+ |
ROSA | ORGANIZATION | 0.99+ |
Adrian Cockcroft | PERSON | 0.99+ |
Bob Wise | PERSON | 0.99+ |
Bob | PERSON | 0.99+ |
Redis | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
2013 | DATE | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Delta | ORGANIZATION | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
2008 | DATE | 0.99+ |
LF | ORGANIZATION | 0.99+ |
five | QUANTITY | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
Delta Airlines | ORGANIZATION | 0.99+ |
CNCF | ORGANIZATION | 0.99+ |
five pods | QUANTITY | 0.99+ |
Red Hat OpenShift | TITLE | 0.99+ |
Grafana | ORGANIZATION | 0.99+ |
Red Hat | TITLE | 0.99+ |
five pods | QUANTITY | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
Kubernetes | ORGANIZATION | 0.99+ |
Arijit Mukherji, Splunk | Leading with Observability
>> Announcer: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hello and welcome to this special CUBE Conversation here in the Palo Alto studios, I'm John Furrier, host of theCUBE, for this Leading with Observability series with Under the Hood with Splunk Observability, I'm John Furrier with Arijit Mukherji with Splunk, he's a distinguished engineer, great to have you on. These are my favorite talks. Under the Hood means we're going to get all the details, what's powering observability, thanks for coming on. >> It's my pleasure, John, it's always nice to talk to you. >> Leading with Observability is the series, want to take a deep dive look across the spectrum of the product, the problems that it's solving, but Under the Hood is a challenge, because, people are really looking at coming out of COVID with a growth strategy, looking at cloud-native, Kubernetes, you're starting to see microservices really be a big part of that, in real deployments, in real scale. This has been a theme that's been growing, we've been covering it. But now, architectural decisions start to emerge. Could you share your thoughts on this, because this becomes a big conversation. Do you buy a tool here, how do you think it through, what's the approach? >> Exactly, John. So it's very exciting times in some sense, with observability right now. So as you mentioned and discussed a few times, there's a bunch of trends that are happening in the industry which is causing a renewed interest in observability, and also an appreciation of the importance of it, and observability now as a topic, it's like a huge umbrella topic, it covers many many different things like APM, your infrastructure monitoring, your logging, your real user monitoring, your digital experience management, and so on. So it's quite a set of things that all fall under observability, and so the challenge right now, as you mentioned, is how do we look at this holistically? Because, I think at this point, it is so many different parts to this edifice, to this building, that I think having a non-integrated strategy where you just maybe go buy or build individual pieces, I don't think that's going to get you very far, given the complexity of what we're dealing with. And frankly, that's one of the big challenges that we, as architects within Splunk, we are scratching our heads with, is how do we sort of build all of this in a more coherent fashion? >> You know, one of the things, Arijit, I want to get your thoughts on is because, I've been seeing this trend and, we've been talking about it on theCUBE a lot around systems thinking, and if you look at the distributed computing wave, from just go back 20 years and look at the history of how we got here, a lot of those similar concepts are happening again, with the cloud, but not as simple. You're seeing a lot more network, I won't say network management, but observability is essentially instrumentation of the traffic and looking at all the data, to make sure things like breaches and cybersecurity, and also making systems run effectively, but it's distributed computing at the end of it, so there's a lot of science that's been there, and now new science emerging around, how do you do this all? 
What are your thoughts on this? Because this becomes a key part of the architectural choices that some companies have to make if they want to be in a position to take advantage of cloud-native growth, which has multifold benefits — your product people talk about faster time to market and all that good stuff — but these technical decisions matter. Can you explain? >> Yes, it absolutely does. I think the main thing that I would recommend that everybody do is understand why observability — what do you want to get out of it? So it is not just a set of parts, as I mentioned earlier, but it brings direct product benefits, as we mentioned, like faster mean time to resolution, understanding what's going on in your environment, having maybe fewer outages, understanding root causes — so many different benefits. So the point is not that one has the ability to do maybe (indistinct) or the ability to do infrastructure (indistinct); the main question is, aspirationally, what are my goals that are aligned to what my business wants? What do I want to achieve — do I want to innovate faster? In that case, how is observability going to help me? And this is how you need to define your strategy, in terms of what kind of tools you get and how they work together. And so, if you look at what we're doing at Splunk, you'll notice it's extremely exciting right now — there's a lot of acquisitions happening, a lot of products that we're building — and the question we're asking as architects is, what do we build that will help us achieve all of this, and at the same time be somewhat future-proofed? And I think any organization that's either investing in it, building it, or buying it would probably want to think along those lines: what are my foundational principles, what are the basic qualities I want out of this system? Because technologies and infrastructures will keep on changing — that's sort of the rule of nature right now. The question is how we best address it in a more future-proofed system. At Splunk, we have come up with a few guiding principles, and I'm sure others will have done the same. >> You know, one of the dynamics I want to get your reaction to is kind of two perspectives. One is the growth of more teams that are involved in the work — so whether it's from cyber to monitoring, there are more teams with tools out there that are working on the network. And then you have just the impact of the diversity of use cases — not so much data volume, 'cause that's been talked about, we're having a tsunami of data, that's clear — but different kinds of dynamics, whether it's real-time or bursting. And so when you have this kind of environment, you can have gaps. And if SolarWinds has taught us anything, it's that you have to identify problems and resolve them. This comes up a lot in observability conversations — MTTI, mean time to identify, and then to resolve. These are concepts. If you don't see the data, you can't understand what's going on or measure it. This is like huge.
Another concept that's quite important is, how flexible are you? Are you digging yourself into a fixed solution, or are you depending on open standards that will then let you change out implementations, or vendors, or what have you, (static crackles) down the line, relatively easily. So understanding how you're collecting the data, how good can open standards and open source you're using is important. But to your point about missing and gaps, I think full fidelity, like understanding every single transaction, if you can pull it off, is a fascinating superpower, because that's where you don't get the gaps, and if you are able to go back and track any bad transaction, any time, that is hugely liberating, right? Because without that, if you're going to do a lot of sampling, you're going to miss a huge percentage of the user interactions, that's probably a recipe for some kind of trouble down the line, as you mentioned. And actually, these are some of those principles that we are using to build the Splunk Observability Suite, is no sample or full fidelity is a core foundational principle, and for us, it's not just isolated to, let's say application performance management, where user gets your API and you're able to track what happened, we are actually taking this upstream, up to the user, so the user is taking actions on the browser, how do we capture and correlate what's happening on the browser, because (indistinct) as you know, there's a huge move towards single-page applications, where half of my logic that my users are using is actually running on the browser, right? And so understanding the whole thing end to end, without any gaps, without any sampling, is extremely powerful. And so yes, so those are some of the things that we're investing in, and I think, again, one should keep in mind, when they're considering observability. >> You know, we were talking the other day, and having a debate around technical debt, and how that applies to observability, and one of the things that you brought up earlier about tools, and tool sprawl, that causes problems, you have operational friction, and we've heard people say "Yeah, I've got too many tools," and just too much, to replatform or refactor, it's just too much pain in the butt for me to do that, so at some point they break, I take on too much technical debt. When is that point of no return, where someone feels the pain on tool sprawl? What are some of the signaling where it's like, "You better move now (indistinct) too late," 'cause this integrated platform, that's what seems to be the way people go, as you mentioned. But this tool sprawl is a big problem. >> It is, and I think it starts hitting you relatively early on, nowadays, if you ask my opinion. So, tool sprawl is I think, if you find yourself, I think using three or four different tools, which are all part of some critical workload together, that's a stink that there's something could be optimized. For example, let's say I'm observing whether my website works fine, and if my alerting tool is different from my data gathering, or whatever, the infrastructure monitoring metrics tool, which is different from my incident management tool, which is different from my logs tool, then if you put the hat on of an engineer, a poor engineer who's dealing with a crisis, the number of times they have to context switch and the amount of friction that adds to the process, the delay that it adds to the process is very very painful. 
So my thinking is that at some point, especially if we find that core critical workloads are being fragmented, and that's when sort of I'm adding a bunch of friction, it's probably not good for us to sort of make that sort of keep on going for a while, and it would be time to address that problem. And frankly, having these tools integrated, it actually brings a lot of benefit, which is far bigger than the sum of the parts, because think about it, if I'm looking at, say, an incident, and if I'm able to get a cross-tool data, all presented in one screen, one UI, that is hugely powerful because it gives me all the information that I need without having to, again, dig into five different tools, and allows me to make quicker, faster decisions. So I think this is almost an inevitable wave that everybody must and will adopt, and the question is, I think it's important to get on the good program early, because unless you sort of build a lot of practices within an organization, that becomes very very hard to change later, it is just going to be more costly down the line. >> So from an (indistinct) standpoint, under the hood, integrated platform, takes that tool sprawl problem away, helps there. You had open source technology so there's no lock-in, you mentioned full fidelity, not just sampling, full end to end tracing, which is critical, wants to avoid those gaps. And then the other are I want to get your thoughts on, that you didn't bring up yet, that people are talking about is, real time streaming of analytics. What role does that play, is that part of the architecture, what function does that do? >> Right, so to me, it's a question of, how quickly do I find a problem? If you think about it, we are moving to more and more software services, right? So everybody's a software service now, and we all talk to each other in different services. Now, any time you use a dependency, you want to know how available it is, what are my SLAs and SLOs and so on, and three nines is almost a given, that you must provide three nines or better. Ideally four nines of availability, because your overall system stability is going to be less than the one of any single part, and if you go to look at four nines, you have about four or five minutes of total downtime in one whole month. That's a hard thing to be able to control. And if your alerting is going to be in order of five or 10 minutes, there's no chance you're going to be able to promise the kind of high availability that you need to be able to do, and so the fundamental question is you need to understand problems quick, like fast, within seconds, ideally. Now streaming is one way to do it, but that really is the problem definition, how do I find the problems early enough so that I can give my automation or my engineers time to figure out what happened and take corrective action? Because if I can't even know that there's something amiss, then there's no chance I'm going to be able to sort of provide that availability that my solution needs. So in that context, real time is very important, it is much more important now, because we have all these software and service dependencies, than it maybe used to be in the past. And so that's kind of why, again, at Splunk, we invested in real time streaming analytics, with the idea again being, let the problem, how can we address this, how can we provide customers with quick, high level important alerts in seconds, and that sort of real time streaming is probably the best way to achieve that. 
And then, if I were to, sorry, go ahead. >> No, go on, finish. >> Yeah, I was going to say that it's one thing to get an alert, but the question then is, now what do I do with it? And there's obviously a lot of alert noise that's going out, and people are fatigued, and I have all these alerts, I have this complex environment, understanding what to do, which is sort of reducing the MTTR part of it, is also important, I think environments are so complex now, that without a little bit of help from the tool, you are not going to be able to be very effective, it's going to take you longer, and this is also another reason why integrated tools are better, because they can provide you hints, looking at all the data, not just one type, not just necessarily logs, or not just necessarily traces, but they have access to the whole data set, and they can give you far better hints, and that's again one of the foundational principles, because this is in the emergent field of AIOps, where the idea is that we want to bring the power of data science, the power of machine learning, and to aid the operator in figuring out where a problem might be, so that they can at least take corrective action faster, not necessarily fix it, but at least bypass the problem, or take some kind of corrective action, and that's a theme that sort of goes across our suite of tools is, the question we ask ourselves is, "In every situation, what information could I have provided them, what kind of hints could we have provided them, to short circuit their resolution process?" >> It's funny you mention suite of tools, you have an Observability Suite, which Splunk leads with, as part of the series, it's funny, suite of tools, it's kind of like, you kind of don't want to say it, but it is kind of what's being discussed, it's kind of a platform and tool working together, and I think the trend seems to be, it used to be in the old days, you were a platform player or a tool player, really kind of couldn't do both, but now with cloud-native, as it's distributed computing, with all this importance around observability, you got to start thinking, suite has platform features, could you react to that, and how would you talk about that, because what does it mean to be a platform? Platforms have benefits, tools have benefits, working together implies it's a combination, could you share your thoughts on that reaction to that? >> That's a very interesting question you asked, John, so this is actually, if you asked me how I look at the solution set that we have, I will explain it thus. We are a platform, we are a set of products and tools, and we are an enterprise solution. And let me explain what I mean by that, because I think all of these matter, to somebody or the other. As a platform, you're like "How good am I in dealing with data?" Like ingesting data, analyzing data, alerting you, so those are the core foundational features that everybody has, these are the database-centric aspects of it, right? And if you look at a lot of organizations who have mature practices, they are looking for a platform, maybe it scales better than what they have, or whatnot, and they're looking for a platform, they know what to do, build out on top of that, right? 
But at the same time, a platform is not a product, 99% of our users, they're not going to make database calls to fetch and query data, they want an end to end, like a thing that they can use to say, "Monitor my Kubernetes," "Monitor my Elasticsearch," "Monitor my," you know, whatever other solution I may have. So then we build a bunch of products that are built on top of the platform, which provide sort of the usability, so where, it's very easy to get on, send the data, have built-in content, dashboard (indistinct), what have you, so that my day to day work is fast, because I'm not a observability engineer, I'm a software engineer working on something, and I want to use observability, make it easy for me, right? So that's sort of the product aspect of it. But then if you look at organizations that a little bit scale up, just a product is also not good enough. Now we're looking at a observability solution that's deployed in an enterprise, and there are many many products, many many teams, many many users, and then how can one be effective there? And if you look at what's important at that level, it's not the database aspect or the platform aspect, it's about how well can I manage it, do I have visibility into what I am sending, what my bill is, can I control against incorrect usage, do I have permissions to sort of control who can mess with my (indistinct) and so on, and so there's a bunch of layer of what we call enterprise capabilities that are important in an organizational setting. So I think in order to build something that's successful in this space, we have to think at all these three levels, right? And all of these are important, because in the end, it's how much value am I getting out of it, it's not just what's theoretically possible, what's really happening, and all of these are important in that context. >> And I think, Arijit, that's amazing masterclass right there, soundbite right there, and I think it's because the data also is important, if you're going to be busting down data silos, you need to have a horizontally scalable data observability space. You have to have access to the data, so I think the trend will be more integrated, clearly, and more versatile from a platform perspective, it has to be. >> Absolutely, absolutely. >> Well, we're certainly going to bring you back on our conversations when we have our events and/or our groups around digital transformation Under the Hood series that we're going to do, but great voice, great commentary, Arijit, thank you for sharing that knowledge with us, appreciate it. >> My pleasure, thank you very much. >> Okay, I'm John Furrier with theCUBE, here, Leading with Observability content series with Splunk, I'm John Furrier with theCUBE, thanks for watching. (calm music)
SUMMARY :
In this Leading with Observability conversation, John Furrier and Splunk distinguished engineer Arijit Mukherji go under the hood of the Splunk Observability Suite: why an observability strategy should start from business goals, the case for an integrated tool set over tool sprawl, open standards, full-fidelity (no-sample) data collection from the browser to the back end, real-time streaming analytics for fast detection, AIOps for faster resolution, and what it takes to be a platform, a product, and an enterprise solution at once.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John | PERSON | 0.99+ |
Arijit Mukherji | PERSON | 0.99+ |
five | QUANTITY | 0.99+ |
Arijit | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
10 minutes | QUANTITY | 0.99+ |
99% | QUANTITY | 0.99+ |
Splunk | ORGANIZATION | 0.99+ |
Boston | LOCATION | 0.99+ |
one screen | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
Under the Hood | TITLE | 0.99+ |
five minutes | QUANTITY | 0.98+ |
five different tools | QUANTITY | 0.98+ |
Splunk | PERSON | 0.98+ |
one system | QUANTITY | 0.97+ |
three nines | QUANTITY | 0.97+ |
two perspectives | QUANTITY | 0.97+ |
one way | QUANTITY | 0.97+ |
one | QUANTITY | 0.97+ |
Leading with Observability | TITLE | 0.97+ |
Splunk Observability Suite | TITLE | 0.96+ |
one whole month | QUANTITY | 0.96+ |
four nines | QUANTITY | 0.96+ |
one thing | QUANTITY | 0.96+ |
both | QUANTITY | 0.96+ |
single | QUANTITY | 0.95+ |
theCUBE | ORGANIZATION | 0.94+ |
four different tools | QUANTITY | 0.92+ |
one type | QUANTITY | 0.92+ |
three levels | QUANTITY | 0.9+ |
about four | QUANTITY | 0.89+ |
Under the Hood with Splunk Observability | TITLE | 0.89+ |
20 years | QUANTITY | 0.82+ |
single part | QUANTITY | 0.81+ |
CUBE Conversation | EVENT | 0.79+ |
Kubernetes | ORGANIZATION | 0.78+ |
page | QUANTITY | 0.76+ |
Leading with Observability | TITLE | 0.75+ |
one UI | QUANTITY | 0.73+ |
my Kubernetes | TITLE | 0.72+ |
with Observability | TITLE | 0.71+ |
Elasticsearch | TITLE | 0.69+ |
COVID | TITLE | 0.68+ |
single transaction | QUANTITY | 0.66+ |
Under the | TITLE | 0.66+ |
less than | QUANTITY | 0.6+ |
Conversation | EVENT | 0.54+ |
Hood | ORGANIZATION | 0.47+ |
Muddu Sudhakar, Investor | theCUBE on Cloud 2021
(gentle music) >> From the Cube Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is theCube Conversation. >> Hi everybody, this is Dave Vellante, we're back at Cube on Cloud, and with me is Muddu Sudhakar. He's a long-time alum of theCube, a technologist and executive, a serial entrepreneur and an investor. Welcome my friend, good to see you. >> Good to see you, Dave. Pleasure to be with you. Happy elections, I guess. >> Yeah, yeah. So I wanted to start — this work from home pivot's been amazing, and you've seen enterprise collaboration explode. I wrote a piece a couple months ago, looking at valuations of various companies, right around the Snowflake IPO — I want to ask you about that — but I was looking at the valuations of various companies, at Spotify, and Shopify, and of course Zoom was there. And I was looking at just simple revenue multiples, and I said, geez, Zoom actually might look undervalued, which is crazy, right? And of course the stock went up after that, and you see Teams, Microsoft Teams, and Microsoft doing a great job across the board — we've written about that — and you're seeing Webex exploding. I mean, what do you make of this whole enterprise collaboration play? >> No, I think, look, there is a trend here, right? So this trend probably started before COVID, but COVID is going to accelerate this whole digital transformation, right? People are going to work remotely a lot more, and not everybody's going to come back to the offices even after COVID, so I think this whole collaboration through Slack, and Zoom, and Microsoft Teams and Webex is going to be the new game now, right? Both the video, audio, and chat solutions — that's really where the eyeballs are going to be. You're not going to spend time on all four of them, right? It's like every day, from the consumer side, you're going to spend time on your Gmail, Facebook, maybe Twitter, maybe Instagram — so just like on the consumer side, in your personal life, you'll have something similar on the enterprise side. The eyeballs are going to be in these platforms. >> Yeah. Well. >> But we're not going to take everything. >> Well, so you are right, there's a permanence to this, and I've got a lot of ground to cover with you. And I always like our conversations, Muddu, because you tell it like it is. I'm going to stay on that work from home pivot. You know a lot about security, and you've seen three big trends, like mega trends, in security: Endpoint, Identity Access Management, and Cloud Security. You're seeing this in the stock prices of companies like CrowdStrike, Zscaler, Okta- >> Right >> Sailpoint- >> Right, I mean, they exploded as a result of the pandemic, and I think I'm inferring from your comment that you see that as permanent, but that's a real challenge from a security standpoint. What's the impact of Cloud there? >> No, it is an impact, but look, first of all, these services are required to be in the Cloud, right? See, the whole idea is for it to collaborate and do these things. So you cannot be running an application — you can't be running Confluence and SharePoint On-prem and try to collaborate on Zoom and MS Teams. So that's why, if you look at Microsoft, they're very clever: they went with Office 365, SharePoint 365, and now they have MS Teams. So I think that Cloud is going to drive all these workloads that you have been talking about a lot, right? You and John have been saying this for years now. The eruption of Cloud and SaaS services is the vehicle to drive this next-generation collaboration.
>> You know what's so cool? So Cloud obviously is the topic — I wonder how you look at the last 10 years of Cloud, and maybe we could project forward. I mean, the big three Cloud vendors, they're running at like $20 billion a quarter, and they're growing collectively at 35, 40% clips, so we're really approaching a hundred billion dollars for these three. And you hear stats like only 20% of the workloads are in the public Cloud, so it feels like we're just getting started. How do you look at the impact of Cloud on the market, as you say, the last 10 years, and what do you expect going forward? >> No, I think it's very fascinating, right? So I remember when you guys at theCube were talking about this 10 years back — now it's been, what, more than 10 years, 15 years, since AWS came out with their first S3 service back in 2006. >> Right. >> Right? So I think, look, Cloud is going to accelerate even further, and the areas it's going to accelerate in are for different reasons. In the initial days it was all about startups, initial workloads, dev test and QA test; now you're talking about real production workloads moving towards Cloud, right? Initially it was backup — we really didn't care what backup they put there. Now you're going to have Cloud as the primary service: your primary storage will be there, it's not going to be an EMC, it's not going to be a NetApp storage, right? So workloads are going to shift to the business applications, and these business applications will be running on the Cloud. And I'll make another prediction: customer service and support. Customer service and support, again, will be running on the Cloud. You're not going to want to run the thing on a Dell server, or an IBM server, or an HP server, in your own hosted environment. That model won't hold, because there are no economies of scale. So to your point, what will drive Cloud for the next 10 years will be economies of scale. Where can you take out the cost? How can I save money? If you don't move to the Cloud, you won't save money. So all the workloads that are going to go to the Cloud are from people who really want to save, like global gradual custom, right? If you stay on the ASP model, hosted, you're not going to save on your costs — your costs will constantly go up, from a SaaS perspective. >> So that doesn't bode well for all the On-prem guys, and you hear a lot of the vendors that don't own a Cloud talk about repatriation, but the numbers don't support that. So what do those guys do? I mean, they're talking multi-Cloud, of course they're talking hybrid — that's IBM's big play — how do you see it? >> I think, look, to me, multi-Cloud makes sense, right? You don't want to get locked into one vendor, so having Amazon, Microsoft, Google gives you a multi-Cloud. Even hybrid Cloud does make sense, right? There'll be some workloads — it's like, we are still running On-prem environments, we still have mainframe — so it's never going to be a hundred percent. But I would say the majority — your question is, can we get to 60, 70, 80% of workloads in the next 10 years? I think you will. I think by 2025, over the next five years, more than 70% of enterprise workloads will be on the Cloud. The remaining 25, 30% maybe Hybrid, maybe On-prem, but frankly it really doesn't matter. You will have saved, and the bulk of your business is running on the Cloud. That's your cost saving, that's where you'll see the economies of scale, and that's where all the growth will happen.
>> So square the circle for me, because again, you hear the IDC stat — IBM's Ginni Rometty puts it out there a lot — that only 20% of the workloads are in the public Cloud and everything else is On-prem. But it's not a zero sum game, right? I mean, the Cloud-native stuff is growing like crazy, the On-prem stuff is flat to down, so what's going to happen? When you talk about 70% of the workloads being in the Cloud, do you see those mission-critical apps moving into the Cloud? I mean, are the insurance companies going to put their claims apps in the Cloud, are the financial services companies going to put their mission-critical workloads in the Cloud, or are they just going to develop new stuff that's Cloud-native that sort of interacts with the On-prem? How do you see that playing out? >> Yeah, no, absolutely — a very good question. So two things will happen. If you take an enterprise, right, most businesses, what they'll do is, the workloads that they should not be running On-prem, they'll move out. So obviously things like — as I said, I use the word SharePoint, right? — SharePoint and Confluence, all the knowledge stuff, are still running in people's data centers. There's no reason for that. I've seen statistics that 70, 80% of On-prem SharePoint will move to SharePoint on the Cloud. So Microsoft is going to make tons of money on that, right? Same thing with databases, right? Whether it's SQL Server, whether it's an Oracle database — things that you are running as a database will move to the Cloud, whether that is hosted in Oracle Cloud, or you're running MongoDB or DynamoDB on AWS, or SQL Server on Microsoft. That's going to happen. Then what you're talking about is really the App concept — the applications themselves, the App server. Is the App server going to run On-prem, and how much is going to migrate out? There may be a hybrid Cloud there — like, for example, Kafka: I may be using Kafka as a service, or I may be using Elasticsearch for my indexing on AWS or Google Cloud, but I may be running my App locally. So there'll be some hybrid pieces, but what I would say is, for every application, 75% of your components will be on the Cloud. So think of it like that. Even for the On-prem app, you're not going to be 100 percent On-prem. The components, the bill of materials, will move to the Cloud — your compute, your storage — because if you keep it On-prem, you need to add all of this yourself, you need to buy the whole thing and hire the people. So that's what is going to happen: from a component perspective, 70% of your bill of materials will move to the Cloud, even for an On-prem application. >> So, of course, the SaaS-ification of the industry in the last decade — and my three favorite companies of the last decade, you've worked for two of them: Tableau, ServiceNow, and Splunk. I want to ask you about those, but I'm interested in the potential disruption there. I mean, you've got these SaaS companies — Salesforce of course is another one, and they kind of got started in 1999. What do you see happening with those? I mean, we're basically building these sort of large SaaS platforms now. Do you think that in the Cloud-native world developers can come at this from an angle where they can disrupt those companies, or are they too entrenched?
I mean, look at ServiceNow — I mean, I don't know, $80 billion market cap or wherever they are — they're bigger than Workday. I mean, it's just amazing how much they've grown, and you feel like, okay, nothing can stop them, but there's always disruption in this industry. What are your thoughts on that? >> No, very good question — I think they'll be disrupted. So to your point, ServiceNow is now close to 100 billion, a 95 billion market cap, crazy. So from a valuation perspective — I think the reason they'll be disrupted is that the SaaS vendors that you talked about, ServiceNow and all of them, most of these services are truly not multi-tenant, or what you'd call Cloud Native. And that is the crux of it. Because of that, they will not be able to pass the savings back to the enterprises. So the cost economics, the economics that the Cloud provides because of multi-tenancy, will not be there. The second reason they'll be disrupted is AI. So far we talked about Cloud, but AI is the core. So it's not really just Cloud Native, Dave — I look at it in two pieces. AI is going to change things. See, all the SaaS vendors were created 20 years back, if you remember — it was an operator typing things in, an IT administrator would type a Splunk query. I don't need a human to type a query anymore, the system will actually find it — that's how the whole security game has changed, right? So what's going to happen is, if you believe in that, AI at the core will disrupt all the SaaS vendors. So one angle SaaS is going to have is the Cloud — that's where the Cloud will take off, because a SaaS application will be Cloudified. Being SaaS is not being Cloud, right? The second thing is SaaS will also be, I call it, AI-fied. So AI and machine learning will be driving at the core so that I don't need that many licenses, I don't need that many humans, I don't need that many administrators to manage — I call them the tuners. Once you get a driverless car, you don't need a thousand tuners to tune your Tesla or Google Waymo car. So the same philosophy will happen with your DevOps, your administrators, your service management — the people that you need for ServiceNow and these products, Zendesk — with AI, that will be tremendously disrupted. >> So you're saying — okay, so yeah, I was going to ask you, won't the SaaS vendors be able to just inject AI into their platforms? And I guess I'm inferring you're saying, yeah, but a lot of the problems that they're solving are going to go away because of AI, is that right? And automation and RPA and things of that nature, is that right? >> Yes and no. So I'll tell you what — sorry, you have asked a very good question, so let me rephrase that question. What you're saying is, "Why can't the existing SaaS vendors do the AI?" >> Yes, right. >> Right. >> And the reason they can't do it is that their pricing model is by number of seats. So I'm not going to come to Dave and say, come on, come pay me less money. It's the same reason why Ford and General Motors never built an electric car — they're selling 10 million gasoline cars, so there's no incentive for me. I'm not going to do any AI; I'm not going to come to you and say, hey, buy a hundred fewer licenses from me next year. So that is one reason why, even though these guys do some AI, it's going to be just — I call it, what do you call it, a whitewash — kind of like you put a paintbrush on it, trying to show some AI you did for marketing dynamics.
But at the core, if you really implement the AI, where you take the driver out, how are you going to change the pricing model? And being a public company, you've got to take a hit on the pricing model and the price, and it's going to have an impact on the stock. So that, to your earlier question — will somebody disrupt them? The person who is going to disrupt them will disrupt them on the pricing model. >> Right. So I want to ask you about that, because we saw Snowflake and its IPO, we were able to pore through its S-1, and they have a different pricing model. It's a true Cloud consumption model, whereas of course most SaaS companies are going to lock you in for at least a one-year term, maybe more, and then you buy the license, you've got to pay X — if you don't use it, you still have to pay for it. Snowflake's different — actually, they have a different problem, that people are using it too much, and that's driving the CFO crazy because the bill is going up and up and up. But to me, that's the right model, it's just like the Amazon model, if you can justify it. So how do you see the pricing, that consumption model? Actually, you're seeing some of the On-prem guys, HPE, Dell, doing as-a-service — they're kind of taking a page out of the last decade's SaaS model — so I think pricing is a real tricky one, isn't it? >> No, you nailed it, you nailed it. So I think the way in which Snowflake did it — how they disrupted the data warehouse — they disrupted the open source vendors too. Imagine the playbook: you disrupted something that was at $0, right? It was open source with Cloudera, Hortonworks, MapR — that whole big data market — and they're disrupting data warehouses like Netezza, Teradata, and they're charging more money, making more money, while disrupting something at $0, because the pricing model is by consumption, as you talked about. The same thing is going to happen with ServiceNow, Zendesk, because their pricing model is by number of seats. People are going to ask, "How are my users going to use it?" right? If you're an employee help desk, you're back to your original collaboration point — I may be on Slack, I could be on Zoom, I may be on MS Teams. Paying by usage — a usage model from Slack and these tools, driven by employees, into ServiceNow — is the pricing model that people want to pay for. But I don't want to pay by number of seats. So the vendor who's going to figure that out — and that's where, if you know me, that's the model I've tried to push from the start — I love that, because that's the core of how you want to change the new game.
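As a rough sketch of the pricing-model contrast being described — seat-based versus consumption-based billing — here is a toy comparison. Every number below is hypothetical, purely to show that one bill is fixed per seat while the other tracks actual usage.

```python
# Seat-based: pay per licensed user, whether or not they use the product.
seats = 500
price_per_seat_per_month = 100.0
seat_based_bill = seats * price_per_seat_per_month

# Consumption-based: pay only for what is actually used (requests, queries, credits).
requests_per_month = 1_200_000
price_per_request = 0.02
consumption_bill = requests_per_month * price_per_request

print(f"Seat-based bill:        ${seat_based_bill:>12,.2f} / month (fixed)")
print(f"Consumption-based bill: ${consumption_bill:>12,.2f} / month (scales with usage)")

# If usage drops by half, the consumption bill drops by half;
# the seat-based bill does not move until the contract is renegotiated.
print(f"At half the usage:      ${requests_per_month / 2 * price_per_request:>12,.2f} / month")
```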
Should I be mono-Cloud, multi-Cloud, multi-vendor, what would you advise? >> Yeah, I actually call it the tech stack. You and John taught me what the tech stack was, like the LAMP stack, and I think a new Cloud stack needs to come, and the bottom line there should be this. First of all, anything with storage should be in the Cloud. It doesn't matter whether you're in financial services or anywhere else, there's no way around it. I come from the cybersecurity side, I've seen it. Your attacks will come more from insiders than from being on the Cloud, so storage has to be in the Cloud. Then comes compute, Kubernetes. If you really want to use containers and Kubernetes, it has to be in the public Cloud, leverage the compute they have there. On databases, if your data gravity is so strong, maybe run it On-prem, maybe have it on a hosted model, but there you have a choice between a hybrid Cloud and a public Cloud approach. Then on top, when it comes to the app itself, you can run it locally or anywhere, the app and the database. Now, the areas you really want to go after to migrate are any enterprise workloads that you don't need people to manage. You want your own team to move up in their careers. You don't want a thousand people, for example IT administrators acting as the central staff, managing your compute and storage. Those workloads should move, right? You already saw Siebel move out to Salesforce. We saw collaboration already move out, Zoom is not running locally. You already saw SharePoint and knowledge management move up, with Box, Dropbox, you name it. The next wave to move is the SaaS workloads, right? Workday's service is running there, but Workday will go further onto the public Cloud. I bet at some point Zendesk and ServiceNow either put it on the public Cloud or they have to create a product on the public Cloud. To your point, these public Cloud vendors are at a $2 trillion market cap. They're bigger than, I call them, nation states. >> Yeah. >> So if I'm ServiceNow, I mean, there's a $2 trillion market cap between Amazon and Azure, I'm not going to compete with them. So I want to take this workload and run it there. So all these vendors, and that's where Shantanu from Adobe is pushing this, right, Adobe, Workday, Anaplan, all the SaaS vendors will move onto the public Cloud with these vendors. So those workloads need to move out, right? Once all those things start, then you'll start migrating what I call your procurement. That's where the RPA comes in. The other thing that we didn't talk about, back to your first question, is that the next 10 years of Cloud will be RPA. The third piece of Cloud is RPA, because if you have your systems On-prem, I can't automate them. I have to VPN into your house and then try to automate your systems, or your procurement, et cetera. So all these RPA vendors are still running On-prem, most of them, whether it's UiPath or Automation Anywhere. The Cloud should be where the brain is. That's what I call the octopus analogy, the brain is in the Cloud, the tentacles are everywhere, and they should manage it. But if my tentacles have to do a VPN into your house to manage it, I'm always going to have failures. So if you look at why RPA did not have the growth like Snowflake, like the Cloud, it's because they are running it On-prem, most of them still.
80% of the RPA revenue is On-prem, running On-prem, and that needs to be Cloudified. So AI, RPA and SaaS are the three reasons Cloud will take off. >> Awesome. Thank you for that. Now I want to flip the switch again. You're an investor, a multi-tool player here. So let's say you're an ecosystem player, and you're looking at the landscape as an investor. Of course you've invested in the Cloud, because the Cloud is where it's at, but you've got to be careful as an ecosystem player to pick a spot that both provides growth and allows you to have a moat. I mean, that's why I'm really curious to see how Snowflake's going to compete, because they're competing with AWS, Microsoft, and Google. Unlike Frank, when he was at ServiceNow, competing with BMC and with on-prem, and he crushed it, the competitors here are much more capable. But it seems like maybe they've got a moat with multi-Cloud and that whole data sharing thing, we'll see. But what about that? Where are the opportunities? Where's that white space? And I know there's a lot of white space, but what's the framework to look at, from an investor standpoint or even a CEO standpoint, for where you want to place your bets? >> No, very good question. So look, I sit as an investor on the board with many companies, right? One thing I'll say as an investor: if you come back and say, I want to create a next-generation Docker or a compute platform, nobody's going to invest. That one you can rule out. Even if you want to do object storage or block storage, and I've been an investor and board member of so many storage companies, there's no way as an industry I'll write a check for compute or storage, right? If you want to create a next-generation network, like a new NetScreen, or restart Juniper or Cisco, there is no way. But if you come back and say, I want to create a next-generation network for remote working environments, where AI is at the core, I'm interested in that, right? Because if you look at how packets are dropped, there's no intelligence in the routing or switching today. The packets come, I forward them. The intelligence is not built into the network at the AI level. So if somebody comes with AI, what good are all these NVIDIA GPUs, et cetera, if you cannot do wire-speed packet inspection, looking at the content and then routing the traffic? If I see it's a video packet, and you in Boston have a higher tier of service, your packets should be delivered faster, because you are a premium ISP customer. That intelligence has not gone in there. So you will see a battle happen in the network, in switching, et cetera, right? So that is still an angle. But when it comes to platform services, remember when I was at Pivotal and VMware, Paul Maritz was my boss, platform as a service is a game already won by the Cloud guys. >> Right. (indistinct) >> Silicon Valley investors, I don't think you want to invest in platform services, right? I mean, you might come in with some new edition of a database to do some updates, there could be some game there, let's say you want to do a time series database or some metrics database, there's always some small angle, but the opportunities to go create a brand new database there are very few. So I'm kind of eliminating all the black spaces, right? >> Yeah. >> The white space that remains is at the SaaS level. Now, to your point, if I'm Amazon, I'm going to compete with Snowflake, I have Redshift.
So this is where, at some point, these Cloud platforms, I call them aircraft carriers, they're not going to stay on the aircraft carriers, they're going to own the land as well. So they're going to move up into the SaaS space. The question is what kind of SaaS service they'll create, like CRM. They are not going to create a CRM-like service, they may not create a Salesforce or a ServiceNow, but if you're talking about a data warehouse, I can very well see Azure, Google, and AWS creating something to compete with Snowflake. Why would I not? It's so close to my database and data warehouse, I already have Redshift. It's the same reason, if you look at Netflix, you have Netflix and you have Amazon Prime. Netflix runs on Amazon, but you still have Amazon Prime. So you'll have the same model, you'll have Snowflake and you'll have Redshift. They'll both help each other, there'll be a, what do you call it, coexistence. But if you really want to invest, you want to invest in SaaS companies. You do not want to be investing in complementary players. You don't want to be a feature. >> Yeah, that's great, I appreciate that perspective. And I wonder, so obviously Microsoft plays in SaaS, Google's got G Suite. And people often ask Andy Jassy, are you going to move up the stack, do you have to be an application, a SaaS vendor, and you never say never with AWS. But I wonder, and we were talking to Jerry Chen about this years ago on theCube, and his angle was that Amazon will play, but they'll play through developers. They'll enable developers and they'll participate, they'll take their lick off the cone. So it's going to be interesting to see how directly Amazon plays, but at some point you've got TAM expansion, you've got to play in that space. >> Yeah, I'll give you an example. You know, I got acquired a couple of times by EMC, so I learned a lot from Joe Tucci and Paul Maritz over the years. See what Paul and Joe did over those 20 years, and they were very close to Boston in your area. What Joe did is, they used to sell storage, but you know what he did, he went and bought the apps to drive it. He bought Legato, he bought Documentum, he bought Captiva, if you remember how he acquired all these companies and services, and he bought VMware to drive that. So I think the good angle that Microsoft has is, I'm a SaaS player, I have Dynamics, I have CRM, I have SharePoint, I have collaboration, I have Office 365 and MS Teams for users, and then I have the platform in Azure. So I think if I'm Amazon, (indistinct), I've got to own the apps so that I can drive those workloads onto my platform. >> Interesting. >> Just going to developers, and I know Jerry Chen, he was my peer at VMware, I don't think you can go just to developers. That model works in open source, but the open source game is pretty much gone, and not too many companies made money. >> Well, >> Most of those companies are pretty much gone. >> Yeah, you're right. Red Hat's not a bad one, but it's very interesting what you're saying there. And so, hey, it's why Oracle wants to have TikTok running on their platform, right? I mean, it's going to (laughing) drive that further integration. I wanted to ask you something. You were talking about how you wouldn't invest in storage or compute, but I wonder, and you mentioned some commentary about GPUs. Of course NVIDIA has been going crazy, but they're now saying, okay, how do we expand our TAM, they make the acquisition of Arm, et cetera.
What about this DPU thing, if you follow that, the data processing unit, where they do this hyper-disaggregation and then they reaggregate, as an offload, really to drive data-centric workloads. Have you looked at that at all? >> I did, and I think that's a good angle. So look, it goes in cycles. I don't know if you remember, over our careers we have seen it. I go back to Silicon Graphics. I saw the first graphics GPU, right? At that time the GPU was more of a graphics processor unit. >> Right, yeah, workstations. >> Then came NPUs, network processing units, right? There was TCP/IP offloading, if you remember, there were vector processing units. So every once in a while the industry recreates this separate unit as a co-processor to the main CPU, because the main CPU is inefficient at it, and it makes sense. Then Google created TPUs, then we have the new world of NVIDIA GPUs, and now we have DPUs. All these are good, but what's happening is that all of these are built for machine learning, for the AI training side. Training sometimes takes so long with these workloads that if you can cut it down, it makes sense. >> Yeah. >> But the question is, these are so specialized in nature, I can't use them for everything. >> Yup. >> Ideally, I want the algorithms to be parallelized, I want the training to be parallelized. So having DPUs and GPUs is important, but where I want to see more is on the algorithm side. There should be more investment from the NVIDIAs and these guys in taking the algorithms and making them highly parallelized. (indistinct) And I think that still has not happened in the industry yet. >> All right, so we're pretty much out of time, but what are you doing these days? Where are you spending your time, are you still in stealth, give us a little glimpse. >> Yeah, no, I'm out of stealth, I'm actually the CEO of Aisera now. Aisera, obviously I invested in them, but I'm the CEO of Aisera. It's funded by Menlo Ventures, Norwest, True Ventures, along with Khosla Ventures, and Ram Shriram is a big investor, he's on the board of Google. So these guys, look, we are going after the collaboration game. How do you automate customer service and support for employees and then users, right? In this whole game we talked about with Zoom, Slack and MS Teams, that's where I'm spending my time. I want to create the next-generation ServiceNow. >> Fantastic. Muddu, I always love having you on. You don't pull punches, you tell it like it is, and you're a great visionary technologist. Thanks so much for coming on theCube and participating in our program. >> Dave, it's always a pleasure speaking to you, sir. Thank you. >> Okay. Keep it right there, there's more coming from Cube on Cloud right after this break. (slow music)
Rahul Pathak, AWS | AWS re:Invent 2020
>>from around the globe. It's the Cube with digital coverage of AWS reinvent 2020 sponsored by Intel and AWS. Yeah, welcome back to the cubes. Ongoing coverage of AWS reinvent virtual Cuba's Gone Virtual along with most events these days are all events and continues to bring our digital coverage of reinvent With me is Rahul Pathak, who is the vice president of analytics at AWS A Ro. It's great to see you again. Welcome. And thanks for joining the program. >>They have Great co two and always a pleasure. Thanks for having me on. >>You're very welcome. Before we get into your leadership discussion, I want to talk about some of the things that AWS has announced. Uh, in the early parts of reinvent, I want to start with a glue elastic views. Very notable announcement allowing people to, you know, essentially share data across different data stores. Maybe tell us a little bit more about glue. Elastic view is kind of where the name came from and what the implication is, >>Uh, sure. So, yeah, we're really excited about blue elastic views and, you know, as you mentioned, the idea is to make it easy for customers to combine and use data from a variety of different sources and pull them together into one or many targets. And the reason for it is that you know we're really seeing customers adopt what we're calling a lake house architectural, which is, uh, at its core Data Lake for making sense of data and integrating it across different silos, uh, typically integrated with the data warehouse, and not just that, but also a range of other purpose. Both stores like Aurora, Relation of Workloads or dynamodb for non relational ones. And while customers typically get a lot of benefit from using purpose built stores because you get the best possible functionality, performance and scale forgiven use case, you often want to combine data across them to get a holistic view of what's happening in your business or with your customers. And before glue elastic views, customers would have to either use E. T. L or data integration software, or they have to write custom code that could be complex to manage, and I could be are prone and tough to change. And so, with elastic views, you can now use sequel to define a view across multiple data sources pick one or many targets. And then the system will actually monitor the sources for changes and propagate them into the targets in near real time. And it manages the anti pipeline and can notify operators if if anything, changes. And so the you know the components of the name are pretty straightforward. Blues are survivalists E T Elling data integration service on blue elastic views about our about data integration their views because you could define these virtual tables using sequel and then elastic because it's several lists and will scale up and down to deal with the propagation of changes. So we're really excited about it, and customers are as well. >>Okay, great. So my understanding is I'm gonna be able to take what's called what the parlance of materialized views, which in my laypersons terms assumes I'm gonna run a query on the database and take that subset. And then I'm gonna be ableto thio. Copy that and move it to another data store. And then you're gonna automatically keep track of the changes and keep everything up to date. Is that right? >>Yes. That's exactly right. So you can imagine. So you had a product catalog for example, that's being updated in dynamodb, and you can create a view that will move that to Amazon Elasticsearch service. 
You could search through a current version of your catalog, and we will monitor your DynamoDB tables for any changes and make sure those are all propagated in near real time. And all of that is taken care of for our customers as soon as they've defined the view; the data is just kept in sync as long as the view is in effect. >> Let's see, this would be really valuable for a person who's building, I like to think in terms of data services or data products that are gonna help me, you know, monetize my business. Maybe it's as simple as a dashboard, but maybe it's actually a product. It might be some content that I want to develop, and I've got transaction systems, I've got unstructured data, maybe in a NoSQL database, and I wanna actually combine those, build new products, and I want to do that quickly. So take me through what I would have to do. You sort of alluded to it with, you know, a lot of ETL, but take me through in a little bit more detail how I would do that before this innovation, and maybe you could give us a sense as to what the possibilities are with Glue Elastic Views. >> Sure. So before we announced Elastic Views, a customer would typically have to think about using ETL software, so they'd have to write an ETL pipeline that would extract data periodically from a range of sources. They'd then have to write transformation code that would do things like match up types, make sure you didn't have any invalid values, and then combine it and periodically write it into a target. And once you've got that pipeline set up, you've got to monitor it. If you see an unusual spike in data volume, you might have to add more resources to the pipeline to make it complete on time. And then, if anything changed in either the source or the destination that prevented that data from flowing the way you would expect, you'd have to manually figure that out, and have data quality checks and all of that in place to make sure everything kept working. With Elastic Views it just gets much simpler. Instead of having to write custom transformation code, you write a view using SQL, and SQL is, as you well know, widely popular with data analysts and folks that work with data. So you define that view in SQL, the view looks across multiple sources, you pick your destination, and then Glue Elastic Views essentially monitors the sources for changes, as well as the source and the destination for any issues, for example, did the schema change, did the shape of the data change, is something briefly unavailable. It can monitor all of that and handle any errors it can recover from automatically. Or, if it can't, say someone dropped an important table in the source that was part of your view, you can actually get alerted and notified to take some action, to prevent bad data from getting through your system or to prevent your pipeline from breaking without your knowledge. And then the final piece is the elasticity of it. It will automatically deal with adding more resources if, for example, you had a spiky day in the markets, maybe you're building a financial services application and you needed to add more resources to process those changes into your targets more quickly. The system would handle that for you. And then, if you're monetizing data services on the back end, you've got a range of options for folks subscribing to those targets.
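To make the "write custom code" alternative Rahul describes concrete, here is a minimal sketch of the kind of hand-rolled change propagation that a managed view is meant to replace: tailing a DynamoDB Stream for a product catalog and mirroring changes into an Elasticsearch index. The table name, key attribute, index name, and endpoint are assumptions, authentication is omitted, and a real pipeline would also need concurrent shard polling, checkpointing, retries, and schema checks, which is exactly the operational burden being automated away.

```python
# Hand-rolled change propagation: DynamoDB Streams -> Elasticsearch.
# Illustrative only; table, key, index, and endpoint names are assumptions,
# and AWS request signing / auth for the search endpoint is omitted.
import time
import boto3
from elasticsearch import Elasticsearch  # 7.x-style client

STREAM_ARN = "arn:aws:dynamodb:us-east-1:123456789012:table/ProductCatalog/stream/2020-12-01T00:00:00.000"
es = Elasticsearch(["https://search-example-domain.us-east-1.es.amazonaws.com:443"])
streams = boto3.client("dynamodbstreams", region_name="us-east-1")

def shard_iterators(stream_arn):
    """Yield a LATEST iterator for each shard currently in the stream."""
    description = streams.describe_stream(StreamArn=stream_arn)["StreamDescription"]
    for shard in description["Shards"]:
        yield streams.get_shard_iterator(
            StreamArn=stream_arn,
            ShardId=shard["ShardId"],
            ShardIteratorType="LATEST",
        )["ShardIterator"]

def apply_change(record):
    """Mirror one DynamoDB change event into the search index."""
    key = record["dynamodb"]["Keys"]["sku"]["S"]  # assumed partition key
    if record["eventName"] in ("INSERT", "MODIFY"):
        # Naively flatten DynamoDB attribute-value maps ({"S": "x"} -> "x").
        doc = {k: list(v.values())[0] for k, v in record["dynamodb"]["NewImage"].items()}
        es.index(index="product-catalog", id=key, body=doc)
    elif record["eventName"] == "REMOVE":
        es.delete(index="product-catalog", id=key, ignore=[404])

# Processes shards one at a time for simplicity; production code would poll
# all shards concurrently and persist its position.
for iterator in shard_iterators(STREAM_ARN):
    while iterator:
        resp = streams.get_records(ShardIterator=iterator, Limit=100)
        for record in resp["Records"]:
            apply_change(record)
        iterator = resp.get("NextShardIterator")
        time.sleep(1)
```

Everything in that loop, shard discovery, type coercion, error handling, scaling, is what the SQL-defined view takes over in the managed approach described above.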
So we've got capabilities like our, uh, Amazon data exchange, where people can exchange and monetize data set. So it allows this and to end flow in a much more straightforward way. It was possible before >>awesome. So a lot of automation, especially if something goes wrong. So something goes wrong. You can automatically recover. And if for whatever reason, you can't what happens? You quite ask the system and and let the operator No. Hey, there's an issue. You gotta go fix it. How does that work? >>Yes, exactly. Right. So if we can recover, say, for example, you can you know that for a short period of time, you can't read the target database. The system will keep trying until it can get through. But say someone dropped a column from your source. That was a key part of your ultimate view and destination. You just can't proceed at that point. So the pipeline stops and then we notify using a PS or an SMS alert eso that programmatic action can be taken. So this effectively provides a really great way to enforce the integrity of data that's going between the sources and the targets. >>All right, make it kindergarten proof of it. So let's talk about another innovation. You guys announced quicksight que, uh, kind of speaking to the machine in my natural language, but but give us some more detail there. What is quicksight Q and and how doe I interact with it. What What kind of questions can I ask it >>so quick? Like you is essentially a deep, learning based semantic model of your data that allows you to ask natural language questions in your dashboard so you'll get a search bar in your quick side dashboard and quick site is our service B I service. That makes it really easy to provide rich dashboards. Whoever needs them in the organization on what Q does is it's automatically developing relationships between the entities in your data, and it's able to actually reason about the questions you ask. So unlike earlier natural language systems, where you have to pre define your models, you have to pre define all the calculations that you might ask the system to do on your behalf. Q can actually figure it out. So you can say Show me the top five categories for sales in California and it'll look in your data and figure out what that is and will prevent. It will present you with how it parse that question, and there will, in line in seconds, pop up a dashboard of what you asked and actually automatically try and take a chart or visualization for that data. That makes sense, and you could then start to refine it further and say, How does this compare to what happened in New York? And we'll be able to figure out that you're tryingto overlay those two data sets and it'll add them. And unlike other systems, it doesn't need to have all of those things pre defined. It's able to reason about it because it's building a model of what your data means on the flight and we pre trained it across a variety of different domains So you can ask a question about sales or HR or any of that on another great part accused that when it presents to you what it's parsed, you're actually able toe correct it if it needs it and provide feedback to the system. So, for example, if it got something slightly off you could actually select from a drop down and then it will remember your selection for the next time on it will get better as you use it. >>I saw a demo on in Swamis Keynote on December 8. 
That was basically you were able to ask Quick psych you the same question, but in different ways, you know, like compare California in New York or and then the data comes up or give me the top, you know, five. And then the California, New York, the same exact data. So so is that how I kind of can can check and see if the answer that I'm getting back is correct is ask different questions. I don't have to know. The schema is what you're saying. I have to have knowledge of that is the user I can. I can triangulate from different angles and then look and see if that's correct. Is that is that how you verify or there are other ways? >>Eso That's one way to verify. You could definitely ask the same question a couple of different ways and ensure you're seeing the same results. I think the third option would be toe, uh, you know, potentially click and drill and filter down into that data through the dash one on, then the you know, the other step would be at data ingestion Time. Typically, data pipelines will have some quality controls, but when you're interacting with Q, I think the ability to ask the question multiple ways and make sure that you're getting the same result is a perfectly reasonable way to validate. >>You know what I like about that answer that you just gave, and I wonder if I could get your opinion on this because you're you've been in this business for a while? You work with a lot of customers is if you think about our operational systems, you know things like sales or E r. P systems. We've contextualized them. In other words, the business lines have inject context into the system. I mean, they kind of own it, if you will. They own the data when I put in quotes, but they do. They feel like they're responsible for it. There's not this constant argument because it's their data. It seems to me that if you look back in the last 10 years, ah, lot of the the data architecture has been sort of generis ized. In other words, the experts. Whether it's the data engineer, the quality engineer, they don't really have the business context. But the example that you just gave it the drill down to verify that the answer is correct. It seems to me, just in listening again to Swamis Keynote the other day is that you're really trying to put data in the hands of business users who have the context on the domain knowledge. And that seems to me to be a change in mindset that we're gonna see evolve over the next decade. I wonder if you could give me your thoughts on that change in the data architecture data mindset. >>David, I think you're absolutely right. I mean, we see this across all the customers that we speak with there's there's an increasing desire to get data broadly distributed into the hands of the organization in a well governed and controlled way. But customers want to give data to the folks that know what it means and know how they can take action on it to do something for the business, whether that's finding a new opportunity or looking for efficiencies. And I think, you know, we're seeing that increasingly, especially given the unpredictability that we've all gone through in 2020 customers are realizing that they need to get a lot more agile, and they need to get a lot more data about their business, their customers, because you've got to find ways to adapt quickly. And you know, that's not gonna change anytime in the future. >>And I've said many times in the The Cube, you know, there are industry. 
The technology industry used to be all about the products, and in the last decade it was really platforms, whether it's SAS platforms or AWS cloud platforms, and it seems like innovation in the coming years, in many respects is coming is gonna come from the ecosystem and the ability toe share data we've We've had some examples today and then But you hit on. You know, one of the key challenges, of course, is security and governance. And can you automate that if you will and protect? You know the users from doing things that you know, whether it's data access of corporate edicts for governance and compliance. How are you handling that challenge? >>That's a great question, and it's something that really emphasized in my leadership session. But the you know, the notion of what customers are doing and what we're seeing is that there's, uh, the Lake House architectural concept. So you've got a day late. Purpose build stores and customers are looking for easy data movement across those. And so we have things like blue elastic views or some of the other blue features we announced. But they're also looking for unified governance, and that's why we built it ws late formation. And the idea here is that it can quickly discover and catalog customer data assets and then allows customers to define granular access policies centrally around that data. And once you have defined that, it then sets customers free to give broader access to the data because they put the guardrails in place. They put the protections in place. So you know you can tag columns as being private so nobody can see them on gun were announced. We announced a couple of new capabilities where you can provide row based control. So only a certain set of users can see certain rose in the data, whereas a different set of users might only be able to see, you know, a different step. And so, by creating this fine grained but unified governance model, this actually sets customers free to give broader access to the data because they know that they're policies and compliance requirements are being met on it gets them out of the way of the analyst. For someone who can actually use the data to drive some value for the business, >>right? They could really focus on driving value. And I always talk about monetization. However monetization could be, you know, a generic term, for it could be saving lives, admission of the business or the or the organization I meant to ask you about acute customers in bed. Uh, looks like you into their own APs. >>Yes, absolutely so one of quick sites key strengths is its embed ability. And on then it's also serverless, so you could embed it at a really massive scale. And so we see customers, for example, like blackboard that's embedding quick side dashboards into information. It's providing the thousands of educators to provide data on the effectiveness of online learning. For example, on you could embed Q into that capability. So it's a really cool way to give a broad set of people the ability to ask questions of data without requiring them to be fluent in things like Sequel. >>If I ask you a question, we've talked a little bit about data movement. I think last year reinvent you guys announced our A three. I think it made general availability this year. And remember Andy speaking about it, talking about you know, the importance of having big enough pipes when you're moving, you know, data around. Of course you do. Doing tearing. 
You also announced Aqua Advanced Query accelerator, which kind of reduces bringing the computer. The data, I guess, is how I would think about that reducing that movement. But then we're talking about, you know, glue, elastic views you're copying and moving data. How are you ensuring you know, maintaining that that maximum performance for your customers. I mean, I know it's an architectural question, but as an analytics professional, you have toe be comfortable that that infrastructure is there. So how does what's A. W s general philosophy in that regard? >>So there's a few ways that we think about this, and you're absolutely right. I think there's data volumes were going up, and we're seeing customers going from terabytes, two petabytes and even people heading into the exabyte range. Uh, there's really a need to deliver performance at scale. And you know, the reality of customer architectures is that customers will use purpose built systems for different best in class use cases. And, you know, if you're trying to do a one size fits all thing, you're inevitably going to end up compromising somewhere. And so the reality is, is that customers will have more data. We're gonna want to get it to more people on. They're gonna want their analytics to be fast and cost effective. And so we look at strategies to enable all of this. So, for example, glue elastic views. It's about moving data, but it's about moving data efficiently. So What we do is we allow customers to define a view that represents the subset of their data they care about, and then we only look to move changes as efficiently as possible. So you're reducing the amount of data that needs to get moved and making sure it's focused on the essential. Similarly, with Aqua, what we've done, as you mentioned, is we've taken the compute down to the storage layer, and we're using our nitro chips to help with things like compression and encryption. And then we have F. P. J s in line to allow filtering an aggregation operation. So again, you're tryingto quickly and effectively get through as much data as you can so that you're only sending back what's relevant to the query that's being processed. And that again leads to more performance. If you can avoid reading a bite, you're going to speed up your queries. And that Awkward is trying to do. It's trying to push those operations down so that you're really reducing data as close to its origin as possible on focusing on what's essential. And that's what we're applying across our analytics portfolio. I would say one other piece we're focused on with performance is really about innovating across the stack. So you mentioned network performance. You know, we've got 100 gigabits per second throughout now, with the next 10 instances and then with things like Grab it on to your able to drive better price performance for customers, for general purpose workloads. So it's really innovating at all layers. >>It's amazing to watch it. I mean, you guys, it's a It's an incredible engineering challenge as you built this hyper distributed system. That's now, of course, going to the edge. I wanna come back to something you mentioned on do wanna hit on your leadership session as well. But you mentioned the one size fits all, uh, system. And I've asked Andy Jassy about this. I've had a discussion with many folks that because you're full and and of course, you mentioned the challenges you're gonna have to make tradeoffs if it's one size fits all. The flip side of that is okay. 
It's simple is you know, 11 of the Swiss Army knife of database, for example. But your philosophy is Amazon is you wanna have fine grained access and to the primitives in case the market changes you, you wanna be able to move quickly. So that puts more pressure on you to then simplify. You're not gonna build this big hairball abstraction layer. That's not what he gonna dio. Uh, you know, I think about, you know, layers and layers of paint. I live in a very old house. Eso your That's not your approach. So it puts greater pressure on on you to constantly listen to your customers, and and they're always saying, Hey, I want to simplify, simplify, simplify. We certainly again heard that in swamis presentation the other day, all about, you know, minimizing complexity. So that really is your trade office. It puts pressure on Amazon Engineering to continue to raise the bar on simplification. Isn't Is that a fair statement? >>Yeah, I think so. I mean, you know, I think any time we can do work, so our customers don't have to. I think that's a win for both of us. Um, you know, because I think we're delivering more value, and it makes it easier for our customers to get value from their data way. Absolutely believe in using the right tool for the right job. And you know you talked about an old house. You're not gonna build or renovate a house of the Swiss Army knife. It's just the wrong tool. It might work for small projects, but you're going to need something more specialized. The handle things that matter. It's and that is, uh, that's really what we see with that, you know, with that set of capabilities. So we want to provide customers with the best of both worlds. We want to give them purpose built tools so they don't have to compromise on performance or scale of functionality. And then we want to make it easy to use these together. Whether it's about data movement or things like Federated Queries, you can reach into each of them and through a single query and through a unified governance model. So it's all about stitching those together. >>Yeah, so far you've been on the right side of history. I think it serves you well on your customers. Well, I wanna come back to your leadership discussion, your your leadership session. What else could you tell us about? You know, what you covered there? >>So we we've actually had a bunch of innovations on the analytics tax. So some of the highlights are in m r, which is our managed spark. And to do service, we've been able to achieve 1.7 x better performance and open source with our spark runtime. So we've invested heavily in performance on now. EMR is also available for customers who are running and containerized environment. So we announced you Marnie chaos on then eh an integrated development environment and studio for you Marco D M R studio. So making it easier both for people at the infrastructure layer to run em are on their eks environments and make it available within their organizations but also simplifying life for data analysts and folks working with data so they can operate in that studio and not have toe mess with the details of the clusters underneath and then a bunch of innovation in red shift. We talked about Aqua already, but then we also announced data sharing for red Shift. So this makes it easy for red shift clusters to share data with other clusters without putting any load on the central producer cluster. 
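As an aside, the Redshift data sharing capability Rahul has just introduced maps onto a handful of SQL statements on the producer and consumer clusters. A rough sketch follows, assuming hypothetical cluster endpoints, namespace GUIDs, share and table names, and using a standard PostgreSQL driver to reach Redshift; the exact syntax should be checked against the current Redshift documentation.

```python
# Sketch of Redshift data sharing: a producer cluster publishes a schema,
# a consumer cluster queries it without putting load on the producer.
# Endpoints, namespace GUIDs, credentials, and object names are placeholders.
import psycopg2

PRODUCER_DSN = "host=producer.example.redshift.amazonaws.com port=5439 dbname=sales user=admin password=..."
CONSUMER_DSN = "host=consumer.example.redshift.amazonaws.com port=5439 dbname=dev user=admin password=..."
CONSUMER_NAMESPACE = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
PRODUCER_NAMESPACE = "11111111-2222-3333-4444-555555555555"

producer_sql = [
    "CREATE DATASHARE salesshare;",
    "ALTER DATASHARE salesshare ADD SCHEMA public;",
    "ALTER DATASHARE salesshare ADD TABLE public.daily_sales;",
    f"GRANT USAGE ON DATASHARE salesshare TO NAMESPACE '{CONSUMER_NAMESPACE}';",
]
consumer_sql = [
    f"CREATE DATABASE sales_from_producer FROM DATASHARE salesshare OF NAMESPACE '{PRODUCER_NAMESPACE}';",
    "SELECT region, SUM(amount) FROM sales_from_producer.public.daily_sales GROUP BY region;",
]

def run(dsn, statements):
    """Run each statement and print any rows it returns."""
    with psycopg2.connect(dsn) as conn:
        conn.autocommit = True
        with conn.cursor() as cur:
            for stmt in statements:
                cur.execute(stmt)
                if cur.description:  # the statement returned a result set
                    print(cur.fetchall())

run(PRODUCER_DSN, producer_sql)
run(CONSUMER_DSN, consumer_sql)
```

The design point being illustrated is that the consumer gets an always-current, queryable database object rather than a copy, which is why the producer sees no extra load.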
And this also speaks to the theme of simplifying getting data from point A to point B. So you could have central producer environments publishing data, which represents the source of truth, into other departments within the organization, and they can query the data and use it. It's always up to date, but it doesn't put any load on the producers, and that enables these really powerful data sharing and downstream data monetization capabilities like you've mentioned. In addition, as Swami mentioned in his keynote, Redshift ML, so you can now essentially train and run models that were built and optimized in SageMaker from within your Redshift clusters. And then we've also automated all of the performance tuning that's possible in Redshift. We really invested heavily in price performance, and now we've automated all of the things that make Redshift the best-in-class data warehouse service from a price performance perspective, up to three X better than others. Customers can just set Redshift to auto, and it'll handle workload management, data compression, and data distribution, making it easier to get all of that performance. And then the other big one was in Lake Formation. We announced three new capabilities. One is transactions, enabling consistent ACID transactions on data lakes, so you can do things like inserts and updates and deletes. We announced row-based filtering for fine-grained access control in that unified governance model, and then automated storage optimization for data lakes. So for customers dealing with unoptimized small files coming off streaming systems, for example, Lake Formation can auto-compact those under the covers, and you can get a 78x performance boost. It's been a busy year for analytics. >> I'll say that. Great job. Thanks so much for coming back on theCube and, you know, sharing the innovations, and, uh, great to see you again. And good luck in the coming year. >> Well, thank you very much. Great to be here. Great to see you, and I hope we get to see each other in person again soon. >> I hope so. All right. And thank you for watching, everybody, this is Dave Vellante for theCube, we'll be right back right after this short break.
Ed Walsh, ChaosSearch | AWS re:Invent 2020 Partner Network Day
>> Narrator: From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020. Special coverage sponsored by AWS Global Partner Network. >> Hello and welcome to theCUBE Virtual and our coverage of AWS re:Invent 2020, with special coverage of the APN partner experience. We are theCUBE Virtual and I'm your host, Justin Warren. And today I'm joined by Ed Walsh, CEO of ChaosSearch. Ed, welcome to theCUBE. >> Well, thank you for having me, I really appreciate it. >> Now, this is not your first time here on theCUBE. You're a regular here and I've loved having you back. >> I love the platform, you guys are great. >> So let's start off by just reminding people about what ChaosSearch is and what you do there. >> Sure, the best way to say it is that ChaosSearch helps our clients know better. We don't do that with a special wizard or a widget that you give to your SecOps teams. What we do is the hard work of giving you a data platform to get insights at scale, and we do that by achieving the promise of data lakes. So what we have is the Chaos data platform. It connects and indexes data in a customer's S3 or Glacier accounts, so inside your data lake, not our data lake, and renders that data fully searchable and available for analysis using your existing tools today, because what we do is index it and publish open APIs, like the Elasticsearch API, and soon SQL. So to give you an example, based upon those capabilities, we're an ideal replacement for commonly deployed Elasticsearch or ELK Stack deployments if you're hitting scale issues. We talk about scalable log analytics, and more and more people are hitting these scale issues. Let's say you're using Elasticsearch, ELK, or Amazon Elasticsearch and you're hitting scale issues. What I mean by that is you can't keep enough retention, you want longer retention, or it's getting very expensive to keep that retention, or at the scale you've hit you have availability problems, where the cluster is hard to keep up and running or is crashing. That's what we mean by issues at scale. And what we do, simply, is that because we're publishing the open API of Elasticsearch, you use all your existing tools, but we save you about 80% off your monthly bill. We also give you, and it's an "and" statement, unlimited retention, as much as you want to keep on S3 or in Glacier. And we take care of all the hassles and management and the time to manage these clusters, which end up sitting on a database engine called Lucene; we take care of that as a managed service. And probably the biggest thing is, all of this without changing anything your end users are using. So we include Kibana, but imagine it's an Elastic API. So if you're using the API or Kibana, it's easy to use the exact same tools you use today, but you get the benefits of a true data lake. In fact, we're running Elasticsearch on top of S3 natively, if that makes sense. >> Right, and natively is pretty cool. And look, 80% savings is a dramatic number, particularly this year. I think there's a lot of people who are looking to save a few quid, so it'd be very nice to be able to save up to 80%. I am curious as to how you're able to achieve that kind of saving, though. >> Yeah, you won't be the first person to ask me that. So listen, Elastic came around, you know, we had Splunk and we also have a lot of Splunk clients, but Elastic was a more cost-effective, open source solution to go after the same problem.
But here's what happens, especially at scale. It's actually very cost-effective when it's small, but underneath, Elastic's tech, the ELK Stack, is a Lucene database, it's database technology. And that sits on servers that are heavy on memory, heavy on CPU count, and on SSDs. You can do it on-prem or in the cloud, so if you do it on Amazon, basically you're spinning up a server and it stays up, it doesn't spin up and spin down. And those clusters are not one server, it's a cluster of servers. And typically, if you have any scale, you actually have multiple clusters, because you don't dare put it all on one, for different use cases. So our savings come from the fact that you no longer need those servers spun up, and you don't need to pay for the machines underneath. You can still use Kibana and the API, but literally it's 80% off the bill you're paying for your service now, and it's hard dollars. We typically see clients between 70 and 80%, it's up to 80, but it's right within a 10% margin, so you're saving a lot of money. More importantly, saving money is a great thing, but now you also have one unified data lake. You used to only be able to go across some of the data; now it's all the data, and through role-based access you can give different people different views. We've seen people say, hey, give that person 40 days of this data, but the SecOps team gets to see across all the different logs, all the machine-generated data they have. And I can give you a couple of examples of that and walk you through how people deploy it if you want. >> I'm always keen to hear specific examples of how customers are doing things. And it's nice that you've drawn that comparison there around what cloud is good for and what it isn't. I often like to say that AWS is cheap to fail in, but expensive to succeed in. So when people are actually succeeding with this and using this broad amount of data, what you're saying there with that savings is that I've actually got access to a lot more data that I can do things with. So yeah, if you could walk through a couple of examples of what people are doing with this increased amount of data that they have access to in ChaosSearch, what are some of the things that people are now able to unlock with that data? >> Well, it's always good to go through customer examples, and we can go through them however you'd like: Klarna, Blackboard, Alert Logic, Armor Security, HubSpot. Maybe I'll start with HubSpot, one of our good clients. They were indexing Cloudflare data, that was one of the clusters they were using a lot to search, and they were looking at it for denial of service. And, as we find with everyone at scale, they got limited. They were down to five days of retention. Why? It's not that they meant to, but basically they couldn't cost-effectively handle it at that scale, and they were also having scale issues with the environment, how they set up the cluster and the sharding. And with denial of service attacks, one thing about scale is how fast the data comes at you, another is how much data you have, and as that data was coming at them during a denial of service attack, that's when the cluster would actually go down, believe it or not, right when you need your log analysis tools. So what we did, because they were just using Kibana, it was an easy swap.
They ran in parallel, because we published the open API, and we took them from five days to nine days. They could keep as much as they want, but nine days is what they wanted for denial of service. And we saved them over $4 million a year in hard dollars versus what they were paying in their environment, really the savings on the server farm and a little bit on the Elasticsearch stack. But more importantly, they've had no outages since. Now here's the thing, since you're asking about use cases: they also had other clusters, and you find everyone does this, they don't dare put everything on one cluster, even though these are not one server, they're multiple servers. So Cloudflare was one use case, and the next use case was a 10-terabyte-a-day influx kept for 90 days, so that's about a petabyte. Then they brought on another use case, which was NetMon, network monitoring, again with the same scale and retention issues, and they were able to easily roll that on. So that's one data platform, and now they keep adding the next one. They have about four different use cases, different clusters, that they've been able to bring together. And what they're able to get from those use cases is either more cost-effectiveness, or more stability and freedom. We say we save you a lot of time, cost, and complexity: the time to manage it, getting the data in, the complexities around it, and the cost is easy to quantify. But more importantly, now particular teams only need access to their own data, while the SecOps team wants to see across all the data, and it's very easy for them to see across all of it, where before it was impossible to do. So now they have multiple large use cases streaming at them. And what I love about that particular case is that at one point they were just trying to test our scale. So they started tossing more things at it, right, to see if they could kind of break us. They spiked us up to 30 terabytes a day, and for Elastic, even 10 terabytes a day makes things fall over. Now, if you think about what they had to do, it's literally three steps: put your data in S3 as fast as you can, don't modify it, just put it there. Once it's there, connect to us, you give us read access to those buckets and a place to write the indices; all of that stuff stays in your S3, it never comes out. And then basically you set up: do you want to do live, real-time analysis, or do you want to go after old data? We do the rest, we ingest, we normalize the schema, and basically we give you RBAC and the refinery to give the right people access. So what they did is basically throw a whole bunch of stuff at it; they were trying to outrun S3. And you know, we're standing on the shoulders of giants. If you think about our platform for clients, what's a better data lake than S3? You're not going to get a better cost curve, right? You're not going to get better parallelism. And the security, it's in your own virtual environment. And you can also keep data in the right location. Blackboard's a good example. They need to keep data in all the different regions, because it's personal data, you know, GDPR, they've got to keep data in that location. It's easy, we just put compute in each one of the different areas they're in.
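The three-step onboarding Ed just walked through, land the data in S3, grant read access to the buckets plus a place to write the index, then choose live or retrospective indexing, comes down on the customer side to little more than an IAM policy. A rough sketch follows, with hypothetical bucket, prefix, and role names; ChaosSearch's actual onboarding documentation, including how the cross-account role and external ID are set up, is the source of truth here.

```python
# Illustrative IAM policy for the "read access to those buckets and a place
# to write the index" step. Bucket, prefix, and role names are placeholders.
import json
import boto3

LOG_BUCKET = "acme-prod-logs"          # raw machine-generated data lands here
INDEX_PREFIX = "chaos-index/"          # index artifacts written back under this prefix

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # read-only access to the raw log objects
            "Sid": "ReadSourceData",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{LOG_BUCKET}",
                f"arn:aws:s3:::{LOG_BUCKET}/*",
            ],
        },
        {   # write access limited to the index prefix, so raw data stays read-only
            "Sid": "WriteIndexOnly",
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:DeleteObject"],
            "Resource": [f"arn:aws:s3:::{LOG_BUCKET}/{INDEX_PREFIX}*"],
        },
    ],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="chaos-indexing-access-role",   # role assumed by the indexing service (assumption)
    PolicyName="chaos-data-platform-access",
    PolicyDocument=json.dumps(policy),
)
```

The point of the split statements is the one Ed keeps returning to: the raw data never leaves the customer's S3, and the only write surface is the index prefix.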
But the net-net is this: that architecture stands on the shoulders of giants. If you think you can outrun it by sheer volume, or that you can find a more cost-effective place to keep data long-term, or that you can out-store it, meaning you have so much data that S3 and Glacier can't possibly handle it, then you've got a bigger scale problem than the one we're talking about. So when they spiked our throughput, what they were really doing was trying to outrun S3. And we didn't hiccup. Now, the next thing is they tossed a bunch of users at us, which just spun up, in our data fabric, different ways to do the indexing to keep up with it. And for the new use cases they're going after, everyone gets their own worker nodes, which are all expected to fail in place. So again, they did some of that, but really they said, you guys handled all the influx. And if you think about it, it's the shoulders of giants, being on top of an Amazon platform, which is amazing. You're not going to get a more cost-effective data lake in the world, and it's continuing to fall in price. It's a cost curve like no other, but also all that resiliency, all that security, and the parallelism you can get out of S3 and Glacier: bar none, it's the most scalable environment you can build on. And what we do is a thin layer. It's a data platform that allows you to have your data fully searchable and queryable using your own tools. >> Right, and you mentioned there that you're running in AWS, which has broad experience in doing these sorts of things at scale. But on the operational management side of things, as you mentioned, you actually take that off the hands of customers, so that you run it on their behalf. What are some of the mistakes that you see people making in trying to do this themselves, when you've gone into customers and brought them onto the ChaosSearch platform? >> Yeah, so either people are just trying their best to build out clusters of Elasticsearch, or they're going to services like Logz.io, Sumo Logic, or Amazon Elasticsearch Service. And those are all basically the same ELK Stack, so they have the exact same limits; it's the same bits. Then we see people saying, well, I really want to go to a data lake, I want to get away from these database servers, which have their limits; I want to use a data lake. And then we see a lot of people putting data into environments where, instead of using Elasticsearch, they want to use SQL-type tools. What they do is put it into Parquet, or a Presto form; it's a Presto dialect, but they put it into Parquet and structure it. And they go a long way to say, hey, it's in the data lake, but they end up building these little islands inside their data lake. And it's a lot of time to transform the data, to get it into a format that you can go after with those tools. What we do is not make you do that: just literally put the data there, and then we do the indexing and publish an API. So right now it's Elasticsearch, and in a very short time we'll publish Presto, the SQL dialect, and you can use the same tools. So we do see people either brute-forcing it, trying their best with a bunch of physical servers, and we do see another group that says, I want to go use Athena-style use cases, or one of a whole bunch of different startups saying, I do data lakes or data lakehouses. But what they really do is force you to put things into a structure before you get insight.
True data lake economics is literally: just put it there, and use your tools natively to go after it. And that's where we're unique compared to what we see from our competition. >> Hmm, so with people who have moved onto ChaosSearch, what's, let's say if you can pick one, the most interesting example of what people have started to do with their data? What's new? >> That's good. Well, I'll give you another one. Armor Security is a good one. Armor Security is a security services company: thousands of clients, doing great, a beautiful platform, a beautiful business. And they won Rackspace as a partner. So now imagine a thousand clients, and now massive scale to keep up with. So that would be an example, and another example of where we were able to come in when they were facing a major upgrade of their environment just to keep up. And they actually expose it to their customers; it's how their customers do logging analytics. What we were able to do, literally simply because they didn't go below the API, they use the exact same tools that are on top, was to replace that use case in 30 days and save them a tremendous amount of dollars. But now they're able to go back and have unlimited retention. They used to restrict their clients to 14 days. Now they have an opportunity to do a bunch of different things, including possible revenue opportunities. It allows them to look at their business differently and free up their team to do other things. And now they're putting billing and other things into the same environment with us, because one, it's easy to scale, but it also freed up their team; no one has enough team to do things. And then the biggest thing, what people do that's interesting with our product, is actually in their own tools. We talk about Kibana; when we do SQL, we talk about Looker and Tableau and Power BI. The really interesting thing is that we think we did the hard work on the data layer, which is all the ways you consolidate and get the performance. Now what becomes really interesting is what they're doing at the visibility level, whether in Kibana or the API, or Tableau or Looker. And the key thing for us is we just say: use the tools you're used to. Now that might be a boring statement, but to me a great value proposition is not changing what your end users have to use. And they're doing amazing things. They're doing the exact same things they did before, just with more data at bigger scale. And they're able to see across all their different machine-generated data, compared to being limited to going at one thing at a time. Getting that correlation from a unified data lake is really what we get very excited about. What's most exciting to our clients is that they don't have to tell their users to use a different tool; you can decide whether that's really interesting in this conversation. But again, I always say we didn't build a new algorithm that you're going to give the SecOps team, or a cool new pipeline widget that's going to help the machine learning team; that's another API we'll publish. Basically what we do is the hard work of making the data platform scalable and, more importantly, give you the APIs that you're used to. So it's a platform where you don't have to change what your end users are doing. We're kind of invisible behind the scenes.
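Because the platform exposes the standard Elasticsearch API, the "use the tools you're used to" point applies to plain scripts as well as to Kibana or Tableau. The sketch below is only illustrative: the endpoint, credentials, index name, and field names are assumptions, and the request is an ordinary Elasticsearch-style search that any compatible endpoint should accept.

```python
import requests

# Hypothetical values: the actual endpoint, credentials, and index name
# would come from your own deployment.
ENDPOINT = "https://search.example.com"
INDEX = "cloudflare-logs"
AUTH = ("readonly-user", "example-password")

# A plain Elasticsearch-style query: 503 errors over the last 24 hours,
# bucketed by client IP, much as a Kibana visualization would issue it.
query = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"status": 503}},
                {"range": {"@timestamp": {"gte": "now-24h"}}},
            ]
        }
    },
    "aggs": {
        "by_client": {"terms": {"field": "client_ip", "size": 10}}
    },
    "size": 0,
}

resp = requests.post(f"{ENDPOINT}/{INDEX}/_search", json=query,
                     auth=AUTH, timeout=30)
resp.raise_for_status()

# Print the top clients by request count from the aggregation result.
for bucket in resp.json()["aggregations"]["by_client"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```

Nothing in the query is specific to the backend; that is the practical meaning of publishing the API rather than a new tool.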
>> Well, that's certainly a pretty strong proposition there, and I'm sure that there's plenty of scope for customers to come and talk to you, because no one's creating any less data. So Ed, thanks for coming on theCUBE. It's always great to see you here. >> No, thank you. >> You've been watching theCUBE Virtual and our coverage of AWS re:Invent 2020, with special coverage of the APN partner experience. Make sure you check out all our coverage online, either on your desktop or mobile, on your phone, wherever you are. I've been your host, Justin Warren. And I look forward to seeing you again soon. (soft music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Justin Warren | PERSON | 0.99+ |
Ed Walsh | PERSON | 0.99+ |
$80 | QUANTITY | 0.99+ |
40 days | QUANTITY | 0.99+ |
five days | QUANTITY | 0.99+ |
Ed Walsh | PERSON | 0.99+ |
90 days | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
AWS Global Partner Network | ORGANIZATION | 0.99+ |
nine days | QUANTITY | 0.99+ |
80% | QUANTITY | 0.99+ |
10 terabytes | QUANTITY | 0.99+ |
thousands | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
HubSpot | ORGANIZATION | 0.99+ |
Ed | PERSON | 0.99+ |
10% | QUANTITY | 0.99+ |
Elasticsearch | TITLE | 0.99+ |
30 days | QUANTITY | 0.99+ |
Armor Security | ORGANIZATION | 0.99+ |
14 days | QUANTITY | 0.99+ |
thousand clients | QUANTITY | 0.99+ |
Blackboard | ORGANIZATION | 0.99+ |
Kleiner | ORGANIZATION | 0.99+ |
S3 | TITLE | 0.99+ |
One | QUANTITY | 0.99+ |
Alert Logic | ORGANIZATION | 0.99+ |
three steps | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
GDPR | TITLE | 0.98+ |
one thing | QUANTITY | 0.98+ |
one data | QUANTITY | 0.98+ |
one server | QUANTITY | 0.98+ |
Elastic | TITLE | 0.98+ |
70 | QUANTITY | 0.98+ |
SQL | TITLE | 0.98+ |
about 80% | QUANTITY | 0.97+ |
Kibana | TITLE | 0.97+ |
first time | QUANTITY | 0.97+ |
over $4 million a year | QUANTITY | 0.97+ |
one cluster | QUANTITY | 0.97+ |
first person | QUANTITY | 0.97+ |
CloudFlare | TITLE | 0.97+ |
ChaosSearch | ORGANIZATION | 0.97+ |
this year | DATE | 0.97+ |
Glacier | TITLE | 0.97+ |
up to 80% | QUANTITY | 0.97+ |
Parquet | TITLE | 0.96+ |
each one | QUANTITY | 0.95+ |
Splunk | ORGANIZATION | 0.95+ |
Sumo Logic | ORGANIZATION | 0.94+ |
up to 80 | QUANTITY | 0.94+ |
Power BI | TITLE | 0.93+ |
today | DATE | 0.93+ |
Rackspace | ORGANIZATION | 0.92+ |
up to 30 terabytes a day | QUANTITY | 0.92+ |
one point | QUANTITY | 0.91+ |
S3 Glacier | COMMERCIAL_ITEM | 0.91+ |
Elastic API | TITLE | 0.89+ |
Matt Kixmoeller, Pure Storage & Michael Ferranti, Portworx | Kubecon + CloudNativeCon NA 2020
>> Narrator: From around the globe, it's theCUBE. With coverage of KubeCon and CloudNativeCon North America 2020, virtual. Brought to you by Red Hat, the Cloud Native Computing Foundation and ecosystem partners. >> Hi, I'm Joep Piscaer. Welcome to theCUBE's coverage of KubeCon, CloudNativeCon 2020. I'm joined today by Matt Kixmoeller, he's VP of strategy at Pure Storage, as well as Michael Ferranti, he's the senior director of product marketing at Portworx, now acquired by Pure Storage. Fellows, welcome to the show. >> Thanks, good to be here. >> I want to start out with the lay of the land of storage in the Cloud Native space, in the Kubernetes space. What's hard? What's happening? What are the trends that you see going on? Matt, if you could shed some light on that for me? >> Yeah, I think, from a Pure point of view, obviously we just saw customers maturing their Kubernetes deployments and particularly leaning towards persistent applications, and we noticed within our customer base that there were quite a lot of deployments of Portworx on Pure Storage. That inspired us to start talking to one another, almost six-plus months ago, and that eventually ended in us bringing the two companies together. So it's been a great journey from the Pure point of view, bringing Portworx into the Pure family, and we're working through now, with our joint customers, integration strategies and how to really broaden the use of the technology. So it's quite an exciting time for us. >> And of course, it's good to hear that the match goes beyond just the marketing color, like the brand color. >> Absolutely. Yeah, I mean, the fact that both companies were orange and their logo looked like kind of a folded-up version of ours just started things off on the right foot. >> A match made in heaven, right? So I want to talk a little bit about the acquisition, what's happened there, and especially about Portworx as a company and as a product set. It's fairly popular in the cloud community, with a lot of traction with customers. So I want to zoom in on the acquisition itself and kind of the roadmap going forward, merging the two companies and adding Portworx to the Pure portfolio. Matt, if you could shed some light on that as well. >> Yeah, why don't I start and then Michael can jump in as well? So we at Pure had been really working for years now to outfit our all-flash storage arrays for the container use case, and shipped a piece of software that we call PSO. That was really a super CSI driver that allowed us to do intelligent placement of persistent volumes on Pure arrays. But the more time we spent in the market, the more we started to engage with customers and realized that there were a whole number of use cases that didn't really want a hardware-based solution: they either wanted to run completely in the cloud, hybrid between on-prem and cloud, or leverage bare metal hardware. And so we came to the conclusion that, first off, although positioning arrays for the market was the right thing to do, we wouldn't really be able to serve the broader storage needs for containers if we did only that. And then the second thing was that we heard from customers that they wanted a much richer data management stack. It's not just about provisioning the storage volume for the container; all the capabilities around snapshotting, replication, and mobility between on-prem and cloud were necessary. And so Portworx brought to bear not only a software-based solution in our portfolio, but really that full data management platform in addition to just storage. So as we look to integrate our product lines, we're looking to deliver a consistent experience for data management for Kubernetes on whatever infrastructure a customer would like, whether they want to run on all-flash arrays, white-box servers, bare metal, VMs, or on cloud storage as well. All of that can have a consistent experience with the Portworx platform. >> Yeah, and because data management, especially in this world of containers, is a little more difficult, it's definitely more fragmented across multiple clouds, multiple cloud vendors, multiple cloud services, multiple instances of a service. So the fragmentation has given IT departments quite the headache in operationally managing all of that. So Michael, what's the use case for Portworx in this fragmented cloud storage space? >> Yeah, it's a great question. The use cases are many and varied. To put it in a little bit of historical perspective, I've been attending KubeCons for about five or six years now, kind of losing count, and we really started out seeing Kubernetes as an agile way to run CI/CD environments and other test-dev environments, and there were just a handful of customers that were really running production workloads at the very beginning. If you fast forward to today, Kubernetes is being used to tackle some of the biggest, board-level problems that enterprises face, because they need that scale and they need that agility. COVID's accelerated that. So we see customers, say, in the retail space, who are having to cope with a massive increase in traffic on their website, people searching for the products that they can't find anywhere else: are they available, can I buy them online? And so they're re-architecting those web services to use, often, open source databases, in this case Elasticsearch, in order to create a great user experience. And they're managing that across clouds and across environments using Kubernetes. Another customer, a very different use case but also one that matches that scale, would be Esri, which, under unfortunate circumstances, has become a household name because of the COVID-tracking ArcGIS systems used to keep track of tracing and outbreaks. They're running that service in the cloud using Portworx. And again, it's all about how we reliably and agilely deploy applications that are always available and create the experience that our customers need. And so we see financial services, healthcare, pharmaceutical doing similar things. Again, the theme is that it's the biggest business problems being tackled now, not just the low-hanging fruit, as we used to talk about. >> Yeah, exactly. Because storage, a lot of the time, is kind of boilerplate functionality: it's there, it works. And if it doesn't... you know, the problem with storage in a cloud native space is that fragmentation, right?
It's enormous, you know: on the one hand you don't have the scale, on the other hand there are tons of different services that can hold data that needs protecting, as well as data management. So I want to zoom in on a recent development in the Portworx portfolio, where the PX-Backup product has been spun out as its own product. What's the strategy there, Michael? >> Yeah, so I think fundamentally data protection needs to change in a Kubernetes context. The way in which we protected applications in the past was very closely related to the way in which we protected servers, because we would run one app per server; so if we protected the server, our application was protected. Kubernetes breaks that model: now an individual application is made up of dozens or hundreds of components that are spread across multiple servers. And you have container images, you have configuration, you have data, and it's very difficult for any one person to understand where any of that is in the cluster at any given moment. So you need to leverage automation and the ability of Kubernetes to understand where a particular set of components is deployed, and use that Kubernetes-native functionality to take what we call application-aware backups. So what PX-Backup provides is data protection engineered from the ground up for this new application delivery model that we see within Kubernetes. Unlike traditional backup and recovery solutions, which were very machine-focused, we can allow a team to back up a single application within their Kubernetes cluster, all of the applications in a namespace, or the entire cluster all at once, and do so in a self-service manner where, integrated with your corporate identity systems, individuals can be responsible for protecting their own applications. So we marry a couple of really important concepts: the application-specific nature of Kubernetes, the self-service desire of DevOps teams, as well as a pay-as-you-go model, where you have a flexible consumption model and, as you grow, you pay more. You don't have to make an upfront payment in order to protect your Kubernetes applications. >> Yeah, I think one key thing that Michael hit on was just how this application is designed to fit like a glove with the Kubernetes admin. I see a lot of parallels to what happened over a decade ago in the VMware space: when VMware came about, VMs needed to be backed up differently, and a little company called Veeam built a tool that was purpose-built for it. It just had a really warm embrace from the VMware community, because it really felt like it was built for them, not some legacy enterprise backup application that was forced to fit into this new use case. And we think the opportunity is very similar for Kubernetes backup, and perhaps the difference in the environment is even more profound than on the VMware side, where the Kubernetes admin really wants something that fits their operational model, deploys within the cluster itself, backs up to object storage: just perfectly purpose-built for this use case. And so we see a huge opportunity for that, and we believe that for a lot of customers this might be the easiest place for them to start trying the Portworx portfolio. You've got an existing Kubernetes cluster: download this, give it a shot, it'll work on any infrastructure you've got going with Kubernetes today.
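As a rough illustration of what "application-aware" means in practice (this is only a sketch, not how PX-Backup itself is implemented), the snippet below uses the standard Kubernetes Python client to enumerate the kinds of objects that make up one application's namespace, which is the inventory a Kubernetes-native backup has to capture together. The namespace name is hypothetical.

```python
from kubernetes import client, config

# Assumes a reachable cluster and a local kubeconfig; "webshop" is a
# hypothetical namespace standing in for one application.
config.load_kube_config()
namespace = "webshop"

core = client.CoreV1Api()
apps = client.AppsV1Api()

# The pieces of an application that a machine-centric backup would miss:
deployments = apps.list_namespaced_deployment(namespace).items
claims = core.list_namespaced_persistent_volume_claim(namespace).items
configmaps = core.list_namespaced_config_map(namespace).items
secrets = core.list_namespaced_secret(namespace).items

print(f"Application inventory for namespace '{namespace}':")
for d in deployments:
    print(f"  deployment {d.metadata.name} ({d.spec.replicas} replicas)")
for pvc in claims:
    print(f"  pvc        {pvc.metadata.name} -> {pvc.spec.storage_class_name}")
for cm in configmaps:
    print(f"  configmap  {cm.metadata.name}")
for s in secrets:
    print(f"  secret     {s.metadata.name}")

# A real application-aware backup would snapshot the PVC data and store these
# object definitions together, so the whole set can be restored as one unit.
```

A machine-centric backup sees none of this structure; grouping these objects, plus snapshots of the underlying volumes, is what lets a single application or a whole namespace be restored as one unit.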
>> And especially because, looking at the way Kubernetes breaks things down, the way infrastructure is provisioned and data is placed in cloud services, it's no longer necessarily the cluster admin who gets to decide where data goes and which application has access to it; that's in the hands of the developers. And that's a pretty big shift. It used to be the VI admin, the virtualization admin, who did that, who had control over where data lived, where it was accessed, and how it was accessed. Now we see developers taking control over their infrastructure resources: they get to decide where it runs, how it runs, what services to use, what applications to tie it into. So I'm curious how Portworx and PX-Backup help the developer stay in control and still have that freedom of choice. >> Yeah, we think of it in terms of data services. So, I have a database and I need it to be highly available, I need it to be encrypted, backed up; I might need DR, an off-site DR schedule. And with Portworx, you can think about adding these services, HA, security, backup, capacity management, as really just: I want to check a box, and now I have this service available. My database is now highly available, it's backed up, it's encrypted, I can migrate it, I can attach a backup schedule to it. Because within a Kubernetes cluster, some apps are going to need that entire menu of services, and some apps might not need any of those services because they're only in test and staging; everything is multiplexed into a single cluster. So being able to turn these various data services off and on is how we empower a developer, a DevOps team, to take an application all the way from test and dev into production without having to really change anything about their Kubernetes deployments besides a flag within their YAML file. It makes it really, really easy to get the performance and the security and the availability that the VI admin was used to with VM-based applications, now within Kubernetes. >> So Matt, I want to spend the last couple of minutes talking about the bigger picture, right? We've talked about Portworx, PX-Backup. I want to take a look at the broader storage picture of cloud native and kind of look at the Pure angle on the trends, on what you see happening in this space. >> Yeah, absolutely. A couple of high-level things I would talk about. The first is that hybrid cloud deployments are the de facto now. So when people are picking storage, whether it be storage for a traditional database application or a next-gen, cloud native application, the thought from the beginning is: how do I architect for hybrid? And so within the Pure portfolio, we've really thought about how we build solutions that work with cloud native apps like Portworx, but also traditional applications. And our Cloud Block Store allows those to be mobilized to the cloud with minimal re-architecture. Another big trend that we see is the growth of object storage. If you look at the first generation of object storage, and object storage is what, 15-plus years old, many of the first deployments were characterized by really low cost and low performance, kind of the last retention layer, if you will, for unimportant content.
But then this web application thing happened, and people started to build web apps that used object storage as their primary storage. So now, as people try to bring those cloud native applications on-prem and build them in a multicloud way, there's real growth in the need for high-performance object storage for applications. So we see a real change in the needs and requirements on the object storage landscape, and it's one that, in particular, we're trying to serve with our FlashBlade product, which provides unified file and object access, because many of those applications are graduating from file or moving towards object, but they can't do that overnight. And so being able to provide a high-performance way to deliver unstructured data, (indistinct) object and file, is very strategic right now. >> Well, that's insightful. Thanks. So I want to thank you both for being here. And I look forward to hearing about Portworx and Pure in the future, as the acquisition integrates and new products and new developments come out from the Pure side. So thanks both for being here, and thank you at home for watching. I'm Joep Piscaer, thanks for watching theCUBE's coverage of KubeCon CloudNativeCon 2020. Thanks. >> Yeah. Thanks too. >> Yeah. Thank you. (gentle music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Michael Ferranti | PERSON | 0.99+ |
Joep Piscaer | PERSON | 0.99+ |
Matt Kixmoeller | PERSON | 0.99+ |
Michael | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Cloud Native Computing Foundation | ORGANIZATION | 0.99+ |
Portworx | ORGANIZATION | 0.99+ |
two companies | QUANTITY | 0.99+ |
Matt | PERSON | 0.99+ |
both companies | QUANTITY | 0.99+ |
KubeCon | EVENT | 0.99+ |
Pure Storage | ORGANIZATION | 0.99+ |
hundreds | QUANTITY | 0.99+ |
dozens | QUANTITY | 0.99+ |
both | QUANTITY | 0.98+ |
Veem | ORGANIZATION | 0.98+ |
Kubernetes | TITLE | 0.98+ |
VMware | ORGANIZATION | 0.98+ |
second thing | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
one app | QUANTITY | 0.98+ |
ArcGIS | TITLE | 0.97+ |
first deployments | QUANTITY | 0.97+ |
One | QUANTITY | 0.97+ |
Portworx | TITLE | 0.97+ |
first generation | QUANTITY | 0.97+ |
Elasticsearch | TITLE | 0.97+ |
CloudNativeCon | EVENT | 0.97+ |
today | DATE | 0.96+ |
Pure | ORGANIZATION | 0.96+ |
CloudNativeCon North America 2020 | EVENT | 0.96+ |
six years | QUANTITY | 0.95+ |
about five | QUANTITY | 0.94+ |
one person | QUANTITY | 0.94+ |
PX | ORGANIZATION | 0.94+ |
one key thing | QUANTITY | 0.94+ |
single application | QUANTITY | 0.93+ |
15 plus years old | QUANTITY | 0.93+ |
CloudNativeCon 2020 | EVENT | 0.93+ |
six plus months ago | DATE | 0.89+ |
single cluster | QUANTITY | 0.87+ |
theCUBE | ORGANIZATION | 0.85+ |
Kubecon | ORGANIZATION | 0.8+ |
Esri | TITLE | 0.8+ |
COVID | TITLE | 0.79+ |
a decade ago | DATE | 0.77+ |
Muddu Sudhakar | CUBE on Cloud
(gentle music) >> From the Cube Studios in Palo Alto and Boston, connecting with thought leaders all around the world. This is theCube Conversation. >> Hi everybody, this is Dave Vellante, we're back at Cube on Cloud, and with me is Muddu Sudhakar. He's a long-time alum of theCube, a technologist and executive, a serial entrepreneur and an investor. Welcome my friend, good to see you. >> Good to see you, Dave. Pleasure to be with you. Happy elections, I guess. >> Yeah, yeah. So I wanted to start with this work-from-home pivot. It's been amazing, and you've seen enterprise collaboration explode. I wrote a piece a couple months ago, looking at valuations of various companies, right around the Snowflake IPO, and I want to ask you about that, but I was looking at the valuations of various companies, at Spotify, and Shopify, and of course Zoom was there. And I was looking at just simple revenue multiples, and I said, geez, Zoom actually might look undervalued, which is crazy, right? And of course the stock went up after that, and you see Teams, Microsoft Teams, and Microsoft doing a great job across the board, we've written about that, and you're seeing Webex exploding. I mean, what do you make of this whole enterprise collaboration play? >> No, I think, look, there is a trend here, right? This trend probably started before COVID, but COVID is going to accelerate this whole digital transformation, right? People are going to work remotely a lot more, not everybody's going to come back to the offices even after COVID, so I think this whole collaboration through Slack, and Zoom, and Microsoft Teams and Webex is going to be the new game now, right? Both the video, audio and chat solutions, that's really where the eyeballs are going to be. You're not going to spend time on all four of them, right? It's like every day on the consumer side, in your personal life, you spend time on your Gmail, Facebook, maybe Twitter, maybe Instagram; like on the consumer side, you have the same thing on the enterprise side. The eyeballs are going to be in these platforms. >> Yeah. Well. >> But we're not going to take everything. >> Well, so you are right, there's a permanence to this, and I've got a lot of ground to cover with you. And I always like our conversations, Muddu, because you tell it like it is. I'm going to stay on that work-from-home pivot. You know a lot about security, and you've seen three big trends, like mega trends, in security: Endpoint, Identity Access Management, and Cloud Security. You're seeing this in the stock prices of companies like CrowdStrike, Zscaler, Okta- >> Right >> Sailpoint- >> Right, I mean, they exploded, as a result of the pandemic, and I think I'm inferring from your comment that you see that as permanent, but that's a real challenge from a security standpoint. What's the impact of Cloud there? >> No, it is an impact, but look, first, all these services are required to be Cloud, right? See, the whole idea is for people to collaborate and do these things. So you cannot be running an application like Confluence or SharePoint on-prem and try to run Zoom and MS Teams alongside it. That's why, if you look at Microsoft, they are very clever: they went with Office 365, SharePoint 365, and now they have MS Teams. So I think Cloud is going to drive all these workloads that you have been talking about a lot, right? You and John have been saying this for years now. The eruption of Cloud and SAS services is the vehicle to drive this next-generation collaboration.
You know what's so cool? So Cloud obviously is the topic. I wonder how you look at the last 10 years of Cloud, and maybe we could project forward. I mean, the big three Cloud vendors, they're running at like $20 billion a quarter, and they're growing collectively at 35, 40% clips, so we're really approaching a hundred billion dollars for these three. And you hear stats like only 20% of the workloads are in the public Cloud, so it feels like we're just getting started. How do you look at the impact of Cloud on the market, as you say, over the last 10 years, and what do you expect going forward? >> No, I think it's very fascinating, right? So I remember when theCube, you guys were talking about this 10 years back, and now it's been what, more than 10 years, 15 years, since AWS came out with their first S3 service back in 2006. >> Right. >> Right? So I think, look, Cloud is going to accelerate even further. The areas it's going to accelerate in are for different reasons. In the initial days it was all about startups, initial workloads, dev, test and QA. Now you're talking about real production workloads moving towards Cloud, right? Initially it was backup; we really didn't care for backup, we just put it there. Now you're going to have Cloud as the primary service. Your primary storage will be there; it's not going to be an EMC, it's not going to be a NetApp storage, right? So workloads are going to shift to the business applications, and these business apps, again, will be running on the Cloud. And I'll make another prediction: take customer service and support. Customer service and support, again, will be running on the Cloud. You're not going to want to run that on a Dell server, or an IBM server, or an HP server, with your own hosted environment. That model is gone, because there are no economies of scale. So to your point, what will drive Cloud for the next 10 years will be economies of scale. Where can you take the cost out? How can I save money? If you don't move to the Cloud, you won't save money. So all those workloads are going to go to the Cloud for people who really want to save, right? If you stay on the ASP model, a hosted model, you're not going to save on your costs; your costs will constantly go up from a SAS perspective. >> So that doesn't bode well for all the On-prem guys, and you hear a lot of the vendors that don't own a Cloud talk about repatriation, but the numbers don't support that. So what do those guys do? I mean, they're talking multi-Cloud, of course they're talking hybrid, that's IBM's big play. How do you see it? >> I think, look, to me, multi-Cloud makes sense, right? You don't want to be locked into one vendor, so having Amazon, Microsoft, Google gives you multi-Cloud. Even hybrid Cloud does make sense, right? There'll be some workloads; we are still running On-prem environments, we still have mainframe, so it's never going to be a hundred percent. But I would say the majority. Your question is, can we get to 60, 70, 80% of workloads in the next 10 years? I think you will. I think by 2025, more than 78% of the Cloud migration will have happened, and 70% of enterprise workloads will be on the Cloud. The remaining 25%, maybe hybrid, maybe On-prem, but again, it really doesn't matter. You will have saved, and a big part of your business is running on the Cloud. That's your cost saving, that's where you'll see the economies of scale, and that's where all the growth will happen.
So square the circle for me, because again, you hear the IDC stat, and IBM's Ginni Rometty puts it out there a lot, that only 20% of the workloads are in the public Cloud, everything else is On-prem. But it's not a zero-sum game, right? I mean, the Cloud native stuff is growing like crazy, the On-prem stuff is flat to down, so what's going to happen? When you talk about 70% of the workloads being in the Cloud, do you see those mission-critical apps moving into the Cloud? I mean, are the insurance companies going to put their claims apps in the Cloud, are the financial services companies going to put their mission-critical workloads in the Cloud, or are they just going to develop new stuff that's Cloud native and sort of interacts with the On-prem? How do you see that playing out? >> Yeah, no, I think absolutely, a very good question. So two things will happen. If you take an enterprise, right, the workloads that they should not be running On-prem will move up. So obviously things like, as I said, SharePoint, right? SharePoint and Confluence, all the knowledge stuff, is still running in people's data centers. There's no reason. I've seen statistics that 70, 80% of On-prem SharePoint will move to SharePoint in the Cloud. So Microsoft is going to make tons of money on that, right? Same thing with databases, right? Whether it's SQL Server, whether it's Oracle database, the things that you are running as a database will move to the Cloud. Whether that is hosted in Oracle Cloud, or you're running MongoDB or DynamoDB on AWS, or SQL Server on Microsoft, that's going to happen. Then what you're talking about is really the App concept, the applications themselves, the App server. Is the App server going to run On-prem, and how much is going to migrate out? There may be a hybrid Cloud piece, like, for example, Kafka. I may use a pub/sub running on Kafka as a service, or I may be using Elasticsearch for my indexing on AWS or Google Cloud, but I may be running my App locally. So there'll be some hybrid pieces, but what I would say is, for every application, 75% of your components will be on the Cloud. So think of it this way: even for an On-prem app, you're not going to be 100 percent On-prem. The components, the bill of materials, will move to the Cloud, your platform services, your storage, because if you keep it On-prem, you need to add all of this, you need to buy the whole thing and hire the people. So that's what is going to happen: from a component perspective, 70% of your bill of materials will move to the Cloud, even for an On-prem application. >> So, of course, the SAS-ification of the industry in the last decade, and of my three favorite companies of the last decade, you've worked for two of them: Tableau, ServiceNow, and Splunk. I want to ask you about those, but I'm interested in the potential disruption there. I mean, you've got these SAS companies, Salesforce of course is another one, but they got started back in 1999. What do you see happening with those? I mean, we're basically building these sort of large SAS platforms now. Do you think that the Cloud native world, that developers, can come at this from an angle where they can disrupt those companies, or are they too entrenched?
I mean, look at ServiceNow. I don't know, $80 billion market cap, where they are, they're bigger than Workday. I mean, it's just amazing how much they've grown, and you feel like, okay, nothing can stop them, but there's always disruption in this industry. What are your thoughts on that? >> No, very good question. I think they'll be disrupted. So, actually, to your point, ServiceNow is now close to 100 billion, 95 billion market cap, crazy. So from a valuation perspective, I think the reason they'll be disrupted is that the SAS vendors that you talked about, ServiceNow and all of them, most of these services are truly not multi-tenant, or what you'd call Cloud Native. And that is the essence of it. Because of that, they will not be able to pass the savings back to the enterprises. So the cost economics, the economics that the Cloud provides because of multi-tenancy, will not be there. The second reason they'll be disrupted is AI. So far we talked about Cloud, but AI is the core. So it's not just about being Cloud Native, Dave; I look at it in two pieces. AI is going to change things. See, all the SAS vendors were created 20 years back, and if you remember, it was an operator typing things in; an administrator would type a Splunk query. I don't need a human to type a query anymore, the system will actually find it; that's how the whole security game has changed, right? So what's going to happen, if you believe in that, is that AI at the core will disrupt all the SAS vendors. So one angle SAS is going to have is the Cloud; that's where the Cloud will take off, because a SAS application will be Cloudified. Being SAS is not being Cloud, right? The second thing is SAS will also be, I call it, AI-fied. So AI and machine learning will be driving at the core, so that I don't need that many licenses, I don't need that many humans, I don't need that many administrators to manage; I call them the tuners. Once you get a driverless car, you don't need a thousand tuners to tune your Tesla or Google Waymo car. The same philosophy will apply to your DevOps, your administrators, your service management, the people that you need for ServiceNow and these products, Zendesk; with AI, that will be tremendously disrupted. >> So you're saying, okay, so yeah, I was going to ask you, won't the SAS vendors be able to just inject AI into their platforms? And I guess I'm inferring you're saying, yeah, but a lot of the problems that they're solving are going to go away because of AI, is that right? And automation and RPA and things of that nature, is that right? >> Yes and no. So I'll tell you what, sorry, you have asked a very good question, so let me rephrase that question. What you're saying is, "Why can't the existing SAS vendors do the AI?" >> Yes, right. >> Right, >> And the reason they can't do it is that their pricing model is by number of seats. So I'm not going to come to Dave and say, come on, come pay me less money. It's the same reason why Ford and General Motors never built an electric car. They're selling 10 million gasoline cars; there's no incentive for me. I'm not going to do any AI, I'm not going to come to you and say, hey, buy a hundred fewer licenses from me next year. So that is one reason why, even if these guys do any AI, it's going to be just, I call it, a whitewash, kind of like putting a paintbrush on it, trying to show you some AI you did for marketing purposes.
But at the core, if you really implement the AI where you take the driver out, how are you going to change the pricing model? And being a public company, you've got to take a hit on the pricing model and the price, and it's going to have a stock impact. So, to your earlier question, will somebody disrupt them? The ones who are going to disrupt them will disrupt them on the pricing model. >> Right. So I want to ask you about that, because we saw Snowflake and its IPO, we were able to pore through its S-1, and they have a different pricing model. It's a true Cloud consumption model, whereas of course most SAS companies are going to lock you in for at least a one-year term, maybe more, and then you buy the license and you've got to pay X. If you don't use it, you still have to pay for it. Snowflake's different. Actually they have a different problem, that people are using it too much, and that's driving the CFO crazy because the bill is going up and up and up. But to me, that's the right model. It's just like the Amazon model, if you can justify it. So how do you see the pricing? That consumption model, and you're seeing some of the On-prem guys, HPE, Dell, doing as-a-service, kind of taking a page out of the last decade's SAS model; I think pricing is a real tricky one, isn't it? >> No, you nailed it, you nailed it. So I think the way in which Snowflake disrupted the data warehouse, they disrupted the open source vendors too. Imagine the playbook: Snowflake disrupted something that was at $0, right? It was open source with Cloudera, Hortonworks, MapR, that whole big data market. They're disrupting data warehouses like Netezza and Teradata, and they're charging more money, making more money, while disrupting something at $0, because the pricing model is by consumption, like you talked about. The same thing is going to happen to ServiceNow and Zendesk, because their pricing model is by number of seats. People are going to say, how are my users going to ask? Right? If you're an employee help desk, you're back to your original point about collaboration. I may be on Slack, I could be on Zoom, I may be on MS Teams. A usage model, based on how much employees use those tools to get to ServiceNow, is the pricing model that people want to pay for. The more my employees use it, the more value I get. But I don't want to pay by number of seats. So the vendor who figures that out, and if you know me, that's the model I've tried to push right from where I started. Look, I love that, because that's the core of how you want to change the new game. >> I agree. I say, kill me with that problem. I mean, some people are trying to make it a criticism, but you hit on the point: if you pay more, it's only because you're getting more value out of it. So I wanted to flip the switch here a little bit and take a customer angle, something you've been on all sides of. And I want to talk a little bit about strategies. You've been a strategist, and I guess once a strategist, always a strategist. How should organizations be thinking about their approach to Cloud? It's of course different for different industries; back when theCube started, in financial services Cloud was a four-letter word. And of course the age of a company is going to matter. But what's the framework for figuring out your Cloud strategy, to get to your 70% and really take advantage of the economics?
Should I be mono-Cloud, multi-Cloud, multi-vendor? What would you advise? >> Yeah, no, I actually call it the tech stack. Actually, you and John taught me that. What was the tech stack, like the LAMP stack; I think there is a new Cloud stack that needs to come, and I think the bottom line there should be... First of all, anything with storage should be in the Cloud. I mean, if you want to start, whether you're in financial services or not, it doesn't matter, there's no way around it. I come from the cybersecurity side, I've seen it: your attacks will come more from insiders than from being on the Cloud, so storage has to be in the Cloud, and so does compute, whoever it is. If you really want to use containers and Kubernetes, it has to be in the public Cloud; leverage that, have the compute and the databases there. That's where there can be a choice: if your data is that sensitive, maybe run it On-prem, maybe have it on a hosted model when it comes to the database, but there you have a choice between hybrid Cloud and public Cloud. Then on top, when it comes to the App, the app itself you can run locally or anywhere, the App and the database. Now, the areas that you really want to go after and migrate: look at anything that's an enterprise workload that you don't need people to manage. You want your own team to move up in their careers. You don't want a thousand people looking at it; you don't want, for example, IT administrators calling central people to manage your compute and storage. That workload should move out, right? You already saw CRM move out to Salesforce. We saw collaboration already move out; Zoom is not running locally. You already saw SharePoint, the knowledge management, move up, right? With Box, Dropbox, you name it. The next ones to move are the SAS workloads, right? I think Workday is a service running there, but Workday will go onto the public Cloud. I bet at some point Zendesk, ServiceNow, either they put it on the public Cloud, or they have to create a product on the public Cloud. To your point, these public Cloud vendors are at $2 trillion market cap. They're bigger than... I call them nation states. >> Yeah, >> So if I'm ServiceNow, I mean, there's a $2 trillion market cap between Amazon and Azure, I'm not going to compete with them. So I want to take this workload and run it there. So all these vendors, and you see that's what Shantanu from Adobe is pushing, right: Adobe, Workday, Anaplan, all the SAS vendors will move onto the public Cloud, within these vendors. So those workloads need to move out, right? So all those things will start, and then you'll start migrating. But then there's your procurement. That's where RPA comes in. The other thing that we didn't talk about, back to your first question: what is the next 10 years of Cloud? It will be RPA. That third piece of Cloud is RPA, because if you have your systems On-prem, I can't automate them. I have to do a VPN into your house and then try to automate your systems, or your procurement, et cetera. So all these RPA vendors are still running On-prem, most of them, whether it's UiPath or Automation Anywhere. The Cloud should be where the brain is. That's the octopus analogy I use: the brain is in the Cloud, the tentacles are everywhere, and they should manage it. But if my tentacles have to do a VPN into your house to manage it, I'll always have failures.
So if you look at why RPA did not have the growth like Snowflake, like the Cloud, it's because they are running it On-prem, most of them, still. 80% of the RPA revenue is On-prem, running On-prem; that needs to be Cloudified. So AI, RPA and SAS are the three reasons Cloud will take off. >> Awesome. Thank you for that. Now I want to flip the switch again. You're an investor, a multi-tool player here, but let's say you're an ecosystem player, and you're looking at the landscape as an investor. Of course you've invested in the Cloud, because the Cloud is where it's at, but you've got to be careful as an ecosystem player to pick a spot that both provides growth and allows you to have a moat. I mean, that's why I'm really curious to see how Snowflake's going to compete, because they're competing with AWS, Microsoft, and Google; unlike Frank, when he was at ServiceNow, he was competing with BMC and with On-prem, and he crushed it, but the competitors are much more capable here. But it seems like maybe they've got a moat with multi-Cloud, and that whole data-sharing thing, we'll see. But what about that? Where are the opportunities? Where's that white space? And I know there's a lot of white space, but what's the framework to look at, from an investor standpoint, or even a CEO standpoint, for where you want to place your bets? >> No, very good question. So look, I'll tell you something: I talk, as an investor and a board member, with many companies, right? So one thing I'd say as an investor: if you come back and say, I want to create a next-generation Docker or a compute company, there's no way anybody's going to invest. So we can rule that out. Even if you want to do object storage or block storage, I mean, I've been an investor and board member of so many storage companies, there's no way, as an industry, I'll write a check for compute or storage, right? If you want to create a next-generation network, like a new switch vendor, or restart Juniper or Cisco, there is no way. But if you come back and say, I want to create a next-generation VPN for remote working environments, where AI is at the core, I'm interested in that, right? So if you look at how packets are dropped, there's no intelligence in the network switching today. The packets come, I route them. The intelligence is not built into the network at the AI level. So if somebody comes in with AI, what good are all these NVIDIA GPUs, et cetera, if you cannot do wire-speed packet inspection, looking at the content and then routing the traffic? If I see it's a video packet, and you in Boston have a high-priority interview today, they should be loading your packets faster, because you're paying for a premium ISP. That intelligence has not gone in there. So you will see that a battle will happen in the network, in switching, et cetera, right? So that is still an angle. But when it comes to platform services, remember when I was at Pivotal and VMware, Paul Maritz was my boss, and he would say, yes, platform as a service is a game already won by the Cloud guys. >> Right. (indistinct) >> Silicon Valley investors, I don't think you want to invest in platform services, right? I mean, you might come with some niche database for some use cases; there could be some play, let's say you want to do a time-series database, or some metrics database, there's always some small angle, but the opportunities to go create a next-generation database there are very few. So I'm kind of eliminating all the black spaces, right? >> Yeah.
>> The white space that comes in is at the SAS level. Now, to your point, if I'm Amazon, I'm going to compete with Snowflake, I have Redshift. So this is where, at some point, these Cloud platforms, I call them aircraft carriers, are not going to stay on the aircraft carriers; they're going to own the land as well. They're going to move up to the SAS space. The question is, do you want to create a SAS service like CRM? They are not going to create a CRM-like service, they may not create a Salesforce or a ServiceNow, but if you're going to add a data warehouse, I can very well see Azure, Google, and AWS creating something to compete with Snowflake. Why would they not? It's so close to my database and data warehouse, and I already have Redshift. So that's going to be like Netflix, for the same reason. If you look at Netflix, you have Netflix and you have Amazon Prime. Netflix runs on Amazon, but you have Amazon Prime. So you have the same model: you have Snowflake, and you'll have Redshift. Both will help each other; what do you call it, coexistence will happen. But if you really want to invest, you want to invest in SAS companies. You do not want to be investing in complementary players. You don't want to be a feature. >> Yeah, that's great, I appreciate that perspective. And I wonder, so obviously Microsoft plays in SAS, Google's got G Suite, and people often ask Andy Jassy: are you going to move up the stack, are you going to be an application, a SAS vendor? And you never say never with AWS. But I wonder, and we were talking to Jerry Chen about this years ago on theCube, and his angle was that Amazon will play, but they'll play through developers. They'll enable developers, and they'll participate, they'll take their lick off the cone. So it's going to be interesting to see how directly Amazon plays, but at some point you've got TAM expansion, you've got to play in that space. >> Yeah, I'll give you an example. You know, I got acquired a couple of times by EMC, so I learned a lot from Joe Tucci and Paul Maritz over the years. See, what Paul and Joe did, if you look at those 20 years, and they're very close to Boston, in your area, what Joe did is, they used to sell storage, but you know what he did: he went and bought the apps to drive it. He bought Legato, he bought Documentum, he bought Captiva; if you remember, he acquired all these companies as services, and he bought VMware to drive that. So I think the good angle that Microsoft has is: I'm a SAS player, I have Dynamics, I have CRM, I have SharePoint, I have collaboration, I have Office 365, MS Teams for users, and then I have the platform in Azure. So I think if I'm Amazon, (indistinct), I've got to own the apps so that I can drive these workloads on my platform. >> Interesting. >> Just going to developers, like, I know Jerry Chen, he was my peer at VMware; I don't think going just to developers works. That model works in open source, but the open source game is pretty much gone, and not too many companies made money. >> Well, >> Most companies are pretty much gone. >> Yeah, you're right. Red Hat's not a bad example, though. But it's very interesting what you're saying there. And so, hey, it's why Oracle wants to have TikTok running on their platform, right? I mean, it's going to. (laughing) It's going to drive that further integration. I wanted to ask you something: you were talking about how you wouldn't invest in storage or compute, but I wonder, and you mentioned some commentary about GPUs.
Of course NVIDIA has been going crazy, but now they're saying, okay, how do we expand our TAM, and they make the acquisition of Arm, et cetera. What about this DPU thing, if you follow that, the data processing unit, where they hyper-disaggregate and then reaggregate, as an offload, really to drive data-centric workloads. Have you looked at that at all? >> I did, and I think that's a good angle. So look, I don't know if you remember, in our careers we have seen this before. We used to get Silicon Graphics machines; I saw the first graphics GPU, right? At that time GPU meant graphics processing unit. >> Right, yeah, workstations. >> Then came NPUs, network processing units, right? There was TCP/IP offloading, if you remember, right, and there were vector processing units. So every once in a while the industry recreates this separate unit as a co-processor to the main CPU, because the main CPU is inefficient, and it makes sense. Then Google created TPUs, and then we have the new world of NVIDIA GPUs, and now we have DPUs. All of these are good, but what's happening is that all of these are driving machine learning, AI, on the training side. The training sometimes takes so long with these workloads that if you can cut it down, it makes sense. >> Yeah. >> But the question is, these are so specialized in nature that I can't use them for everything. >> Yup. >> Ideally, I want algorithms to be parallelized, I want the training to be parallelized. So having DPUs and GPUs is important, but where I want to see more is on the algorithm side: there should be more investment from the NVIDIAs and these guys in making the algorithms highly parallelizable. (indistinct) And I think that still has not happened in the industry yet. >> All right, so we're pretty much out of time, but what are you doing these days? Where are you spending your time? Are you still in stealth? Give us a little glimpse. >> Yeah, no, I'm out of stealth. I'm actually the CEO of Aisera now. Aisera, obviously I invested in them, but I'm the CEO of Aisera. It's funded by Menlo Ventures, Norwest, True Ventures, along with Khosla Ventures, and Ram Shriram is a big investor; he's on the board of Google. So these guys, look, we are going after the collaboration game: how do you automate customer service and support for employees and then users, right? In this whole game we talked about, Zoom, Slack and MS Teams, that's where I'm spending my time. I want to create the next-generation ServiceNow. >> Fantastic. Muddu, I always love having you on; you don't pull punches, you tell it like it is, and you're a great visionary technologist. Thanks so much for coming on theCube and participating in our program. >> Dave, it's always a pleasure speaking to you, sir. Thank you. >> Okay. Keep it right there, there's more coming from Cube on Cloud right after this break. (slow music)
Jared Bell T-Rex Solutions & Michael Thieme US Census Bureau | AWS Public Sector Partner Awards 2020
>> Narrator: From around the globe, it's theCUBE, with digital coverage of the AWS Public Sector Partner Awards, brought to you by Amazon Web Services. >> Hi, and welcome back, I'm Stu Miniman, and we're here at the AWS Public Sector Partner Awards, really enjoying this. We get to talk to some of the diverse ecosystem, and they've all brought on their customers — some really phenomenal case studies. Happy to welcome to the program two first-time guests. First of all, we have Jared Bell, he's the Chief Engineer of self-response operational readiness at T-Rex Solutions, and T-Rex is the award winner for the most customer-obsessed, mission-based win in Fed Civ. So Jared, congratulations to you and the T-Rex team. And also joining him, his customer, Michael Thieme, he's the Assistant Director for the Decennial Census Program, systems and contracts, for the US Census Bureau. Thank you so much, both, for joining us. >> Good to be here. >> All right, Jared, if we could start with you. As I said, you're an award winner, you sit in the Fed Civ space, and you've brought us to the Census Bureau, which most people understand the importance of — that government program coming up, you know, every 10 years; we've been hearing, you know, TV and radio ads talking about it. But Jared, if you could just give us a thumbnail of T-Rex and what you do in the AWS ecosystem. >> So yeah, again, my name's Jared Bell and I work for T-Rex Solutions. T-Rex is a mid-tier federal IT contracting company in Southern Maryland, recently graduated from HUBZone status, and T-Rex really focuses on four key areas: infrastructure and cloud modernization, cybersecurity and active cyber defense, big data management and analytics, and then overall enterprise system integration. And so we've been, you know, an AWS partner for quite some time now, and with decennial, you know, we got to really exercise a lot of the bells and whistles that are out there and really put it all to the test. >> All right, well, Michael, you know, so many people in IT, we talk about the peaks and valleys that we have — not too many companies or organizations can say, well, we know exactly, you know, the 10-year spike of activity that we're going to have. I know there's lots of work that goes on beyond that, but tell us a little bit about your role inside the Census Bureau and what's under your purview. >> Yes, the Census Bureau actually does hundreds of surveys every year, but the decennial census is sort of our main flagship activity. And I am the Assistant Director, under our Associate Director, for the IT and for the contracts for the decennial census. >> Wonderful. And if you could tell us a little bit about the project that you're working on that eventually pulled T-Rex in. >> Sure. This is the 2020 census, and the challenge of the 2020 census is — we've done the census since 1790 in the United States. It's a pillar, a foundation of our democracy, and this was the most technologically advanced census we've ever done. Actually, up until 2020, we had done our censuses mostly by pen, paper, and pencil. And this is a census where we opened up the internet for people to respond from home. We can have people respond on the phone, people can respond with an iPhone or an Android device. We tried to make it as easy as possible and as secure as possible for people to respond to the census where they were — we wanted to meet the respondent where they were. >> All right.
So Jared, I'd love you to chime in here, 'cause I'm hearing talk about, you know, the technology adoption — how much was already in the plans there, and where did T-Rex intersect with this census activity? >> Yeah. So, you know, census deserves a lot of credit for their kind of innovative approach with this technical integrator contract, which T-Rex was fortunate enough to win. When we came in, you know, we were just wrapping up the 2018 test. We really only had 18 months to go from start to, you know, a live operational test to prepare for 2020. And it was really exciting to be brought in on such a large, mission-critical project — this is one of the largest federal IT projects in the cloud to date. And so, you know, when we came in, we had to really, you know, bring together a whole lot of solutions. I mean, the internet self-response, which is what we're going to talk about today, was one of the major components, but we really had a lot of other activities that we had to engage in. You know, we had to design and prepare an IT solution to support 260 field offices, 16,000 field staff, and 400,000 mobile devices and users that were going to go out and knock on doors for enumeration. So it was really a big effort that we were honored to be a part of. You know, and on top of that, T-Rex actually brought to the table a lot of its past experience with cybersecurity and active cyber defense. Also, you know, because of the importance of all this data, we had a role in security all throughout, and I think T-Rex was prepared for that and did a great job. And then, you know, overall I think — not necessarily directly to your question, but I think, you know — one of the things that we were able to do to make ourselves successful, and to really engage with the Census Bureau and be effective with our stakeholders, was that we really built a culture of decennial within the technical integrator. You know, we had brown bags and working sessions to really teach the team the importance of the decennial — not just as a career move, but also as an important activity for our country. And so I think that really helped the team, you know, internalize that mission and really drove kind of our dedication to the census mission and really made us effective. And again, a lot of the T-Rex leadership had a lot of experience there from past decennials, and so they really brought that mindset to the team, and I think it really paid off. >> Michael, if you could bring us inside the project a little bit — you know, 18 months, obviously you have a specific deadline you need to hit. Help us understand kind of the architectural considerations that you had there, any concerns that you had, and I have to imagine that the global situation, the impacts of COVID-19, has impacted some of the end-stage, if you will, activities here in 2020. >> Absolutely. Yeah. The decennial census is, I believe, a very unique IT problem. We have essentially 10 months out of the decade where we have to scale up to gigantic and then scale back down to run the rest of the Census Bureau's activities. But our project — you know, every year ending in zero, April 1st is census day. Now, April 1st continued to be census day in 2020, but we also had COVID essentially taking over virtually everything in this country and in fact in the world.
So, the way that we set up to do the census, with the cloud and with the IT approach and modernization that we took, actually, frankly, very luckily enabled us to kind of get through this whole thing. Now, Jared discussed a little bit the fact that we're here to talk about our internet self-response — we haven't had one second of downtime for our internet self-response. We've taken 77 million — I think even more than 78 million — responses from households, out of the 140 million households in the United States. We've gotten 77 million people to respond on our internet site without one second of downtime, with a good user experience and good supportability. But the project has always been the same — it's just that this time we're actually doing it with much more technology, and hopefully the way that the cloud has supported us will prove to be really effective for the COVID-19 situation. Because we've had changes in our plans, differences in timeframes — we are actually not even going into the field, or we're just starting to go into the field these next few weeks, where we would have almost been coming out of the field at this time. So that flexibility, that expandability, that elasticity that being in the cloud gives all of our IT capabilities was really valuable this time. >> Well, Jared, I'm wondering if you can comment on that. All of the things that Michael just said, you know, seem like they are just the spotlight pieces that I look at cloud for — being able to scale on demand, being able to use what I need when I need it and then dial things down when I don't, and especially, you know, wanting to limit how many people actually need to get involved. So help us understand a little bit, you know, what AWS services underneath are supporting this, and anything else around the cloud deployment. >> Sure, yeah. Michael is spot on. I mean, the cloud is tailor-made for our operation and activity here. You know, I think all told we used over 30 of the AWS FedRAMP solutions in standing up our environment, across all those 52 systems in the system of systems that we were working with. You know, just to name a few — for internet self-response alone you're relying heavily on auto scaling groups and elastic load balancers; we relied a lot on Lambda functions and DynamoDB. We were one of the first adopters of DynamoDB global tables, which we used for session persistence across regions. And then on top of that, you know, the data was all flowing down into RDS databases and from there to the census data lake, which was built on EMR and Elasticsearch capabilities — and that's just to name a couple. I mean, we ran the gamut of AWS services to make all this work, and they really helped us accelerate. And as Michael said, you know, we stood this up expecting to be working together in a war room, watching everything hand in hand, and because of the way we were able to architect it in partnership with AWS, when we all had to go out and stay at home, the infrastructure remained rock solid. We didn't have to worry about, you know, being hands-on with the equipment, and, again, the ability to automate and integrate with those solutions — CloudFormation and things like that — really let us keep a small, agile team of DevSecOps folks there to handle the deployments. And we were doing full-scale deployments with, you know, one or two people in the middle of the night without any problems.
So it really streamlined things for us and helped us keep things tight, for sure. >> Michael, I'm curious about what kind of training your team needed to go through to take advantage of this solution. So from bringing it up to the ripple effect — as you said, you're only now starting to look at who will go into the field and use the devices and the like — help us understand the human aspect of adopting this technology. >> Sure. Now, the census always has to ramp up this sort of immediate workforce. We actually processed over 3 million people through — I think 3.9 million people applied to work for the Census Bureau. And each decade we have to come up with a training program, and actually training sites all over the country, and the IT to support those. Now, again, modernization for the 2020 census didn't only involve things like our internet self-response, it also involved our training. We have all online training now. We used to have what we called verbatim training, where we had individual teachers all over the country, in places like libraries, essentially reading text exactly the same way, over and over again, to the people that we trained. But now it's all electronic. It allows us to — and this goes to the COVID situation as well — it allows us to bring only three people in at a time to do training, essentially get them started with the device that we have them use when they're knocking on doors, and then go home and do the training, and then come back to work with us, all with a minimal-human-contact sort of model. And even though we designed it differently, the way that we set up the technology this time allowed us to change that design very quickly, get people trained, and not essentially stop the census. We essentially had to slow it down, because we weren't sure exactly when it was going to be safe to go knocking door to door, but we were able to do the training, and all of that worked and continues to work phenomenally. >> Wonderful. Jared, I wonder if you've got any lessons learned from working with the census group that might be applicable to, kind of, the broader customers out there? >> Oh, sure. Well, working with the census, you know, it was really a great group to work with — I mean, one of the few groups I've worked with who have such a clear vision and understanding of what they want their final outcome to be. I think, again, for us the internalization of the decennial mission, right — it's so big, it's so important. I think that because we adopted it early on, we felt that we were true partners with census, we had a lot of credibility with our counterparts, and I think they understood that we were in it with them together, and that was really important. I would also say that, you know, because we're talking about the GovCloud solutions that we worked with, we also engaged heavily with the AWS engineering group, and in partnership with them we relied on the infrastructure event management services they offer, and that was able to give us a lot of great insight into our architecture and our systems and monitoring, to really make us feel like we were ready for the big show when the time came. So, you know, I think for me another lesson learned there was that the cloud providers like AWS, they're not just a vendor, they're a partner, and I think that, going forward, we'll continue to engage with those partners early and often.
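[For illustration — not from the interview: one piece of the architecture Jared describes above, DynamoDB global tables for cross-region session persistence provisioned through CloudFormation, might be sketched roughly as below. This is a hypothetical, minimal template, not the census program's actual configuration; the resource type and property names are recalled from AWS's CloudFormation support for global tables and should be verified against current AWS documentation.]

```yaml
# Hypothetical sketch: a DynamoDB global table replicated across two regions,
# of the kind that could back cross-region session persistence.
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  SessionTable:
    Type: AWS::DynamoDB::GlobalTable   # resource type as recalled; verify against AWS docs
    Properties:
      TableName: isr-session-state     # illustrative name, not from the interview
      BillingMode: PAY_PER_REQUEST     # on-demand capacity absorbs traffic spikes
      AttributeDefinitions:
        - AttributeName: session_id
          AttributeType: S
      KeySchema:
        - AttributeName: session_id
          KeyType: HASH
      Replicas:                        # one replica entry per region
        - Region: us-east-1
        - Region: us-west-2
```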
>> Michael the question I have for you is, you know, what would you say to your peers? What lessons did you have learned and how much of what you've done for the census, do you think it will be applicable to all those other surveys that you do in between the big 10 year surveys? >> All right. I think we have actually set a good milestone for the rest of the Census Bureau, that the modernization that the 2020 census has allowed since it is our flagship really is something that we hope we can continue through the decade and into the next census, as a matter of fact. But I think one of the big lessons learned I wanted to talk about was we have always struggled with disaster recovery. And one of the things that having the Cloud and our partners in the Cloud has helped us do is essentially take advantage of the resilience of the Cloud. So there are data centers all over the country. If ever had a downtime somewhere, we knew that we were going to be able to stay up. For the decennial census, we've never had the budget to pay for a persistent disaster recovery. And the Cloud essentially gives us that kind of capability. Jared talked a lot about security. I think we have taken our security posture to a whole different level, something that allowed us to essentially, as I said before, keep our internet self response free of hacks and breaches through this whole process and through a much longer process than we even intended to keep it open. So, there's a lot here that I think we want to bring into the next decade, a lot that we want to continue, and we want the census to essentially stay as modern as it has become for 2020. >> Well, I will tell you personally Michael, I did take the census online, it was really easy to do, and I'll definitely recommend if they haven't already, everybody listening out there so important that you participate in the census so that they have complete data. So, Michael, Jared, thank you so much. Jared, congratulations to your team for winning the award and you know, such a great customer. Michael, thank you so much for what you and your team are doing. We Appreciate all that's being done, especially in these challenging times. >> Thank you and thanks for doing the census. >> All right and stay tuned for more coverage of the AWS public sector partner award I'm Stu Miniman and thank you for watching theCUBE. (upbeat music)
Anurag Goel, Render & Steve Herrod, General Catalyst | CUBE Conversation, June 2020
>> Announcer: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hi, and welcome to this CUBE Conversation, from our Boston area studio, I'm Stu Miniman, happy to welcome to the program, first of all we have a first time guest, always love when we have a founder on the program, Anurag Goel is the founder and CEO of Render, and we've brought along a longtime friend of the program, Dr. Steve Herrod, he is a managing director at General Catalyst, a investor in Render. Anurag and Steve, thanks so much for joining us. >> Thank you for having me. >> Yeah, thanks, Stu. >> All right, so Anurag, Render, your company, the tagline is the easiest cloud for developers and startups. It's a rather bold statement, most people feel that the first generation of cloud has happened and there were certain clear winners there. The hearts and minds of developers absolutely has been a key thing for many many companies, and one of those drivers in the software world. Why don't you give us a little bit of your background, and as the founder of the company, what was it, the opportunity that you saw, that had you create Render? >> Yeah, so I was the fifth engineer at Stripe, and helped launch the company and grow it to five billion dollars in revenue. And throughout that period, I saw just how much money we were spending on just hiring DevOps engineers, AWS was a huge huge management headache, really, there's no other way to describe it. And even after I left Stripe, I was thinking hard about what I wanted to do next, and a lot of those ideas required some form of development and deployment, and putting things in production, and every single time I had to do the same thing over and over and over again, as a developer, so despite all the advancements in the cloud, it was always repetitive work, that wasn't just for my projects, I think a lot of my friends felt the same way. And so, I decided that we needed to automate some of these new things that have come about, as part of the regular application deployment process, and how it evolves, and that's how Render was born. >> All right, so Steve, remember in the early days, cloud was supposed to be easy and inexpensive, I've been saying on theCUBE it's like well, I guess it hasn't quite turned out that way. Love your viewpoint a little bit, because you've invested here, to really be competitive in the cloud, tens of billions of dollars a year, that need to go into this, right? >> Yeah, I had the fortunate chance to meet Anurag early on, General Catalyst was an investor in Stripe, and so seeing what they did sort of spurred us to think about this, but I think we've talked about this before, also, on theCUBE, even back, long ago in the VMware days, we looked very seriously at buying Heroku, one of the early players, and still around, obviously, at Salesforce in this PaaS space, and every single infrastructure conversation I've had from the start, I have to come back to myself and come back to everyone else and just say, don't forget, the only reason any infrastructure even exists is to run applications. And as we talked about, the first generation of cloud, it was about, let's make the infrastructure disappear, and make it programmatic, but I think even that, we're realizing from developers, that is just still way too low of an abstraction level. 
You want to write code, you want to have it in GitHub, and you want to just press go, and it should automatically deploy, automatically scale, automatically secure itself, and just let the developer focus purely on the app, and that's a idea that people have been talking about for 20 years, and should continue to talk about, but I really think with Render, we found a way to make it just super easy to deploy and run, and certainly it is big players out there, but it really starts with developers loving the platform, and that's been Anurag's obsession since I met him. >> Yeah, it's interesting, when I first was reading I'm like "Wait," reminds me a lot of somebody like DigitalOcean, cloud for developers who are, Steve, we walked through, the PaaS discussion has gone through so many iterations, what would containerization do for things, or serverless was from its name, I don't need to think about that underlying layer. Anurag, give us a little bit as to how should we think of Render, you are a cloud, but you're not so much, you're not an infrastructure layer, you're not trying to compete against the laundry list of features that AWS, Azure, or Google have, you're a little bit different than some of the previous PaaS players, and you're not serverless, so, what is Render? >> Yeah, it is actually a new category that has come about because of the advent of containers, and because of container orchestration tools, and all of the surrounding technologies, that make it possible for companies like Render to innovate on top of those things, and provide experiences to developers that are essentially serverless, so by serverless you could mean one of two things, or many things really, but the way in which Render is serverless is you just don't have to think about servers, all you need to do is connect your code to GitHub, and give Render a quick start command for your server and a build command if needed, and we suggest a lot of those values ourselves, and then every push to your GitHub repo deploys a new version of your service. And then if you wanted to check out pull requests, which is a way developers test out code before actually pushing it to deployment, every pull request ends up creating a new instance of your service, and you can do everything from a single static site, to building complex clusters of several microservices, as well as managed Postgres, things like clustered Kafka and Elasticsearch, and really one way to think about Render, is it is the platform that every company ends up building internally, and spends a lot of time and money to build, and we're just doing it once for everyone and doing it right, and this is what we specialize in, so you don't have to. >> Yeah, just to add to that if I could, Stu, what's I think interesting is that we've had and talked about a lot of startups doing a lot of different things, and there's a huge amount of complexity to enable all of this to work at scale, and to make it work with all the things you look for, whether it's storage or CDNs, or metrics and alerting and monitoring, all of these little startups that we've gone through and big companies alike, if you could just hide that entirely from the developer and just make it super easy to use and deploy, that's been the mission that Anurag's been on to start, and as you hear it from some of the early customers, and how they're increasing the usage, it's just that love of making it simple that is key in this space. 
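[For illustration — not from the interview: the workflow Anurag describes — connect a repo, give Render a build command and a start command, and every push deploys — is typically captured in a blueprint file checked into the repo. The sketch below is hypothetical; the service name and commands are made up, and the field names are recalled from Render's render.yaml format and may have changed, so check Render's current docs.]

```yaml
# Hypothetical render.yaml at the root of a Git repo connected to Render.
# Every push to the connected branch triggers a build and deploy, and pull
# requests can spin up temporary preview instances of the service.
services:
  - type: web                    # public web service; TLS and load balancing handled by the platform
    name: example-api            # illustrative name
    env: python                  # runtime; newer blueprint versions may call this "runtime"
    buildCommand: pip install -r requirements.txt
    startCommand: gunicorn app:app
    autoDeploy: true             # deploy automatically on every push
```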
>> All right, yeah, Anurag, maybe it would really help illustrate things if you could talk a little bit about some of your early customers, their use case, and give us what stats you can about how your company's growing. >> Certainly. So, one of our more prominent customers was the Pete Buttigieg campaign, which ran through most of 2019, and through the first couple of months of 2020. And they moved to us from Google Cloud, because they just could not or did not want to deal with the complexity in today's standard infrastructure providers, where you get a VM and then you have to figure out how to work with it, or even Managed Kubernetes, actually, they were trying to run on Managed Kubernetes on GKE, and that was too complex or too much to manage for the team. And so they moved all of their infrastructure over to Render, and they were able to service billions of requests over the next few months, just on our platform, and every time Pete Buttigieg went on stage during a debate and said "Oh, go to PeteForAmerica.com," there's a huge spike in traffic on our platform, and it scaled with every debate. And so that's just one example of where really high quality engineering teams are saying "No, this stuff is too complex, it doesn't need to be," and there is a simpler alternative, and Render is filling in that gap. We also have customers all over, from single indie hackers who are just building out their new project ideas, to late stage companies like Stripe, where we are making sure that we scale with our users, and we give them the things that they would need without them having to "mature" into AWS, or grow into AWS. I think Render is built for the entire lifecycle of a company, which is you start off really easily, and then you grow with us, and that is what we're seeing with Render where a lot of customers are starting out simple and then continuing to grow their usage and their traffic with us. >> Yeah, I was doing some research getting ready for this, Anurag, I saw, not necessarily you're saying that you're cheaper, but there are some times that price can help, performance can be better, if I was a Heroku customer, or an AWS customer, I guess what might be some of the reasons that I'd be considering Render? >> So, for Heroku, I think the comparison of course, there's a big difference in price, because we think Heroku is significantly overpriced, because they have a perpetual free tier, and so their paid customers end up footing the bill for that. We don't have a perpetual free tier that way, we make sure that our paid customers pay what's fair, but more importantly, we have features that just haven't been available in any platform as a service up until now, for example, you cannot spin up persistent storage, block storage, in Heroku, you cannot set up private networking in Heroku as a developer, unless you pay for some crazy enterprise tier which is 1500, 3000 dollars a month. 
And Render just builds all of that into the platform out of the box, and when it comes to AWS, again, there's no comparison in terms of ease of use, we'll never be cheaper than AWS, that's not our goal either, it's our goal to make sure that you never have to deal with the complexity of AWS while still giving you all of the functionality that you would need from AWS, and when you think about applications as applications and services as opposed to applications that are running on servers, that's where Render makes it much easier for developers and development teams to say "Look, we don't actually need "to hire hundreds of DevOps people," we can significantly reduce our DevOps team and the existing DevOps team that we have can focus on application-level concerns, like performance. >> All right, so Steve, I guess, a couple questions for you, number one is, we haven't talked about security yet, which I know is a topic near and dear to your heart, was one of the early concerns about cloud, but now often is a driver to move to cloud, give us the security angle for this space. >> Yeah, I mean the key thing in all of the space is to get rid of the complexity, and complexity and human error is often, as we've talked about, that is the number one security problem. So by taking this fresh approach that's all about just the application, and a very simple GitOps-based workflow for it, you're not going to have the human error that typically has misconfigured things and coming into there, I think more broadly, the overall notion of the serverless world has also been a very nice move forward for security. If you're only bringing up and taking down the pieces of the application as needed, they're not there to be hacked or attacked. So I think for those two reasons, this is really a more modern way of looking at it, and again, I think we've talked about many times, security is the bane of DevOps, it's the slowest part of any deployment, and the more we get rid of that, the more the extra value proposition comes safer and also faster to deploy. >> The question I'd like to hear both of you is, the role of the developer has changed an awful lot. Five years ago, if I talked to companies, and they were trying to bring DevOps to the enterprise, or anything like that, it seemed like they were doomed, but things have matured, we all understand how important the developer is, and it feels like that line between the infrastructure team and the developer team is starting to move, or at least have tools and communication happening between them, I'd love, maybe Steve if you can give us a little bit your macroview of it, and Anurag, where that plays for Render too. >> Yeah, and Anurag especially would be able to go into our existing customers. What I love about Render, this is a completely clean sheet approach to thinking about, get rid of infrastructure, just make it all go away, and have it be purely there for the developers. Certainly the infrastructure people need to audit and make sure that you're passing the certifications and make sure that it has acceptable security, and data retention and all those other pieces, but that becomes Anurag's problem, not the developer problem. And so that's really how you look at it. The second thing I've seen across all these startups, you don't typically have, especially, you're not talking about startups, but mid-sized companies and above, they don't convert all the way to DevOps. 
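[For illustration — not from the interview: the persistent disks and private networking Anurag contrasts with Heroku a little earlier can be declared in the same blueprint format. This sketch is hypothetical and from memory of Render's schema; the private-service type keyword and the disk fields in particular should be verified against Render's documentation.]

```yaml
# Hypothetical private service with an attached persistent disk. It is
# reachable only from other services on the account's private network,
# never from the public internet.
services:
  - type: pserv                  # "private service" type as recalled; verify the keyword
    name: internal-worker        # illustrative name
    env: node
    buildCommand: yarn install && yarn build
    startCommand: node dist/server.js
    disk:
      name: worker-data
      mountPath: /var/data       # writes here survive deploys and restarts
      sizeGB: 10
```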
You typically have people peeling off individual projects, and trying to move faster, and use some new approach for those, and then as those hopefully go successful, more and more of the existing projects will begin to move over there, and so what Render's been doing, and what we've been hoping from the start, is let's attract some of the key developers and key new projects, and then word will spread within the companies from there, but so the answer, and a lot of these companies make developers love you, and make the infrastructure team at least support you. >> Yeah, and that was a really good point about developers and infrastructure, DevOps people, the line between them sort of thinning, and becoming more of a gray area, I think that's absolutely right, I think the developers want to continue to think about code, but then, in today's environment, outside of Render when we see things like AWS, and things like DigitalOcean, you still see developers struggling. And in some ways, Render is making it easy for smaller companies and developers and startups to use the same best practices that a fully fledged DevOps team would give them, and then for larger companies, again, it makes it much easier for them to focus their efforts on business development and making sure they're building features for their users, and making their apps more secure outside of the infrastructure realm, and not spending as much time just herding servers, and making those servers more secure. To give you an example, Render's machines aren't even accessible from the public internet, where our workloads run, so there's no firewall to configure, really, for your app, there's no DMZ, there's no VPN. And then when you want to make sure that you're just, you want a private network, that's just built into Render along with service discovery. All your services are visible to each other, but not to anyone else. And just setting those things up, on something like AWS, and then managing it on an ongoing basis, is a huge, huge, huge cost in terms of resources, and people. >> All right, so Anurag, you just opened your first region, in Europe, Frankfurt if I remember right. Give us a little bit as to what growth we should expect, what you're seeing, and how you're going to be expanding your services. >> Yeah, so the expansion to Europe was by far our most requested feature, we had a lot of European users using Render, even though our servers were, until now, based in the US. In fact, one of, or perhaps the largest recipe-sharing site in Italy was using Render, even though the servers were in the US, and all their users were in Italy, and when we moved to Europe, that was like, it was Christmas come early for them, and they just started moving over things to our European region. But that's just the start, we have to make sure that we make compute as accessible to everyone, not just in the US or Europe but also in other places, so we're looking forward to expanding in Asia, to expanding in South America, and even Africa. And our goal is to make sure that your applications can run in a way that is completely transparent to where they're running, and you can even say "Look, I just want my application to run "in these four regions across the globe, "you figure out how to do it," and we will. 
And that's really the sort of dream that a lot of platforms as service have been selling, but haven't been able to deliver yet, and I think, again, Render is sort of this, at this point in time, where we can work on those crazy crazy dreams that we've been selling all along, and actually make them happen for companies that have been burned by platforms as a service before. >> Yeah, I guess it brings up a question, you talk about platforms, and one of the original ideas of PaaS and one of the promises of containerization was, I should be able to focus on my code and not think about where it lives, but part of that was, if I need to be able to run it somewhere else, or want to be able to move it somewhere else, that I can. So that whole discussion of portability, in the Kubernetes space, it definitely is something that gets talked quite a bit about. And can I move my code, so where does multicloud fit into your customers' environments, Anurag, and is it once they come onto Render, they're happy and it's easy and they're just doing it, or are there things that they develop on Render and then run somewhere else also, maybe for a region that you don't have, how does multicloud fit into your customers' world? >> That's a great question, and I think that multicloud is a reality that will continue to exist, and just grow over time, because not every cloud provider can give you every possible service you can think of, obviously, and so we have customers who are using, say, Redshift, on AWS, but they still want to run their compute workloads on Render. And as a result, they connect to AWS from their services running on Render. The other thing to point out here, is that Render does not force you into a specific paradigm of programming. So you can take your existing apps that have been containerized, or not, and just run them as-is on Render, and then if you don't like Render for whatever reason, you can take them away without really changing anything in your app, and run them somewhere else. Now obviously, you'll have to build out all the other things that Render gives you out of the box, but we don't lock you in by forcing you to program in a way that, for example, AWS Lambda does. And when it comes to the future, multicloud, I think Render will continue to run in all the major clouds, as well as our own data centers, and make sure that our customers can run the appropriate workloads wherever they are, as well as connect to them from the Render services with ease. >> Excellent. >> And maybe I'll make one more point if I could, Stu, which is one thing I've been excited to watch is the, in any of these platform as a services, you can't do everything yourself, so you want the opensource package vendors and other folks to really buy into this platform too, and one exciting thing we've seen at Render is a lot of the big opensource packages are saying "Boy, it'd be easier for our customers to use our opensource "if it were running on Render." And so this ecosystem and this set of packages that you can use will just be easier and easier over time, and I think that's going to lead to, at the end of the day people would like to be able to move their applications and have it run anywhere, and I think by having those services here, ultimately they're going to deploy to AWS or Google or somewhere else, but it is really the right abstraction layer for letting people build the app they want, that's going to be future-proof. 
>> Excellent, well Steve and Anurag, thank you so much for the update, great to hear about Render, look forward to hearing more updates in the future. >> Thank you, Stu. >> Thanks, Stu, good to talk to you. >> All right, and stay tuned, lots more coverage, if you go to theCUBE.net you can see all of the events that we're doing with remote coverage, as well as the back catalog of what we've done. I'm Stu Miniman, thank you for watching theCUBE. (calm music)
Greg DeKoenigsberg & Robyn Bergeron, Red Hat | AnsibleFest 2019
>> Live from Atlanta, Georgia, it's theCUBE, covering AnsibleFest 2019. Brought to you by Red Hat. >> Welcome back, everyone, to theCUBE. Live coverage in Atlanta, Georgia for AnsibleFest. This is Red Hat's event where all the practitioners come together, the community, to talk about automation anywhere. I'm John Furrier with my co-host Stu Miniman. Our next two guests are Robyn Bergeron, principal community architect for Ansible, now Red Hat, and Greg DeKoenigsberg, senior director, community, Ansible. Well, thanks for coming on. Appreciate it. >> Thank you. >> Okay, so we were talking before camera that you guys had — this is a two-day event we're covering with theCUBE, AnsibleFest, but you got your community day yesterday, the day before, and the people came in early. The core community — heard great things about it. Love to get an update. Could you share just what happened yesterday? And then we'll get into some of the community stuff. >> Sure. For all of our AnsibleFests for a while now, we've started them with a community contributor conference. And the goal of that conference is to get together a lot of the people we work with online, right — people we see as IRC nicks or GitHub handles — to get them together in the same room, have them interact with core members of our team. And that's where we really do make a lot of decisions about how we're going to be going forward, and get really direct feedback from some of our key contributors about the decisions we're making and the things we're thinking about, with the goal of, you know, involving our community deeply in a lot of the decisions we make. >> That's a working session, meets social get-together. >> That's right, several working sessions, and then, you know, drinks afterward for those who want the drinks, and just hang-out time. >> That way — drinks — and that last night was really good. I got the end of it, I missed the session, but they had the peaches — peaches — on the table. That was good. But this is a dynamic community. This is one of the things we noticed here: not a seat open in the house at the keynote, standing room only, an active participant base. From this organic base to now going mainstream — how are you guys handling it, how are you guys riding this wave? Because you certainly do get the community, which is great for the feedback you get. But as you commercialize open source and Ansible, it's a tough task. >> Well, I'd like to think part of it is, I guess, maybe it's not our first rodeo, is what we'd say. I mean, yeah — before Ansible, I worked at Elasticsearch doing community stuff. Before that, I worked at Red Hat. I was a Fedora project leader, number five. And you were Fedora project leader — what number was that? >> Number one, depending on how you count, but you're the one that got us to be able to call it having a Fedora project leader. So I sort of was number one. So we've been dealing with this stuff for a really long time. It's different in Ansible in that, you know, unlike a lot of old-school things like Fedora, a lot of this stuff is newer. And part of the reason it's really important for us to get some of these folks here to talk to us in person is that — and you saw my keynote this morning, where we talked about modularity — a lot of these folks are really just focused on their one little bit, and they don't always have as much time.
People are working in lots of open source projects now, right, and it's hard to pay deep attention to every single little thing all the time. So this gives them a day of, in case you missed it, here's the deep, dark dive into everything that, you know, we're planning or thinking about. And they really are — the people who are managing those smaller parts all around Ansible really are some of our best feedback loops, right? Because they're the people who probably wrote that module, because they're using it every single day, and they're hardcore Ansible users. But they also understand how to participate in community, so we can get those people actually talking with the rest of us, who — a lot of us used to be sysadmins. I used to be a sysadmin, lots of us were. You know, a lot of our employees actually just got into wanting to work on Ansible because they loved using it so much at their jobs. And when you're not actually sysadmin-ing every day, you lose a little bit of the front lines, the truth of what's going on. >> The truth is right there. >> And putting all these people together in a room makes sure that — also, you know, when you have to look at someone in the eye and tell them news that they might not like, you have a different level of empathy, and you approach it a little bit differently than you might on the Internet. >> So, Robyn, I loved your keynote this morning. You talked about Ansible — the first commit was only back in 2012. So that simplicity, that modularity, and the learnings from where open source had been in the past — what could Ansible do, being a relatively young project, that it might not have been able to do if it had a couple of decades of history? >> Maybe Greg should tell the story about the Func project. >> Yeah. There was a project at Red Hat that we started in 2007 in a coffee shop in Chapel Hill, North Carolina — myself and Michael DeHaan and Seth Vidal and Adrian Likins, who still works with us at Ansible. And we put together an idea with all the same underpinnings, right — a highly modular automation tool. We debated at the time whether it should be based on SSL or SSH; for Func, we chose SSL. And, you know, after watching that grow to a certain point and then stagnate, and it being inside of Red Hat where, you know, there were a lot of other business pressures, things like that — we learned a lot from that experience, and we were able to take that experience forward. And then in 2012, the open source community was a little different. Open source was more acceptable. GitHub was becoming a common platform for open source project hosting. And so a lot of things came together in a short period of time. All that experience, although —
I think now what I've always admired about the simplicity is automation requires that the abstract, the way, the complexities and so I think you bring a cloud that brings up more complexity, more use cases for some of the underlying paintings of the plumbing. And this is always gonna This is a moving train that's never going to stop. What was the feedback from the community this year around? As you guys get into some of these analytical capabilities, so the new features have a platform flair to it. It's a platform you guys announced answerable automation platform that implies that enables some value. >>You know, I >>think in >>a way. We've always been a platform, right, because platform is a set of small rules and then modules that attached to it. It's about how that grows, right? And, uh, traditionally, we've had a batteries included model where every module and plug in was built to go into answerable Boy, that got really big bright and >>we like to hear it. I don't even know how many I keep say, I'll >>say 2000. Then it'll be 3000 say 3000 >>something else, a lot of content. And it's, you know, in the beginning, it was I can't imagine this ever being more than 202 150 batteries included, and at some point, you know, it's like, Whoa, yeah, taking care of this and making sure it all works together all the time gets >>You guys have done a great You guys have done a great job with community, and one of the things that you met with Cloud is as more use cases come, scale becomes a big question, and there's real business benefits now, so open source has become part of the business. People talk about business, models will open source. You guys know that you've been part of that 28 years of history with Lennox. But now you're seeing Dev Ops, which is you'll go back to 78 2009 10 time frame The only the purest we're talking Dev ops. At that time, Infrastructures Co was being kicked around. We certainly been covering the cubes is 2010 on that? But now, in mainstream enterprise, it seems like the commercialization and operational izing of Dev ops is here. You guys have a proof point in your own community. People talk about culture, about relationships. We have one guest on time, but they're now friends with the other guy group dowels. So you stay. The collaboration is now becoming a big part of it because of the playbook because of the of these these instances. So talk about that dynamic of operational izing the Dev Ops movement for Enterprise. >>All right, so I remember Ah, an example at one of the first answer professed I ever went thio There were there were a few before I came on board. Ah, but it was I >>think it was >>the 1st 1 I came to when I was about to make the jump from my previous company, and I was just There is a visitor and a friend of the team, and there was an adman who talked to me and said, For the first time, I have this thing, this playbook, that I can write and that I can hand to my manager and say this is what we're going to D'oh! Right? And so there was this artifact that allowed for a bridging between different parts of the organization. That was the simplicity of that playbook that was human readable, that he could show to his boss or to someone else in the organ that they could agree on. And suddenly there was this sort of a document that was a mechanism for collaboration that everyone could understand buy into that hadn't really existed before. Answerable existed after me. 
That was one of the many, you know, flip of the light moments where I was like, Oh, wow, maybe we have something >>really big. There were plenty of other infrastructures, code things that you could hand to someone. But, you know, for a lot of people, it's like I don't speak that language right? That's why we like to say like Ansel sort of this universal automation language, right? Like everybody can read it. You don't have to be a rocket scientist. Uh, it's, you know, great for your exact example, right? I'm showing this to my manager and saying This is the order of operations and you don't have to be a genius to read it because it's really, really readable >>connecting system which connects people >>right. It's fascinating to May is there was this whole wave of enterprise collaboration tools that the enterprise would try to push down and force people to collaborate. But here is a technology tool that from the ground up, is getting people to do that collaboration. And they want to do it. And it's helping bury some >>of those walls. And it's interesting you mention that I'm sure that something like slack is a thing that falls into that category. And they've built around making sure that the 20 billion people inside a company all sign up until somebody in the I T departments like, What do you mean? These random people are just everyone's using it. No one saving it isn't secure, and they all freak out, and, um, well, I mean, this is sort of, you know, everybody tells her friend about Ansel and they go, Oh, right, Tool. That's gonna save the world Number 22 0 wait, actually, yeah. No, this is This actually is pretty cool. Yeah, yeah, yeah, I get started. >>Well, you know, sometimes the better mouse trap will always drive people to that solution. You guys have proven that organic. What's interesting to me is not only does it keep win on capabilities, it actually grew organically. And this connective tissue between different groups, >>right? Got it >>breaks down that hole silo mentality. And that's really where I tease been stuck? Yes. And as software becomes more prominent and data becomes more prominent, it's gonna just shift more power in the hands of developer and to the, uh, just add mons who are now being redeployed into being systems, architects or whatever they are. This transitional human rolls with automation, >>transformation architect >>Oh my God, that's a real title. I don't >>have it, but >>double my pay. I'll take it. >>So collections is one of the key things talked about when we talk about the Antelope Automation platform. Been hearing a lot discussion about how the partner ecosystems really stepping up even more than before. You know, 4600 plus contributors out there in community, But the partners stepping up Where do you see this going? Where? Well, collections really catalyze the next growth for your >>It's got to be the future for us that, you know, there there were a >>few >>key problems that we recognize that the collections was ultimately the the dissolution that we chose. Uh, you know, one key problem is that with the batteries included model that put a lot of pressure on vendors to conform to whatever our processes were, they had to get their batteries in tow. Are thing to be a part of the ecosystem. And there was a huge demand to be a part of our ecosystem. The partners would just sort of, you know, swallow hard and do what they needed to d'oh. But it really wasn't optimized Tol partners, right? So they might have different development processes. 
They might have different release cycles. They might have different testing on the back end. That would be, you know, more difficult to hook together. Collections breaks a lot of that out and gives our partners a lot of freedom to innovate on their own time. >> Release on their own cycle, their own cadence. "We just released our new version of software, but you can't actually get the new Ansible modules that are updated for it until Ansible releases" is not always the thing that, you know, makes their product immediately useful. You know, you're a vendor, you release something new, you want people to start using it right away, not wait until Ansible comes around. >> And that new artifact also creates more network effects with, you know, Galaxy and Automation Hub, and, you know, the new deployment options that we're going to have available for that stuff. So I think it's just leveling up, right? It's taking the same approach that's gotten us this far and just taking it to another level. >> I certainly wouldn't consider it to be like the partners are a separate part of our community. They're still definitely part of the community, it's just they have slightly different problems. And, you know, there were folks from all sorts of different companies who are partners at the contributor summit yesterday, actually, you know, participating, folks swapping stories and listening to each other, and again being part of that feedback. >> Maybe just a little bit broader: you know, the other communities out there, I think of the Cloud Native Computing Foundation, the Open Infrastructure Foundation. You're wearing your pin there. Talk a little bit about how Ansible plays across these other communities, which are, you know, very much a mixture of vendors and end users. >> Well, I mean... sorry, are you asking about how Ansible is relating to those other communities? Okay, yeah, because I'm all about that. I mean, we certainly had a long-standing sort of fan base over in the OpenStack slash Open Infrastructure Foundation land. Most of the deployment tools, for all of, you know, the many different ways to deploy OpenStack, a lot of them wound up settling on Ansible over time. You know, that community sort of matured, and there were a lot of periods of experimentation, and that's one of those things: some things lived, some things didn't, but the core parts of what you actually need to make a cloud are, you know, basically still there. And then we also have a ton of modules, actually, in Ansible that, you know, help people operationalize all their OpenStack cloud stuff, just like we have modules for AWS and Google Cloud and Azure and whoever else I'm leaving out this week. As far as the CNCF stuff goes, I mean, again, we've seen a lot of, you know, "how do I get this thing up and running?" Turns out Kubernetes is not particularly easy to get up and running. It's even more complicated than a cloud sometimes, because it also assumes you've got a cloud of some sort already. And I like that with our thing, I can actually use it, it's pretty cool: Kubespray. And then a lot of the other projects also have, you know, things that are related to Ansible. Now there's the Ansible Operator stuff. I don't know if you want to touch on that, but... >> Yeah, we're working on that.
We know one of the big questions is how Ansible and OpenShift slash Kubernetes work together. Frequently, in sort of Kubernetes land, OpenShift land, you want to keep as much as you can on the cluster, lots of operations on the cluster. Sometimes you've got to talk to things outside of the cluster, right? You've got to set up some networking stuff, or you've got to go talk to an S3 bucket. There's always something, some storage thing. As much as you try to get things into container land, there's always legacy stuff, there's always new stuff, maybe edge stuff, that might not all be part of your cluster. And so one of the things we're working on is making it easier to use Ansible as part of your operator structure, to go and manage some of those things, using the operator framework that's already built into Kubernetes. >> Again, more complexity out there. >> Well, and the thing is, we're great glue. Ansible is such great glue, and it's accessible to so many people. And as we move away from monolithic code bases to microservices and vastly spread-out code bases, it's not like the complexity goes away. The complexity simply moves to the relationships between the components, and Ansible is excellent glue for helping to manage those relationships. >> Who doesn't like a glue layer? >> Everyone... and if it's good and easy to understand, even better. >> The glue layer's key. Guys, thanks for coming on and sharing your insights. Take a quick minute to give a quick plug for the community. What's up? Stats, updates, quick projects. Give a quick plug for what's going on in the community real quick. >> You go first. >> We're big. We're six... seven? >> No, it was number six. Number seven was Kubernetes. >> Right, number six out of 96 million projects on GitHub. So lots of contributors, lots of energy. >> Any time I try to cite a stat, I find that I have to actually go and look it up, and I was about to cite that one again. >> So very active, high numbers of people and activity. What's that mean? You're running the plumbing, so obviously it's cloud, on-premise. Other updates? Projects, the contributor day? What's next, what's on the schedule? >> We're looking to put together our next contributor summit. We're hoping in Europe sometime in the spring, so we've got to get that on the plate. I don't know if we've announced the next AnsibleFest yet... >> I know that happens tomorrow. So don't... >> Don't ruin that for everybody. >> Congratulations on the great community. You guys have done great work out in the open: open source, open business, open everything these days. Can't bet against open. >> Again, I wouldn't bet against open. >> We're here, theCUBE, we're open, sharing all the data here in Atlanta with the interviews. I'm John Furrier with Stu Miniman. Stay with us for more after this short break.
SUMMARY :
Brought to you by Red Hat. theCUBE is at AnsibleFest 2019 in Atlanta, Georgia, where John Furrier and Stu Miniman sit down with Red Hat's Ansible community leaders, Robyn Bergeron and Greg DeKoenigsberg, fresh off this week's contributor summit. They talk about how Ansible's simplicity and human-readable playbooks turned automation into a collaboration mechanism that bridges admins, developers, and managers; how the batteries-included model grew to thousands of modules and why collections now let partners build, test, and release on their own cycles through Galaxy and Automation Hub; Ansible's role across the OpenStack and CNCF communities, including operator work for Kubernetes and OpenShift; and community momentum such as ranking number six among the roughly 96 million projects on GitHub, with a European contributor summit planned for the spring.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
2007 | DATE | 0.99+ |
2012 | DATE | 0.99+ |
Robyn Bergeron | PERSON | 0.99+ |
John Kerry | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
Cloud Native Computing Foundation | ORGANIZATION | 0.99+ |
Atlanta | LOCATION | 0.99+ |
John | PERSON | 0.99+ |
Open Infrastructure Foundation | ORGANIZATION | 0.99+ |
2019 | DATE | 0.99+ |
28 years | QUANTITY | 0.99+ |
two day | QUANTITY | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Atlanta, Georgia | LOCATION | 0.99+ |
Greg DeKoenigsberg | PERSON | 0.99+ |
Bergeron | PERSON | 0.99+ |
Greg | PERSON | 0.99+ |
Greg DeKoenigsberg | PERSON | 0.99+ |
Robin | PERSON | 0.99+ |
Infrastructures Co | ORGANIZATION | 0.99+ |
Red hat | ORGANIZATION | 0.99+ |
Ansible | ORGANIZATION | 0.99+ |
20 billion people | QUANTITY | 0.99+ |
4600 plus contributors | QUANTITY | 0.99+ |
2010 | DATE | 0.99+ |
yesterday | DATE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
2000 | QUANTITY | 0.98+ |
ELASTICSEARCH | ORGANIZATION | 0.98+ |
Yesterday | DATE | 0.98+ |
tomorrow | DATE | 0.98+ |
67 | QUANTITY | 0.98+ |
May | DATE | 0.98+ |
first time | QUANTITY | 0.98+ |
3000 | QUANTITY | 0.98+ |
Red Hats | EVENT | 0.98+ |
one | QUANTITY | 0.98+ |
this week | DATE | 0.98+ |
Fedora | ORGANIZATION | 0.97+ |
one guest | QUANTITY | 0.97+ |
more than 202 150 batteries | QUANTITY | 0.97+ |
two guests | QUANTITY | 0.96+ |
96 million projects | QUANTITY | 0.96+ |
Chapel Hill, North Carolina | LOCATION | 0.95+ |
Linux | ORGANIZATION | 0.95+ |
Minutemen | LOCATION | 0.94+ |
fedora | ORGANIZATION | 0.93+ |
first | QUANTITY | 0.91+ |
first rodeo | QUANTITY | 0.91+ |
Anselm | LOCATION | 0.91+ |
one key problem | QUANTITY | 0.91+ |
GitHub | ORGANIZATION | 0.91+ |
this year | DATE | 0.91+ |
Michael DeHaan | PERSON | 0.9+ |
Cooper | PERSON | 0.89+ |
2009 | DATE | 0.89+ |
Number seven | QUANTITY | 0.87+ |
Community Ansel | ORGANIZATION | 0.87+ |
Azure | TITLE | 0.86+ |
first answer | QUANTITY | 0.84+ |
Cloud | TITLE | 0.84+ |
this morning | DATE | 0.83+ |
First commit | QUANTITY | 0.79+ |
one little | QUANTITY | 0.79+ |
Number six | QUANTITY | 0.76+ |
last night | DATE | 0.75+ |
AnsibleFest | EVENT | 0.75+ |
a day | QUANTITY | 0.74+ |
single day | QUANTITY | 0.73+ |
10 time | QUANTITY | 0.71+ |
CNCF | TITLE | 0.7+ |
single little thing | QUANTITY | 0.69+ |
1st 1 | QUANTITY | 0.67+ |
D'oh | ORGANIZATION | 0.66+ |
Google Cloud | ORGANIZATION | 0.64+ |
couple | QUANTITY | 0.62+ |
Lew Cirne, New Relic | New Relic FutureStack 2019
>> Narrator: From New York City, it's theCUBE, covering New Relic FutureStack 2019, brought to you by New Relic. >> Hi, I'm Stu Miniman and this is theCUBE at New Relic FutureStack 2019 here in New York City. It's our first year of the event, but the event itself has been around for seven years and to help us end our coverage, no better person than the founder and CEO of New Relic, and the one who the name of the company came from, Lew Cirne. Of course, Lew Cirne is an anagram for New Relic. >> Indeed it is. >> Lew, thank you so much for having theCUBE at the event here and thanks for hosting us. >> I'm a huge fan of theCUBE. I've been watching it for a long time and it's such a pleasure to have you guys here. Thank you for coming. >> All right, so Lew, you're known as the coding CEO >> Lew: I am. >> And you come out with a vision of making software better. It's a great goal. Give us a little bit about the state of the industry. You know the internet challenge these days. It's going to fragment into a bunch of pieces and Open Source isn't what it used to be. There's so many changes going in the industry. Just kind of macro view before we get into New Relic. >> Yeah, from a macro view at New Relic we do this for the love of software. It's not just me, it's the whole company. We believe in software. We think it unquestionably is changing the world, transforming every industry. It's not enough just to build software that's great. You have to deliver more perfect software. That's now become almost obvious whereas when we first started out that was actually a bit of an evangelical sale where we had to convince people that they needed to observe their software. Now it's become a must-do thing, and that's why observability has become a household term. Everybody recognizes that anything that runs in production in internet scale needs to be observed, needs to be measured in real time. And so, that's been going on and has become a must-do thing for our customers. What we're so excited about is that we're delivering the first observability platform. What do we mean by that? Well, we see with this proliferation of tools, you might have metrics going to one place and logs going to another place and traces going to Zipkin or logs going to Elasticsearch. You want it all in one place, and more important, you want it to be connected so that you can see the relationship between the application and its server or infrastructure and the user experience all in one connected platform. That's what we're delivering with New Relic One today that's so exciting. >> Yeah. So, Lew, the IT industry in general is known for its fragmentation. >> Lew: Yeah, it is. >> When I want to build my application in the old days, I talk to the CIO. He's like, "Give me a million dollars and 18 months "and I will build you the Taj Mahal of my application." And we've got it beautifully designed and pull it out. Well, today things are moving much faster, but I've got everything from that Taj Mahal to the Kubernetes and Serverless, Microservice Architectures-- >> Lew: All that compartment-based stuff, yeah. >> There's usually a lot of different teams, and a lot of different tools in there. How does New Relic fit across that landscape and how are you helping to pull things together? 
Well, certainly the industry's moving from the monolithic application to the component-based application, often running in smaller and smaller services, usually running in something like Kubernetes or a containerized environment, and with that comes a proliferation of things to monitor, and often a proliferation of tools. We have enterprise customers that have 20, 30 different monitoring and telemetry tools. It's not because they want it, it's because there might be one particular feature that one tool does that gives them the visibility they need. And what they want is a single platform. What people have historically used New Relic for is dropping our agents into their application or their infrastructure. Then our agents automatically put visibility in and then they report on the health of that system. We do that really well, but what we're announcing today is that we're opening up our platform to consume telemetry from Open Source, agentless sources. So that, if you've got something like Prometheus that's gathering data from Kubernetes, that can go straight into New Relic and be treated as first-class data, so that you don't have to switch between a bunch of tools. None of our customers want that. They want it all in one place, but they need an open platform that's connected and, most importantly, programmable so that they can actually have one tool to see it all. And that's New Relic. >> A lot of the logging and tracing information out there isn't agent-led. What do you see as the future of agents, and what are some of the challenges of pulling all of these various data types together? >> Well, the most important thing for the future is that our customers have complete control and choice. What we see particularly in large enterprises is they want both. They have a portfolio of more than a thousand applications. They want to observe them all. Most of them they'll want to drop an agent in because they don't have time to reinstrument them, but they still need to see them. Some of them they may want to manually instrument because they want a higher level of control, or they want to adopt an Open Source API like OpenTelemetry. But then, if they're adopting that for some of their portfolio, when a transaction reaches across these different services, you don't want to lose visibility. We're delivering the best of both worlds. You can manually instrument what you want. You can use OpenTelemetry in parts of your environment. And then you can also use our automatic instrumentation that comes from our agents. Our customers get to decide, and that's the future. >> So, Lew, you've laid out the case in a strong way as to why New Relic One should be the platform for monitoring and observability. I think you undersold a little bit the NRDB piece. When I look inside my business or I talk to customers, being able to see my data and act on my data can be challenging. You showed a demo of 10 terabytes and being able to change it in a snap. >> You know, NRDB is pretty magical. At some risk, let's see if this will show up on my phone right now. Just to give you a sense of how fast NRDB is performing right now. Okay. One more time. So we've got-- >> Hold it up a little bit and show the camera this way. >> NRDB right at this moment is inserting 18 million events every second. Every second, 17.89 million pieces of data coming into NRDB in real time. And our customers are querying that in real time. Right now, in this moment, they're reading 24 billion pieces of data per second.
Those pieces of data could be log messages. They could be someone pressing something on their app, could be a request going through a server. It's all in the same database. And the last one is a hundred-millisecond response time on those queries, which is mind-blowing for these analytics queries. >> You actually showed the press and analysts this at lunch and it was over 20 million-- >> I think it was at 40 billion at that moment. >> 40 billion coming out and the same response time. A hundred milliseconds is Google-good as to how fast I get a response. >> For this kind of data processing, it's mind-blowing. Now, the thing that our customers need to know is that all your metrics, all your events, all your logs, all your traces go into the same database with one query language. That's so much better than going to Elasticsearch and using its query language for logs, then using a totally different query language for getting at your metrics, and then trying to stitch it all together. We put it all not only in one cloud but in one database. That is the most powerful telemetry database in the world, which is NRDB. >> Lew, give us a little bit of the journey to the announcement today. Observability's been talked about in the industry for a while. VC money has been pouring into startups. There's been some acquisitions in this space already. Give us a little bit as to how we got to today. >> So how we got to today was, when we started off as a company, we were championing the whole idea of observability, putting visibility into application code. As I said, that was a bit evangelical in the early days. People were wondering if they needed it. Now there's no question they need it. In fact, some people need it so badly they want complete control, and so they're manually instrumenting. OK, I've talked about that. Now where we see people going is, now that all of this telemetry data is coming ideally into one place like New Relic, our customers are saying, "I need to go beyond dashboards. Dashboards are good, but often dashboards are incomplete to get the most out of the data we're collecting." That's why we're claiming we have the first and only platform for observability, with a capital P. What do I mean by that? It's only a platform if you can build software on it, and New Relic One is the first software development platform for observability applications. Our customers can take all this data and build real-time applications that leverage all the value out of it. When a customer buys something online, New Relic's database could be the first piece of, certainly, analytics database that sees that data. So you could have an application that shows real-time sales for your business people, all based on New Relic One. We can also solve all sorts of IT operations problems by building applications on this platform. And to prove it out, we're offering 12 free Open Source applications to anyone. They can download, they can clone them off of GitHub and push them into their New Relic account, and they can use that as inspiration to build their own applications on top of our platform. >> Right. This is, if I understand, the first twelve, and you expect both New Relic and your customers will build many more. >> Yes, and actually it's thirteen already. We just added another one today. Some of those have been built by our customers already, and we're already seeing customers deploying these applications into their New Relic One accounts in production today.
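As a concrete illustration of the "one query language" idea Lew describes, here is a minimal sketch of querying NRDB with NRQL from Python. The account ID, query key, and NRQL string are placeholders, and the endpoint and header names reflect the Insights query API as it was commonly documented around this time, so treat the details as assumptions to verify rather than the exact interface.

```python
# Hypothetical sketch: running one NRQL query against NRDB.
# Account ID, query key, and the NRQL itself are placeholders;
# verify the endpoint and header names against current New Relic docs.
import requests

ACCOUNT_ID = "1234567"        # assumption: your New Relic account ID
QUERY_KEY = "NRIQ-xxxxxxxx"   # assumption: an Insights query key

# The same NRQL dialect spans metrics, events, logs, and traces,
# which is the "one query language" point made above.
nrql = (
    "SELECT average(duration), percentile(duration, 95) "
    "FROM Transaction WHERE appName = 'checkout' "
    "SINCE 30 minutes ago TIMESERIES"
)

resp = requests.get(
    f"https://insights-api.newrelic.com/v1/accounts/{ACCOUNT_ID}/query",
    headers={"X-Query-Key": QUERY_KEY, "Accept": "application/json"},
    params={"nrql": nrql},
    timeout=10,
)
resp.raise_for_status()

for result in resp.json().get("results", []):
    print(result)
```

Whether the rows behind that query are transaction events, metrics, or logs, the query surface stays the same, which is the practical payoff of keeping everything in one database.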
>> It really goes back to the promise of SaaS is that when customers need something and make a change or build on it, it's not just that customer that gets to be able to leverage that, but everybody else that is on the platform-- >> They can share and benefit. The way to think of it is, you're absolutely right, and without Force.com, Salesforce is just a CRM system. But with Force.com, companies could really leverage all the data inside Salesforce. Without programmability, ServiceNow is just a ticketing system, right? But how does ServiceNow become strategic? By allowing people to build applications tailored to their business. We believe the world needs an observability platform and the only one of its kind is New Relic One. >> All right. So, Lew, it sounds like this should be something that should accelerate growth for the company going forward. I read through your last earnings report. You're growing at 30, 35%, which is reasonable but less than the overall cloud marketplace itself is growing. So, how come the AWS, Azure, GCP tailwind isn't pushing New Relic faster? >> Well, it is a good tailwind for us, and I can't go into too much detail. We're a public company in a quiet period so I can't speak to specifics. What I can tell you is history has shown that people tend to adopt platforms at a certain rate and then, a few years later, they adopt the management technologies for those platforms. So we tend to be a little bit behind the adoption of cloud but then when people standardize and they go all in on it, then they really increase their investment in New Relic. I believe that things like our platform capabilities take our customers that might be spending... We have 850 plus customers that spend more than 100,000 a year with New Relic, and I believe when they start to adopt our platform and go strategic with us, many of them will be million-dollar customers, and that ought to be the basis of durable growth for the company. >> All right. So, Lew, there was some news leading up to the event. Some management changes. Let you speak a little bit of that, and you've got some history with, of course, Mike was already on the board, but-- >> We're so thrilled about Mike Christenson joining the company as President and COO. I've known Mike since 2006, when he acquired my last company, Wily Technology, which was really the very first APM company. Mike was the President and COO of CA, and so he had a similar role there to what he has here. Mike is, I think, one of the most brilliant operational minds I've ever met. He's been involved with New Relic for nine years. He's been one of the first investors in the company. He's been on our board of directors, and he's always had a keen mind for how to think about growing our business. I've been thinking for a long time on how to get him more involved as a member of the team and finally I convinced him to come join. Mike joined us as our President and COO. He's going to be my partner in growing the business. I think those that know me know that I love technology and products and thinking about where we are five years from now. Mike will be my partner to help make sure we're operating the company and growing the business on a day-to-day basis. >> Lew, you and your team helped create and democratize this wave of APM, Application Performance Management. As you look at it today, we talked about microservices. You talk about the dispersed nature of everything going on. 
How would you reframe the market today and New Relic, where it needs to be today and going forward? >> Phase 0 was people-monitored servers, back in the Stone Ages. Monitoring was just "Is the server up or down "and does it have enough CPU?" >>Blinking lights. >> Right. Then came APM. APM really was the precursor to observability. It was the notion that these are complex systems. They need to be observed at high granularity. APM gave birth to observability, so when New Relic first came along, we're "Let's democratize APM." And as observability came along, we saw this as an opportunity to open up the platform. Now where we are, if you look at our track record, first of all, my first company created the category of APM. New Relic then democratized APM, and now we're delivering the first observability platform. I believe that the future is programmable, and that New Relic is the future. >> Lew, you've always been enthusiastic when it comes to the vision that you put out, but it's been noted by some of my peers that your energy level and enthusiasm is even higher today than usual. So many things that you talked about, some of the things that you highlight, maybe behind the scenes, or things that might get missed beyond the headlines that you want to share. >> The idea for New Relic One was born two years ago. I took some of the brightest people in New Relic offsite and we fleshed out the thinking and the early prototype of what's become this. This is my life's work. This company's my life's work. I believe so much in this platform. I believe in its capabilities. I'm seeing our customers ripping it out of our hands, saying, "This is going to enable us "to fully achieve our goal of complete visibility "and completely tailored to the needs of our business." Why I'm so fired up and passionate is when you put your heart and soul into something that's new, that no one else has done before... There's been a handful of times I've done that in my life. The first time became APM. The second time became New Relic. The third was when I created NRDB. And now the fourth is New Relic One. And we're just getting started. >> Well, Lew, I want to let you have the final word as to what you want your customers taking away here from FutureStack 2019. >> My belief is that the future of observability is you need a platform. That platform needs to be open, connected, and programmable. We have such a beautiful, easy... It's a Heroku-like developer experience. So within seconds, you can be building an application that takes the telemetry data in New Relic and turns it into actionable business insights for your company. And if you want inspiration, there's 13 applications now up on GitHub that you can install right into your New Relic account, and maybe modify and tailor to your needs and republish to share with our other customers. >> I know you and your team are making sure that New Relic doesn't become a relic of the past. Thank you so much for having us here-- >> We're always in the future. >> And congratulations. I look forward to watching the progress going forward. >> Thank you, I enjoyed it. Thank you. All right, bye-bye. >> Thank you so much. And that's a wrap theCUBE's coverage of New Relic FutureStack 2019. I'm Stu Miniman, of course. Go to theCUBE.net for all of the coverage. A big thanks to the team here and everyone supporting and as always, thank you for watching theCUBE. (Electronic Music)
SUMMARY :
Brought to you by New Relic. Stu Miniman closes out theCUBE's coverage of New Relic FutureStack 2019 in New York City with founder and CEO Lew Cirne. Cirne describes New Relic One as the first observability platform: open, so telemetry can come from agents or from agentless sources like Prometheus and OpenTelemetry; connected, so metrics, events, logs, and traces all land in one database, NRDB, with one query language; and programmable, so customers can build applications on top of it, starting with a set of free open source apps on GitHub. He also shows NRDB's real-time scale from his phone, discusses Mike Christenson joining as President and COO, and traces the journey from server monitoring to APM to today's observability platform.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Mike | PERSON | 0.99+ |
Mike Christenson | PERSON | 0.99+ |
20 | QUANTITY | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
New Relic | ORGANIZATION | 0.99+ |
fourth | QUANTITY | 0.99+ |
New York City | LOCATION | 0.99+ |
first | QUANTITY | 0.99+ |
Lew Cirne | PERSON | 0.99+ |
13 applications | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
2006 | DATE | 0.99+ |
third | QUANTITY | 0.99+ |
thirteen | QUANTITY | 0.99+ |
40 billion | QUANTITY | 0.99+ |
10 terabytes | QUANTITY | 0.99+ |
first time | QUANTITY | 0.99+ |
18 million events | QUANTITY | 0.99+ |
Lew | PERSON | 0.99+ |
second time | QUANTITY | 0.99+ |
one tool | QUANTITY | 0.99+ |
nine years | QUANTITY | 0.99+ |
one database | QUANTITY | 0.99+ |
more than a thousand applications | QUANTITY | 0.99+ |
first twelve | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
both | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
30, 35% | QUANTITY | 0.99+ |
850 plus customers | QUANTITY | 0.99+ |
million-dollar | QUANTITY | 0.99+ |
over 20 million | QUANTITY | 0.99+ |
Prometheus | TITLE | 0.98+ |
one place | QUANTITY | 0.98+ |
CA | LOCATION | 0.98+ |
first year | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
18 months | QUANTITY | 0.98+ |
first piece | QUANTITY | 0.98+ |
more than 100,000 a year | QUANTITY | 0.98+ |
New Relic FutureStack 2019 | TITLE | 0.98+ |
ServiceNow | TITLE | 0.98+ |
12 free Open Source applications | QUANTITY | 0.98+ |
one cloud | QUANTITY | 0.97+ |
first company | QUANTITY | 0.97+ |
both worlds | QUANTITY | 0.97+ |
one query language | QUANTITY | 0.97+ |
two years ago | DATE | 0.97+ |
New Relic FutureStack 2019 | EVENT | 0.97+ |
single platform | QUANTITY | 0.97+ |
first software | QUANTITY | 0.96+ |
OpenTelemetry | TITLE | 0.96+ |
APM | ORGANIZATION | 0.96+ |
seven years | QUANTITY | 0.96+ |
One more time | QUANTITY | 0.96+ |
Dmitry Traytel, Timehop | AWS Summit New York 2019
>> Announcer: Live from New York, it's theCube, covering AWS Global Summit 2019. Brought to you by Amazon Web Services. >> Welcome back. We're reaching towards the end of theCube's coverage of AWS Summit in New York City. I'm Stu Miniman, my co-host is Corey Quinn. Behind us, they're starting to roll out the beer trucks, but before we get there, we're really excited to have on the program first-time guest, Dmitry Traytel, who's the CTO of Timehop. Dmitry, thanks so much for joining us. >> Thanks for having me. >> All right, so Timehop, for our audience that's not familiar with it, I'm familiar with it on social media, is the "oh hey, here's your memory from a year ago, three years ago, five years ago." It's interesting always to know. I know I go to a lot of events, so it's like "Groundhog Day" to me. It's like, "oh hey, AWS New York City, I remember two years ago where I saw this person, this person, this person." We capture lots of videos and photos. We should probably figure out some partnership to bring some of those memories back when we do it, but >> Dmitry: Exactly. give us a little bit for those of us that might not know Timehop. Seems like there's more than just kind of the one thing. What's the company do? >> So, Timehop, the consumer product, the mobile app, is essentially a place for you to celebrate your digital memories, right? We are the nostalgia company, where you can look back on what you did on this day, and the kind of things that you posted on social media, Facebook, Twitter, Instagram, et cetera. And relive those things, share them with your friends, and also look at what's on your phone, in your local device. Stuff you haven't shared. So, the thousand photos you took of your kid at year one, you'll see a year later, and the year after that, and you get to relive those moments. >> Okay, very cool. So, boy there must be some good metadata underneath there. You talk about the content creation that goes on with most people. It's nice that in 2019, I don't really think too much about the thousands of photos that I have in my library. Boy, I know people that are pretty noisy on social media, and boy, you'd think their feed would be overwhelmed looking back on certain days, especially the guy sitting next to me. If it's a keynote day at a conference, Corey would be like, "oh boy, did I say those things?" Is it just, I get all of it, or is there some intelligence behind that? Give us a little bit of insight. What happens? >> Sure, there's definitely some intelligence behind it, a random link you might've shared out probably won't make it, but photos and videos certainly do. And any sort of text posts, tweet threads, Facebook statuses that you might've added, particularly those from 10 plus years ago, those are the most interesting ones, because people used Facebook in a very different way back then, then they do these days. Some people used it more, some less, and we try to feature especially those that have the most engagement, we try to surface those ahead of everything else. >> Yeah, I remember back in the old days of Facebook, where it was like, "Stu is," and then it was my thing there, it's like wow. The engagement that you'd have, and photos were all very different on all of these platforms before Facebook realized, "oh hey, photos are a pretty important thing there." So, you're the CTO. Bring us a little bit inside. I'm sure architecture is something you're talking about at a show like this. 
I have to believe AWS is a piece, if not a major piece, of what goes behind the scenes. So, bring us inside the technology a little bit. >> Absolutely. AWS is the bedrock upon which everything is built. We run over 200 instances on EC2. We're probably running about 20 different back-end services across around 15 to 20 different AWS services, and we're doing all of this with four back-end engineers. We're a very small company. One of those engineers, Mark, he's here, he spoke earlier today about how we were able to leverage AWS to essentially spin off a whole new line of business that's not a consumer product, but a B2B offering for the ad industry. And that's kind of what we're announcing and talking about this week. We launched a new website about it, we have some early partners that we're working with, and this is the sort of thing that saved us as a company, and allowed us to become financially independent. Amazon was the bedrock of our ability to do that without increasing staff at all. >> So, what is the capability story that AWS unlocked as a part of that, or cloud, to the larger point? We don't necessarily need to be vendor-specific, despite the room we're sitting in. What was it that empowered you, that unlocked, I guess, the opportunity? >> There were a few things. Scalability, for one thing. We were able to go from 115, 120 instances, up to 200 very quickly when our clients needed us to, because a lot of them are larger than Timehop is, in terms of user base and access. The second one would've been global reach. We expanded from one availability zone, or rather one region, out to seven, because some of them are international, or have an international user base that requires us to be global. And then beyond that, just the breadth of services, like Elasticsearch, Kinesis Firehose. All of those things that let us connect the data from what we import from social media services, over to the user themselves, when they send push notifications or show the memories. The breadth of services that Amazon as a Cloud provider offers means we don't have to write this stuff ourselves. We can just leverage what's already there, and we can connect all those dots, and deploy quickly. >> Yeah, the undifferentiated heavy lifting is the phrase that they're in love with to describe that. I always used to frame it slightly differently, as far as you're spending time locally solving a global problem, where the things that the infrastructure provider can do at massive scale, it just makes sense. There's no competitive value for anyone anymore in being able to go down to the data center and replace a failing hard drive. So, why not hand that to someone who can get economies of scale out of it? And focus on... >> Exactly. >> ...the things that drive business value. But, that said, you've said this for a while as well, and then it was in the slide deck yet again today for the keynote: in the future, the only code you write is business value. And then, in a very tiny font that no one except me could read, was "probably in JavaScript," but that's neither here nor there. How close are we to that future, based upon what you're seeing? >> Close. I know we demonstrated the CDK, and the demonstration was in TypeScript, so we're one step away from the JavaScript world. Everything that we do, we do in Go, obviously other than some of the descriptor files that allow us to spin off that infrastructure.
But, we're incredibly close to being there, and Go is so close to the hardware itself that I'm assuming Amazon will eventually support Go for that kind of CDK as well. I know they already do for Lambda, and that's relatively recent. I think it'll take a lot of companies a long time to get there, because there's a lot of process in some of the larger enterprise worlds. We're fairly small, and we can pivot very quickly, as we've proven with the ad server called Nimbus. But, we're not that far away, at least at Timehop. >> So Dmitry, we live in the enterprise world a lot, and I have to imagine that there's some companies that would be like, "why am I going to work with this consumer social media company?" So, being on a public Cloud, and specifically AWS, does that help give credibility behind the new services that you're offering? >> I think so. I think from a reliability and dependability standpoint, when we tell a mobile app publisher that they can trust us to run their ads for them, they know because we're on AWS that that's always going to be there. And, because we monetize for them, we end up having to depend on that reliability in order to promise them four nines of uptime, and the fact that they can keep a revenue stream going at all times to keep the lights on and the doors open. >> And it's funny we're having this conversation today, when Twitter was hard down globally for an hour. So, nothing is going to be impenetrable. Nothing's going to stay up forever. I don't believe in making fun of companies for their downtime, but past a certain point, it's okay. If there is a region-wide outage in AWS, for example, on that day, the internet's not going to be working super great for an awful lot of people. Depending on what your business model is, and what your use case is, maybe that's acceptable. Maybe in the case of my nonsense, the world is better off if it's not on the internet for that hour or two. But, it is a difference, I think, in the business modeling, between life-critical things versus things that people use as entertainment. It feels like the B2B story that you're telling is somewhere in between those two ends of the spectrum. >> It certainly can be. One of the reasons we did go global is to prevent that sort of thing from happening. So, everything has a backup somewhere in a different hemisphere, which is awesome. But, depending on the kind of partner that we're working with, some of them are for looking through memories, like us. Some of them are for reading short stories on the internet, which you can pause for an hour if Amazon goes down. For some others, they might be more mission-critical, like posting portfolios or resumes, and the free version might show ads. And in that case, you might be at a job interview, and you don't need that to go down. Now, the ad side can take a minute, and I'm sure whoever's depending on it has other fires to fight at the time. But for us, we have an obligation to all of our partners to make sure that we deliver on what we promised to them, the same way that Amazon has to us. >> So Dmitry, what learnings can you share spinning off this new line of business, moving forward, working with Amazon there? What would you be talking to your peers about as to, is there anything you would've done a little bit differently, or now that you've gone through this, that you might recommend to them? >> I would say, build in-house what you can, if nobody else is doing it better than you can.
I kind of wish that we had built Nimbus a lot earlier in our life cycle, because as soon as we built it, we prototyped it over a weekend, and we learned immediately that it was going to work better than any third-party ad-tech that we could've tried. At the same time, always evaluate what you're doing against your competition. Run those A/B tests, run them properly, measure, instrument everything, and in the end, understand where your dependencies are on third-parties. And eliminate them as much as possible. Again, we're so small that we do leverage as much third-party code. The best kind of code is the code you didn't have to write in the first place. But, in certain cases, you end up bringing a lot of value to the table by writing something proprietary, and kind of the way Amazon did with AWS when they built Rotor for themselves, and started offering it to everybody else. We're doing the same with Nimbus, where we wrote this Cloud-based ad platform, and we realized that it could help us. We're now realizing that it could help everybody else in our position. >> Okay. >> So Dmitry, we want to give you the final word here. Coming to an Amazon event like this, what's it mean to Timehop? What do you personally, you and the team, get out of it? >> It means a lot. It meant a lot to my colleague Mark to be able to speak today, to share with people some of our journey. Amazon is one of the partners that we work with, even on the ad side, 'cause that is a line of business Amazon has. And, we get to announce Nimbus as a service on adsbynimbus.com, with a website we just launched this week, to share with the world that Timehop is not just the consumer Timehop product. But, we are also this ad-tech company at this point that is growing very quickly, that is hiring. And, we want to continue to work with Amazon, and all of our other partners in order to scale that business. >> All right, well Dmitry, congratulations on the launch of the new product. We know a year from now what you'll be looking back at from this event. Apologies for that, but thank you so much for joining us. >> Thank you too. >> All right. For Corey Quinn, I'm Stu Miniman. We're at the end now of our day of coverage here from AWS New York City Summit for 2019. As always, go to thecube.net for all of the content here. We're at lots of AWS shows, many of the other Cloud infrastructure, big-data, AI, IOT, you name it. If there's a show out there with great information, great content, please contact us. Thank you as always for watching theCube.
SUMMARY :
Brought to you by Amazon Web Services. At AWS Summit New York City 2019, Stu Miniman and Corey Quinn talk with Dmitry Traytel, CTO of Timehop, the app for reliving your digital memories. Timehop runs over 200 EC2 instances and roughly 20 AWS services with just four back-end engineers, and used that foundation to spin off Nimbus, a cloud-based ad server for mobile publishers now being offered to others at adsbynimbus.com. Traytel credits AWS for the scalability, global reach, and breadth of services behind the new business, and his advice to peers is to build in-house only what nobody else does better, instrument and A/B test everything, and understand where you depend on third parties.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Amazon | ORGANIZATION | 0.99+ |
Corey Quinn | PERSON | 0.99+ |
Dmitry | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Dmitry Traytel | PERSON | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
Mark | PERSON | 0.99+ |
2019 | DATE | 0.99+ |
New York City | LOCATION | 0.99+ |
One | QUANTITY | 0.99+ |
two years ago | DATE | 0.99+ |
New York | LOCATION | 0.99+ |
a year ago | DATE | 0.99+ |
three years ago | DATE | 0.99+ |
Corey | PERSON | 0.99+ |
five years ago | DATE | 0.99+ |
two | QUANTITY | 0.99+ |
adsbynimbus.com | OTHER | 0.99+ |
a year later | DATE | 0.98+ |
one | QUANTITY | 0.98+ |
this week | DATE | 0.98+ |
over 200 instances | QUANTITY | 0.98+ |
115, 120 instances | QUANTITY | 0.98+ |
thousand photos | QUANTITY | 0.98+ |
thecube.net | OTHER | 0.98+ |
Nimbus | ORGANIZATION | 0.98+ |
today | DATE | 0.98+ |
one region | QUANTITY | 0.98+ |
one thing | QUANTITY | 0.98+ |
an hour | QUANTITY | 0.98+ |
JavaScript | TITLE | 0.98+ |
Timehop | ORGANIZATION | 0.97+ |
AWS Summit | EVENT | 0.97+ |
ORGANIZATION | 0.97+ | |
10 plus years ago | DATE | 0.97+ |
seven | QUANTITY | 0.97+ |
around 15 | QUANTITY | 0.97+ |
up to 200 | QUANTITY | 0.96+ |
TypeScript | TITLE | 0.96+ |
first-time | QUANTITY | 0.96+ |
AWS Global Summit 2019 | EVENT | 0.95+ |
first place | QUANTITY | 0.94+ |
Groundhog Day | EVENT | 0.94+ |
ORGANIZATION | 0.92+ | |
20 | QUANTITY | 0.91+ |
thousands of photos | QUANTITY | 0.91+ |
EC2 | TITLE | 0.91+ |
two ends | QUANTITY | 0.9+ |
one availability zone | QUANTITY | 0.9+ |
one step | QUANTITY | 0.89+ |
second one | QUANTITY | 0.86+ |
New York City Summit | EVENT | 0.86+ |
Aaron Kao & Deepak Singh, AWS | AWS Summit New York 2019
>> Announcer: Live from New York. It's the Cube. Covering AWS Global Summit 2019. Brought to you by Amazon Web Services. >> Welcome back. Rush hour's started a little bit early here in New York City, with over 10,000 people in attendance for AWS Summit in New York City. I'm Stu Miniman, my co-host for today is Corey Quinn. Happy to welcome to the program two first-time guests from our host, Amazon Web Services. To my right here is Deepak Singh, who's the Director of Compute Services. Sitting to his right is Aaron Kao, who's the Senior Manager of Product Marketing. Gentlemen, thanks so much for joining us. >> Thank you for having us. >> Thank you for having us. >> Alright, so we know that every day we wake up and there's new announcements coming from Amazon, and the only way most of us keep up with it is trying to read Corey's newsletter here. But in your group in compute, we know there's a lot going on and quite a few announcements. So Aaron, why don't you kick us off with some of the hard news that went through this morning? >> Yeah, we just launched Amazon EventBridge. It's a serverless event bus that allows you to connect your applications with data from sources like SaaS applications, AWS resources, and your own applications. >> All right, so Deepak, I would love to dig into that a little bit. Like you said, at Amazon you've learned a lot from CloudWatch in building this tool. Everybody looking at, kind of, you know, Lambda in the serverless space is like, okay, how are all these pieces going to come together? Is it all Amazon services all the time? And of course, Amazon has a huge ecosystem, but help us understand, you know, how this works. >> Yeah, so as you know, AWS services send events to CloudWatch Events. They consume events from CloudWatch Events. One of the best ways to do it is through Lambda. One of Lambda's biggest strengths is the number of integrations we have with event sources, both taking in events and triggering events. But to your point, there are all these events outside the AWS ecosystem. And I think one of the things, as a service owner, that really excites me about EventBridge is how now customers have access not just to event triggers inside AWS, but also to our partners like Zendesk, and the applications you can build will be really exciting. >> Alright, quite a few other announcements, maybe walk us through some of them. >> Yeah, CDK is another announcement. It's an open source software development framework that allows you to model your applications using programming languages like TypeScript, Java, Python, and .NET. You know, the whole thing with building in the cloud, it's slightly different. You used to take your code, put it on a server, and run it. Now people are building things a little more distributed, using a lot of different resources for their applications. So provisioning your infrastructure is getting a little bit harder, right? You either have to do a lot of things manually, or maybe you're writing a lot of scripts, or using a domain-specific language. But with CDK, you're now able to use the programming languages that you're programming your applications with to model and provision your infrastructure. So it's super helpful. Really think it's going to help developers increase their development velocity. They're able to use things like loops, conditions, object-oriented programming, they don't have to do context switching, and just with a few lines of code, they're able to do a lot more. 
>> All right.
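To make the CDK idea concrete, here is a minimal sketch in Python, one of the languages Aaron lists. It models a small stack, a bucket plus a function, in ordinary code; the construct and method names follow the CDK v1 Python libraries of this era, and the resource names are made up, so treat it as a hedged sketch rather than the exact demo from the keynote.

```python
# Sketch of a CDK v1 app in Python: a versioned bucket and a small
# Lambda function, with the bucket granting the function read access.
# Resource names ("UploadsBucket", "ProcessorFn") are placeholders.
from aws_cdk import core
from aws_cdk import aws_s3 as s3
from aws_cdk import aws_lambda as _lambda


class DemoStack(core.Stack):
    def __init__(self, scope: core.Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # One line of code instead of a hand-written block of CloudFormation.
        bucket = s3.Bucket(self, "UploadsBucket", versioned=True)

        # A tiny inline function, just enough to show the wiring.
        fn = _lambda.Function(
            self, "ProcessorFn",
            runtime=_lambda.Runtime.PYTHON_3_7,
            handler="index.handler",
            code=_lambda.Code.from_inline(
                "def handler(event, context):\n    return 'ok'\n"
            ),
        )

        # Grants generate the IAM policy for you.
        bucket.grant_read(fn)


app = core.App()
DemoStack(app, "CdkDemoStack")
app.synth()
```

Running `cdk synth` on an app like this renders it down to plain CloudFormation, which is exactly the behavior Corey describes inspecting next.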
>> I wound up playing with it a little bit when it was in preview, and one of the things that I found extremely helpful was it was a lot easier for me to write something using CDK and then see what that rendered down to in terms of CloudFormation, and then, oh, I guess that's how I do it in CloudFormation, which was great. The counterpoint, though, is it also felt at times like it was super wordy. So if I read what it generates compared to what I normally write, which is admittedly awful, I almost start to feel like I'm doing it wrong. And then with Amplify and with SAM and the rest, there's a lot of higher-level abstractions that build CloudFormation for you. But then it renders down in a few different and key ways. Under the hood, how much are these products that you're coming out with starting to shape the direction of CloudFormation itself, or is that mostly baked and done? >> There's a lot of products that we're building that, you know, are complementing CloudFormation. You know, CloudFormation is the templating, modeling language to provision AWS resources. But on top of that, we have things like SAM, right, that provides a declarative, a more high-level, abstract, declarative way to build on top of CloudFormation. You know, we have Amplify that also uses CloudFormation to help you build mobile applications and front-end development. And then finally, you have CDK for general use. So these things are all complementing, and, you know, things customers are asking for, and helping us shape the ecosystem there. >> Yeah, Deepak, the container space, of course, has been, you know, one of these tidal waves that we've been watching, and it's fundamentally changing the way people architect their applications and has huge impact on your product line. Give us the update. If you could just start with some of the high level. I remember first when I talked to you a couple of years ago, it was when the whole Kubernetes piece was sorting itself out. So you know, ECS, EKS, used to have a much longer name that Corey would constantly-- >> Only for Corey. >> Finally you've fixed the compensation problem where someone was getting compensated based upon the number of syllables in a service name, so good on you on that one. >> Right, and you know, the acronym A-M-I, maybe you can, you know, settle once and for all how we pronounce that. >> I'm old school, it'll always be AMI. (laughs loudly) >> Walk us through, kind of, you know, your container services. >> I think the great thing about containers is, as you said, the adoption is everywhere. And what we find is there's a growth of ECS, the growth of EKS, whether you're running it on EC2 or Fargate, everything is growing like crazy, because people find new interesting ways to run applications based on what they know and what they're comfortable with. We have customers, customers like Snap, that know Kubernetes well, and they're building a big chunk of their new infrastructure on EKS on AWS, and it basically helps the developer velocity. On the flip side, you have customers like Turner Broadcasting that run a lot of their web services, the Comedy Central content, properties like that, on Fargate, because they can just stamp them out. You know, it's a website, it's a service that they can just keep expanding. So it boils down to, what are the key things that you're comfortable with? What are the reasons you've picked something?
So if you're running, like Snap, across, you know, many different places, you are likely to choose Kubernetes and standardize on that. So that's the best part for me: people have choices, and then they pick based on what they need at that point in time, which can be two different teams at the same place picking a different solution. I will add that one of the areas that we are focused on now is observability and developer experience. Those are areas that our customers have been asking for. CDK plays into that, you saw it in the demo this morning, and on observability, with Container Insights and with the Fluent Bit plugins that we announced. I think those are areas that you'll see us do a lot more going forward. >> So right, that was one of the news items today, CloudWatch Container Insights. Just to explain what that one is. >> So historically, when you look at CloudWatch, it's very VM-centric. You're looking at CPU, memory, you're assuming an application's instances run for a particular period of time. In the container world, you have services where the underlying tasks come and go, you know, at a very different rate. CloudWatch Container Insights is meant to be a world that's aware of the fact that your containerized applications are tasks and services and pods, so you're able to get more fine-grained metrics on the things that container customers care about, and you're not trying to use VM-centric language to look at a containerized infrastructure. So that's the biggest reason for doing that. And then on the Fluent Bit side, our customers want log routing to wherever they want to route to, whether they want to send it to S3 or to Elasticsearch. We do that with Kinesis Data Firehose. So we basically wrote a bunch of open source plugins for Fluent Bit that just send your logs where you want them to go. So that's kind of where we are focused. >> Yeah, I view it as more of a log router than I do almost anything else. >> It is that. >> Yeah. A question of: Where does it come from? Where does it go? How do you keep it straight? >> Yeah. >> At this point, what does it output to these days? Are there various destination options, third-party vendors, CloudWatch, S3? >> So we wrote two plugins, well, three, I don't know. One for S3, because so many people want to send their data to S3. The other one was for Kinesis Data Firehose. So from there, you can send it to Redshift, you can send it to Elasticsearch. So based on however you want to analyze it, you can send it to a custom resource, that's Kinesis. So if you're using some third-party provider, you can just send your logs over to those. >> Yeah, Corey, you know, you're dealing with a lot of customers, you know, there's now so many, you know, different instance types and some of the pieces, you know, what's the feedback you're giving to, you know, Amazon these days? >> Entirely depends upon the service teams, and it ranges from "this is amazing, excellent job" to "okay, it's a good start." And it's always a question, though, when you have what, 200 service options, or darn near it at this point, 170. It's impossible to wind up with something that is evenly consistent, and you have services that are sub-components of other services and built on top. I mean, I think the, I guess the feedback I've been giving almost universally across the board is, assume that I am about 20% as smart as you right now seem to think I am, and then explain it to me, and then I'll probably understand it a lot better.
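Tying back to the Container Insights and Fluent Bit items Deepak describes above, here is a hedged boto3 sketch of pulling one task-level metric. The `ECS/ContainerInsights` namespace, metric, and dimension names are my assumptions about how the feature exposes data, and the cluster and service names are placeholders, so check the Container Insights documentation before relying on them.

```python
# Sketch: reading a Container Insights metric for an ECS service with boto3.
# Namespace, metric, and dimension names are assumptions; "demo-cluster"
# and "web" are placeholders.
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

now = datetime.utcnow()
resp = cloudwatch.get_metric_data(
    MetricDataQueries=[
        {
            "Id": "cpu",
            "MetricStat": {
                "Metric": {
                    "Namespace": "ECS/ContainerInsights",  # assumption
                    "MetricName": "CpuUtilized",           # assumption
                    "Dimensions": [
                        {"Name": "ClusterName", "Value": "demo-cluster"},
                        {"Name": "ServiceName", "Value": "web"},
                    ],
                },
                "Period": 60,
                "Stat": "Average",
            },
        }
    ],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
)

result = resp["MetricDataResults"][0]
for ts, value in zip(result["Timestamps"], result["Values"]):
    print(ts.isoformat(), value)
```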
>> Yeah, Corey, you know, you're dealing with a lot of customers, you know, there's now so many, you know, different instance types and some of the pieces, you know, what's the feedback you're giving to, you know, Amazon these days? >> Entirely depends upon the service teams, and it ranges from "this is amazing, excellent job" to "okay, it's a good start." And it's always a question, though, when you have, what, 200 service options or darn near it at this point, 170. It's impossible to wind up with something that is evenly consistent, and you have services that are sub-components of other services and built on top. I mean, I guess the feedback I've been giving almost universally across the board is, assume that I am about 20% as smart as you right now seem to think I am, and then explain it to me, and then I'll probably understand it a lot better. It comes down to, sort of, storytelling, more or less, of meeting people at various points along their journey. And I was mentioning in our editorial session just before this segment that that's something that AWS has markedly improved on the last two or three years, where you have customer stories that are rapidly moving up the stack as far as leveraging services. It's not just "we took the VMs and now we run them somewhere else." Now it's about building extremely volume-intensive applications on top of a whole bunch of managed services, and these are serious companies. These are regulators, it's not just Twitter for Pets anymore. >> Nothing wrong with that. >> No. >> So, you know, we were discussing, like, FINRA was a great case study this morning, and they talked about how in the four years that they've been on, they've re-architected three times. You know, how do you balance all of these new instances coming out with, you know, how do I make sure that I deploy something today that I've got the flexibility to change, but, you know, I want to be able to lock in my pricing and make it easier? >> So actually, we think about that quite a bit. One of the reasons we built App Mesh the way we did, as something that sits outside the container orchestrator, was it doesn't lock you into choosing one or the other, or even choosing an architecture. You can start off with a monolith, start putting sidecars on it, getting visibility into all your traffic, then portions of your applications you can start breaking out: you can put them on Fargate, you can put them on ECS, you can put them on EC2. I think that is something we did very consciously, because so many of our customers are in that position, and I think more and more are going to go higher up the stack using managed databases, using Lambda, but it's not a decision they need to make all up front. They can do it piecemeal, and we see our customers doing exactly that. >> One of the philosophies of AWS is giving customers building blocks to build things on. So the whole thing is, here's a new primitive that you can use, then you can take it out, replace something with something else, depending on your needs. So we give customers flexibility and choice. >> And part of the problem is that that very much becomes a double-edged sword. I mean, most recently, you've had effectively declared war on Alphabet. I don't mean the large cloud provider that turns things off for a living, I'm talking about the English alphabet, where you take a look at all the different EC2 instance types. I think in US East one now there's over, what is it, 190 different instances you can pick from. It leads to analysis paralysis: which one do I pick? What's the right answer? What am I committing to, what am I not? And you see, that's a microcosm of the larger service problem. I want to build a web app that does a thing, which services do I use? You open up the service listing and you just get this sort of sinking sensation. I get that, and I can't imagine what someone new to the space is getting there. >> All right, and this is where things like Amplify, Fargate, and AWS Batch come in, where you don't need to select an instance, where you just tell us what your requirements are and Batch makes that selection for you. The core building blocks are important, because you can't really figure out what to do. But then you'll see us do much more up the stack to help people get there.
It's an ongoing thing that we'll keep trying to tackle, but you'll see a lot more of that. >> It's controversial. One of my favorite things about Lambda, for example, is there's one knob, RAM, and as you turn that up, other performance characteristics increase, and people complain about it, but I love the simplicity, because I don't have to sit and think and make all these different decisions. It's one axis. >> Yeah, but if you want more knobs, you can use Fargate. So I think that that's the beauty of it, that you do have that choice. >> Yeah, one of the lines, Aaron, I really liked in Werner's keynote is he said, "we've really," you know, my words, "commoditized IT. We all have access to all of the tools now." You know, that was, you know, what big data originally, and cloud also, was: you used to have to be a nation state or a Fortune 100 to be able to do some of these things. So, you know, what do you hear from customers? You know, how do they make sure, you know, they're staying competitive and ahead, and therefore, in that relationship between the business and IT, what do you hear from your customers these days? >> In terms of that? Well, I think, for, you know, for customers, like, I think EventBridge is a pretty good example of that, in terms of customers asking us for the ability to, you know, integrate their SaaS providers, integrate a lot of different things, and not have to, you know, not have to do a lot of undifferentiated heavy lifting and things like that. And, you know, customers are increasingly moving towards, like, event driven architectures, and they asked us, hey, we really like CloudWatch Events and how you do things with IT automation and then bringing SaaS providers in, and we want to, you know, we don't want to build polling infrastructure in order to access APIs and do all that heavy lifting. What we did was we built out, we took CloudWatch Events and added new features for SaaS applications and built that into a separate service for people to use. So that's, like, you know, a lot of the relationships we have with our customers, listening to what they need and giving them what they want. >> And I think that that's a very valuable thing. You know, we used to say, you know, five years ago, you would talk about, you know, let's get rid of undifferentiated heavy lifting. >> Yeah. >> Well, now it's like, no, no, let's enable, you know, something that you would have thought was heavy lifting and were daunted to be able to do, but now hopefully it's easier, because a lot of this stuff, you know, as Corey said, this is still a little bit daunting, and, you know, well, you've got a lot of ecosystem and service providers and services to help us, you know, take care of, you know, because it's the Paradox of Choice with all the options that you have. >> And I think that's the beauty of it. I mean, our customers are smart, they manage to find interesting ways to keep challenging us and they keep us busy. But I also think that really, many of them, the ones who have been able to be successful, have figured out what it means to take all the tools we give them, which are the ones where they want to completely hand it over to AWS and give us the responsibility, and then which ones do they really feel they care about. And the ones who can find their balance are the ones that we see moving the fastest. I think that's what we're trying to do.
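Corey's single-knob point and Aaron's EventBridge answer both come down to a couple of lines of configuration. Below is a small CDK TypeScript sketch that wires an EventBridge rule to a Lambda function with only its memory size dialed in; the event source string, handler path, and memory value are hypothetical choices of ours for illustration, not details from the interview, and SaaS partner events would arrive the same way via a partner event bus rather than the default bus used here.

```typescript
import { App, Duration, Stack, StackProps } from 'aws-cdk-lib';
import * as events from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

class EventHandlerStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const handler = new lambda.Function(this, 'Handler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda'), // assumes ./lambda/index.js exists
      memorySize: 512,                       // the one knob: CPU and network scale with it
      timeout: Duration.seconds(30),
    });

    // Rule on the default event bus; the source string is made up for the example.
    new events.Rule(this, 'OrderEvents', {
      eventPattern: { source: ['myapp.orders'] },
      targets: [new targets.LambdaFunction(handler)],
    });
  }
}

const app = new App();
new EventHandlerStack(app, 'EventHandlerStack');
```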
>> All right, now, one thing that does absolutely permeate virtually every service team I've worked with at AWS, I mean, I've had this experience with you, where I talked about how my use case isn't a terrific fit for your product, and your response is always, well, what is your use case? It's not starting off from the baseline assumption that my use case is ridiculous, which, let's face it, it probably is. But being able to address a customer need and understand that, even if it doesn't dictate roadmap, is incredibly valuable, and I don't find that there are too many players in any space, let alone this one, that are willing to have the patience to listen to, frankly, some loud person wearing a suit. >> We try. I mean, I think you heard Andy say there's so much, like a big chunk, 85, 90%, of our roadmap is customer requests. I would say that even the remaining 10% is maybe not things that they've directly asked for, but things that we've observed they've run into, or that we've run into working with, you know, the one or two customers who are ahead of the pack. And, okay, they have this problem, how do you generalize that? And we try and understand what it means. One of the reasons we made the container roadmap public was this space is moving so quickly, it's almost impossible for us to talk to enough customers to figure that out. So, like, okay, this gives us an avenue for them to come to us and just tell us, in GitHub issues. >> Yeah, so right. Final question I have for both of you. Directionally looking forward, you know, the roadmap, we love when there is publicly facing material not under the NDAs that we normally have to be able to hear. So what are you hearing from your customers? What direction are they pulling you towards, and what should we expect to watch from AWS as we head towards re:Invent later this year? >> I think customers are asking us for different things for developer experience, especially event driven architectures. I think there's going to be a lot of interesting things happening in the Lambda space and that entire space. >> Yeah, and to add to that, I think, to your point earlier, helping them simplify choices is going to be a big part of it. Meeting them where they are, in their IDEs, with the tooling, is a big part of what you'll see us do. So, you know, I think you saw examples today and we'll keep building on top of those. >> All right, well, send our congratulations to the two pizza teams that worked on all of the projects that were announced today. Look forward to seeing you, you know, down the road. Thanks so much, and welcome to being Cube alumni. >> Thank you for having us. >> Thank you for having us on. >> Appreciate it. >> Aaron, Deepak, you know, from AWS. He's Corey Quinn, I'm Stu Miniman. Back with lots more coverage from AWS Summit, here in New York City, thanks for watching theCUBE.
Kolby Allen, Zipwhip | AWS re:Inforce 2019
>> Live from Boston, Massachusetts, it's theCUBE, covering AWS re:Inforce 2019. Brought to you by Amazon Web Services and its ecosystem partners. >> Welcome back, everyone. Day two of live coverage here in Boston, Massachusetts, for AWS, Amazon Web Services', inaugural conference called re:Inforce. This is a cloud security conference, the first of its kind. It's the beginning of what we see as a new generation of shift into a new category called cloud security. Obviously cloud has been growing, and the security equation is changing and evolving. I've got a great guest here, Kolby Allen, who's a platform architect at Zipwhip, based in Seattle. Great to have you, thanks for coming on. >> Thanks for having me. >> So we were chatting before we came on about your journey and the DevOps chops you guys have built over there, and I want to get into that, but just quickly explain what you guys do real quick, set the context. >> Yeah, so Zipwhip is an SMS text messaging provider. We specialize in toll free messaging, and we also text-enable landline phone numbers. Our business is really kind of split into two parts. We have, you know, your traditional SaaS application that runs like a SaaS app; that's where you can, you know, have the UI to interface with your landline phone number or 800 number and do that messaging. On top of that, we run a carrier grade network, so we have direct binds into all the major carriers in the U.S., and we're bringing online some Canadian carriers. That's really where the power of our platform is, and we own the network. And so we started in colo, and over the last year we spent nine months moving all of that into Amazon. >> Got it. So explain the architecture. You guys moved out of colos with the network, you moved to Amazon with three people. Just classic DevOps, a lot of hard work, I'm sure. Take us through what happened. What was the old environment, and what does it look like now? >> Yeah, so, you know, when I first started there, they were in an interesting place. They were just starting a huge growth phase, and at that point they existed in a few data centers in the U.S., running the entire workload on bare metal, databases and all. The problem was, there was just a scaling problem, right? I mean, we were looking at the type of scale we needed and trying to procure hardware, and we just couldn't physically get it fast enough with the right amount of budget. I come from a previous place doing AWS, I mean, that's kind of what I've done for a lot of years. So, you know, I convinced my boss, hey, let's run the SaaS app in AWS. So we built that, ran it, launched our new version of our SaaS application in Amazon, and at that point, you know, our traffic skyrocketed. I think last year we had somewhere around 180% growth, right? And our core infrastructure just wasn't surviving, right, outages and problems. And so, you know, we took it and we went to Amazon with it, and we rebuilt it all. And it was a really interesting thing, because Amazon was literally releasing features and we were consuming them, right? The 5 series and Nitro came out, and we were like, finally we can get performance out of the networking interfaces. Then they released the d instances with NVMes, and we were like, finally our databases will survive and they can go fast enough, you know. And then we're leveraging huge Aurora instances to really power the back end of this thing. >> So you guys really tapped in at the right time. You guys were growing, you saw, you know, that scale potentially bursting, you saw the scale coming, growth coming into the company, you could almost see, okay, look, we've got to plan. So you go to Amazon and use the services. What's the impact on the staff been? Any more people? What's been the impact? >> Yeah, I think the big thing is the initial move. We did it with three of us. I mean, it was a lot of work, we spent a lot of time doing it, a lot of sleepless nights, a lot of long weekends. But now, you know, we've got a really stable platform, and, you know, we were able to really continue processing our messages as growth has increased, and we haven't had to totally re-architect things again, right? The architecture has worked as we've grown and expanded. The scalability has been fantastic for us. The performance, of course, is, you know, some of the >> You're the best walking commercial for AWS. Not everyone will have that same experience, but what's interesting is you guys essentially are, in my opinion, representative of the trend that we're seeing, which is, certainly in security as they catch up to DevOps, that's a big story here, security now can level up with the speed of the DevOps kind of engineering philosophy. And the point is, it's the trend of building your own, and a lot of companies are reinvesting in teams of people, because they're close to the action and they can actually codify quickly the use cases that they know are bona fide, whether it's a low level platform service primitive or right up into the app, using machine learning and data. So now you have security in there. This is where the action is, and so, I mean, I see the successful companies like you guys coming in saying, you know what, let's not boil the ocean, let's just solve one problem, scale, and then let's look at the services that we can leverage to do more. Take us through that philosophy. I think you guys are a great example of that. >> So, I mean, if we touch on the security aspect, I think the big thing is we don't run a dedicated security team. My team is the security team, right? And that was a big thing for both me and my director: you know, we wanted the people building it to be doing the security. And, you know, what was really easy with AWS is, you know, we could turn on all these fancy features. It was just, you know, a flag in Terraform and all of a sudden we have encryption at rest. It's something we'd never had before. So there's that. And then, you know, to the builder methodology, because we came from such a scrappy place, we've got to go fast, we didn't have time to evaluate software or bring in consultants, so, you know, we kind of just adopted that. It's better for us a lot of times to kind of roll our own thing, and then there are times where there's software that's a good fit for it. I mean, we do use some external vendors on things. >> And that's really more of a decision on the platform. But as you look at it as the platform engineer, you go, okay, we've got to build here, and let's say, no, that doesn't really need to be a core competency, let's go look at some vendors for this, this and that. But ultimately, if you look at something that's really core, you can dig into it. And certainly with Kubernetes and a lot of the services coming out, everything's eventually going cloud native. >> Yeah, yeah, you're right. So we're huge on Kubernetes, it's 100% Kubernetes everywhere, and I think that's really been another big thing for us. You know, it's brought our application up a level, to be able to integrate, to be more reliable. I mean, you know, where you used to have this external service discovery piece, and then you have your security piece, with Kubernetes I can go deploy a containerized application and describe it all at once, right? It's all in my code config, so I can audit it for our compliances. You know, we can do code review for our compliances, but at the same time I deploy the whole thing. I'm not, here's this team deploying the app, and there's this other team then coming by trying to secure the app. It's all together. >> The old way would have been, kind of, build it out, maybe use some software, have all these siloed teams. Yes, and now that's all kind of built in. >> Yeah, we kind of just opened it up, right? I mean, you know, from our SaaS teams leveraging a lot of, you know, the security features that are available to us, to our core piece, which is a very different type of software, you know, leveraging the same pieces and the same type of monitoring principles. >> It's interesting, you know, in the keynote there's something people are hemming and hawing around, like the word DevSecOps. I mean, I love DevOps. We've been part of that since day one, it's been fun to be part of it, and we saw the benefits of it clearly, no doubt, there's no debate. But when you start getting into some of the semantic definitions, you go to the security world, which, by the way, is fragmented like crazy, and now with the growth of the cloud you're starting to see cloud security become its own thing that's different than the on premises side. So what's your take on that? Because a lot of people are going to cloud anyway. So what's the difference, they're saying, between on premise security posturing and cloud security, in your opinion? >> Yeah, so I mean, it is drastically different. I think part of it is the tool set that's available, right? I mean, we ran data centers, I've automated data centers, but, you know, they're just not at the level at which I can do the automation and the auditing in the cloud. So I feel like the cloud actually, in some respects, makes it easier for me to do security, to run security and audit security, than the data center. You know, in the data center I'd run a lot of tooling and a lot of things to get all the views I need, but there are a lot of really separate systems. In the cloud you have, like, this one nice, fundamental API that I, as the person who has to build the infrastructure, can use, but it's the same API that I put my security hat on with, like I use it to make security groups, things of that sort. It's all the same, right? Not having to learn five different applications has been really important for our team, because, you know, the vast majority of my team doesn't come from a true DevOps background. You know, we've upgraded people from our NOC, you know, and had them really just learn the one ecosystem. >> So you don't want to fragment the team. >> Yeah, I don't want to have five different skill sets, kind of thing. We just, we don't want to have tools that only one person knows how to run, right? We want people to take vacations, right? And, like, we don't want to have a tool where only that person knows how to run it and nobody else does. So that was the big thing for us. >> What do you think about the show here, re:Inforce? I'll say it's not an Amazon Web Services summit; they do the summits, which essentially are, say, a commercial version of re:Invent in the regions. This is a branded show, it's obviously them going hard at cloud security. What's your take so far? >> I've really enjoyed it. I mean, I've gone to summits, I've been to re:Invent for a few years, spoken at re:Invent once, you know. Those things were fun, but they're so big and there's so much going on. It's refreshing to be at this re:Inforce conference and focus on the security side, sitting in talks where you have people getting into KMS and some of these really pivotal tools. Yeah, it's been really, really >> You get down and dirty here. Yeah, and people to talk to, you know, approachable. >> Without, like, having to deal with all of Amazon, right? I can focus on, like, this one little portion. >> re:Invent, are you kidding? Walking through the hallways is just >> Yeah, I mean, well, which hotel are you going to be at? That's the point now, right? Yeah. >> Okay, so I've got to ask you about the DevOps question. We've been commenting, yesterday Dave Vellante, who is on his way in, he and I were talking with a lot of CISOs and a lot of practitioners, and the conversation generally was: security needs to catch up to DevOps. And depending on who you talk to, they may or may not believe that. We think that to be true; we think security now has to level up with the speed of DevOps, with the agility, the things that are its highlights, for example what you guys have. What's your take on that? When someone says, hey, security's got to catch up to DevOps, is it really catching up, or a transformation? What's your view on this? >> Well, when you say catching up, it takes a negative tone, and I don't want to be negative there. So I feel like it's a transformation. It's the same thing as going from the data center, just as an operational engineer, to Amazon: there wasn't catching up, it was just changing everything you do and how you think. And I think, you know, that's the same thing that a lot of security people I've seen struggle with. The successful ones are the ones that have gone and understood that. >> What do you think is the most important story happening in this world of security, cloud security, in general, that should be covered by the media, that should be covered by the industry, that is covered and should be amplified more, or isn't covered and should be talked about? What are the most important stories that should be told? >> Well, so again, you know, I'm at the fundamental layer, so the things to me that are always overshadowed are, like, you know, just encryption, right? I mean, everybody's like, turn encryption on, but, you know, I feel the talks I've gone to today are deeper dives into that. I feel like, you know, the KMS product at Amazon is a very powerful product that isn't super talked about. It's been nice here because they've talked about it a lot; you go to re:Invent and you don't really see a lot of KMS-type things, they're crowded out. And, you know, I think it makes some of those very difficult products to run in a data center very easy. What you hear on the security side is unsecured S3 buckets, or, like, security groups configured incorrectly, and everyone knows that story. You know, Elasticsearch has turned into the new S3, right, compromises where you chose your database of choice and left it public. But for me, I think the part that I feel is missing with Amazon is the ease of use of, like, clicking a button, and >> Now I have >> Full Aurora encryption by default. >> And the service you can just turn on. What's next for you guys? Give us a peek into some of the things you're working on. What are you excited about? >> So I mean, the big thing is, you know, we spent a lot of time building, and now we're kind of going back and really kind of wrapping up a lot of our compliances. So Zipwhip as a whole has been working towards a lot of SOC 2-type compliances and things like that. So, you know, we've been working through governance and deploying, you know, software that is more actively watching our environment and alerting us, or helping us make sure we're staying at CIS-type benchmarks, so that, you know, when my boss comes to me and says, show me that we're doing this, I can just say, oh, here's the dashboard. So VMware Secure State is a big, big product that we're working with right now. We leverage CloudHealth, and those are kind of the two external vendors that we've really partnered with. And so, you know, this year has been adopting those into the system. Then on the AWS side, you know, we still just run Kubernetes, so there's a lot going on in the Kubernetes ecosystem that we're also working on. So, like, service mesh and things of that sort: how can I take this idea of security groups and this least-trust model, infrastructurally, up to Kubernetes, which by default is kind of flat and open? And so, you know, we've been exploring Envoy and Istio, Linkerd, or writing our own, you know, and looking through those things. And then, again, making a more robust CI/CD pipeline, so container scanning for vulnerabilities, protecting our edge. We've been running CloudFront and WAF for a while, but, you know, a lot of this year is going to be spent, you know, we deployed the WAF and got it turned on, right, because it works, but diving more deeply into some of the automated remediations. >> You have a fun environment right now, then. You can knock down some core business processes, scale them up, and then you've got the toys to play with on the open source front. You've got Kubernetes, really a robust ecosystem there. It's just, it's a lot of fun. >> Yeah, Kubernetes has definitely been exciting to play with. >> Advice to fellow practitioners and platform engineers? Because, you know, you guys have been successful with the transition to AWS. You've got your hands on a lot of cool things, you've got a good view of the landscape on the security side and the DevOps side. For the people out there who are like, they want to jump in with a parachute open, whatever makes them less nervous, some people are aggressively going at it hardcore, some have cultural change issues, what's your advice, general advice, to your >> Fellow peers? My advice is just jump in and do it, right? I mean, you know, don't be afraid. I mean, we had a really fast transformation, and we failed a lot very fast, and we weren't afraid of it. I mean, you know, if we weren't failing, we weren't doing it right, you know, in my opinion, right? We had to fail a few times to figure out how it was going to work. And so I think, you know, don't be scared to jump in and just build, you know, write the automation, see what it does, run some tests against it. >> You know, it's almost like knowing what not to do is the answer. Get some testing out there, get your hands dirty. >> What's going to work for you? What's going to work for your business? And the only way you're going to figure that out is to actually do it. >> Kolby, thanks for coming on and sharing the great insight. Kolby Allen, platform engineer at Zipwhip, great company, here on theCUBE, bringing all the action, extracting the signal from the noise. Great insights here, coming from re:Inforce here in Boston, AWS's first conference around cloud security. We'll be right back after this short break.
Reza Shafii, Red Hat | Red Hat Summit 2019
>> Announcer: Live from Boston, Massachusetts, it's theCUBE. Covering Red Hat Summit 2019. Brought to you by Red Hat. >> Good to have you back here on theCube we are live in Boston at the Convention Center here. Along with Stu Miniman, I'm John Walls and on theCUBE we're continuing our coverage of Red Hat Summit 2019 in Boston, as I said. Joined now by Reza Shafii, who is the VP of Platform Services at Red Hat. Former CoreOS guy >> That's right. >> Stu actually has his CoreOS socks on, >> He told me. >> Today, yeah, so he came dressed for the occasion. >> Shh, can't see those on camera, John. I can't be wearing vendor here. >> Don't show it to the camera. >> Well I just say they're cool! They're cool. Glad to have you with us, Reza. And first off, your impression, you have a big announcement, right, with OpenShift. OpenShift 4 being launched officially on the keynote stage today. That's some big news, right? >> It's a big deal, it's a big deal. The way I think about it is that it's really a culmination of the efforts that we planned out when we sat down between the CoreOS leadership team and the Red Hat leadership team, when the acquisition was closed. And we planned this out, I remember a meeting we had in the white board room. We planned this out. In terms of bringing the best of OpenShift and CoreOS technology together. And it's really great to see it out there on the keynote, and actually all demoed and working. >> And working, right? Key part. >> Reza, dig in for us a little bit here, because it's one thing to say okay, we got a white board and we put things together. You know, when I looked at both companies, at first both, CoreOS before the acquisition and Red Hat, I mean open source, absolutely as its core. I remember talking to the CoreOS team, I'm like, you guys are gonna build a whole bunch of really cool tools, but what's the business there? Do you guys think you're gonna be the next Red Hat? Come on. Well, now you're part of Red Hat. So, give us a little bit of the insight as to what it took to get from there to the announcements, CoreOS infused in many of the pieces that we heard announced this week. >> Yeah, so the way I like to think about it is that Red Hat's OpenShift's roots, it started with making sure that they create a really nice comfortable surface area for the deaf teams. The deaf teams can go in and start pushing the applications and it just ensures that it's running those applications in the right way. The CoreOS roots came from the operations perspective and the system administrator. We always looked at the world from the system administrator. Yes, you're right, CoreOS had a number of technologies they were working on, etcd, Rocket, clair. I used to joke that there's a constellation of open source services that we're working on, but where is the one product? And, towards the end, right before the acquisition, the one product I think was pretty clear is Tectonic, the Kubernetes software. Now, if you look at Tectonic, the key value difference was automated operations. The core tenants of what Alex Polvi and Brandon Philips said into the mindset of the company was we're outnumbered, the number of machines out there is going to be way more than we can handle, therefore we need to automate all operations. They started that on the operating system itself, with CoreOS, the namesake of the company. And then they brought that to Kubernetes. 
What you see with OpenShift is, OpenShift 4, you see us bringing that to not only the Kubernetes core, that's the foundation of OpenShift 4, so all capabilities of running Kubernetes are automated with 20-plus operators now. But you see that applied to all the other value capabilities that are on top of OpenShift as well, and we're bringing that to ISVs. I was walking around and a number of ISVs have their operators as the number one thing they're advertising. So you're seeing automated operations really take hold, with OpenShift 4 being a foundation for that. >> You talk about operations or operators, you have Operator Hub that was launched earlier this year, what was the driving force behind that? And then ultimately what are you trying to get out of that in terms of advancement and going forward here? >> Right, I think it means it's worked. Going back a little bit of history on this, the operator pattern was coined at CoreOS as a way to do things on a Kubernetes cluster to automate operations the right way. You have to expose it as a proper API, you have to use a controller, so on and so forth. Then as the team started doing that we realized, well, there's a lot of demand for this pattern, so we started documenting it, describing it better and so on. But then we realized there's a good case for a framework to help people build these automations. Therefore we announced the Operator Framework at KubeCon, I think it was a year and a half ago. What happened then was interesting: suddenly we started seeing hundreds of operators being built on the Operator Framework. But it was hard, because you could see five Redis operators, 10 MySQL operators. It was hard for our customers to know where they can find the right set of operators that have the right functionality, and how they compare to each other. OperatorHub.io is a registry that we launched together with AWS, Google and Microsoft to solve for that problem. Now that we have a way to create operators easily and capture that automated operations, we have sort of created a pattern and a framework around it, and a place where you go to find the right set of operators. >> It's an interesting point, because if you look in the container space, especially Kubernetes, it's like, okay, well, what's standardized, what works across all of these environments? We always worry, I've probably got some pain from previous projects and foundations as to, well, what's certified and what's not, and how do we do that? So, did I see there's a certification now for operators, and how do you balance that we need it to work everywhere, we don't wanna have, it's Red Hat's building an open ecosystem, not something that's limited to only this? >> Yes. So OperatorHub.io is a community initiative, and every operator you find on there should work on any Kubernetes. So in fact, as part of the vetting process we make sure that that's the case. And then there's the certification we launched today, actually, and you can see a number of, we have already 20-plus operators that are certified. This is where we take it a step further and we work with the vendors to make sure that it works on OpenShift. It's following a number of guidelines that we have, in terms of using, for example, RHEL as the basis. They work with us to run the updates through security checks and so on. And that's just to give our enterprise customers more levels of guarantees and validation, if they would like to. >> So what are they getting out of that, out of the certification system?
What, I guess, stability and certainty and all those kinds of things that I'm looking for, standardization of some kind, is that what's driving that? >> It's simple, at the end of the day they get three things. They get automated updates that are pushed through the OpenShift update mechanism. So if you are using the Redis one, for example, and it's certified, you're going to be able to update the Redis operator through the same cluster administration mechanism that you would apply to the entire cluster itself. You see updates from Redis come in, you can put it through the same approval workflow, and so on. The second is they get support. They get first line of support from Red Hat. They can call Red Hat, our customers, and we actually work with them on that. And the third is that they actually get the security vulnerability scans that we put them through to make sure that they pass certain checks. And actually one last one, they also get RHEL as the basis of the operator, so, yup. >> Reza, help bring us into the customer point of view. What does all this mean to them, what are the big challenges, how do they modernize their applications and get more applications moving along this path? >> Yeah, in this case the operator customer is mainly the infrastructure administrators. It's important to point that out. The developers will get some benefit in that it's self-service, so the provisioning, but there's other ways to do that as well. You can go to a Helm chart, deploy that Helm chart, and you get that level of self-service automated provisioning. To go ahead and configure, for example, a sharded MongoDB database on a Kubernetes cluster, you have to create something like 20 different objects. And then to update that, to change the shards, you have to go and modify all those 20 different objects. Let's just stay at that level alone. An operator makes that a few different parameters on a YAML file that you change. The operator takes that and applies all these configurations for you. So it's all about simplifying the life of the infrastructure administrators. I truly believe that operators, human operators, infrastructure administrators, are one of the least appreciated personas right now that we have out there. They're not the most important ones, but there are a lot of pain points and challenges that they have that we're not really thinking about too much. And I think OpenShift goes a long way and operators go a long way to actually start thinking about their pain points as well. >> So what do you think their reaction was this morning when they're looking, first off, at the general announcement, right? And then some of the demonstrations and all those things that are occurring? Is there, do you have, or are you talking to customers? Are you getting the sense of relief or of anticipation or expectation? I mean, how would you characterize that? >> I think they're falling into a couple of different buckets. There's the customers we've talked to for a while now, that know this stuff, so this is not super new to them, but they're very happy to see it. There's one big automaker that's a customer of ours, and the main human operator was telling me a while ago that he does not want any service on the cluster unless it has an operator, and this is a year and a half ago. And he kept pushing me, well, I want a Kafka one and I want an Elasticsearch one, and, you know, we, CoreOS, were too small to try to build that ourselves. Obviously we can't maintain a Kafka operator and an Elasticsearch one on our own.
Now, he's able to go to our operator APP, he's gonna be able to get a Kafka operator that's maintained by Kafka experts. He's gonna be able to get a Redis operator that's maintained by Redis experts. So that bucket of customers are super happy. And then there's another one that's just starting to understand the power of all this. And I think they're just starting to kick the tires and play around with this. Hopefully they will get to the same point as the first bucket of customers, and be asking for everything to be operator based all the time. >> Convert the tire kickers, you're gonna be okay, right? >> That's right. >> Thank you for the time. >> Thank you. >> We appreciate that and continued success at Red Hat, and, once again, good to see you. >> Thank you, always a pleasure. >> You bet. Live, here on theCUBE, you're watching Red Hat Summit 2019. (upbeat music)
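For a sense of what "a few parameters on a YAML file" means from the consumer's side, here is a minimal sketch using the Kubernetes Python client to create a custom resource that an operator's controller would then reconcile into the Deployments, Services, ConfigMaps, and so on behind it — the "20 different objects" Reza mentions. The group, kind, and spec fields below are invented for illustration; every operator defines its own CRD schema, so the real field names come from that operator's documentation.

```python
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

# A hypothetical custom resource: the API group, kind, and spec fields are
# placeholders, not the schema of any particular certified operator.
redis_cluster = {
    "apiVersion": "example.redis.io/v1alpha1",
    "kind": "RedisCluster",
    "metadata": {"name": "orders-cache"},
    "spec": {
        "replicas": 3,
        "version": "5.0",
        "persistence": {"enabled": True, "size": "10Gi"},
    },
}

# The user's job ends here; the operator watches this one object and creates,
# updates, and repairs everything underneath it.
custom.create_namespaced_custom_object(
    group="example.redis.io",
    version="v1alpha1",
    namespace="apps",
    plural="redisclusters",
    body=redis_cluster,
)
```

Changing the cluster later is the same motion: edit the handful of fields in the spec and let the controller reconcile, rather than touching the twenty-odd underlying objects by hand.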
SUMMARY :
Brought to you by Red Hat. Reza Shafii, VP of Platform Services at Red Hat and formerly of CoreOS, joins theCUBE at Red Hat Summit 2019 in Boston to discuss the launch of OpenShift 4, which brings together OpenShift's developer experience and CoreOS's automated-operations technology on a Kubernetes foundation with more than 20 operators. He traces the operator pattern from its origins at CoreOS through the Operator Framework and OperatorHub.io, launched with AWS, Google, and Microsoft, and explains the new operator certification program, which gives customers automated updates, Red Hat support, security scanning, and RHEL as the operator base. Operators, he argues, dramatically simplify life for infrastructure administrators, turning configurations that once required twenty-odd objects into a few parameters in a YAML file, and early customers are already asking for everything on their clusters to be operator-based.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Boston | LOCATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Reza Shafii | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
AWS | ORGANIZATION | 0.99+ |
Alex Polvi | PERSON | 0.99+ |
John Walls | PERSON | 0.99+ |
Cubeacon | ORGANIZATION | 0.99+ |
20 plus operators | QUANTITY | 0.99+ |
Tectonic | ORGANIZATION | 0.99+ |
Stu | PERSON | 0.99+ |
OpenShift 4 | TITLE | 0.99+ |
John | PERSON | 0.99+ |
third | QUANTITY | 0.99+ |
20 different objects | QUANTITY | 0.99+ |
five | QUANTITY | 0.99+ |
10 | QUANTITY | 0.99+ |
both companies | QUANTITY | 0.99+ |
Redis | TITLE | 0.99+ |
Boston, Massachusetts | LOCATION | 0.99+ |
CoreOS | ORGANIZATION | 0.99+ |
Red Hat Summit 2019 | EVENT | 0.99+ |
OpenShift | TITLE | 0.99+ |
Today | DATE | 0.99+ |
a year and a half ago | DATE | 0.99+ |
Brandon Philips | PERSON | 0.99+ |
today | DATE | 0.99+ |
second | QUANTITY | 0.98+ |
one product | QUANTITY | 0.98+ |
first bucket | QUANTITY | 0.98+ |
Convention Center | LOCATION | 0.98+ |
three things | QUANTITY | 0.98+ |
CoreOS | TITLE | 0.98+ |
20 plus operators | QUANTITY | 0.98+ |
Kubernetes | TITLE | 0.97+ |
Redis | ORGANIZATION | 0.97+ |
hundreds plus operators | QUANTITY | 0.97+ |
this week | DATE | 0.96+ |
earlier this year | DATE | 0.96+ |
first line | QUANTITY | 0.96+ |
one | QUANTITY | 0.96+ |
Kafka | TITLE | 0.95+ |
OpenShift | ORGANIZATION | 0.94+ |
MongoDB | TITLE | 0.93+ |
one thing | QUANTITY | 0.92+ |
first | QUANTITY | 0.91+ |
Reza | PERSON | 0.9+ |
Operator Hub | ORGANIZATION | 0.88+ |
both | QUANTITY | 0.87+ |
ISV | TITLE | 0.86+ |
MySQL | TITLE | 0.85+ |
CoreOS | COMMERCIAL_ITEM | 0.85+ |
OperatorHub.IO | ORGANIZATION | 0.83+ |
this morning | DATE | 0.83+ |
Kubernetes | ORGANIZATION | 0.76+ |
Jozef de Vries, IBM | IBM Think 2019
(dramatic music) >> Live from San Francisco. It's theCUBE, covering IBM Think 2019. Brought to you by IBM. >> Welcome back to theCUBE. We are live at IBM Think 2019. I'm Lisa Martin with Dave Vellante. We're in San Francisco this year at the newly rejuvenated Moscone Center. Welcoming to theCUBE for the first time, Jozef de Vries, Director of IBM Cloud Databases. Jozef, it's great to have you on the program. >> Thank you very much, great to be here, great to be here. >> So as we were talking before we went live, I was asking what you're excited about for this year's IBM Think. >> Yeah. >> Only the second annual IBM Think. >> Right. >> This big merger of a number of shows. >> Sure, you're right. >> Day minus one, T minus one, >> Yeah. >> everything really kicks off tomorrow. Talk to us about some of the things that you're working on. You've been at IBM for a long time. >> Mmm hmm. >> But cloud managed databases, let's talk value there for the customers. >> Yeah, definitely. Cloud managed databases, really, at its core, it's about simplifying adoption of cloud provided services and reducing the capital expense that comes along with developing applications. Fundamentally what we're trying to do is abstract the overhead that is associated with running your own systems. Whether it's the infrastructure management, whether it's the network management, whether it's the configuration and deployment of your databases, our collection of services really is about streamlining time to value of accessing and building against your databases. What we are really focused on is allowing the developer to focus on their business critical applications, their objectives, and really what they're paid for. They're paid to build applications, not paid to maintain systems. When we talk about the CIO office, the CTO office, they are looking at cost, they're looking at ways to reduce overall expenditures. And what we're able to provide with cloud managed databases is the ability not to have to staff an IT team, not to have to maintain and pay for infrastructure, not have to procure licenses, what have you, everything that goes into standing up and managing those systems yourself. We provide that, and we provide it on consumption-based methods. So you basically pay for what you use, and we have various ways in which you can interact with your databases and the charges that are associated with that. But it really is, again, about alleviating all of that overhead and that expense that is associated with running systems yourself. >> 15 years ago, you're back to, before you started with IBM, >> Yeah. >> There was obviously IBM DB2, Oracle, SQL Server, >> SQL Server. >> I guess MySQL is around >> Mm hmm. >> back then, the LAMP stack was building out the internet. But databases were pretty boring >> Yeah. >> back then. And then all of a sudden, it exploded. >> Right. >> And the NoSQL movement happened in a huge way. >> Mm hmm. >> Coincided with the big data movement. What happened? >> Yeah, I think as we saw the space of this technology evolve, a variety of different kinds of use cases cropped up, and the development community responded to that. And really what we try to do with our portfolio is provide that variety of database technology solutions to meet any number of different use cases. And we like to think about it broken down into two categories. Your primary data stores. This is where your applications are writing and reading the data that has been stored.
And then particularly to your point, there's what we call the auxiliary data services, for example. These are your in-memory caches, your message brokers, your search indexes, what have you. There is a plethora of different database technologies out there today that plug into any number of different use cases that application developers are attempting to fill. And more often than not, they're using more than one database at a time. And really what we're trying to do at IBM with our cloud managed database offering is provide a variety of those data services and database technologies to meet a variety of those use cases, whether they're mixing and matching, or different kinds of application workloads, or what have you. We'd like to provide our customers with the choices that are out there today in the community at large. >> So many choices. >> Yeah. >> Am I hearing that it's kind of horses for courses? I mean, you get things like, even niches like Accumulo with fine-grained security. >> Yeah. >> Or Couchbase, obviously. >> Mm hmm. >> This one scales. And then this one is easy to use. You take Mongo, for text, really easy to use. >> Yeah exactly. >> Sort of different specialized use cases. How do you squint through that, and how does IBM match the right characteristics with the right technology? >> It's really, it's two-pronged. It's about understanding the user base, understanding and listening to your customers, and really internalizing what are the use cases that they are looking to fulfill. It's also being in tune with the database technology in the market today. It's understanding where there are trends, understanding where there are new use cases cropping up. And it's about building a deep enough engineering and operations team where we can quickly spin up these new offerings, and again provide that technology to our end customers. And it's about working with our customers as well, understanding the use cases, and then sometimes making recommendations on what database technology or combination of databases would be best suited for their objectives. >> I'm curious. One of the things that you mentioned in terms of what the developer's day-to-day job should be, is this almost IBM's approach to aligning with the developer role and enabling it in new ways? >> It is really about, I think, having sympathy, and delivering on solutions, for the pains that they had otherwise endured 10, 15 years ago, when the notion of cloud managed anything really wasn't a thing yet, or was just starting to emerge. IBM has run its own systems in-house for years and years, obviously, and the folks on my team, they have come from other companies, they know what pain is involved in trying to run services. So like I said, it's a little bit out of sympathy, it's a bit out of knowing what your users need in a cloud managed service, whether again it's security, or availability, or redundancy, you name it. It's about coming around to the other side of the table: I sat where you once sat, and we know what you need out of your data services, so trust us to provide that for you. >> How are the requirements different? Things like recovery and resiliency. Do I need ACID compliance in this new world? Maybe you could speak to that. >> Yeah. It's funny, that's a good question, in that we don't necessarily deal so much with database-specific requirements. Again, as I mentioned, we try to provide a variety of different database technologies.
And by and large the users are going to know what they need, what combinations that they will need. And we'll work with them if they're navigating their way through it. Really what we see more the requirements these days are around the management characteristics. As you cited, are they highly available? Are they backed up? What's your disaster recovery policy? What security policies do you have in place? what compliance, so on and so forth. It's really about presenting the overall package of that managed solution. Not so much, whether the database is going to be high available verses consistent replication or what have you. I mean that's in there, and it's part of what we engage with our customers about, but also what we'd like to put a lot of emphasis is on providing those recognized database technologies so that there is a community behind and there's opportunity for the users to understand what it is that they need beyond just what we can sell them. It's really about selling the value proposition of again, the management characteristics of the services. >> So who do you see as the competition? Obviously the other big, the two big cloud providers, AWS and Azure. >> Yep. >> You're competing with them. >> Definitely. >> Quality of offerings. May be talk about how you fit. >> And Google's another one. Or Oracle is another emerging one. Even Alibaba is catching up quite a bit. It really feels like a neck-to-neck race in our day after day. The way we try to approach our portfolio is focusing on deep, broad and secure. Deep being that there're a core set of database technologies. We're building the database itself. Db2, Cloudant which is based off of Couchbase. Excuse me, CouchDB. And then broad. Again as I've been mentioning, having a variety of different database technologies. And they're secure across the board. Whether it's secure in how we run the systems, secure on how we certify them through external compliance certifications. Or secure in how we integrate with security based tooling that our users can take advantage of. Regarding our competitors, it really is one week it may be a new big data at scale type of database technology. Another day it may be, or another week it might be deeper integrations into the platform. It might be new open source database technologies. It might be a new proprietary database technology. But we're, it's a constant, like I say, race to who got the most robust portfolio. >> Developers are like teenagers. They're fickle. >> Yeah, that too, that too. We got to be quick in order to respond to those demands. >> In this age of hybrid multi-cloud, where the average company has five plus private cloud, public cloud, through inertia, through acquisition, et cetera. Where's IBM's advantage there as companies are, I think we heard a stat the other day, Dave, that in 2018, 80% of the companies migrated data and apps from public cloud. In terms of this reality that companies live in this multi-cloud, where is IBM's advantage there? And where does your approach to cloud managed services really differentiate IBM's capabilities? >> Really there's, for the last couple of years, a tremendous amount of investment on building on the Kubernetes open source platform. And even in particular to our cloud managed database services, we have been developing and have been recently releasing a number of different databases that run on a platform that we've developed against Kubernetes. 
It's a platform that allows us to orchestrate deployments, deletions of databases, backups, high availability, platform level integrations, all, a number of different things. What that has allowed us to do when concerning a hybrid type of strategy is it makes our platform more portable. So Kubernetes is something that can run on the cloud. It can run in a private cloud. It can run on premise. And this platform we're developing is something that can be deployed, which we do today for private, public cloud consumption, which can also be packaged up and deploy into a private cloud type environment. And ultimately it's portable and it's leveraging of that Kubernetes technology itself. So we're not hamstringing ourselves to purely public cloud type services, or only private cloud type services. We want to have something that is abstracted enough that again it can move around to these different kind of environments. >> How important is open source and how important is it for you to commit to the different open source projects? There are so many, >> Yeah. >> And you have limited resources. So how do you manage that? >> Open source is really critical both in what we're building and what we're also offering. As we've talked about our users out there, they know what they often want or sometimes we nudge them to the right or to the left, but generally speaking it's around all the open source technologies and whatever may be trending for that current month is often times what we're getting requested for. It could be a Postgres. It could be a RabbitMQ. It could be ElasticSearch. What have you. And really we put a lot of emphasis on embracing the open source community, providing those database technologies to our customers. And then it allows our customers to benefit from the community at large too. We don't become again the sole provider of education and information about that technology. We're able to expose the whole community to our customers and they're able to take advantage of that. >> I hear a lot of complaints sometimes, particularly from folks that might list themselves in a marketplace for one cloud or another, that they feel like the primary cloud vendor might be nudging the customer into their proprietary database. What's IBM's position on that? Is that fair? Is that overblown? >> We obviously have proprietary tech, particularly the Db2. And that's something we're continue investing in. It's what we view as one of our strategic top priority database technologies. We are very active developers in the Couch community as well. I wouldn't consider that proprietary, but again back to the point of-- >> CouchDB. You're as the steward of CouchDB. >> Exactly. >> Right. >> Right, exactly. But again, firm believers in open source. We want to give those opportunities to our customers to avoid those vendor lock-in type situations. We actually have quite a lot of interests from our EU customer base. And by and large EU policies are around anti-trust and what have you. They tend to gravitate towards open source technology because they know it's again portable. They can be used in Postgres by IBM one month and if they no longer are satisfied with that, they can take their Postgres workloads and move them into another cloud provider. Ideally they're coming from the other cloud providers onto IBM. >> Well I should be actually more specific, in fairness, Dynamo's often cited. I supposed Google's Spanner although that's sort of a more of a niche, >> Mm hmm. >> specialized database. 
If I understand it correctly, Db2, that's a hard-core transaction >> Sure. >> system. You're not going to confuse that with, I don't think, anyway, CouchDB. Although, who knows? Maybe there are some use cases there. But it sounds like you're not nudging them to your proprietary, certainly Db2 is proprietary. CouchDB is one of many options that you offer. >> Certainly Db2 is one of our core products for our database portfolio. And we do want to push our customers to Db2 where-- >> If it makes sense. >> Exactly, where it makes sense, and where there's demand for it. If it doesn't make sense or there's not demand, we will offer up any number of the other databases that we also offer. >> Excellent, here's our last question. >> Sure. >> As IBM Think, the second annual, kicks off really tomorrow, for this developer audience that you were talking about a lot in our conversation, what are some of the exciting things that they're going to see? Any sort of, obviously not breaking news, but >> Mmm hmm. >> Where would you advise the developer community who's attending IBM Think to go to learn more about cloud managed databases? And how they can really become far more efficient to do their jobs better. >> Sure. Databases are hard, plain and simple. They are particularly hard to run, and developers who are not necessarily database admins, they're not database operators, who want to focus on building the applications, are going to want to find solutions that alleviate that overhead of running those systems themselves. So to your question, we've got sessions all throughout the week where we're talking about our Cloudant offerings and the future of where we're going with that. We've got a couple of different sessions around our IBM Cloud database portfolio. This is a lot of the open source database technology we're running. We have demos in the solution center, and Db2 is spread all around the conference as well. So there's lots of different sessions focused on talking about the value proposition of IBM's cloud managed database portfolio across the board. >> A lot of opportunities for learning. Well, Jozef de Vries, thank you so much for joining Dave and me on theCUBE this afternoon. >> Thank you very much, it was great. >> And for Dave Vellante, I am Lisa Martin. You're watching theCUBE, live from IBM Think 2019, Day 1. Stick around, we'll be right back with our next guest. (upbeat music)
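As a small, hedged illustration of the trade Jozef describes: with a cloud managed database, the developer's side of the work shrinks to connecting with the credentials the service hands out, while provisioning, patching, backups, and availability stay with the provider. The sketch below uses psycopg2 against a managed PostgreSQL instance; the hostname, port, credentials, and CA file are placeholders, not real IBM Cloud values.

```python
import psycopg2

# Connection details are placeholders; a managed-database service issues these
# as service credentials instead of you standing up and operating the server.
conn = psycopg2.connect(
    host="example-postgres.databases.cloud.example.com",
    port=30432,
    dbname="appdb",
    user="app_user",
    password="app_password",
    sslmode="verify-full",          # enforce TLS and verify the server certificate
    sslrootcert="ca-certificate.crt",
)

with conn, conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])

conn.close()
```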
SUMMARY :
Brought to you by IBM. Jozef de Vries, Director of IBM Cloud Databases, joins Lisa Martin and Dave Vellante at IBM Think 2019 in San Francisco to talk about cloud managed databases. The core value, he explains, is abstracting away the infrastructure, configuration, and operational overhead of running databases so developers can focus on building applications and CIOs can cut expense with consumption-based pricing. IBM's portfolio spans primary data stores and auxiliary services such as caches, message brokers, and search indexes, mixing proprietary technology like Db2 with open source options such as Postgres, RabbitMQ, Elasticsearch, and the CouchDB-based Cloudant, and it increasingly runs on a Kubernetes-based platform that makes the services portable across public, private, and hybrid environments. He closes by pointing developers to the week's sessions and demos covering the IBM Cloud database portfolio.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Dave Vallente | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Jozef de Vries | PERSON | 0.99+ |
Alibaba | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
2018 | DATE | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Jozef | PERSON | 0.99+ |
San Francisco | LOCATION | 0.99+ |
80% | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
this year | DATE | 0.99+ |
one week | QUANTITY | 0.99+ |
first time | QUANTITY | 0.99+ |
Kubernetes | TITLE | 0.99+ |
MySQL | TITLE | 0.98+ |
one month | QUANTITY | 0.98+ |
tomorrow | DATE | 0.98+ |
IBM Cloud Databases | ORGANIZATION | 0.98+ |
two categories | QUANTITY | 0.97+ |
both | QUANTITY | 0.97+ |
today | DATE | 0.97+ |
Dynamo | ORGANIZATION | 0.97+ |
CouchDB | TITLE | 0.96+ |
15 years ago | DATE | 0.96+ |
EU | ORGANIZATION | 0.96+ |
IBM Think | ORGANIZATION | 0.96+ |
LabStack | ORGANIZATION | 0.96+ |
IBM Think 2019 | EVENT | 0.96+ |
more than one database | QUANTITY | 0.96+ |
10, 15 years ago | DATE | 0.95+ |
One | QUANTITY | 0.95+ |
five plus | QUANTITY | 0.95+ |
one | QUANTITY | 0.94+ |
Postgres | ORGANIZATION | 0.94+ |
SQL Server | TITLE | 0.93+ |
Day 1 | QUANTITY | 0.92+ |
Moscone Center | LOCATION | 0.92+ |
second annual | QUANTITY | 0.91+ |
Db2 | TITLE | 0.9+ |
this afternoon | DATE | 0.9+ |
two big cloud | QUANTITY | 0.89+ |
Couch | TITLE | 0.89+ |
one cloud | QUANTITY | 0.88+ |
last couple of years | DATE | 0.87+ |
Azure | ORGANIZATION | 0.84+ |
Cloudant | ORGANIZATION | 0.82+ |
NoSQL | TITLE | 0.81+ |
2019 | DATE | 0.8+ |
Think 2019 | EVENT | 0.8+ |
Day minus one | QUANTITY | 0.79+ |
Daniel Berg, IBM Cloud & Norman Hsieh, LogDNA | KubeCon 2018
>> Live from Seattle, Washington, it's theCUBE, covering KubeCon and CloudNativeCon North America 2018. Brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Hey, welcome back everyone, it's theCUBE live here in Seattle for day three of three of wall-to-wall coverage. We've been analyzing here on theCUBE for three days, talking to all the experts, the CEOs, CTOs, developers, startups. I'm John Furrier with Stu Miniman, with theCUBE's coverage here at Dock-- not DockerCon, KubeCon and CloudNativeCon. Getting down to the last Con. >> So close, John, so close. >> Lot of Docker containers around here. We'll check in on the Kubernetes. Our next two guests have got a startup, a hot startup here. You got Norman Hsieh, head of business development, LogDNA. A new compelling solution on Kubernetes gives them a unique advantage, and of course, Daniel Berg, who's a distinguished engineer at IBM. They have a deal. We're going to talk about the startup and the deal with IBM. The highlights, kind of a new model, a new world's developing. Thanks for joining us. >> Yeah, no problem, thanks for having us. >> Maybe we'll get you on at DockerCon sometime. (Daniel laughing) Get you to DockerCon. Containers have certainly been great. Talk about your product first. Let's get your company out there. What do you guys do? You got something new and different, something needed. What's different about it? >> Yeah, so when we started building this product, one thing we were trying to do was find a logging solution that was built for developers, especially around DevOps. We were running our own multi-tenant SaaS product at the time and we just couldn't find anything great. We tried open source Elastic, and it turned out to be a lot to manage, there was a lot of configuration we had to do. We tried a bunch of the other products out there, which were mostly built for log analysis, so you'd analyze logs maybe a week or two after, and there was nothing just realtime that we wanted, and so we decided to build our own. We overcame a lot of challenges where we just felt that we could build something that was easier to use than what was out there today. Our philosophy, for developers, is that we want to make it as simple as possible. We don't want you to have to manage, or even think about, how logs work today. And so, the whole idea, even if you go down to some of the integrations that we have, our Kubernetes integration is two lines. You essentially run two kubectl lines, and your entire cluster will get logged, directly logged, in seconds. That's something we show oftentimes at demos as well. >> Norman, I wonder if you can drill in a little bit more for us. What we always look at is, a lot of times the new generation has just got new tools to play with and new things to do. What was different, what changed? Just the composability and the small form factor? I would think that you could just change the order of magnitude in some of the pricing of some of these. Tell us why it's different. >> Yeah, I mean, I think there were three major things. One was speed. So what we found was that there weren't a lot of solutions that were optimized really, really well for finding logs. There were a lot of log solutions out there, but we wanted to optimize that, so we fine-tuned Elasticsearch. We do a lot of stuff around there to make that experience really pleasurable for our users. The other is scale.
So what we're noticing now is, if you kind of expand on the world: back in the day we had single machines that people got logs off of, then you went to VMware, where you're taking a single machine and splitting it up into multiple different things, and now you have containers, and all of a sudden you have Kubernetes, and you're talking about thousands and thousands of nodes running in a large production service. How do you find logs in those things? And so we really wanted to build for that scale and that usability where, for Kubernetes, we'll automatically tag all your logs coming through. So you might get a single log line, but we'll tag it with all the metadata you need to find exactly what you want. So if my container dies and I no longer know that container's around, how am I going to get the logs off of that? Well, you can go to LogDNA, find the container that you're looking for, and know exactly where that error's coming from as well. >> So you're basically storing all this data, making it really easy for the integration piece. Where does the IBM relationship fit in? What's the partnership? What are you guys doing together? >> I don't know if Dan wants to-- >> Go ahead, go ahead. >> Yeah, so we're partnering with IBM. We are one of their major partners for logging. So if you go into the Observability tab under IBM Cloud and click on Logging, logging is there, you can start the logging instance. What we've done is, IBM's brought us a great opportunity where we could take our product and help benefit their own customers, and also IBM themselves, with a lot of the logging that we do. They saw that we have a very simplistic way of thinking about logs, and it was really geared towards, when you think about IBM Cloud and the shift that they're moving towards, which is really developer-focused, it was a really, really good match for us. It brought us the visibility into the upmarket with larger customers, and it also gives us the ability to kind of deploy globally across IBM Cloud as well. >> I mean, IBM's got a great channel on the sales side too, and you guys have got a great relationship. We've seen that playbook before, where I think we've interviewed in all the other events with IBM. Startups can really, if they fit in with IBM, it's just massive, but what's the reason? Why the partnership? Explain. >> Well, I mean, first of all we were looking for a solution, a logging solution, that fit really well with IKS, our Kubernetes service. And it's cloud-native, high scale, large numbers of clusters, that's what our customers are building. That's what we want to use internally as well. I mean, we were looking for a very robust cloud-native logging service that we could use ourselves, and that's when we ran across these guys. What, about a year ago? >> Yeah, I mean, I think we kind of first got introduced at last year's KubeCon, and then it went to Container World, and we just kept seeing each other. >> And we just kept on rolling with it. So what we've done with that integration, what's nice about the integration, is it's directly in the catalog. So it's another service in the catalog, you go and select it, and provision it very easily. But what's really cool about it is we wanted to have that integration directly with the Kubernetes service as well, so there's the Integration tab on the Kubernetes cluster, literally one button, two lines of code that you just have to execute, bam! All your logs are now streaming for the entire cluster, with all the indexing and everything.
It just makes it a really nice, rich experience to capture your logs. >> This is infrastructure as code, that's what the promise was. >> Absolutely, yes. >> You have very seamless integration and the backend just works. Now talk about the Kubernetes pieces. I think this is fascinating, 'cause we've been pontificating and evaluating all the commentary here on theCUBE, and we've come to the conclusion that cloud's great, but there's other new platform-like things emerging. You got Edge and all these things, so there's a whole new set, new things are going to come up, and it's not going to be just called cloud, it's going to be something else. There's Edge, you got cameras, you got data, you got all kinds of stuff going on. Kubernetes seems to fit a lot of these emerging use cases. Where does Kubernetes fit in? You say you built on Kubernetes, just why is that so important? Explain that one piece. >> Yeah, I mean, I think Kubernetes obviously brought a lot of opportunities for us. The big differentiator for us was, because we were built on Kubernetes from the get-go, we made that decision a long time ago, we didn't realize at the time that we could actually deploy this package anywhere. We didn't have to just run as a multi-tenant SaaS product anymore, and I think part of that is, for IBM, when their customers are talking about an integrated logging service, we're actually running on IBM Cloud, so their customers can be sure that the data doesn't actually move anywhere else. It's going to stay in IBM Cloud and-- >> This is really important, and because they're on the Kubernetes service, it gives them the opportunity, running on Kubernetes, running as a service, they're going to be able to put LogDNA in each of the major regions. So customers will be able to keep their log data in the regions that they want it to stay. >> Great for compliance. >> Absolutely. >> I mean, compliance, dreams-- >> Got to have it. >> Especially with EU. >> How about search and discovery, how does that fit in too? Just simple, what's your strategy on that? >> Yeah, so our strategy is, if you look at a lot of the logging solutions out there today, a lot of times they require you to learn complex query languages and things like that. And so the biggest thing we were hearing was, like, man, onboarding is really hard, because some of our developers don't look at logs on a daily basis. They look at it every two weeks. >> Jerry Chen from Greylock Ventures said machine learning is the new, ML is the new SQL. >> Yup. (Daniel laughing) >> To your point, this complex querying is going to be automated away. >> Yup. >> Yes. >> And you guys agree with that. >> Oh, yeah. >> You actually-- >> Totally agree with that. >> --you talked about it in our interview. >> Norman, wonder if you can bring us in a little bit on compliance and what discussions you're having with customers. Obviously GDPR, a big discussion point we had. We've got new laws coming from California soon. So how important is this to your customers, and what's the reality kind of out there in your user base? >> Yeah, compliance was, our founders had run a lot of different businesses before. They had two major startups where they worked with eBay, compliance was the big thing, so we made a decision early on to say, hey, look, we're about 50 people right now, let's just do compliance now. I've been at startups where we go, let's just keep growing and growing and we'll worry about compliance later--
>> Yeah, we made a decision to say, hey, look, we're smaller, let's just implement all the processes and necessary needs, so. >> Well, the need's there too, that's two things, right? I mean, get it out early. Like security, build it up front and you got it in. >> Exactly. >> And remember earlier we were talking and I was telling you how within the Kubernetes service we like to use our own services to build expertise? It's the same thing here. Not only are they running on top of IKS, we're using LogDNA to manage the logs and everything, and cross the infrastructure for IKS as well. So we're heavily using it. >> This also highlights, Daniel, the ecosystem dynamic of having when you break down this monolithic type of environments and their sets of services, you benefit because you can tap into a startup, they can tap in to IBM's goodness. It's like somewhat simple Biz Dev deal other than the RevShare component of the sales, but technically, this is what customers want at the endgame is they want the right tool, the right job, the right product. If it comes from a startup, you guys don't have to build it. >> I mean, exactly. Let the experts do it, we'll integrate it. It's a great relationship. And the teams work really well together which is fantastic. >> What do you guys do with other startups? If a startup watches and says, hey, I want to be like LogDNA. I want to plug into IBM's Cloud. I want to be just like them and make all that cash. What do they got to do? What's the model? >> I mean, we're constantly looking at startups and new business opportunities obviously. We do this all the time. But it's got to be the right fit, alright? And that's important. It's got to be the right fit with the technology, it's got to be the right fit as far as culture, and team dynamics of not only my team but the startup's teams and how we're going to work together, and this is why it worked really great with LogDNA. I mean, everything, it just all fit, it all made sense, and it had a good business model behind that as well. So, yes, there's opportunities for others but we have to go through and explore all those. >> So, Norman, wonder if you can share, how's your experience been at the show here? We'd love to hear, you're going to have so many startups here. You got record-setting attendance for the show. What were your expectations coming in? What are the KPIs you're measuring with and how has it met what you thought you were going to get? >> No, it's great, I mean, previous to the last year's KubeCon we had not really done any events. We're a small company, we didn't want to spend the resources, but we came in last year and I think what was refreshing was people would talk to us and we're like, oh, yeah, we're not an open source technology, we're actually a log vendor and we can, and we'll-- (Stu laughing) So what we said was, hey, we'll brush that into an experience, and people were like, oh, wow, this is actually pretty refreshing. I'm not configuring my fluentd system, fluentd to tap into another Elasticsearch. There was just not a lot of that. I think this year expectation was we need the size doubled. We still wanted to get the message out there. We knew we were hot off the presses with the IMB public launch of our service on IBM Cloud. And I think we we're expecting a lot. I mean, we more than doubled what our lead count was and it's been an amazing conference. 
I mean, I think the energy that you get and the quality of folks that come by, it's like, yeah, everybody's running Kubernetes, they know what they're talking about, and it makes that conversation that much easier for us as well. >> Now you're CUBE alumni now too. It's the booth, look at that. (everyone laughing) Well, guys, thanks for coming on, sharing the insight. Good to see you again. Great commentary, again, having distinguished engineering, and these kinds of conversations really helps the community figure out kind of what's out there, so I appreciate that. And if everything's going to be on Kubernetes, then we should put theCUBE on Kubernetes. With these videos, we'll be on it, we'll be out there. >> Hey, yeah, absolutely, that'd be great. >> TheCUBE covers day three. Breaking it down here. I'm John Furrier, Stu Miniman. That's a wrap for us here in Seattle. Thanks for watching and look for us next year, 2019. That's a wrap for 2018, Stu, good job. Thanks for coming on, guys, really appreciate it. >> Thanks. >> Thank you. >> Thanks for watching, see you around. (futuristic instrumental music)
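As a rough sketch of the metadata tagging Norman describes, the snippet below walks the pods in one namespace with the Kubernetes Python client and pairs each log line with the namespace, pod, node, container, and labels it came from — which is what makes a line findable later, even after the container itself is gone. A real agent (LogDNA's, per the conversation above, installs with two kubectl commands) does this continuously and ships the results to a backend; this one-shot loop and its placeholder namespace are only for illustration.

```python
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
v1 = client.CoreV1Api()

namespace = "demo"  # placeholder namespace

for pod in v1.list_namespaced_pod(namespace).items:
    # Metadata that gets attached to every line read from this pod.
    meta = {
        "namespace": pod.metadata.namespace,
        "pod": pod.metadata.name,
        "node": pod.spec.node_name,
        "labels": pod.metadata.labels or {},
    }
    for container in pod.spec.containers:
        try:
            text = v1.read_namespaced_pod_log(
                name=pod.metadata.name,
                namespace=namespace,
                container=container.name,
                tail_lines=20,
            )
        except ApiException:
            # Containers that haven't started (or have no logs yet) are skipped.
            continue
        for line in text.splitlines():
            # Emit each line alongside the metadata it came from.
            print({**meta, "container": container.name, "message": line})
```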
SUMMARY :
Brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. On day three of KubeCon and CloudNativeCon North America 2018 in Seattle, John Furrier and Stu Miniman talk with Norman Hsieh, head of business development at LogDNA, and Daniel Berg, distinguished engineer at IBM. Norman explains how LogDNA built a developer-focused logging service that installs on a Kubernetes cluster with two kubectl lines and automatically tags every log line with the metadata needed to find it, even after a container is gone. Daniel describes why IBM chose LogDNA as the logging partner for IBM Cloud and the IBM Kubernetes Service: it sits directly in the catalog, integrates with one click, and, because LogDNA itself runs on Kubernetes, can be deployed in each major region so customers keep log data where compliance requires. The two also discuss compliance, what it takes for other startups to partner with IBM, and LogDNA's experience on the show floor.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
IBM | ORGANIZATION | 0.99+ |
Jerry Chen | PERSON | 0.99+ |
Daniel Berg | PERSON | 0.99+ |
Norman Hsieh | PERSON | 0.99+ |
Norman | PERSON | 0.99+ |
Seattle | LOCATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Cloud Native Computing Foundation | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
California | LOCATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
eBay | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
two lines | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
Dan | PERSON | 0.99+ |
Greylock Ventures | ORGANIZATION | 0.99+ |
2018 | DATE | 0.99+ |
Daniel | PERSON | 0.99+ |
three days | QUANTITY | 0.99+ |
KubeCon | EVENT | 0.99+ |
Elastic | TITLE | 0.99+ |
One | QUANTITY | 0.99+ |
IBMs | ORGANIZATION | 0.99+ |
two things | QUANTITY | 0.99+ |
Seattle, Washington | LOCATION | 0.99+ |
DockerCon | EVENT | 0.99+ |
LogDNA | ORGANIZATION | 0.99+ |
two guests | QUANTITY | 0.98+ |
one piece | QUANTITY | 0.98+ |
IMB | ORGANIZATION | 0.98+ |
Stu | PERSON | 0.98+ |
IKS | ORGANIZATION | 0.98+ |
single machines | QUANTITY | 0.98+ |
single machine | QUANTITY | 0.98+ |
IBM Cloud | ORGANIZATION | 0.98+ |
IMB Cloud | TITLE | 0.97+ |
one button | QUANTITY | 0.97+ |
Kubernetes | TITLE | 0.97+ |
two | QUANTITY | 0.97+ |
each | QUANTITY | 0.96+ |
one | QUANTITY | 0.96+ |
CUBE | ORGANIZATION | 0.96+ |
CloudNativeCon | EVENT | 0.96+ |
today | DATE | 0.94+ |
CloudNativeCon North America 2018 | EVENT | 0.94+ |
single log line | QUANTITY | 0.93+ |
KubeCon 2018 | EVENT | 0.93+ |
thousands | QUANTITY | 0.92+ |
first | QUANTITY | 0.91+ |
GDPR | TITLE | 0.91+ |
about 50 people | QUANTITY | 0.91+ |
Container World | ORGANIZATION | 0.91+ |
day three | QUANTITY | 0.9+ |
this year | DATE | 0.9+ |
two major startups | QUANTITY | 0.9+ |
three | QUANTITY | 0.89+ |
Edge | TITLE | 0.88+ |
DevOps | TITLE | 0.88+ |
EU | ORGANIZATION | 0.87+ |
about a year ago | DATE | 0.86+ |
a week | QUANTITY | 0.86+ |
Elasticsearch | TITLE | 0.85+ |