Breaking Analysis: Databricks faces critical strategic decisions…here’s why
>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Spark became a top level Apache project in 2014, and then shortly thereafter, burst onto the big data scene. Spark, along with the cloud, transformed and in many ways, disrupted the big data market. Databricks optimized its tech stack for Spark and took advantage of the cloud to really cleverly deliver a managed service that has become a leading AI and data platform among data scientists and data engineers. However, emerging customer data requirements are shifting in a direction that will cause modern data platform players generally, and Databricks specifically, we think, to make some key directional decisions and perhaps even reinvent themselves. Hello and welcome to this week's wikibon theCUBE Insights, powered by ETR. In this Breaking Analysis, we're going to do a deep dive into Databricks. We'll explore its current impressive market momentum. We're going to use some ETR survey data to show that, and then we'll lay out how customer data requirements are changing and what the ideal data platform will look like in the midterm future. We'll then evaluate core elements of the Databricks portfolio against that vision, and then we'll close with some strategic decisions that we think the company faces. And to do so, we welcome in our good friend, George Gilbert, former equities analyst, market analyst, and current Principal at TechAlpha Partners. George, good to see you. Thanks for coming on. >> Good to see you, Dave. >> All right, let me set this up. We're going to start by taking a look at where Databricks sits in the market in terms of how customers perceive the company and what its momentum looks like. And this chart that we're showing here is data from ETS, the Emerging Technology Survey of private companies. The N is 1,421. 
What we did is we cut the data on three sectors, analytics, database-data warehouse, and AI/ML. The vertical axis is a measure of customer sentiment, which evaluates an IT decision maker's awareness of the firm and the likelihood of engaging and/or purchase intent. The horizontal axis shows mindshare in the dataset, and we've highlighted Databricks, which has been a consistent high performer in this survey over the last several quarters. And by the way, just as an aside, as we previously reported, OpenAI, which burst onto the scene this past quarter, leads all names, but Databricks is still prominent. You can see that ETR shows some open source tools for reference, but as far as firms go, Databricks is very impressively positioned. Now, let's see how they stack up to some mainstream cohorts in the data space, against some bigger, and in some cases public, companies. This chart shows net score on the vertical axis, which is a measure of spending momentum, and pervasiveness in the data set on the horizontal axis. You can see that chart insert in the upper right, that informs how the dots are plotted, net score against shared N. And that red dotted line at 40% indicates a highly elevated net score; anything above that we think is really, really impressive. And here we're just comparing Databricks with Snowflake, Cloudera, and Oracle. And that squiggly line leading to Databricks shows their path since 2021 by quarter. And you can see it's performing extremely well, maintaining an elevated net score in that range. Now it's comparable on the vertical axis to Snowflake, and it consistently is moving to the right and gaining share. Now, why did we choose to show Cloudera and Oracle? The reason is that Cloudera got the whole big data era started and was disrupted by Spark, and of course the cloud, Spark and Databricks. And Oracle, in many ways, was the target of early big data players like Cloudera. Take a listen to Cloudera CEO at the time, Mike Olson. 
This is back in 2010, first year of theCUBE, play the clip. >> Look, back in the day, if you had a data problem, if you needed to run business analytics, you wrote the biggest check you could to Sun Microsystems, and you bought a great big, single box, central server, and any money that was left over, you handed to Oracle for database licenses and you installed that database on that box, and that was where you went for data. That was your temple of information. >> Okay? So Mike Olson implied that monolithic model was too expensive and inflexible, and Cloudera set out to fix that. But the best laid plans, as they say, George, what do you make of the data that we just shared? >> So where Databricks really came up out of, sort of, Cloudera's tailpipe was they took big data processing, made it coherent, made it a managed service so it could run in the cloud. So it relieved customers of the operational burden. Where they're really strong, and where their traditional meat and potatoes or bread and butter is, is the predictive and prescriptive analytics, that is, building and training and serving machine learning models. They've tried to move into traditional business intelligence, the more traditional descriptive and diagnostic analytics, but they're less mature there. So what that means is, the reason you see Databricks and Snowflake kind of side by side is there are many, many accounts that have both, Snowflake for business intelligence, Databricks for AI machine learning. Where Snowflake, I'm sorry, where Databricks also did really well was in core data engineering, refining the data, the old ETL process, which kind of turned into ELT, where you load it into the analytic repository in raw form and refine it. And so people have really used both, and each is trying to get into the other. >> Yeah, absolutely. We've reported on this quite a bit. Snowflake, kind of moving into the domain of Databricks and vice versa. 
And the last bit of ETR evidence that we want to share in terms of the company's momentum comes from ETR's Round Tables. They're run by Erik Bradley and former Gartner analyst Daren Brabham, George, your colleague back at Gartner. And what we're going to show here is some direct quotes of IT pros in those Round Tables. There's a data science head and a CIO as well. Just to make a few call outs here, we won't spend too much time on it, but starting at the top, like all of us, we can't talk about Databricks without mentioning Snowflake. Those two get us excited. The second comment zeros in on the flexibility and the robustness of Databricks from a data warehouse perspective. And then the last point is, despite competition from cloud players, Databricks has reinvented itself a couple of times over the years. And George, we're going to lay out today a scenario that perhaps calls for Databricks to do that once again. >> Their big opportunity, and the big challenge for every tech company, is managing a technology transition. The transition that we're talking about is something that's been bubbling up, but it's really epochal. For the first time in 60 years, we're moving from an application-centric view of the world to a data-centric view, because decisions are becoming more important than automating processes. So let me let you sort of develop that. >> Yeah, so let's talk about that here. We're going to put up some bullets on precisely that point and the changing customer environment. So you've got IT stacks that are shifting, as George just said, from application-centric silos to data-centric stacks, where the priority is shifting from automating processes to automating decisions. You know, look at RPA, and there's still a lot of automation going on, but the focus on application centricity, with the data locked into those apps, that's changing. 
Data has historically been on the outskirts in silos, but organizations, you think of Amazon, think Uber, Airbnb, they're putting data at the core, and logic is increasingly being embedded in the data instead of the reverse. In other words, today, the data's locked inside the app, which is why you need to extract that data and stick it into a data warehouse. The point, George, is we're putting forth this new vision for how data is going to be used. And you've used this Uber example to underscore the future state. Please explain? >> Okay, so this is hopefully an example everyone can relate to. The idea is first, you're automating things that are happening in the real world and decisions that make those things happen autonomously without humans in the loop all the time. So to use the Uber example, on your phone, you call a car, you call a driver. Automatically, the Uber app then looks at what drivers are in the vicinity, what drivers are free, matches one, calculates an ETA to you, calculates a price, calculates an ETA to your destination, and then directs the driver once they're there. The point of this is that that cannot happen in an application-centric world very easily because all these little apps, the drivers, the riders, the routes, the fares, those call on data locked up in many different apps, but they have to sit on a layer that makes it all coherent. >> But George, so if Uber's doing this, doesn't this tech already exist? Isn't there a tech platform that does this already? >> Yes, and the mission of the entire tech industry is to build services that make it possible to compose and operate similar platforms and tools, but with the skills of mainstream developers in mainstream corporations, not the rocket scientists at Uber and Amazon. >> Okay, so we're talking about horizontally scaling across the industry, and actually giving a lot more organizations access to this technology. 
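To make the Uber example above concrete, here is a minimal, purely illustrative sketch of that kind of data-centric dispatch logic, where drivers, riders, ETAs and fares are all derived from one shared data layer rather than living in separate apps. The entities, speeds and rates are invented for illustration; this is not Uber's actual system.

```python
import math

# Illustrative shared data layer: driver entities visible to every "app".
drivers = [
    {"id": "d1", "free": True,  "pos": (0.0, 0.0)},
    {"id": "d2", "free": True,  "pos": (3.0, 4.0)},
    {"id": "d3", "free": False, "pos": (0.5, 0.5)},
]

def match_driver(rider_pos, drivers, speed_kmh=30.0, rate_per_km=1.5):
    """Pick the nearest free driver, then derive ETA and fare from the same data."""
    free = [d for d in drivers if d["free"]]
    best = min(free, key=lambda d: math.dist(d["pos"], rider_pos))
    km = math.dist(best["pos"], rider_pos)
    eta_min = km / speed_kmh * 60
    return {"driver": best["id"], "eta_min": round(eta_min, 1), "fare": round(km * rate_per_km, 2)}

print(match_driver((0.0, 1.0), drivers))  # {'driver': 'd1', 'eta_min': 2.0, 'fare': 1.5}
```

The point of the sketch is that matching, pricing and routing are one pass over shared entities, which is hard to replicate when each of those facts is locked inside a separate application.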
So by way of review, let's summarize the trend that's going on today in terms of the modern data stack that is propelling the likes of Databricks and Snowflake, which we just showed you in the ETR data, and is really a tailwind for them. So the trend is toward this common repository for analytic data. That could be multiple virtual data warehouses inside of Snowflake, but you're in that Snowflake environment, or Lakehouses from Databricks, or multiple data lakes. And we've talked about what JP Morgan Chase is doing with the data mesh and gluing data lakes together. You've got various public clouds playing in this game, and then the data is annotated to have a common meaning. In other words, there's a semantic layer that enables applications to talk to the data elements and know that they have common and coherent meaning. So George, the good news is this approach is more effective than the legacy monolithic models that Mike Olson was talking about, so what's the problem with this in your view? >> So today's data platforms added immense value 'cause they connected the data that was previously locked up in these monolithic apps or on all these different microservices, and that supported traditional BI and AI/ML use cases. But now if we want to build apps like Uber or Amazon.com, where they've got essentially an autonomously running supply chain and e-commerce app where humans only care for and feed it, but the thing itself is figuring out what to buy, when to buy, where to deploy it, when to ship it, we need a semantic layer on top of the data. So that, as you were saying, the data coming from all those different apps is integrated, not just connected, but means the same thing. And the issue is whenever you add a new layer to a stack to support new applications, there are implications for the already existing layers, like can they support the new layer and its use cases? 
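The "annotated to have a common meaning" idea above can be sketched very simply: each source app names the same concept differently, and a semantic mapping rewrites records into shared terms so cross-app logic becomes trivial. The app names, field names and mapping structure here are all hypothetical, invented just to illustrate the mechanism.

```python
# Hypothetical semantic layer: per-app field names mapped to one shared meaning.
semantic_map = {
    "orders_app":  {"cust_id": "customer", "amt_usd": "revenue"},
    "billing_app": {"customer_key": "customer", "billed": "revenue"},
}

def to_canonical(source, record):
    """Rewrite a source record into shared semantic terms."""
    mapping = semantic_map[source]
    return {mapping.get(k, k): v for k, v in record.items()}

rows = [to_canonical("orders_app", {"cust_id": "c1", "amt_usd": 100}),
        to_canonical("billing_app", {"customer_key": "c1", "billed": 40})]

# Once both records "mean the same", cross-app logic is a one-liner.
revenue = sum(r["revenue"] for r in rows if r["customer"] == "c1")
print(revenue)  # 140
```

Without the mapping, the summing logic would have to know every app's private vocabulary; with it, the data is integrated, not just connected.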
So for instance, if you add a semantic layer that embeds app logic with the data rather than vice versa, which is what we've been talking about and the reverse has been the case for 60 years, then the new data layer faces challenges: the way you manage that data, the way you analyze that data, is not supported by today's tools. >> Okay, so actually Alex, bring me up that last slide if you would. I mean, you're basically saying at the bottom here, today's repositories don't really do joins at scale. The future is you're talking about hundreds or thousands or millions of data connections, and today's systems, we're talking about, I don't know, 6, 8, 10 joins, and that is the fundamental problem you're saying, there's a new data era coming and existing systems won't be able to handle it? >> Yeah, one way of thinking about it is that even though we call them relational databases, when we actually want to do lots of joins or when we want to analyze data from lots of different tables, we created a whole new industry for analytic databases where you sort of munge the data together into fewer tables. So you didn't have to do as many joins because the joins are difficult and slow. And when you're going to arbitrarily join across thousands, hundreds of thousands or millions of elements, you need a new type of database. We have them, they're called graph databases, but to query them, you go back to the prerelational era in terms of their usability. >> Okay, so we're going to come back to that and talk about how you get around that problem. But let's first lay out what we think the ideal data platform of the future looks like. And again, we're going to come back to this Uber example. In this graphic that George put together, awesome, we've got three layers. The application layer is where the data products reside. The example here is drivers, rides, maps, routes, ETA, et cetera. The digital version of what we were talking about in the previous slide, people, places and things. 
The next layer is the data layer. That breaks down the silos and connects the data elements through semantics, and everything is coherent. And then the bottom layer is the legacy operational systems that feed that data layer. George, explain what's different here, the graph database element, you talk about the relational query capabilities, and why can't I just throw memory at solving this problem? >> Some of the graph databases do throw memory at the problem, and maybe without naming names, some of them live entirely in memory. And what you're dealing with is a prerelational in-memory database system where you navigate between elements, and the issue with that is we've had SQL for 50 years, so we don't have to navigate, we can say what we want without saying how to get it. That's the core of the problem. >> Okay. So if I may, I just want to drill into this a little bit. So you're talking about the expressiveness of a graph. Alex, if you'd bring that back out, the fourth bullet, expressiveness of a graph database with the relational ease of query. Can you explain what you mean by that? >> Yeah, so graphs are great because you can describe anything with a graph, that's why they're becoming so popular. Expressive means you can represent anything easily. They're conducive to, you might say, a world where we now want something like the metaverse, like a 3D world, and I don't mean the Facebook metaverse, I mean the business metaverse, where we want to capture data about everything, but we want it in context, we want to build a set of digital twins that represent everything going on in the world. And Uber is a tiny example of that. Uber built a graph to represent all the drivers and riders and maps and routes. But what you need out of a database isn't just a way to store stuff and update stuff. You need to be able to ask questions of it, you need to be able to query it. And if you go back to prerelational days, you had to know how to find your way to the data. 
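George's contrast between navigating to the data and declaring what you want can be sketched with a toy triple store: facts live as graph edges, but the query is a declarative set of patterns, and a tiny "engine" figures out the traversal and the joins. The entities and the `?variable` pattern syntax are invented for illustration (loosely in the spirit of SPARQL-style pattern matching), not any real product's API.

```python
# Facts as (subject, predicate, object) triples, i.e., a graph.
triples = [
    ("ride42", "rider",  "alice"),
    ("ride42", "driver", "d7"),
    ("d7",     "name",   "Bea"),
]

def match(pattern, binding=None):
    """Yield variable bindings for one pattern; '?x' terms are variables."""
    binding = binding or {}
    s, p, o = (binding.get(t, t) for t in pattern)  # substitute bound vars
    for ts, tp, to in triples:
        b, ok = dict(binding), True
        for want, got in ((s, ts), (p, tp), (o, to)):
            if want.startswith("?"):
                b[want] = got
            elif want != got:
                ok = False
        if ok:
            yield b

def query(*patterns):
    """Chain patterns: the 'engine' does the joins, the caller just declares them."""
    results = [{}]
    for pat in patterns:
        results = [b2 for b in results for b2 in match(pat, b)]
    return results

# "Who drove Alice's ride?" stated as constraints, no hand-written traversal.
print(query(("?ride", "rider", "alice"), ("?ride", "driver", "?d"), ("?d", "name", "?n")))
```

The navigational alternative would be code that walks ride to driver to name by hand; the declarative version just states the constraints, which is the "relational ease of query" being asked about.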
It's sort of like when you give directions to someone who didn't have a GPS system and a mapping system, you had to give them turn by turn directions. Whereas when you have a GPS and a mapping system, which is like the relational thing, you just say where you want to go, and it spits out the turn by turn directions, which, let's say, the car might follow or whoever you're directing would follow. But the point is, it's much easier in a relational database to say, "I just want to get these results. You figure out how to get it." Graph databases have not taken over the world because in some ways, querying them is taking a 50 year leap backwards. >> Alright, got it. Okay. Let's take a look at how the current Databricks offerings map to that ideal state that we just laid out. So to do that, we put together this chart that looks at the key elements of the Databricks portfolio, the core capability, the weakness, and the threat that may loom. Start with the Delta Lake, that's the storage layer, which is great for files and tables. It's got true separation of compute and storage, I want you to double click on that George, as independent elements, but it's weaker for the type of low latency ingest that we see coming in the future. And some of the threats are highlighted here. AWS could add transactional tables to S3, Iceberg adoption is picking up and could accelerate, and that could disrupt Databricks. George, add some color here please? >> Okay, so this is sort of a classic competitive forces analysis where you want to look at, what are customers demanding? What's the competitive pressure? What are the substitutes? Even what your suppliers might be pushing. Here, Delta Lake is at its core a set of transactional tables that sit on an object store. So think of it in a database system, this is the storage engine. So since S3 has been getting stronger for 15 years, you could see a scenario where they add transactional tables. 
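The "transactional tables on an object store" idea George describes can be modeled in miniature: data files are immutable objects, and an append-only transaction log records which files are currently part of the table, so readers replay the log to get a consistent snapshot. This is only a hedged illustration of the general trick, not Delta Lake's actual file format or commit protocol.

```python
import json

# Toy "object store": immutable data files, plus an ordered transaction log.
store = {"part-0.json": json.dumps([{"id": 1, "v": "a"}])}
log = []  # each entry is one atomic commit

def commit(actions):
    """Record which files are added/removed; data files themselves never mutate."""
    log.append(json.dumps(actions))

def snapshot():
    """Replay the log to find the live files, then read only those."""
    live = set()
    for entry in log:
        a = json.loads(entry)
        live |= set(a.get("add", []))
        live -= set(a.get("remove", []))
    return [row for f in sorted(live) for row in json.loads(store[f])]

commit({"add": ["part-0.json"]})
store["part-1.json"] = json.dumps([{"id": 1, "v": "b"}])
commit({"add": ["part-1.json"], "remove": ["part-0.json"]})  # an "update" = swap files

print(snapshot())  # [{'id': 1, 'v': 'b'}]
```

Because the only mutable thing is the log, the object store needs no in-place updates, which is exactly why a cloud provider could plausibly offer this directly on top of a service like S3.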
We have an open source alternative in Iceberg, which Snowflake and others support. But at the same time, Databricks has built an ecosystem out of tools, their own and others, that read and write to Delta tables, and that's what makes the Delta Lake an ecosystem. So they have a catalog, the whole machine learning tool chain talks directly to the data here. That was their great advantage because in the past with Snowflake, you had to pull all the data out of the database before the machine learning tools could work with it; that was a major shortcoming. They fixed that. But the point here is that even before we get to the semantic layer, the core foundation is under threat. >> Yep. Got it. Okay. We've got a lot of ground to cover. So we're going to take a look at the Spark Execution Engine next. Think of that as the refinery that runs really efficient batch processing. That's kind of what disrupted Hadoop in a large way, but it's not Python friendly, and that's an issue because the data science and the data engineering crowd are moving in that direction, and/or they're using DBT. George, we had Tristan Handy on at Supercloud, really interesting discussion that you and I did. Explain why this is an issue for Databricks? >> So once the data lake was in place, what people did was they refined their data in batch, and Spark has always had streaming support and it's gotten better. The underlying storage, as we've talked about, is an issue. But basically they took raw data, then they refined it into tables that were like customers and products and partners. And then they refined that again into what were like gold artifacts, which might be business intelligence metrics or dashboards, which were collections of metrics. But they were running it on the Spark Execution Engine, which is a Java-based engine, or it's running on a Java-based virtual machine, which means all the data scientists and the data engineers who want to work with Python are really working in sort of oil and water. 
Like if you get an error in Python, you can't tell whether the problem is in Python or whether it's in Spark. There's just an impedance mismatch between the two. And then at the same time, the whole world is now gravitating towards DBT because it's a very nice and simple way to compose these data processing pipelines, and people are using either SQL in DBT or Python in DBT, and that kind of is a substitute for doing it all in Spark. So it's under threat even before we get to that semantic layer. It so happens that DBT itself is becoming the authoring environment for the semantic layer with business intelligence metrics. But again, this is the second element that's under direct substitution and competitive threat. >> Okay, let's now move down to the third element, which is Photon. Photon is Databricks' query engine for its BI lakehouse, which has integration with the Databricks tooling, which is very rich, but it's newer. And it's also not well suited for high concurrency and low latency use cases, which we think are going to increasingly become the norm over time. George, the call out threat here is customers want to connect everything to a semantic layer. Explain your thinking here and why this is a potential threat to Databricks? >> Okay, so two issues here. What you were touching on, which is the high concurrency, low latency, when people are running like thousands of dashboards and data is streaming in, that's a problem because a SQL data warehouse query engine, something like that, matures over five to 10 years. It's one of these things, the joke that Andy Jassy makes, just in general, he's really talking about Azure, but there's no compression algorithm for experience. The Snowflake guys started more than five years earlier, and for a bunch of reasons, that lead is not something that Databricks can shrink. They'll always be behind. So that's why Snowflake has transactional tables now, and we can get into that in another show. 
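Returning to the DBT point above: its appeal is composing pipelines as small, declarative models that reference each other, with the framework resolving the execution order. Here is a hedged, from-scratch sketch of that composition idea in plain Python; the `model`/`run` names and the decorator mechanics are invented for illustration and are not dbt's actual API.

```python
# Illustrative registry of "models": named transformations plus their dependencies.
models = {}

def model(name, deps):
    """Register a transformation and the upstream models it references."""
    def register(fn):
        models[name] = (deps, fn)
        return fn
    return register

@model("raw_orders", deps=[])
def raw_orders():
    return [{"sku": "x", "qty": 2}, {"sku": "y", "qty": 3}]

@model("order_totals", deps=["raw_orders"])
def order_totals(raw_orders):
    return {"total_qty": sum(r["qty"] for r in raw_orders)}

def run(name, cache=None):
    """Resolve and execute upstream models first, memoizing results."""
    cache = {} if cache is None else cache
    if name not in cache:
        deps, fn = models[name]
        cache[name] = fn(*[run(d, cache) for d in deps])
    return cache[name]

print(run("order_totals"))  # {'total_qty': 5}
```

The draw, and the substitution threat to doing everything inside Spark, is that each model stays a small declarative unit while the dependency graph handles the orchestration.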
But the key point is, near term, it's struggling to keep up with the use cases that are core to business intelligence, which is highly concurrent, lots of users doing interactive query. But then when you get to a semantic layer, that's when you need to be able to query data that might have thousands or tens of thousands or hundreds of thousands of joins. And a traditional SQL query engine is just not built for that. That's the core problem of traditional relational databases. >> Now this is a quick aside. We always talk about Snowflake and Databricks in sort of the same context. We're not necessarily saying that Snowflake is in a position to tackle all these problems. We'll deal with that separately. So we don't mean to imply that, but we're just sort of laying out some of the things that Databricks customers, we think, need to be thinking about and having conversations with Databricks about, and we hope to have them as well. We'll come back to that in terms of strategic options. But finally, when we come back to the table, we have Databricks' AI/ML Tool Chain, which has been an awesome capability for the data science crowd. It's comprehensive, it's a one-stop shop solution, but the kicker here is that it's optimized for supervised model building. And the concern is that foundation models like GPT could cannibalize the current Databricks tooling, but George, can't Databricks, like other software companies, integrate foundation model capabilities into its platform? >> Okay, so the sound bite answer to that is sure, IBM 3270 terminal apps could call out to a graphical user interface when running in an emulator on an XT, but they're not exactly good citizens in that world. The core issue is Databricks has this wonderful end-to-end tool chain for training, deploying, monitoring, running inference on supervised models. 
But the paradigm there is the customer builds and trains and deploys each model for each feature or application. In a world of foundation models, which are pre-trained and unsupervised, the entire tool chain is different. So it's not like Databricks can junk everything they've done and start over with all their engineers. They have to keep maintaining what they've done in the old world, but they have to build something new that's optimized for the new world. It's a classic technology transition, and their mentality appears to be, "Oh, we'll support the new stuff from our old stuff." Which is suboptimal, and as we'll talk about, their biggest patron and the company that put them on the map, Microsoft, really stopped working on their old stuff three years ago so that they could build a new tool chain optimized for this new world. >> Yeah, and so let's sort of close with what we think the options are and the decisions that Databricks has for its future architecture. They're smart people. I mean, we've had Ali Ghodsi on many times, super impressive. I think they've got to be keenly aware of the limitations and what's going on with foundation models. But at any rate, here in this chart, we lay out sort of three scenarios. One is re-architect the platform by incrementally adopting new technologies. An example might be to layer a graph query engine on top of its stack. They could license key technologies like a graph database, they could get aggressive on M&A and buy in relational knowledge graphs, semantic technologies, vector database technologies. George, as David Floyer always says, "A lot of ways to skin a cat." We've seen companies, think about how EMC maintained its relevance through M&A for many, many years. George, give us your thoughts on each of these strategic options? >> Okay, I find this question the most challenging 'cause remember, I used to be an equity research analyst. 
I worked for Frank Quattrone, we were one of the top tech shops in the banking industry, although this is 20 years ago. But the M&A team was the top team in the industry and everyone wanted them on their side. And I remember going to meetings with these CEOs, where Frank and the bankers would say, "You want us for your M&A work because we can do better." And they really could do better. But in software, it's not like with EMC in hardware because with hardware, it's easier to connect different boxes. With software, the whole point of a software company is to integrate and architect the components so they fit together and reinforce each other, and that makes M&A harder. You can do it, but it takes a long time to fit the pieces together. Let me give you examples. If they put a graph query engine, let's say something like TinkerPop, on top of, I don't even know if it's possible, but let's say they put it on top of Delta Lake, then you have this graph query engine talking to their storage layer, Delta Lake. But if you want to do analysis, you got to put the data in Photon, which is not really ideal for highly connected data. If you license a graph database, then most of your data is in the Delta Lake and how do you sync it with the graph database? If you do sync it, you've got data in two places, which kind of defeats the purpose of having a unified repository. I find this semantic layer option in number three actually more promising, because that's something that you can layer on top of the storage layer that you have already. You just have to figure out then how to have your query engines talk to that. What I'm trying to highlight is, it's easy as an analyst to say, "You can buy this company or license that technology." But the really hard work is making it all work together and that is where the challenge is. >> Yeah, and well look, I thank you for laying that out. We've seen it, certainly Microsoft and Oracle. 
I guess you might argue that, well, Microsoft had a monopoly in its desktop software and was able to throw off cash for a decade plus while its stock was going sideways. Oracle had won the database wars and had amazing margins and cash flow to be able to do that. Databricks hasn't even gone public yet, but I want to close with some of the players to watch. Alex, if you'd bring that back up, number four here. AWS, we talked about some of their options with S3, and it's not just AWS, it's blob storage, object storage generally. Microsoft, as you sort of alluded to, was an early go-to market channel for Databricks. We didn't address that really. So maybe in the closing comments we can. Google obviously, Snowflake of course, we're going to dissect their options in a future Breaking Analysis. dbt Labs, where do they fit? Bob Muglia's company, Relational.ai, why are these players to watch, George, in your opinion? >> So everyone is trying to assemble and integrate the pieces that would make building data applications, data products, easy. And the critical part isn't just assembling a bunch of pieces, which is traditionally what AWS did. That's a Unix ethos, which is, we give you the tools, you put 'em together, 'cause you then have the maximum choice and maximum power. So what the hyperscalers are doing is they're taking their key value stores, in the case of AWS it's DynamoDB, in the case of Azure it's Cosmos DB, and each is putting a graph query engine on top of those. So they have a unified storage and graph database engine, like all the data would be collected in the key value store. Then you have a graph database, and that's how they're going to be presenting a foundation for building these data apps. dbt Labs is putting a semantic layer on top of data lakes and data warehouses, and as we'll talk about, I'm sure, in the future, that makes it easier to swap out the underlying data platform or swap in new ones for specialized use cases. 
Snowflake, what they're doing, they're so strong in data management, and with their transactional tables, what they're trying to do is take in the operational data that used to be in the province of state stores like MongoDB and say, "If you manage that data with us, it'll be connected to your analytic data without having to send it through a pipeline." And that's hugely valuable. Relational.ai is the wildcard, 'cause what they're trying to do, it's almost like a holy grail, where you're trying to take the expressiveness of connecting all your data in a graph but making it as easy to query as you've always had it in a SQL database, or I should say, in a relational database. And if they do that, it's sort of like, it'll be as easy to program these data apps as a spreadsheet was compared to procedural languages, like BASIC or Pascal. That's the implication of Relational.ai. >> Yeah, and again, we talked before, why can't you just throw this all in memory? We're talking, in that example, of really getting down to differences in how you lay the data out on disk, in a really new database architecture, correct? >> Yes. And that's why it's not clear that you could take a data lake, or even a Snowflake, and put a relational knowledge graph on those. You could potentially put a graph database on them, but it'll be compromised, because to really do what Relational.ai has done, which is the ease of relational on top of the power of graph, you actually need to change how you're storing your data on disk or even in memory. So in other words, it's not like, oh, we can add graph support to Snowflake, 'cause if you did that, or did it in your data lake, you'd have to change how the data is physically laid out. And then that would break all the tools that talk to that currently. >> What in your estimation, is the timeframe where this becomes critical for a Databricks and potentially Snowflake and others? 
I mentioned earlier midterm, are we talking three to five years here? Are we talking end of decade? What's your radar say? >> I think something surprising is going on that's going to sort of come up the tailpipe and take everyone by storm. All the hype is around business intelligence metrics, which is what we used to put in our dashboards, bookings, billings, revenue, customers, those things. Those were the key artifacts that used to live as definitions in your BI tools, and DBT has basically created a standard for defining those so they live in your data pipeline, or rather they're defined in your data pipeline and executed in the data warehouse or data lake in a shared way, so that all tools can use them. This sounds like a digression; it's not. All this stuff about data mesh, data fabric, all that's going on is, we need a semantic layer, and the business intelligence metrics are defining common semantics for your data. And I think we're going to find, by the end of this year, that metrics are how we annotate all our analytic data to start adding common semantics to it. And we're going to find this semantic layer, it's not three to five years off, it's going to be staring us in the face by the end of this year. >> Interesting. And of course SVB today was shut down. We're seeing serious tech headwinds, and oftentimes in these sort of downturns, or flat turns, which feels like this could be going on for a while, we emerge with a lot of new players and a lot of new technology. George, we've got to leave it there. Thank you to George Gilbert for excellent insights and input for today's episode. I want to thank Alex Myerson, who's on production and manages the podcast, and of course Ken Schiffman as well. Kristin Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our EIC over at Siliconangle.com, he does some great editing. Remember, all these episodes are available as podcasts. 
Wherever you listen, all you got to do is search Breaking Analysis Podcast, we publish each week on wikibon.com and siliconangle.com, or you can email me at David.Vellante@siliconangle.com, or DM me @DVellante. Comment on our LinkedIn post, and please do check out ETR.ai, great survey data, enterprise tech focus, phenomenal. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, and we'll see you next time on Breaking Analysis.
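George's point about metric definitions moving out of BI tools and into the pipeline can be sketched in miniature. The schema and the `monthly_revenue` metric below are invented for illustration, and SQLite stands in for the warehouse; this is not dbt's actual API, just the shape of the idea: define the metric once, next to the data, and let every tool execute the same definition.

```python
import sqlite3

# Illustrative sketch of the "semantic layer" idea: define a business metric
# (here, monthly revenue) ONCE, next to the data, so every downstream tool
# executes the same definition instead of re-implementing it. The schema and
# metric are invented for illustration; this is not dbt's actual API.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bookings (customer TEXT, month TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO bookings VALUES (?, ?, ?)",
    [("acme", "2023-01", 100.0), ("acme", "2023-02", 100.0),
     ("globex", "2023-01", 250.0)],
)

# The shared metric definition lives in the warehouse as a view.
conn.execute("""
    CREATE VIEW metric_monthly_revenue AS
    SELECT month, SUM(amount) AS revenue
    FROM bookings GROUP BY month
""")

def monthly_revenue(connection, month):
    """Any 'tool' (dashboard, notebook, API) calls the same definition."""
    row = connection.execute(
        "SELECT revenue FROM metric_monthly_revenue WHERE month = ?", (month,)
    ).fetchone()
    return row[0] if row else 0.0

print(monthly_revenue(conn, "2023-01"))  # -> 350.0
print(monthly_revenue(conn, "2023-02"))  # -> 100.0
```

Any dashboard, notebook, or API that queries the view gets the same number, which is the whole point of a shared semantic layer.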
Holger Mueller, Constellation Research | AWS re:Invent 2022
(upbeat music) >> Hey, everyone, welcome back to Las Vegas. "theCube" is on our fourth day of covering AWS re:Invent, live from the Venetian Expo Center. This week has been amazing. We've created a ton of content, as you know, 'cause you've been watching. But there's been north of 55,000 people here, hundreds of thousands online. We've had amazing conversations across the AWS ecosystem. Lisa Martin, Paul Gillan. Paul, what's your, kind of, take on day four of the conference? It's still highly packed. >> Oh, there's lots of people here. (laughs) >> Yep. Unusual for the final day of a conference. I think Werner Vogels, if I'm pronouncing it right, kicked things off today when he talked about asymmetry and how the world is, you know, asymmetric. We build symmetric software, because it's convenient to do so, but asymmetric software actually scales and evolves much better. And I think that that was a conversation starter for a lot of what people are talking about here today, which is how the cloud changes the way we think about building software. >> Absolutely does. >> Our next guest, Holger Mueller, that's one of his key areas of focus. And Holger, welcome, thanks for joining us on "theCube". >> Thanks for having me. >> What did you take away from the keynote this morning? >> Well, how do you feel on the final day of the marathon, right? We're like 23, 24 miles. Hit the wall yesterday, right? >> We are going strong, Holger. And, of course, >> Yeah. >> you guys, we can either talk about business transformation with cloud or the World Cup. >> Or we can do both. >> The World Cup, hands down. World Cup. (Lisa laughs) Germany's out, I'm unbiased now. They just got eliminated. >> Spain is out now. >> What will the U.S. do against the Netherlands tomorrow? >> They're going to win. What's your forecast? U.S. will win? >> They're going to win 2 to 1. >> What do you say, 2:1? >> I'm optimistic, but realistic. >> 3? >> I think Netherlands. >> Netherlands will win? >> 2 to nothing.
>> Okay, I'll vote for the U.S.. >> Okay, okay. >> 3:1 for the U.S.. >> Be optimistic. >> Root for the U.S.. >> Okay, I like that. >> Hope for the best wherever you work. >> Tomorrow you'll see how much of soccer experts we are. >> If your prediction was right. (laughs) >> (laughs) Ja, ja. Or yours was right, right, so. Cool. No, but the event, I think the event is great to have 50,000 people. Biggest event of the year again, right? Not yet the 70,000 we had in 2019, but it's great to have the energy. I've never seen the show floor going all the way down like this, right? >> I haven't either. >> I've never seen that. I think it's a record. Often vendors get the space here and they have the keynote area, and the entertainment area, >> Yeah. >> and the food area, and then there's an exposition, right? This is packed. >> It's packed. >> Maybe it'll pay off. >> You don't see the big empty booths that you often see. >> Oh no. >> Exactly, exactly. You know, the white spaces and so on. >> No. >> Right. >> Which is a good thing. >> There's lots of energy, which is great. And today's, of course, the developer day. Like you said before, right now Vogels is a rockstar in the developer community, right. Revered visionary on what has been built, right? And he's becoming a little professorial, is my feeling, right. He had these moments before too, when he was justifying how AWS moved off the Oracle database, about the importance of data warehouses and structures and why DynamoDB is better, and so on. But he had a large part of this too, and this came right across the keynotes, right? Adam Selipsky talking about Antarctica, right? Scott against Amundsen and what went wrong. He didn't tell us, by the way, which often the tech winners forget: Scott banked on technology. He had motorized sleds, which failed after three miles. So that's not the story they tell. The technology let everybody down. Everybody went back to ponies and horses and dogs.
>> Maybe it goes back to this asynchronous behavior. >> Yeah. >> The way of nature. >> And, yesterday, Swami talking about the bridges, right? The root bridges, right? >> Right. >> So, how could Werner pick up with his video at the beginning? >> Yeah. >> And then talk about space and other things? So I think it's important to educate about event-based architecture, right? And we see this massive transformation. Modern software has to be event based, right? Because that's how things work, and we didn't think like this before. I see this massive transformation in my other research area, in other platforms, in the HR space, where payrolls are being rebuilt completely. And payroll used to be one of the three peaks of ERP, right? Before the cloud, you would size your ERP machine for the financial close, for running the payroll, and for doing an MRP manufacturing run if you're manufacturing. God forbid you ran those three at the same time; your machine wouldn't be able to do that, right? So it was like, start the engine, start the boosters, we are running payroll. And now the modern payroll designs, like you see from ADP or from Ceridian, they're taking every payroll-relevant event. You check in, time wise, right? You go overtime, you take a day of vacation, and right away they trigger and run the payroll, so it's up to date for you, which, in this economy, is super important, because we have more gig workers, we have more contractors, we have employees who are leaving suddenly, right? The great resignation, which is happening. So, from that perspective, it's the modern way of building software. So it's great to see Werner showing that. The dirty little secret, though, is that it's more efficient software for the cloud platform vendor too. It takes less resources, keeps less committed, so it's a much more scalable architecture. You can move the events, you can work asynchronously much better. And the biggest showcase, right?
What's the biggest transactional showcase for an eventually consistent, asynchronous, transactional application? I know it's a mouthful, but it's Amazon, right? AWS, Amazon. You buy something on Amazon, they tell you it's going to come tomorrow. >> Yep. >> They don't know at that time that it's going to come tomorrow, because it's not transactionally consistent, right? We're giving every ERP vendor, who lives in transactional work, nightmares of course, (Lisa laughs) but for them it's like, yes, we have a delivery promise, right? But they can come back to you and say, "Sorry, we couldn't make it, the delivery didn't work," and so on. "It's going to be a new date. We are out of the product," right? So these kinds of event-based, asynchronous things are more and more what's going to scale around the world. It's going to be efficient for everybody, it's going to be better customer experience, better employee experience, ultimately better user experience, and it's going to be better for the enterprise to build, but we have to learn to build it. So the big announcement was about the environment to build better event-based applications, from today. >> Talk about... This is the first re:Invent... Well, actually, I'm sorry, it's the second re:Invent under Adam Selipsky. >> Right. Adam Selipsky, yep. >> But his first year. >> Right. >> We're hearing a lot of momentum. What's your takeaway on what he delivered, and the direction Amazon is going, their vision? >> Ja, I think compared to the Jassy times, right, we didn't see the hockey stick slide, right? With a number of innovations and releases. That was done in 2019 too, right? So I think it's a more pedestrian pace which, ultimately, is good for everybody, because it means that when software vendors go slower, they do less width, but more depth. >> Yeah. >> And depth is what customers need. So Amazon's building more on the depth side, which is good news.
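Holger's payroll example a moment ago, every payroll-relevant event updating the running pay immediately instead of waiting for a batch run, can be sketched as a toy fold over events. The rates and event types here are invented for illustration; this is not ADP's or Ceridian's actual model.

```python
from collections import defaultdict

# A toy sketch of the event-based payroll design described above: instead of
# one big batch run, every payroll-relevant event (regular hours, overtime,
# vacation) updates the employee's running pay immediately, so the number is
# always current. Rates and event types are invented for illustration.
HOURLY_RATE = 30.0
OVERTIME_MULTIPLIER = 1.5

running_pay = defaultdict(float)

def apply_event(employee, event_type, hours):
    """Fold a single payroll event into the always-up-to-date total."""
    if event_type == "regular":
        running_pay[employee] += hours * HOURLY_RATE
    elif event_type == "overtime":
        running_pay[employee] += hours * HOURLY_RATE * OVERTIME_MULTIPLIER
    elif event_type == "vacation":
        running_pay[employee] += hours * HOURLY_RATE  # paid time off
    return running_pay[employee]

# Events arrive one at a time; the payroll figure is correct after each one.
apply_event("jo", "regular", 8)    # 8 h * 30.0  = 240.0
apply_event("jo", "overtime", 2)   # 2 h * 45.0  =  90.0
print(running_pay["jo"])           # -> 330.0
```

The contrast with the old design is that there is no "payroll run" to schedule; the state is simply never stale.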
I also think, and that's not official, right, but Adam Selipsky came from Tableau, right? >> Yeah. >> So he is a BI analytics guy. So it's no surprise we have three data lake offerings, right? A security data lake, a healthcare data lake, and a supply chain data lake. When, again, the announcements mentioned them, I was like, "Oh, my god, Amazon's coming to supply chain," but it's actually data lakes, which is an interesting part. But I think it's not a surprise with someone who comes heavily out of the analytics BI world; if I was pitching internally to him, maybe I'd do something which he is familiar with, and I think that's what we see in the major announcements of his keynote on Tuesday. >> I mean, speaking of analytics, one of the big announcements early on was Amazon is trying to bridge the gap between Aurora. >> Yep. >> And Redshift. >> Right. >> And setting up for continuous pipelines, continuous integration. >> Right. >> Seems to be a trend that is common to all database players. I mean, Oracle is doing the same thing. SAP is doing the same thing. MariaDB. Do you see the distinction between transactional and analytical databases going away? >> It's coming together, right? Certainly coming together, from that perspective, but there's a fundamentally different starting point, right, with the big-idea part. The universal database, which does everything for you in one system, versus the suite of specialized databases, right? Oracle, with the classic Oracle database, is in the universal database camp. On the other side you have Amazon, which built database after database. This is one of the first few Amazon re:Invents, and it's my 10th, where there was no new database announced. Right? >> No. >> So it was always, add another specialty one- >> I think they have enough. >> It's a great approach. They have enough, right? So it's a great approach to build something quick, which Amazon is all about.
It's not so great when customers want to leverage things. And, ultimately, I think with Selipsky, AWS is waking up to the enterprise saying, "I have all these different databases, and what is in them matters to me." >> Yeah. >> "So how can I get this better?" So, no surprise, between the two most popular databases, Aurora and Redshift, they're bringing together the data with some out-of-the-box parts. I think it's kind of, like, silly when Swami's saying, "Hey, no ETL." (chuckles) Right? >> Yeah. >> There shouldn't be ETL between products from the same vendor, right? There should be data pipes from that perspective anyway. So it looks like, on the overall value proposition database side, AWS is moving closer to the universal database on the Oracle side, right? Because if you lift the hood of the universal database, of course, you see, well, there's a different database here, a different part there, you have to configure stuff, which is also the case, but it's one product, right, so. >> With that shift, talk about the value that's going to be in it for customers, regardless of industry. >> Well, the value for customers is great, because when software vendors, or platform vendors, go in depth, you get more functionality, you get more maturity, you get easier ways of setting up the whole thing. You get ways of maintaining things. And you, ultimately, get lower TCO to build them, which is super important for enterprises. Because, here, this is the developer cloud, right? Developers love AWS. Developers are scarce, expensive. They might not want to work for you, right? So developer velocity, getting more done with the same amount of developers, or getting the same done with fewer developers, is super crucial, super important. So this is all good news for enterprises banking on AWS, providing them more efficiency, more automation, out of the box. >> Some of your customer conversations this week, talk to us about some of the feedback.
What's the common denominator amongst customers right now? >> Customers are excited. First of all, like, first event, again in person, large, right? >> Yeah. >> People can travel, people meet each other, meet in person. They have a good handle around the complexity, which used to be a huge challenge in the past, because people would say, "Do I do this?" I know so many CXOs saying, "Yeah, I want to build, say, something in IoT with AWS. The first reference built it like this, the next reference built it completely differently. The third one built it completely differently again. So now I'm doubting if my team has the skills to build things successfully, because will they be smart enough, like your teams? Because there's no repeatability." And that repeatability is going to be very important for AWS, to come up with some higher-level packaging and version numbers, right? But customers like that message. They like that things are working better together. They're not missing the big announcement, right? One of the traditional things of AWS, and they were even proud of it as a system, Jassy was saying, "We look at the IT spend, and if we see something which is, like, high margin for us and not served well, we announce something there," right? So QuickSight, Workspaces, those were all areas where AWS went after traditional IT spend and had an offering. We didn't have that in 2019, we didn't have it in 2020 or last year, and we don't have it now. So something is changing on the AWS side. It's a little bit too early to figure out what, but they're not biting off as many big things as they used to in the past. >> Right. >> Yep. >> Did you get the sense that... Keith Townsend, from "The CTO Advisor", was on earlier. >> Yep. >> And he said he's been to many re:Invents, as you have, and he said that he got the sense that this is Amazon's chance to do a victory lap, as he called it. That this is a way for Amazon to reinforce its cloud leadership.
>> And really, kind of, establish that nobody can come close to them, nobody can compete with them. >> You don't think that- >> I don't think that's at all... I mean, love Keith, he's a great guy, but I don't think that's the mindset at all, right? So, I mean, Jassy was always saying, "It's still the morning of the day in the cloud," right? They're far away from being done. They're obsessed over getting it right. They do more work with the analysts. They think they've got something right. And I like the passion, from that perspective. So I think Amazon's far from being complacent. And the area which is the biggest miss, right, the only thing where Amazon truly has floundered, always floundered, is the AI space, right? So, in 2018, Werner Vogels was doing more technical stuff, like "Oh, this is all about linear regression," right? And Amazon was late to start putting algorithms on silicon, right? They have a three-, four-year trail behind Google, who's been doing this for much, much longer with the TPU platform, and they didn't announce anything new here, so. >> But they have now. >> They're keenly aware. >> Yep. >> They now have three, or they own two, of their own hardware platforms for AI. >> Right. >> They support the Intel platform. They seem to be catching up in that area. >> It's very hard to catch up on hardware, right? Because there's release cycles, right? And just the volume. Just talking about the largest models that we have right now, the language models: Google is doing, as a side note, "Oh, we support 50 or 30 more lesser-spoken languages, which I've never even heard of, because they're underbanked and underserved, and here's the language model," right? And I think it's a little bit about the organizational DNA of a company. I'm a strong believer in that. And you have to remember, AWS comes from the retail side, right? >> Yeah. >> Their rollout of data centers follows their retail strategy. Open secret, right?
But the same thing with the scale of the AI is very, very different if you take a look over at Google, which makes sense of the internet, right? The scale right away >> Right. >> is a solution, which is a good solution for some of the DNA of AWS. Also, Microsoft Azure is good. AWS has no chance to even catch up to that at Google, right? And the lead is with Google, and it's not getting smaller, right? We didn't hear anything. I mean, so much focus on data. Why do they focus so much on data? Because data is the first step for AI. If AWS was doing a victory lap, data would've been done. They would own data, right? They would have a competitor to BigQuery Omni from the Google side, to get data from the different clouds. There's crickets on that topic, right? So I think they know that they're catching up on the AI side, but it's really, really hard. It's not like in software, where you can acquire someone; they couldn't acquire Nvidia. >> Not at Core Donovan. >> Might play a game, but that's not a good idea, right? So you can't... there's no shortcuts on the hardware side. As much as I'm a software guy and love software and don't like hardware, it's always a pain, right? There's no shortcuts there. And there's nothing, which I think... there's a new Trainium instance, of course, certainly, but they're not catching up. The distance is the same, yep. >> One of the things that's funny: one of our guests, I think it was Tuesday, right after Adam's keynote. >> Sure. >> Said that Adam Selipsky stood up on stage and talked about data for 52 minutes. >> Yeah. Right. >> It was timed, 52 minutes. >> Right. >> Huge emphasis on that. One of the things that Adam said to John Furrier, when they were able to sit down >> Yeah >> a week or so ago at an event preview, was that CIOs and CEOs are not coming to Adam to talk about technology. They want to talk about transformation. They want to talk about business transformation. >> Sure, yes, yes.
>> Talk to me, in our last couple of minutes, about what CEOs and CIOs are coming to you saying, "Holger, help us figure this out. We have to transform the business." >> Right. So we advise, and I'm going to quote our friends at Gartner, what they once called the type A companies, who use technology aggressively, right? So take everything with a grain of salt, everyone in the audience; then there are the followers and the laggards, and so on. So for them, it's really the cusp of doing AI, right? Getting that data together. It has to be in the cloud. We live in the era of infinite computing. The cloud makes computing infinite, both from a storage perspective, from a compute perspective, and from an AI perspective, and then you define new business models and create new best practices on top of that. Because, in the past, everything was finite on premise, right? We talked about the (indistinct) size. Now, in the cloud, it's just the business model to say, "Do I want to have a little more AI? Do I want to run a little more? Will it give me more insight into the business?" So that's the transformation that is happening, really. So, bringing your data together, this live conversational data. But it's not just bringing the data together; often the big win for the business is to see the data for the first time. AWS is banking on that. The supply chain product, as an example: so many disparate systems, bring them together. Big win for the business. But the win for the business, ultimately, is when you change the paradigm from the user showing up to do something, to software doing stuff for us, right? >> Right. >> We have too much of this operator paradigm. If the user doesn't show up, doesn't find the click, doesn't find where to go, nothing happens. It can't be done in the 21st century, right? Software has to look over your shoulder. >> Good point. >> Understand and do things for you: autonomous, self-driving systems. That's what future-looking CXOs will be talking about when they come to AWS and all the other cloud vendors.
>> Got it. Last question for you. We're making a sizzle reel on Instagram. >> Yeah. >> If you had, like, a phrase, or a 30-second pitch, that would describe re:Invent 2022 and the direction the company's going, what would that elevator pitch say? >> 30-second pitch? >> Yeah. >> All right, just timing. AWS is doing well. It's providing more depth, less breadth. Making things work together. It's catching up in some areas, has some interesting offerings, like the healthcare offering and the security data lake offering, which might change some things in the industry. It's staying the course and it's going strong. >> Ah, beautifully said, Holger. Thank you so much for joining Paul and me. >> Might have been too short. I don't know. (laughs) >> About 10 seconds left over. >> It was perfect, absolutely perfect. >> Thanks for having me. >> Perfect sizzle reel. >> Appreciate it. >> We appreciate your insights, what you're seeing this week, and the direction the company is going. We can't wait to see what happens in the next year. And, yeah. >> Thanks for having me. >> And of course, you've been on so many times. We know we're going to have you back. (laughs) >> Looking forward to it, thank you. >> All right, for Holger Mueller and Paul Gillan, I'm Lisa Martin. You're watching "theCube", the leader in live enterprise and emerging tech coverage. (upbeat music)
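The Aurora-to-Redshift "zero ETL" discussion above can be miniaturized: writes land in a transactional table, and an analytical aggregate stays in sync with no pipeline code in between. Here a SQLite trigger stands in for the managed replication, and the schema is invented; the real service syncs asynchronously across two separate engines.

```python
import sqlite3

# A toy analogy for "zero ETL": the application only ever writes transactions,
# and the analytical side is kept current automatically. A SQLite trigger
# stands in for the cross-engine replication AWS manages; schema is invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL);
    CREATE TABLE sales_by_region (region TEXT PRIMARY KEY, total REAL);

    CREATE TRIGGER sync_analytics AFTER INSERT ON orders
    BEGIN
        -- seed the aggregate row if it doesn't exist, then fold in the write
        INSERT OR IGNORE INTO sales_by_region (region, total)
        VALUES (NEW.region, 0);
        UPDATE sales_by_region SET total = total + NEW.amount
        WHERE region = NEW.region;
    END;
""")

# Transactional writes only...
conn.execute("INSERT INTO orders (region, amount) VALUES ('emea', 120.0)")
conn.execute("INSERT INTO orders (region, amount) VALUES ('emea', 80.0)")
conn.execute("INSERT INTO orders (region, amount) VALUES ('apac', 50.0)")

# ...and the analytical table is already up to date, with no ETL job.
print(conn.execute(
    "SELECT total FROM sales_by_region WHERE region = 'emea'"
).fetchone()[0])  # -> 200.0
```

The design point Holger raises survives the miniaturization: once the platform owns the sync, the customer stops writing and operating pipeline code for this common path.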
Domenic Ravita, SingleStore | AWS re:Invent 2022
>> Hey guys and girls, welcome back to theCUBE's live coverage of AWS re:Invent 22 from Sin City. We've been here, this is our third day of coverage. We started Monday night; the first full day of the show was yesterday. Big news yesterday. Big news. Today we're hearing north of 50,000 people here, and hundreds of thousands online. We've been having great conversations with AWS folks in the ecosystem, AWS customers, partners, ISVs, you name it. We're pleased to welcome back one of our alumni to the program, talking about the partner ecosystem. Domenic Ravita joins us, the VP of Developer Relations at SingleStore. It's so great to have you on the program, Domenic. Thanks for coming. >> Thanks. Great. Great to see you >> Again. Great to see you too. We go way back. >> We do, yeah. >> So let's talk about re:Invent 22. This is the 11th re:Invent. Yeah. What are some of the things that you've heard this week that are exciting, that are newsworthy, from SingleStore's perspective? >> I think, in particular, what we heard AWS announce on the zero ETL between Aurora and Redshift. I think it's significant in that AWS has provided lots of services as building blocks for applications for a long time. And that's a great amount of flexibility for developers. But there are cases where, you know, it's a common thing to need to move data from transactional systems to analytics systems, and making that easy with zero ETL, I think, is a significant thing. And in general we see in the market, and especially in the data management market in the cloud, a unification of different types of workloads. So I think that's a step in the right direction for AWS and, I think, for the market as a whole. Why it's significant for SingleStore is, that's our specialty in particular: to unify transactions and analytics for realtime applications and analytics.
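A minimal sketch of the unification Domenic is describing: one copy of the data serving both a transactional-style point query and an analytical aggregate, with no pipeline between an OLTP store and a warehouse. SQLite and the `events` schema here are purely illustrative stand-ins; SingleStore's engine and storage layout are, of course, entirely different.

```python
import sqlite3

# One copy of the data, two kinds of workload: a transactional point lookup
# and an analytical aggregate, with no ETL between separate systems.
# SQLite and this schema are stand-ins chosen only for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, kind TEXT, amount REAL)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)", [
    ("u1", "purchase", 20.0),
    ("u2", "purchase", 35.0),
    ("u1", "refund", -5.0),
])

# Transactional-style access: the current state for one user...
u1_balance = conn.execute(
    "SELECT SUM(amount) FROM events WHERE user_id = 'u1'").fetchone()[0]

# ...and analytical-style access over the same, already-fresh rows.
total_purchases = conn.execute(
    "SELECT SUM(amount) FROM events WHERE kind = 'purchase'").fetchone()[0]

print(u1_balance)       # -> 15.0
print(total_purchases)  # -> 55.0
```

The point of the sketch is the absence of a second system: the analytical query reads the same rows the transactional write just touched, so the answer is never waiting on a pipeline.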
When you've got customer-facing analytic applications and you need low-latency data from realtime streaming data sources, you've gotta crunch and compute that. Those are diverse types of workloads, over document transactional workloads as well as, you know, analytical workloads of various shapes, and the data types could be diverse, from geospatial to time series. And then you've gotta serve that, because we're all living in this digital-service-first world and you need that relevant, consistent, fresh data. And so that unification is what we think is, like, the big thing in data right >> Now. So validation for SingleStore. >> It does feel like that. I mean, I'd say in the recent, like, six months, you've seen announcements from Google with AlloyDB, basically adding the complement to their workload types. You see it with Snowflake adding the complement to their traditional analytical workload side. You see it with Mongo and others. And yeah, we do feel it was validation, 'cause at SingleStore we completed the functionality for what we call universal storage, which is the industry's first third type of storage, after row store and column store; SingleStoreDB's universal storage unifies those. So on a single copy of data you can perform these diverse workloads. And that was completed three years ago. So we sort of see, like, you know, we're onto something >> Here. Welcome to the game, guys. >> That's right. >> What's the value in that universal storage for customers, whether it's a healthcare organization or a financial institution? What's the value in it, in those business outcomes that you guys are really helping to fuel? >> I think, in short, if there were, like, a bumper sticker for that message, it's like, are you ready for the next interaction?
The next interaction with your customer, the next interaction with your supply chain partner, the next interaction with your internal stakeholders, your operational managers. Being ready for that interaction means you've gotta have the historical data at the ready, efficiently accessible and queryable, along with the most recent, fresh data. That's the context that's expected, and you have to be able to serve that instantaneously. So being ready for that next interaction is what SingleStore helps companies do. >> Talk about SingleStore helping customers. You know, every company these days has to be a data company. I always think, whether it's my grocery store that has all my information and helps keep me fed, or a gas station, or a car dealer, or my bank. And one of the things that John Furrier got to do, and he does this every year before AWS re:Invent, is sit down with the CEO and get, really, kind of a preview of what's gonna happen at the show, right? And Adam Selipsky said to him some interesting, very poignant things. One is about that data: we talk about data democratization, but he says the role of the data analyst is gonna go away, or maybe that term will, in that every person within an organization, whether you're in marketing, sales, ops, or finance, is going to be analyzing data for their job, to become data-driven. Right? How does SingleStore help customers really become data companies, especially powering data-intensive apps like I know you do? >> Yeah, there's a lot of talk about that, and I think there's a lot of work that's been done with companies to make it easier to analyze data in all these different job functions. While we do that, it's not really our starting point. Our starting point is operationalizing that analytics as part of the business. So you can think of it in database terms. Like, is it batch analysis? Batch analytics after the fact: what happened last week?
What happened last month? That's a lot of what those data teams and analysts are doing. Where SingleStore focuses more is putting those insights into action for business operations, which is typically more on the application side, the API side; you might call it a data product. If you're monetizing your data and transacting with it via an API, or you're delivering it as software as a service and providing an end-to-end function for, say, a marketer, then we help power those kinds of real-time data applications that have the interactivity and that customer touchpoint or partner touchpoint. So you can say we put the data into action in that way. >> And that's one of the most important things: putting data into action. Data can be gold, it can be whatever you want to call it, but if you can't actually put it into action and act on insights in real time, the value goes way down, or there's liability. >> Right, and I think you have to do that with privacy in mind as well. You have to take control of that data and use it for your business strategy, and technology like SingleStore makes that possible in ways that weren't possible before. I'll give you an example. We have a customer named Fathom Analytics. They provide web analytics for marketers, so if you're in marketing, you understand this use case. Any demand-gen marketer wants to see the traffic that hits their site: what are the page views, the clickstreams, the sequences? Have these visitors to my website hit certain goals? The big name in that for years, of course, has been Google Analytics, which is a free service you interact with to see how your website's performing. >> So what Fathom does is a privacy-first alternative to Google Analytics.
And when you think about how that's possible as a paid, software-as-a-service offering: first of all, how can you keep up with that real-time deluge of clickstream data at the rate Google Analytics can? That's the technical problem. But also at the data layer, how could you keep up with what Google has in terms of databases? Fathom's answer is to use SingleStore. Their prior architecture had four different types of database technologies under the hood. They were using Redis as a fast read-time cache. They were using MySQL as the application database. They were using Elasticsearch for full-text search. And they were using DynamoDB as another kind of fast-lookup cache. They replaced all four of those with SingleStore. And again, what they're doing is battling the de facto giant in Google Analytics, and having great success doing that, hosting tens of thousands of websites, some big names that you've heard of as well. >> I can imagine that's a big reduction, four to one, a 4x reduction in databases. The complexity that goes away, the simplification that happens, I imagine is quite huge for them. >> And we've done an independent study with GigaOm Research, published back in June, looking at total cost of ownership with benchmarks. The relevant database benchmarks for transactions and analytics are TPC-C for transactions and TPC-H and TPC-DS for analytics. We did a TCO study using those benchmarks against a combination of transactional and analytical databases together and saw some pretty big improvements: a 60% improvement over MySQL plus Snowflake, for instance. >> Awesome, big business outcomes. We only have a few seconds left, and you've already given me a bumper sticker. And I know I live in Silicon Valley, I've seen those billboards.
I know SingleStore has done some cheeky billboard marketing campaigns, but if you had a new billboard to create about SingleStore, what would it say? >> I think it's that: are you ready for the next interaction? Because business is won and lost in every moment, in every location, in every digital moment passing by. And if you're not ready to interact and transact, or rather have your systems do it on your behalf, then you're behind the curve. It's easy to be displaced; people swipe left and pick your competitor. So I think that's the next bumper sticker. My favorite billboard we've run so far is "Cover your SaaS," which asks: what is the data layer to manage the next generation of SaaS applications? And we think SingleStore is a big part of that. >> Cover your SaaS, love it. Domenic, thank you so much for joining me and giving us an update on SingleStore from your perspective, what's going on there and where you are in the market. We appreciate that. We'll have to have you back. >> Thank you. Glad to be here. >> All right. For Domenic Ravita, I'm Lisa Martin. You're watching theCUBE, the leader in live, emerging and enterprise tech coverage.
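The four-to-one consolidation Ravita described, with a cache, an application database, a full-text index, and a key-value store collapsing into one system, can be sketched minimally. SQLite is used here purely as a stand-in for the idea of serving point lookups and analytical aggregates from one copy of data; it is not SingleStore, and the schema is hypothetical:

```python
# Illustrative only: SQLite standing in for one hybrid store that plays
# the roles Fathom's four databases used to play (cache, app DB,
# search, key-value lookup). This is NOT SingleStore's engine.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE pageviews (
        site     TEXT,
        path     TEXT,
        ts       INTEGER,
        referrer TEXT
    )
""")

# Transactional-style writes (the "application database" role).
rows = [
    ("fathom.example", "/pricing", 100, "google"),
    ("fathom.example", "/pricing", 101, "twitter"),
    ("fathom.example", "/docs",    102, "google"),
]
conn.executemany("INSERT INTO pageviews VALUES (?, ?, ?, ?)", rows)

# Point lookup (the "cache / key-value" role): latest page hit.
latest = conn.execute(
    "SELECT path FROM pageviews ORDER BY ts DESC LIMIT 1"
).fetchone()[0]

# Analytical aggregate (the "warehouse" role) on the same copy of data.
top = conn.execute(
    "SELECT path, COUNT(*) AS views FROM pageviews "
    "GROUP BY path ORDER BY views DESC"
).fetchall()

print(latest)   # -> /docs
print(top[0])   # -> ('/pricing', 2)
```

The point of the sketch is the shape of the workload, not the engine: one connection and one copy of data answer both the cache-style read and the warehouse-style aggregate.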
Chris Thomas & Rob Krugman | AWS Summit New York 2022
(calm electronic music) >> Okay, welcome back everyone to theCUBE's coverage, live in New York City for AWS Summit 2022. I'm John Furrier, host of theCUBE, with a great conversation here as the day winds down. First of all, 10,000-plus people; this is a big event, and that's just New York City. A sign of the times that headwinds are happening? I don't think so, not in the cloud enterprise innovation game. There's a lot going on, and the conversation we're going to have now is about the confluence of cloud scale, integration, and data, and the future of how FinTech and other markets are going to change with technology. We've got Chris Thomas, the CTO of Slalom, and Rob Krugman, chief digital officer at Broadridge. Gentlemen, thanks for coming on theCUBE. >> Thanks for having us. >> So we talked before we came on camera about your firms and what you do. Take a quick minute to give the scope and size of your firm and what you work on.
What are you guys working on? I can't wait to jump into it, explain. >> Sure, so similar to Chris, at Broadridge we've created an innovation incubation capability, and one of the first areas we're experimenting in is digital assets. We're looking at a variety of different areas where we think the consolidation and network effects we can bring add a significant amount of value. And so the area we're working on is this concept of a wallet of wallets: how do we actually consolidate assets that are held across a variety of different wallets, maybe traditional locations- >> Digital wallets. >> Digital wallets, but maybe even traditional accounts, bring that together, and then give control back to the consumer over who they want to share that information with and how they want their transactions to be controlled. People talk about Web 3 being the internet of value; I often think about it as the internet of control. How do you return control to the individual so that they can make decisions about how and who has access to their information and assets? >> It's interesting, I totally like the value angle, but your point is, what's the chicken and the egg here, the cart before the horse? You can look at it both ways and say, okay, control is going to drive the value. This is an interesting nuance, right? >> Yes, absolutely. >> So in this architectural world, they thought about the data plane and the control plane. Everyone's trying to go old-school, middleware thinking: let's own the data plane, we'll win everything. Not going to happen if it goes decentralized, right, Chris? >> Yeah, yeah. I mean, we're building a decentralized application, but it really is built on top of AWS. We have a serverless architecture that scales as our business scales, built on top of things like S3, Lambda, and DynamoDB, and of course using security services like Cognito and API Gateway.
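A minimal sketch of the serverless shape Chris describes, a Lambda-style handler behind API Gateway writing to a key-value table, might look like the following. The handler name, event shape, and the in-memory dict standing in for DynamoDB are all assumptions for illustration, not Broadridge's actual code:

```python
# Sketch of one serverless piece: an API Gateway proxy event handled by
# a Lambda-style function. WALLET_TABLE is a plain dict standing in for
# DynamoDB; in the real stack the user id would come from a Cognito JWT.
import json

WALLET_TABLE = {}  # stand-in for a DynamoDB table keyed by user id

def register_wallet_handler(event, context=None):
    """Handle POST /wallets: attach an external wallet address to a user."""
    body = json.loads(event["body"])
    user_id = body["user_id"]          # in production: derived from Cognito
    address = body["wallet_address"]
    WALLET_TABLE.setdefault(user_id, []).append(address)
    # API Gateway proxy integrations expect statusCode + string body.
    return {
        "statusCode": 200,
        "body": json.dumps({"user_id": user_id,
                            "wallets": WALLET_TABLE[user_id]}),
    }

event = {"body": json.dumps({"user_id": "u1", "wallet_address": "0xabc"})}
resp = register_wallet_handler(event)
print(resp["statusCode"])  # -> 200
```

Swapping the dict for a real DynamoDB table via boto3, and the inline user id for a Cognito-verified claim, gives the same handler the scaling and security properties discussed above.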
So we're really building an architecture of Web 3 on top of the Web 2 basics in the cloud. >> I mean, all evolutions are abstractions on top of each other, IG, DNS, Key, it goes the whole nine yards. In digital, at least, that's the way it works. Quick question about serverless: I saw that Redshift just launched general availability of serverless? >> Yes. >> You're starting to see serverless become part of almost all the services in AWS. Is that enabling that abstraction? Because most people don't see it that way. They go, oh, well, Amazon's not Web 3; they've got databases, you could use that stuff. So how do you connect the dots and cross the bridge to the future for someone who might not think Web 2 or cloud is Web 3? >> I'll jump in quick. I think it's decentralization. Serverless and decentralization, you could argue, are saying the same thing in different ways: one is thinking about it from a technology perspective, the other from an ecosystem perspective and how things come together. You need serverless components that can talk to each other and communicate with each other to actually reach the promise of what Web 3 is supposed to be. >> So digital bits or digital assets, I call them digital bits 'cause I think in zeros and ones. If you digitize everything, everything has value, or as we said, control drives the value. I could be a soccer team: I have apparel, I have value in my logos, I have photos, I have CUBE videos. Some say this should be an NFT. Yeah, maybe, but digital assets have to be protected and owned. So ownership drives it too, right? >> Absolutely. >> So how does that fit in, how do you explain that? 'Cause I'm trying to connect the dots and tie it together. What do I get if I go down this road that you guys are building? >> So I think one of the challenges of digital assets right now is that it's a closed community.
And I think the people that play in it, they're really into it. And so you look at things like NFTs and you look at some of the other activities that are happening and there are certain naysayers that look at it and say, this stuff is not based upon value. It's a bunch of artwork, it can't be worth this. Well, how about we do a time out there and we actually look at the underlying technology that's supporting this, the blockchain, and the potential ramifications of that across the entire financial ecosystem, and frankly, all different types of ecosystems of having this immutable record, where information gets stored and gets sent and the ability to go back to it at all times, that's where the real power is. So I think we're starting to see. We've hit a bit of a hiccup, if you will, in the cryptocurrencies. They're going to continue to be there. They won't all be there. A lot of them will probably disappear, but they'll be a finite number. >> What percentage of stuff do you think is vapor BS? If you had to pick an order of magnitude number. >> (laughs) I would say at least 75% of it. (John laughs) >> I mean, there's quite a few projects that are failing right now, but it's interesting in that in the crypto markets, they're failing gracefully. Because it's on the blockchain and it's all very transparent. Things are checked, you know immediately which companies are insolvent and which opportunities are still working. So it's very, very interesting in my opinion. >> Well, and I think the ones that don't have valid premises are the ones that are failing. Like Terra and some of these other ones, if you actually really looked at it, the entire industry knew these things were no good. But then you look at stable coins. And you look at what's going on with CBDCs. These are backed by real underlying assets that people can be comfortable with. And there's not a question of, is this going to happen? 
The question is, how quickly is it going to happen, and how quickly are we going to be using digital currencies? >> It's interesting, we always talk about software as money; now money is software, and gold and oil are moving over to crypto. How do you guys see software? 'Cause Dave Vellante and I were just arguing on theCUBE before you guys came on that the software industry pretty much does not exist anymore; it's open source. So everything's open source as an industry, and the value is in integration and innovation. It's not just the software, which is free; it's the integration. So how do you guys see software driving crypto? Because it is software-defined money at the end of the day. It's a token. >> No, I think that's absolutely one of the strengths of the crypto markets and the Web 3 market: it's governed by software. And because of that, you can build a trust framework. Everybody knows it's on the public blockchain. Everybody's aware of the software that's driving the rules and the rules of engagement on this blockchain. And it creates that trust network that says, hey, I can transact with you even though I don't know anything about you, and I don't need a middleman to tell me I can trust you, because the software drives that trust framework.
And I think everyone in the industry welcomes it. All of a sudden you have this ecosystem that people can play in, they can build, and they can start to actually create real value. >> Every structural change I've been involved in over my 30-plus-year career has been around inflection points. There was always some sort of underbelly, so I'm not going to judge crypto. It's been in the market for a while, it's a good sign there's innovation happening, and now clarity is coming into what's real. I think the conversation you guys are having is refreshing because you're saying, okay, cloud is real: Lambda, serverless, all these tools. And Web 3 is certainly real because it's a future architecture, it's attracting the young, and it's a cultural shift. It's also cooler than boring Web 2 and cloud. So the cultural shift, the fact that it's got data involved, and the disruption around middlemen and intermediaries make it very attractive to tech geeks. I heard a stat from a friend in the Bay Area that 30% of Cal computer science students are dropping out and jumping into crypto. So it's attracting the technical nerds, the alpha geeks. It's a cultural revolution, and there's some cool stuff going on from a business model standpoint. >> There's one thing missing, though, and it's what we're trying to work on: experience. If you're being honest about the entire marketplace, you'd agree that this stuff is not easy to use today, and that's got to be solved. If it's the 85-year-old grandma who wants to participate in these markets, she has to not only feel comfortable but actually know how to do it. You can't use these crazy tools with all this jargon. I think the industry, as it grows up, will satisfy a lot of those issues. >> And I think this is why I want to tie back and get your reaction to this.
I think that's why you guys talking about building on top of AWS is refreshing, 'cause it's not dogmatic. Well, we can't use Amazon, it's not really Web 3. Well, a database could be used when you need it. You don't need to write everything through the blockchain. Databases are a very valuable capability, you get serverless. So all these things now can work together. So what do you guys see for companies that want to be Web 3 for all the good reasons and how do they leverage cloud specifically to get there? What are some things that you guys have learned that you can point to and share, you want to start? >> Well, I think not everything has to be open and public to everybody. You're going to want to have some things that are secret. You're going to want to encrypt some things. You're going to want to put some things within your own walls. And that's where AWS really excels. I think you can have the best of both worlds. So that's my perspective on it. >> The only thing I would add to it, so my view is it's 2022. I actually was joking earlier. I think I was at the first re:Invent. And I remember walking in and this was a new industry. >> It was tiny. >> This is foundational. Like cloud is not a, I don't view like, we shouldn't be having that conversation anymore. Of course you should build this stuff on top of the cloud. Of course you should build it on top of AWS. It just makes sense. And we should, instead of worrying about those challenges, what we should be worrying about are how do we make these applications easier to use? How do we actually- >> Energy efficient. >> How do we enable the promise of what these things are going to bring, and actually make it real, because if it happens, think about traditional assets. There's projects going on globally that are looking at how do you take equity securities and actually move them to the blockchain. When that stuff happens, boom. 
>> And I like what you guys are doing, I saw the news out through this crypto winter, some major wallet exchanges that have been advertising are hurting. Take me through what you guys are thinking, what the vision is around the wallet of wallets. Is it to provide an experience for the user or the market industry itself? What's the target, is it both? Share the design goals for the wallet of wallets. >> My favorite thing about innovation and innovation labs is that we can experiment. So I'll go in saying we don't know what the final answer is going to be, but this is the premise that we have. In this disparate decentralized ecosystem, you need some mechanism to be able to control what's actually happening at the consumer level. So I think the key target is how do you create an experience where the consumer feels like they're in control of that value? How do they actually control the underlying assets? And then how does it actually get delivered to them? Is it something that comes from their bank, from their broker? Is it coming from an independent organization? How do they manage all of that information? And I think the last part of it are the assets. It's easy to think about cryptos and NFTs, but thinking about traditional assets, thinking about identity information and healthcare records, all of that stuff is going to become part of this ecosystem. And imagine being able to go someplace and saying, oh, you need my information. Well, I'm going to give it to you off my phone and I'm going to give it to you for the next 24 hours so you can use it, but after that you have no access to it. Or you're my financial advisor, here's a view of what I actually have, my underlying assets. What do you recommend I do? So I think we're going to see an evolution in the market. >> Like a data clean room. >> Yeah, but that you control. >> Yes! (laughs) >> Yes! >> I think about it very similarly as well. 
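The control model Rob describes, with the consumer deciding which parties see which assets and for how long (the advisor's view, the 24-hour grant), could be sketched roughly like this. The class and method names are hypothetical, not a real Broadridge API:

```python
# Sketch of consumer-controlled, time-limited access to consolidated
# holdings. Names and structure are illustrative only.
import time

class WalletOfWallets:
    def __init__(self, holdings):
        self.holdings = holdings   # e.g. {"coinbase": {...}, "broker": {...}}
        self.grants = {}           # party -> (allowed sources, expiry timestamp)

    def grant(self, party, sources, ttl_seconds):
        """The owner grants a party access to named sources for a limited time."""
        self.grants[party] = (set(sources), time.time() + ttl_seconds)

    def view(self, party):
        """Return only what the party was granted, and only before expiry."""
        sources, expires = self.grants.get(party, (set(), 0))
        if time.time() >= expires:
            return {}
        return {s: self.holdings[s] for s in sources if s in self.holdings}

w = WalletOfWallets({"coinbase": {"ETH": 2.0}, "broker": {"AAPL": 10}})
w.grant("advisor", ["broker"], ttl_seconds=24 * 3600)  # 24-hour scoped grant
print(w.view("advisor"))   # -> {'broker': {'AAPL': 10}}
print(w.view("marketer"))  # -> {} (no grant, nothing visible)
```

The design choice the sketch highlights is that access is a revocable, expiring grant held by the owner, not a copy of the data handed to the other party.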
As my journey into the crypto market has gone through different pathways, different avenues. And I've come to a place where I'm really managing eight different wallets and it's difficult to figure exactly where all my assets are and having a tool like this will allow me to visualize and aggregate those assets and maybe even recombine them in unique ways, I think is hugely valuable. >> My biggest fear is losing my key. >> Well, and that's an experience problem that has to be solved, but let me give you, my favorite use case in this space is, 'cause NFTs, right? People are like, what does NFTs really mean? Title insurance, right? Anyone buy a house or refinance your mortgage? You go through this crazy process that costs seven or eight thousand dollars every single time you close on something to get title insurance so they could validate it. What if that title was actually sitting on the chain, you got an NFT that you put in your wallet and when it goes time to sell your house or to refinance, everything's there. Okay, I'm the owner of the house. I don't know, JP Morgan Chase has the actual mortgage. There's another lien, there's some taxes. >> It's like a link tree in the wallet. (laughs) >> Yeah, think about it, you got a smart contract. Boom, closing happens immediately. >> I think that's one of the most important things. I think people look at NFTs and they think, oh, this is art. And that's sort of how it started in the art and collectable space, but it's actually quickly moving towards utilities and tokenization and passes. And that's where I think the value is. >> And ownership and the token. >> Identity and ownership, especially. >> And the digital rights ownership and the economics behind it really have a lot of scale 'cause I appreciate the FinTech angle you are coming from because I can now see what's going on here with you. It's like, okay, we got to start somewhere. Let's start with the experience. 
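The title-insurance example above can be illustrated with a plain-Python stand-in for the on-chain record: an append-only history of owner and liens that a closing can check instantly instead of re-running a title search. This is a sketch of the idea, not an actual smart contract:

```python
# Illustrative stand-in for a title held as an on-chain NFT-like record.
# The append-only `history` list mimics chain events; a real version
# would live in a smart contract, not a Python class.
class TitleRecord:
    def __init__(self, parcel_id, owner):
        self.parcel_id = parcel_id
        self.owner = owner
        self.liens = []
        self.history = [("minted", owner)]   # append-only event log

    def add_lien(self, holder, amount):
        self.liens.append((holder, amount))
        self.history.append(("lien", holder, amount))

    def close_sale(self, buyer):
        # A closing can only complete once every lien is settled.
        if self.liens:
            raise ValueError("outstanding liens must be settled first")
        self.history.append(("transfer", self.owner, buyer))
        self.owner = buyer

title = TitleRecord("parcel-42", "alice")
title.add_lien("JP Morgan Chase", 250_000)
try:
    title.close_sale("bob")
except ValueError:
    pass                      # blocked: lien still outstanding
title.liens.clear()           # lien settled at closing
title.close_sale("bob")
print(title.owner)            # -> bob
```

Because every lien and transfer is on the record, the expensive per-closing title search collapses into a cheap state check.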
The wallet's a tough nut to crack, 'cause that requires de facto participation in the industry as a de facto standard. So how are you guys doing there? Can you give an update, and what's the project called, how do people get involved? >> Yeah, so we're still in the innovation, incubation stages, so we're not launching it yet. But what I will tell you is where a lot of our focus is: how do we simplify these transactional things that you do? How do we make it easy to pull all your assets together? How do we make it easy to move things from one location to another in ways where you're not using a weird cryptographic numeric value for your wallet, but you can actually use real nomenclature that you can remember and that's easy to understand? Our expectation is that sometime in the fall we'll actually be in a position to launch this. What we're going to do over the summer is start allowing people to play with it, get their feedback, and iterate. >> So sandbox in when, November? >> I think launch in the fall, sometime in the fall. >> Oh, this fall. >> But over the summer, what we're expecting is some type of friends-and-family release where we can start to see what people are doing and then fix the challenges, see if we're on the right track, and make the appropriate corrections. >> So right now you guys are just together on this? >> Yep. >> The opening up to friends and family or community is going to be controlled. >> It is, yeah. >> Yeah, as a group, I think one thing that's really important to highlight is that we're an innovation lab. We're working with Broadridge's innovation lab, and that partnership across innovation labs has allowed us to move very, very quickly to build this. Actually, if you think about it, we were talking about this not too long ago and we're almost close to having an internal launch. So it's very rapid development. We follow a lot of the-
>> Exactly, exactly, and we saw lot of very- >> So who's going to run this? A Dow, or your companies, is it going to be a separate company? >> So to be honest, we're not entirely sure yet. It's a new product that we're going to be creating. What we actually do with it. Our thought is within an innovation environment, there's three things you could do with something. You can make it a product within the existing infrastructure, you can create a new business unit or you can spin it off as something new. I do think this becomes a product within the organization based upon it's so aligned to what we do today, but we'll see. >> But you guys are financing it? >> Yes. >> As collective companies? >> Yeah, right. >> Got it, okay, cool. Well, let us know how we can help. If you guys want to do a remote in to theCUBE. I would love the mission you guys are on. I think this is the kind of work that every company should be doing in the new R and D. You got to jump in the deep end and swim as fast as possible. But I think you can do it. I think that is refreshing and that's smart. >> And you have to do it quick because this market, I think the one thing we would probably agree on is that it's moving faster than we could, every week there's something else that happens. >> Okay, so now you guys were at Consensus down in Austin when the winter hit and you've been in the business for a long time, you got to know the industries. You see where it's going. What was the big thing you guys learned, any scar tissue from the early data coming in from the collaboration? Was there some aha moments, was there some oh shoot moments? Oh, wow, I didn't think that was going to happen. Share some anecdotal stories from the experience. Good, bad, and if you want to be bold say ugly, too. >> Well, I think the first thing I want to say about the timing, it is the crypto winter, but I actually think now's a really great time to build something because everybody's continuing to build. 
Folks are focused on the future, and that's what we are as well. In terms of some of the challenges, the Web 3 space is so new that there's no way to just go online, copy somebody else's work, and rinse and repeat. We had to figure a lot of things out on our own. We had to try different technologies, see which worked better, and make sure it was functioning the way we wanted it to function. It was not easy. >> They oversold that product out, that's good, like this team. >> But think about it, so the joke is that winter is when the real work happens. If you look at the companies that have not been affected by this, it's the infrastructure companies, and what it reminds me of, it's a little bit different, but 2001, we had the dot-com bust. The entire industry blew up, but what came out of that? >> Everything that exists. >> Amazon, lots of companies grew up out of that environment. >> Everything that was promoted actually happened. >> Yes, but you know what didn't happen- >> Food delivery. >> (laughs) Pet food, the sock puppet never happened. >> The whole Super Bowl, yes. (John laughs) In financial services we built on top of legacy. I think what Web 3 is doing is getting rid of that legacy infrastructure. And the banks are going to be involved; there's going to be new players and such. But what I'm seeing now is a doubling down of the infrastructure investment, of saying, okay, how do we actually make this stuff real so we can show the promise? >> One of the things I just shared, Rob, you'd appreciate this, is that the digital advertising market's changing, because banner ads and the old techniques are based on Web 2 infrastructure, basically DNS as we know it. And token problems are everywhere. Sites and silos are built because LinkedIn doesn't share information, and the sites want first-party data. It's a hoarding exercise, so those practices are going to get decimated.
So in comes token economics; that's going to get decimated. So you're already seeing the decline of media and advertising; cookies are going away. >> I think it's going to change, it's going to be a flip, because I think right now you're not in control, other people are in control. And I think with tokenomics and some of the other things that are going to happen, it gives back control to the individual. Think about it, right now you get advertising you didn't ask for. Imagine the value of advertising when you say, you know what, I am interested in getting information about this particular type of product. The lead generation, the value of that advertising, is significantly higher. >> Organic notifications. >> Yeah. >> Well, gentlemen, I'd love to follow up with you. I'm definitely going to ping you. Now I'm going to put CUBE coin back on the table. For our audience, CUBE coin's coming. Really appreciate it, thanks for sharing your insights. Great conversation. >> Excellent, thank you for having us. >> Excellent, thank you so much. >> theCUBE's coverage here from New York City. I'm John Furrier, we'll be back with more live coverage to close out the day. Stay with us, we'll be right back. >> Excellent. (calm electronic music)
Venkat Venkataramani, Rockset & Doug Moore, Command Alkon | AWS Startup Showcase S2 E2
(upbeat music) >> Hey everyone. Welcome to theCUBE's presentation of the AWS Startup Showcase. This is Data as Code, The Future of Enterprise Data and Analytics. This is also season two, episode two of our ongoing series with exciting partners from the AWS ecosystem who are here to talk with us about data and analytics. I'm your host, Lisa Martin. Two guests join me, one a CUBE alum. Venkat Venkataramani is here, CEO & Co-Founder of Rockset. Good to see you again. And Doug Moore, VP of cloud platforms at Command Alkon. You're here to talk to me about how Command Alkon implemented real-time analytics in just days with Rockset. Guys, welcome to the program. >> Thanks for having us. >> Yeah, great to be here. >> Doug, give us a little bit of an overview of Command Alkon. What type of business are you? What's your mission? That good stuff. >> Yeah, great. I'll preface it by saying I've been in this industry for only three years. The 30 years prior I was in financial services. So this was really exciting and eye-opening. It actually plays into the story of how we met Rockset, so that's why I wanted to preface that. But Command Alkon is in what's called the heavy building materials industry, and I had never heard of it until I got here. But if you think about large projects like building buildings, cities, roads, anything that requires concrete, asphalt, or just really big trucks full of bulky materials, that's the heavy building materials industry. So for over 40 years Command Alkon has been the North American leader in providing software to quarries and production facilities to help mine and load these materials and to produce them and then get them to the job site. So that's what our supply chain is, from the quarry through the development of these materials, then out to a heavy building materials job site. >> Got it. And how, historically, has the movement of construction materials been coordinated?
What was that like before you guys came on the scene? >> You'll love this answer. 'Cause, again, it's like a step back in time. When I got here, the people told me, as we were trying to come up with the platform, that there are 27 industries studied globally, and our industry is second to last in terms of automation, which meant that literally everything is still being done with paper, and a lot of paper. So when one of those materials, let's say concrete or asphalt, is produced and then needs to get to the job site, they start by creating a five-part printed ticket or delivery description that then goes to multiple parties. It ends up getting touched physically over 50 times for every delivery. And to give you some idea what kind of scale it is, there are over 330 million of these types of deliveries in North America every year. So it's really a lot of labor and a lot of manual work. So that was the state of where we were. And obviously there are compelling reasons, certainly today but even 3, 4, 5 years ago, to automate that and digitize it. >> Wow, tremendous potential to go nowhere but up with the amount of paper, the lack of automation. So, you guys at Command Alkon built a platform, a cloud construction software platform. Talk to me about that. Why you built it, what was the compelling event? I mean, I think you've kind of already explained the compelling event of all the paper, but give us a little bit more context. >> Yeah. That was the original, and then we'll get into what happened two years ago, which has made it even more compelling, but essentially with everything on premises there's really a huge amount of inefficiency. So, people have heard the enormous numbers that it takes to build a highway or a really large construction project, and a lot of that is tied up in these inefficiencies.
So we felt like, with our significant presence in this market, that if we could figure out how to automate getting this data into the cloud, so that at least the partners in the supply chain could begin sharing information that's not on paper, a little bit closer to real time, we could make an impact on everything from the time it takes to do a project to even the amount of carbon dioxide that's emitted, for example, from trucks running around being delayed and not being coordinated well. >> So you built the Connect platform, you started on Amazon DynamoDB, and ran into some performance challenges. Talk to us about some of those performance bottlenecks and how you found Venkat and Rockset. >> So from the beginning, we were fortunate; if you start building a cloud platform three years ago, you have a lot of opportunity to use some of what we call the more fully managed or serverless offerings from Amazon, and all the cloud vendors have them, but Amazon is the one we're most familiar with throughout the past 10 years. So we went head first into saying, we're going to do everything we can to not manage infrastructure ourselves, so we can really focus on solving this problem efficiently. And it paid off great. And so we chose Dynamo as our primary database, and it still was a great decision. We have obviously hundreds of millions, billions, of these data points in Dynamo, and it's great from a transactional perspective, but at some point you need to get the data back out. And what plays into the story of the beginning, when I came here with basically no background in this industry, as did most of the other people on my team, is that we weren't really sure what questions were going to be asked of the data. And that's super, super important with a NoSQL database like Dynamo. You sort of have to know in advance what those usage patterns are going to be and what people are going to want to get back out of it.
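The access-pattern constraint Doug describes, that a key-value store answers the questions it was keyed for in one lookup but turns any unanticipated question into a scan of every item, can be sketched in a few lines. Everything below (field names, IDs, numbers) is invented for illustration, and a plain dict stands in for DynamoDB; this is not Command Alkon's actual schema.

```python
# Tickets keyed by delivery_id -- the one access pattern known in advance.
tickets = {
    "d-001": {"site": "north-quarry", "material": "concrete", "tons": 12},
    "d-002": {"site": "riverside", "material": "asphalt", "tons": 8},
    "d-003": {"site": "north-quarry", "material": "asphalt", "tons": 10},
}

def get_ticket(delivery_id):
    # O(1): the question the table was designed to answer.
    return tickets[delivery_id]

def tons_by_material(material):
    # An unanticipated question: no index exists for this field,
    # so every item must be examined -- the strain at scale.
    return sum(t["tons"] for t in tickets.values() if t["material"] == material)

print(get_ticket("d-001")["site"])   # north-quarry
print(tons_by_material("asphalt"))   # 18
```

With three items the scan is harmless; with hundreds of millions of data points, every question nobody modeled up front becomes a full pass over the table.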
And that's what really began to strain us on both performance and just availability of information. >> Got it. Venkat, let's bring you into the conversation. Talk to me about some of the challenges that Doug articulated, this industry with so little automation, so much paper. Are you finding that still out there in quite a few industries that really have nowhere to go but up? >> I think that's a very good point. We talk about digital transformation 2.0 as like this abstract thing, and then you meet disruptors and innovators like Doug, and you realize how much impact it has on the real world. But now it's not just about disrupting and digitizing all of these records, but doing it at a faster pace than ever before, right? I think this is really what digital transformation in the cloud enables you to do: a small team with a very, very big mission and responsibility, like what Doug's team has been shepherding here, is able to move very, very fast, to be able to accelerate this. And they're not only on the forefront of digitizing and transforming a very big, paper-heavy kind of process; real-time analytics and real-time reporting is a requirement, right? Nobody's wondering, where was my supply chain three days ago? One of the most important things in heavy construction is to keep running on a schedule. If you fall behind, there's no way to catch up, because there are so many things that fall apart. Now, how do you make sure you don't fall behind? Real-time analytics and real-time reporting on how many trucks are supposed to be delivered today. Halfway through the day, are they on track? Are they getting behind? Being able not just to manage the data but also to get reporting and analytics on it is an extremely important aspect of this. So this is a combination of digital transformation happening in the cloud in real time, and real-time analytics being at the forefront of it.
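The "halfway through the day, are they on track?" check Venkat describes can be reduced to a one-line comparison of deliveries completed so far against the day's schedule. The numbers and the slack threshold below are made up for illustration; real dispatch logic would be far richer.

```python
def on_track(scheduled_total, delivered_so_far, fraction_of_day_elapsed, slack=0.1):
    # Deliveries expected by this point in the day, with a small slack
    # allowance before flagging the job as falling behind schedule.
    expected = scheduled_total * fraction_of_day_elapsed
    return delivered_so_far >= expected * (1 - slack)

# Halfway through a day with 40 trucks scheduled:
print(on_track(40, 19, 0.5))  # True  -- 19 delivered vs ~18 needed with slack
print(on_track(40, 12, 0.5))  # False -- behind, and there's no way to catch up
```

The point of real-time reporting is that this comparison runs continuously against live delivery counts, not against yesterday's batch.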
And so we are very, very happy to partner with digital disruptors like Doug and his team to be part of this movement. >> Doug, as Venkat mentioned, access to real-time data is a requirement; that is just a simple truth these days. I'm just curious, compelling-event wise, was COVID an accelerator? 'Cause we all know of the supply chain challenges that we're all facing in one way or the other. Was that part of the compelling event that had you guys go and say, we want to do DynamoDB plus Rockset? >> Yeah, that is a fantastic question. In fact, more so than you can imagine. So anytime you come into an industry and you're going to try to completely change or revolutionize the way it operates, it takes a long time to get the message out. Sometimes years; I remember in insurance it took almost 10 years really to get that message out and get great adoption, and then COVID came along. And when COVID came along, we all of a sudden had a situation where drivers and the foreman on the job site didn't want to exchange the paperwork. I heard one story of a driver taping the ticket for signature to a broomstick and putting it out his window so that he didn't get too close to the foreman. It really was that dramatic. And again, this was the early days; no one really had any idea what was happening, and we were all working from home. So we saw that as an opportunity to really help people solve that problem and understand more what this transformation would mean in the long term. So we launched internally what we called Project Lemonade, obviously from make lemonade out of lemons; that's the situation that we were in. And we immediately made some enhancements to a mobile app and then launched that to the field, so that basically there's now a digital acceptance capability where the driver can just stay in the vehicle and the foreman can be anywhere, look at the material, say it's acceptable for delivery, and go from there.
So yeah, it actually immediately caused many of our customers, hundreds, to want to push their data to the cloud for that reason, just to take advantage of that one capability. >> Project Lemonade, sounds like it's made a lot of lemonade out of a lot of lemons. Can you comment, Doug, on the larger trend of real-time analytics in logistics? >> Yeah, obviously, and this is something I didn't think about much either, not knowing anything about concrete other than it was in my driveway before I got here. It's a perishable product, and you've got basically no more than about an hour and a half from the time you mix it, put it in the drum, and get it to the job site and pour it. And then the next one has to come behind it. And the trend is that we can't really do that on paper anymore and stay on top of what has to be done out in the field. I recall a foreman saying that when you're in the field waiting on a delivery, with people standing around and preparing the site to make a pour, two minutes is an eternity. And so 'real time' is always a controversial term, because it means something different to everyone, but that gave real clarity to what it really meant to have real-time analytics: how are we doing, where are my vehicles, and how is this job performing today? And I think that a lot of people are still trying to figure out how to do that. And fortunately, we found a great tool set that's allowing us to do that at scale, thanks to Rockset primarily. >> Venkat, talk about it from your perspective, the larger trend of real-time analytics, not just in logistics but in other key industries. >> Yeah. I think we're seeing this across the board.
We see a huge trend even within an enterprise: different teams, from the marketing team to the support teams to more and more business operations teams to the security team, really moving more and more of their use cases to real time. The industries that are the innovators and the pioneers here are the ones for whom real time is the requirement, like Doug and his team here, where if it is old news, it's no news, it's useless, right? But I think even across all industries, whether it is gaming, whether it is FinTech, Bino-related companies, e-learning platforms, so across ed tech and so many different platforms, there is always this need for business operations, certain teams within large organizations, to have to tell me how to win the game and not play Monday-morning quarterback after the game is over. >> Right. Doug, let's go back to you. I'm curious, with Connect, have you been able to scale the platform since you integrated with Rockset? Talk to us about some of the outcomes that you've achieved so far. >> Yeah, we have. And of course we knew, when we made our database selection with Dynamo, that it really doesn't have a top end in terms of how much information we can throw at it. But that's very, very challenging when it comes to using that information for reporting. And we've found the same thing as we've scaled the analytics side with Rockset indexing and searching that database. So the scale, in terms of the number of customers and the amount of data we've been able to take on, has not been a problem. And honestly, for the first time in my career, I can say that we've always had to add people every time we add a certain number of customers, and that has absolutely not been the case with this platform. >> Well, and I imagine the team that you do have is far more, sorry Venkat, far more strategic and able to focus on bigger projects.
It is, and you'd be amazed; I mean, Venkat hit on a couple of points in terms of the adoption of analytics. What we found is that we are as big a customer of this analytics engine as our customers are, because our marketing team and our sales team are always coming to us: how many customers are doing this? How many partners are connected in this way? Which feature flags are turned on in the platform? And the way this works is, all data that we push into the platform is automatically indexed and ready for reporting and analytics. So there's really no additional work to answer these questions, which has been phenomenal. >> I think the thing I want to add here is the speed at which they were able to build a scalable solution, and also how little operational and administrative overhead it has cost their teams, right? This is, again, real-time analytics: if you go and ask a hundred people, do you want fast analytics on real-time data or slow analytics on stale data, no one would say give me slow and stale. So I think it goes back again to our fundamental thesis: you have to remove all the cost and complexity barriers for real-time analytics to be the new default, right? Today companies try to get away with batch, and the pioneers and the innovators are forced to, you know, address some of these real-time analytics challenges. With a real-time analytics platform like Rockset, we want to completely flip that on its head. You can do everything in real time. And there may be some extreme situations, where you're dealing with hundreds of petabytes of data and you just need an analyst to generate quarterly reports out of that; go ahead and use some really, really good batch-based system. But you should be able to get anything and everything you want, without additional cost or complexity, in real time. That is really the vision. That is what we are really enabling here.
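Doug's point that "all data we push into the platform is automatically indexed" is worth pausing on: the idea is that every field of every ingested document goes into a field-to-value index on write, so any ad-hoc question becomes a lookup instead of a scan, with no access patterns declared up front. The sketch below is purely conceptual, with invented documents; it is not Rockset's actual converged-index implementation.

```python
from collections import defaultdict

# field -> value -> set of document ids, built automatically on ingest.
index = defaultdict(lambda: defaultdict(set))
docs = {}

def ingest(doc_id, doc):
    docs[doc_id] = doc
    for field, value in doc.items():
        index[field][value].add(doc_id)  # every field indexed, no schema upfront

def query(field, value):
    # Any field is immediately queryable -- no pre-declared access pattern.
    return sorted(index[field][value])

ingest("t1", {"site": "riverside", "material": "asphalt"})
ingest("t2", {"site": "north", "material": "concrete"})
ingest("t3", {"site": "riverside", "material": "concrete"})

print(query("site", "riverside"))     # ['t1', 't3']
print(query("material", "concrete"))  # ['t2', 't3']
```

The trade-off is extra work and storage at write time in exchange for cheap ad-hoc reads, which is why marketing-style questions ("which feature flags are turned on?") cost no additional engineering.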
>> Venkat, I want to also get your perspective, and Doug, I'd like your perspective on this as well: the role of cloud-native and serverless technologies in digital disruption. What do you see there? >> Yeah, I think it's huge. I think, again and again, every customer we meet, and Command Alkon and Doug and his team are a great example of this, really wants to spend as much of the time and energy and calories they have helping their business, right? Like, what are we trying to accomplish as a business? How do we build better products? How do we grow revenue? How do we eliminate risk that is inherent in the business? And that is really where they want to spend all of their energy, not trying to install some backend software, administer it, build ETL pipelines, and so on and so forth. And so, serverless on the compute side, the things AWS Lambda does and what have you, is a very important innovation, but that doesn't complete the story; your data stack also has to become serverless. And that is really the vision with Rockset: your entire real-time analytics stack can be as simple to operate and manage as a serverless stack for your compute environments, like your app servers and what have you. And so I think that is here to stay. This is a path towards simplicity, and simplicity scales really, really well, right? Complexity will always be the killer that limits how far you can use a solution and how many problems you can solve with it. So, simplicity is a very, very important aspect here, and serverless helps you deliver that. >> And Doug, your thoughts on cloud native and serverless in terms of digital disruption? >> Great point, and there are two parts to the scalability part. The second one is the one that's more subtle, unless you're in charge of the budget.
And that is, with enough effort and enough money you can make almost any technology scale, whether it's multiple copies of it; it may take a long time to get there, but you can get there with most technologies. But what is least scalable, at least as I see it in this industry, is the people. Everybody knows we have a talent shortage, and these other ways of getting real-time analytics and scaling infrastructure for compute and database storage really take a highly skilled set of resources. And the more your company grows, the more of those you need, and that is what we really can't find. That's actually what drove our team in our last industry to even go this way; we reached a point where our growth was limited by the people we could find, and so we really wanted to break out of that. So now we have the best of both: scalable people, because we don't have to scale them, and scalable technology. >> Excellent. The best of both worlds. Isn't it great when those two things come together? Gentlemen, thank you so much for joining me on theCUBE today, talking about what Rockset and Command Alkon are doing together, better together, and what you're enabling from a supply chain digitization perspective. We appreciate your insights. >> Great. Thank you. >> Thanks, Lisa. Thanks for having us. >> My pleasure. For Doug Moore and Venkat Venkataramani, I'm Lisa Martin. Keep it right here for more coverage of theCUBE, your leader in high-tech event coverage. (upbeat music)
How Open Source is Changing the Corporate and Startup Enterprises | Open Cloud Innovations
(gentle upbeat music) >> Hello, and welcome to theCUBE presentation of the AWS Startup Showcase Open Cloud Innovations. This is season two, episode one of an ongoing series covering exciting startups from the AWS ecosystem. We're talking about innovation, and here it's open source for this theme. We do this every episode: we pick a theme and have a lot of fun talking to the leaders in the industry and the hottest startups. I'm your host John Furrier, here with Lisa Martin in our Palo Alto studios. Lisa, great series, great to see you again. >> Good to see you too. Great series, always such spirited conversations with very empowered and enlightened individuals. >> I love the episodic nature of these events; we get more stories out there than ever before. They're the hottest startups in the AWS ecosystem, which is dominating the cloud sector. And there's a lot of them really changing the game on cloud native and the enablement; the stories that are coming out here are pretty compelling, not just from startups, they're actually penetrating the enterprise, and the buyers are changing their architectures, and it's just really fun to catch the wave here. >> They are, and one of the things too about the open source community is these companies embracing that and how that's opening up their entry, to your point, into the enterprise. I was talking with several customers, companies who were talking about how 70% of their pipeline comes from the open source community, that's using the premium version of the technology. So, it's really been a very smart, strategic way into the enterprise. >> Yeah, and I love the format too. We get the keynote we're doing now, the opening keynote, some great guests. We have Serge on from the AWS Startup program; he is the global startups lead. We've got Swami coming on, and then the closing keynote with Deepak Singh, who's really grown in the Amazon organization from containers to now compute services, which now span how modern applications are being built.
And I think the big trend that we're seeing, that these startups are riding on, that big wave, is cloud native driving the modern architecture for software development. Not just startups, but existing large ISVs and software companies are rearchitecting, and the customers who buy their products and services in the cloud are rearchitecting too. So, it's a whole new growth wave coming in, the modern era of cloud some say, and it's exciting that a small startup could be the next big name tomorrow. >> One of the things that kind of was a theme throughout the conversations that I had with these different guests was, from a modern application security perspective, that security is key, but it's not just about shifting left. It's about doing so while empowering the developers. They don't have to be security experts. They need to have a developer brain and a security heart, and how those two organizations within companies can work better together, more collaboratively, but ultimately empowering those developers, which goes a long way. >> Well, for the folks who are watching this, the format is very simple. We have a keynote, editorial keynote speakers come in, and then we're going to have a bunch of companies who are going to present their story and their showcase. We've interviewed them, myself, you, Dave Vellante and Dave Nicholson from theCUBE team. They're going to tell their stories, and between the companies and the AWS Heroes, 14 companies are represented, some of them with new business models. And Deepak Singh, who leads the AWS team, he's going to have the closing keynote. He talks about the changing business model in open source, not just the tech, which has a lot of tech, but how companies are being started around the new business models around open source. It's really, really amazing. >> I bet, and does he see any specific verticals that are taking off?
Well, he's seeing the contribution from big companies like AWS and the Facebooks of the world, and large companies, Netflix, Intuit, all contributing content to open source, and then startups forming around them. So Netflix does some great work. They donate to open source, and the next thing you know a small group of people, entrepreneurs, get together and form a company, and they create a platform around it with unification and scale. So, the cloud is enabling this new super application environment, superclouds as we call them, that's emerging, and these new superclouds and super applications are scaling data-driven machine learning and AI. That's the new formula for success. >> The new formula for success also has to have that velocity that developers expect, but also the consumerization of tech has kind of driven all of us to expect things very quickly. >> Well, we're going to bring Serge Shevchenko of the AWS Global Startup program into the program. Serge is our partner. He is the leader at AWS who has been working on this program. Serge, great to see you. Thanks for coming on. >> Yeah, likewise, John, thank you for having me, very excited to be here. >> We've been collaborating on this for over a year. Again, season two of this new innovative program, which is a combination of a CUBE Media partnership and AWS getting the stories out. And this has been a real success because there's a real hunger to discover content, and in the marketplace these new solutions coming from startups are the next big thing. So, you're starting to see this going on. So I have to ask you, first and foremost, what's the AWS Startup Showcase about? Can you explain, in your terms, your team's vision behind it, and why the startup focus? >> Yeah, absolutely.
You know John, we curated the AWS Startup Showcase really to bring meaningful and oftentimes educational content to our customers and partners, highlighting innovative solutions within these themes, and ultimately to help customers find the best solutions for their use cases, which is a combination of AWS and our partners. And really, from pre-seed to IPO, John, the world's most innovative startups build on AWS. From leadership downward, we're very intentional about cultivating a vigorous AWS community, and since 2019 at re:Invent, at the launch of the AWS Global Startup program, we've helped hundreds of startups accelerate their growth through product development support, go-to-market and co-sell programs. >> So Serge, question for you on the theme of today. John mentioned our showcases having themes; today's theme is going to cover open source software. Talk to us about how Amazon thinks about open source. >> Sure, absolutely. And I'll just touch on it briefly, but I'm very excited for the keynote at the end of today, which will be delivered by Deepak, the VP of compute services at AWS. We here at Amazon believe in open source. In fact, Amazon contributes to open source in multiple ways, whether that's through directly contributing to third-party project repos, or significant code contributions to Kubernetes, Rust and other projects, all the way down to leadership participation in organizations such as the CNCF, and supporting dozens of ISVs. Myself, over the years, I've seen explosive growth when it comes to open source adoption. I mean, look at projects like Checkov: within 12 months of launching their open source project, they had about a million users. And another great example is Falco: within under a decade they've had about 37 million downloads, and that's about a 300% increase since it became an incubating project in the CNCF. So, very exciting things that we're seeing here at AWS. >> So explosive growth, lot of content.
What do you hope that our viewers and our guests are going to be able to get out of today? >> Yeah, great question, Lisa. I really hope that today's event will help customers understand why AWS is the best place for them to run open source, commercial and partner solutions, and which partner solutions will help them along their journey. I think that today, the lineup of partner solutions, with Deepak at the end delivering the closing keynote, is going to present a very valuable narrative for customers and startups in selecting where and which projects to run on AWS. >> That's great stuff, Serge. We'd love to have you on again, and I want to just really congratulate your team; we enjoy working with them. We think this showcase does a great service for the community. It's kind of open source in its own way, if I can put it that way, contributing and working out there, but you're really getting the voices out at scale. We've got companies like Armory, Kubecost, Sysdig, Tidelift, Codefresh. I mean, these are some of the companies that are changing the game. We even had Patreon, a customer, and one of the partners, Snyk, with security, all the big names in the startup scene. Plus AWS' Deepak, and Swami is going to be on, and the AWS Heroes. I mean, really at scale, and this is really great. So, thank you so much for participating and enabling all of this. >> No, thank you to theCUBE. You've been a great partner in this whole process, very excited for today. >> Thanks Serge, really appreciate it. Lisa, what a great segment that was, kicking off the event. We've got a great lineup coming up. We've got the final keynote, a fireside chat with Deepak Singh, a big name at AWS, but Serge and the startup showcase, really innovative. >> Very innovative, and in a short time period. He talked about the launch of this at re:Invent 2019. They've helped hundreds of startups. We've had over 50, I think, on the showcase in the last year or so, John.
So we've really gotten to cover a lot of great customers, a lot of great stories, a lot of great content coming out of theCUBE. >> I love the openness of it. I love the scale, the storytelling. I love the collaboration. A great model, Lisa, great to work with you. We also have Dave Vellante and Dave Nicholson interviewing. They're not here, but let's kick off the show. Let's get started with our next guest, Swami, a leader at AWS. Swami just got promoted to VP of Database, and he also ran machine learning and AI at AWS. He is a leader. He's the author of the original DynamoDB paper, which is celebrating its 10th anniversary and really impacted distributed computing and open source. Swami has introduced many open source aspects of products within AWS and has been a leader on the engineering side for many, many years at AWS, from an intern to now an executive. Swami, great to see you. Thanks for coming on our AWS Startup Showcase. Thanks for spending the time with us. >> My pleasure, thanks again, John. Thanks for having me. >> I wanted to ask, if you don't mind, about the database market over the past 10 to 20 years. Cloud and application development, as you've seen, have changed a lot. You've been involved in so many product launches over the years. Cloud and machine learning are the biggest waves happening, to your point, in what you're doing now. Software is under the covers powering it all, infrastructure is code, and open source has been a big part of it and continues to grow and change. Deepak Singh from AWS talks about the business model transformation, how Netflix donates to open source, then a company starts around it and creates more growth. Machine learning and all the open source conversations around automation matter to developers and builders, as cloud and machine learning become the key pistons in the engine. This is a big wave. What's your view on this? How has cloud scale and data impacted the software market?
>> I mean, that's a broad question. So I'm going to break it down to kind of give some of the background on how we are thinking about it. First, I'd say when it comes to open source, I'll start off by saying that the longevity and viability of open source are very important to our customers, and that is why we have been a significant contributor and supporter of these communities. I mean, there are several efforts in open source, even internally, by actually open sourcing some of our key Amazon technologies like Firecracker or Bottlerocket or our CDK, to help advance the industry. For example, CDK itself provides a really powerful way to build and configure cloud services as well. And we also contribute to a lot of existing open source projects: OpenTelemetry, Linux, Java, Redis, Kubernetes, Grafana, Kafka, the Robot Operating System, Hadoop, Lucene and so forth. I can go on and on, but even now, I'd say in the database and observability space, and in machine learning, we have always started by embracing open source in a big, material way. If you look even at deep learning frameworks, we championed MXNet and some of the core components, we open sourced our AutoML technology AutoGluon, and we've also open sourced and collaborated with partners like Facebook Meta on PyTorch on some major components there, as well as our edge compiler work. So, I would say the number one thing is, we actually are very, very excited to partner with the broader community on problems that really matter to the customers, and to ensure that they are able to get amazing benefit from this. >> And I see machine learning is a huge thing. If you look at how cloud grew, when you wrote the DynamoDB paper, that was the beginning of what I call the cloud surge. It was the beginning of not just being a resource versus building a data center, certainly a great alternative.
Every startup did it. That's history, phase one, an inning and a half, the first half inning. Then it became large scale. Machine learning feels the same way now. You're seeing a lot of people using it, a lot of people playing around with it. It's evolving. It's been around as a science, but combined with cloud scale, this is a big thing. How should people who are in the enterprise think about machine learning? How have some of your top customers thought about machine learning as they refactor their applications? What are some of the things that you can share from your experience and journey here? >> I mean, one of the key things I'd say, just to set some context on scale and numbers: more than one and a half million customers use our database, analytics or ML services end to end, and machine learning services and capabilities are used by more than a hundred thousand customers, at a really good scale. However, in Amazon we tend to use the phrase, "It's day one in the age of the internet," but I would say in the world of machine learning, yes, it's day one, but I also think we just woke up and we haven't even had a cup of coffee yet. That's really how early it is. And it's interesting, you've compared it to where cloud was like 10, 12 years ago. In those early days, when I used to talk to engineering leaders who were running their own data centers, and we talked about cloud and various disruptive technologies, I still used to get asked about why cloud, and the basics, and whatnot. Whereas now with machine learning, almost every CIO and CEO, none of them ever asks me why machine learning. Instead, the number one question I get is, how do I get started with it? What are the best use cases? Which is great, and this is where I always tell them one of the learnings that we actually learned in Amazon.
So again, a few years ago, probably seven or eight years ago, Amazon itself realized as a company the impact of what machine learning could do in terms of changing how we actually run our business, and what it means to provide better customer experience, optimize our supply chain and so forth. We realized that we need to help our builders learn machine learning, and help even our business leaders understand the power of machine learning. So we did two things. One, from a bottom-up level, we built what I call the machine learning university, which is run by my team. It's literally staffed with professors and teachers who offer curriculum to builders so that they get educated on machine learning. And from a top-down level, in our yearly planning process, which we call the operational planning process, where we write Amazon-style narratives, six pages, and then answer FAQs, we ask everyone to answer one question: how do you plan to leverage machine learning in your business? And typically, when someone says, "It really doesn't apply to us," it usually doesn't go well, so we kind of politely encourage them to do better and come back with a better answer. This dynamic of top-down and bottom-up changed the conversation, and we started seeing more and more measurable growth. And these are some of the things you're starting to see more and more among our customers too. They see the business benefit. But to address the talent gap, we also made the machine learning university curriculum open source and freely available. And we launched SageMaker Studio Lab, which is a no-cost, no-setup SageMaker notebook service for learners and students as well. And we're excited to also announce an AI and ML scholarship for underrepresented students. So, there's so much more we can do. >> Well, congratulations on the DynamoDB paper.
That's the 10-year anniversary of a revolutionary product that changed the game, did change the world, and had a huge impact. And now, as machine learning goes to the next level, the next intern out there is at school with machine learning. They're going to be writing that next paper. Your advice to them, real quick? >> My biggest advice is: I always encourage all the builders to dream big, and don't be hesitant to speak your mind, as long as you have the conviction that you're addressing a real customer problem. So when you feel like you have an amazing solution to a customer problem, take the time to articulate your thoughts better, and then feel free to speak up and communicate to the folks you're working with. And I'm sure any company that nurtures good talent and knows how to hire and develop the best will be willing to listen, and then you will be able to have an amazing impact in the industry. >> Swami, great to know you, a CUBE alumni. We love our conversations, from intern, to the DynamoDB paper, to technical leader at AWS in databases, analytics and machine learning. Congratulations on all your success, and continue innovating on behalf of the customers and the industry. Thanks for spending the time here on theCUBE and our program, appreciate it. >> Thanks again, John. Really appreciate it. >> Okay, now let's kick off our program. That ends the keynote track here on the AWS Startup Showcase, season two, episode one. Enjoy the program, and don't miss the closing keynote with Deepak Singh. He goes into great detail on the changing business models and all the exciting open source innovation. (gentle bright music)
Linda Jojo, United Airlines | AWS re:Invent 2021
(upbeat music) >> Okay, welcome back everyone to theCUBE's coverage of AWS re:Invent 2021. This is theCUBE. I'm John Furrier, my co-host Lisa Martin here, with some keynote guests who are on the big stage here at re:Invent: Linda Jojo, Chief Digital Officer at United Airlines. Thanks for coming on. >> Hey, great to be here. Thanks for having me. >> So up on the big stage, a big transformation story, in front of 27,000 people and the virtual audience. >> Linda: That many? >> That's the number. >> It's a big room. >> Pretty small for Amazon Web Services, nearly 60,000, but you know, pandemic and all, but great presentation. What was the transformation story for United? >> Well, I think there's two parts of the story. One is just how fast everything happened, you know. February of 2020, we're having a kickoff meeting with AWS about how we're going to really transform the airline, and a month later the world shut down. And so it changed; we went from thinking about the future to really just trying to make it through the next few weeks. But as soon as that happened, we knew that we had to take advantage of the crisis and think about everything, from what we can do with our onboard products, we've changed out a lot of things about our airplanes, we've doubled down on sustainability, we're really focused on the diversity of our workforce, but also we really said, what can we do about transforming our technology? And that's where AWS came in, because one of the silver linings for our tech team was that we didn't always have a plane in the air. And when that happens, we had time to make a change and back it out if it doesn't work, or heaven forbid, have an outage. We had a little bit longer. So we got aggressive, and we made a lot of changes and moved a lot to the AWS Cloud. >> Talk to me a little bit about the cultural shift involved. I mean, you talked about, you know, everybody was just scrambling. >> Yeah. So quickly, there was this instant, what do we do?
How do we pivot? How do we survive? But from a cultural perspective, it sounds like you leveraged the situation to make a lot of improvements across United, but culturally that's challenging, getting all those folks on board at the same time. How did you facilitate that? >> Well, you know what, the story I'm going to tell isn't all just about me. It's about the incredible team that we have, but you know, folks got focused. And Amazon talks about having a two-pizza team, how your team should be no bigger than what can be fed by two pizzas, and that really keeps the decision-making streamlined and fast. For us, since we were now all working from home, we called it a one-screen team. And so the idea was no more than the number of people that could fit on one screen of that video call. So that was the number of people that we had on our teams. We even branded them, called them scrappy teams, which was really kind of fun. And those are the groups that just kind of got their job done. And you know, the first part of their job, every week or every day it seemed, was that we were getting new rules from the U.S. government about what countries you couldn't fly to. And it was chaotic. It was confusing for customers, and frankly, that one-screen team, they were up like every night making modifications to who could check in online and who couldn't. And we said, when it's time to open back up, we've got to do this better. And so that group came up with something we now call the Travel-Ready Center, which is really pretty incredible. What you can do now is, first of all, when you book your flight, we'll tell you what you need to fly: you need this type of a COVID test, this many days in advance; this is what fully vaccinated means in the country you're going to; and this is the kind of vaccine card we need to see. You upload it all.
We use Amazon SageMaker, and we have machine learning models that now, within seven seconds, will validate that you're ready to fly. And what that means is, just like always, you can get your boarding pass before you get to the airport. Now, if you guys travel a lot, I hope you still do, >> Yeah. what that means is that you can actually bypass the lobby of the airport and all the document checking that's going on, because you're travel ready. So customers love it. Gate agents love it too, because for gate agents the rules are changing so fast, you know, and they work the flight to Tel Aviv one day and the flight to Paris the next, and the rules are different, and maybe in between they changed. So having the software actually figure that out is what helps. >> So very dynamic, and new innovations popped out of this pandemic. What else did Amazon help you with? Were there other Amazon innovations that you guys gravitated to? SageMaker was one; what were some of the others?
So now we weren't limited by the number of stations at the gate. The next thing is that we made it QR-code enabled. And now what customers can do is scan the QR code, and they get a live agent, like a FaceTime call, on their phone. They can do it from anywhere: from their seat, at the gate, or in line for a coffee, and they can solve their problem right there. And those agents, by the way, now maybe there's a snowstorm going on in Chicago, but the agents are in Houston where it's sunny. And so we can actually leverage the fact that those agents are there to help our customers.
Our flight attendants learned how to deescalate the situation and deal with it on the ground. So it's very simple. If you're not wearing a mask, a flight attendant asks you nicely. If you still don't put your mask on, they just give you a little card that says, by the way, if you don't put your mask on, this is going to be your last United flight. And the vast majority of customers put their masks on. So we have not seen some of that level of stress that's happened on some other airlines. >> That's key, 'cause it's been pretty rampant. But the fact that you're making things much more accessible, and in real time. I think another thing we learned during the pandemic is that real time is no longer a nice-to-have. It's essential. We have this expectation as consumers, whether we're flying or we're buying something from an online retailer, that we're going to be able to get whatever we want in the palm of our hand. >> Yeah. Well, we're very proud of our mobile app, but what we like to say is that you're not comparing our mobile app to another airline's mobile app. You're comparing it to the last app you probably used, and that might've been the Amazon app. So we have to be as good as the Amazon app, but we have a lot of legacy technology behind it. And so we have really focused on that. >> Good. I want to ask you, 'cause you're a Chief Digital Officer, and this comes up in a lot of our CUBE conversations around the digital side: obviously with the virtual, now hybrid, things, new innovations have happened. So I have to ask you, what's changed for the better that's going to stay around, and what might not be around, that you've learned from the pandemic? Because these new things are emerging: new standards, new protocols, new digital experiences. What have you learned that's going to stay around, and what kind of went away? >> Yeah.
>> Well, I think nothing tells you how important your customers are like standing in the middle of O'Hare and not seeing any. And that's what happened in April of 2020, when there was actually a day that month that we had more pilots than passengers. So you realize it's really all about the customer. And what we have to do is make sure that customers choose us. There might be fewer reasons to fly to certain places all the time, but when you do fly, we want you to pick United. And so it's got to be more than just where we fly. It's got to be the experiences you have with the people, and we have to use the technology to make it easier. I mean, touchless wasn't really a thing. QR codes are back; I mean, they were gone, right? And we have QR codes on everything now, 'cause you want to get through that airport without having to touch anything, and you do that with your mobile app. >> Yeah. Great innovations. >> It is a great innovation. That contactless experience is key. You talk about QR codes coming back, and just some of the silver linings, and frankly there have been some in the last 22 months or so: being able to have that experience that's tailored to me as a consumer. >> Right. I don't need to know what's under the hood enabling it. I just know I want to be able to make transactions or find whatever I need in the palm of my hand, 24/7. >> Yeah. And you know, for airlines, it usually comes back to something going wrong, and frankly, there's always something that's not going quite right. There's a weather delay somewhere, or maybe your bag didn't get on the same flight you did. And so we want to give you transparency into that and control over what you can do: make it easier to rebook, make you understand what the situation is, be very transparent about it. And we even have something called Connection Saver.
And what we do with that is we actually use real-time data analytics. We say, there's a person that's arriving late, and then, with real-time weather and real-time connection data, we say, can we hold that flight for Lisa? And we, yeah. (laughing) The worst thing is when that door closes; you run all the way through the airport and they've closed the door, right? We don't want to do that, and gate agents don't like doing that either. And so we use calculations that say, you know, the wind is blowing in the right direction, the pilots can make up the time, there isn't anybody on the other side that's going to miss a connection. And so about 2,000 times a day, we hold a connection for our customer. >> That's key. Sometimes you just have to stay overnight if you miss that connection. >> Especially on the last flight of the day we'll be very generous, because that doesn't do anybody any good. >> Well, great story. I love the keynote. Cloud has changed. I have to ask you, this year at re:Invent, what's your observation on the cloud, as the cloud continues to expand, as Adam is talking about? How do you guys see the cloud evolving for United? >> Well, you know, I think what's really impressive here is everybody is coming from every industry. It's not one or two industries that are here as early adopters. It really is what you have to do to survive. But I probably would be remiss not to say, which was really great, that there were two women on the keynote stage and two men. So we were at 50-50. Now, there are 51% women in the world, but we'll take it. And in all seriousness, I do think that there's a lot more diversity here, and I think that's good, not just for AWS. That's good for everybody. >> I couldn't agree more. That was one of the first things I noticed this morning when you took the keynote stage: a strong female leader, before you even started telling the story.
And that's something, from an optics perspective, I know that Amazon is really keen on, but it's nice to hear from your perspective as well that there's that diversity. There's also that thought diversity, when you have different perspectives coming into play, because there are so many dynamics going on these days. But I have to ask you one question. We talk about every company these days being a data company, being a digital company, needing to be competitive. >> Right. Do you think of United, should we be thinking about United, as a digital-first company? >> Well, we connect people, right? We are physically moving people from one destination to another, and they really want to get there. So we're not going to always be digital, but I would tell you that I often speak with our Chief Customer Officer and our Chief Operating Officer, and it's really hard for us to talk about anything without talking about technology, or how it impacts the operation, or how it impacts our customer. It's really meshing together, for sure. >> Great stuff, Linda, thanks for coming on theCUBE. Really appreciate it. United Airlines' Chief Digital Officer on the main stage here at re:Invent, and now on theCUBE. I'm John Furrier, with Lisa Martin. You're watching theCUBE, the tech leader in event coverage. Thanks for watching. (upbeat music)
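As an editorial aside: the Connection Saver decision Linda walks through in this segment (favorable winds, recoverable time en route, no downstream misconnects) can be sketched in a few lines. The function, thresholds and field names below are hypothetical, chosen purely to illustrate the shape of the hold-or-go calculation, not United's actual model:

```python
def should_hold(flight, late_passengers):
    """Toy version of the hold-the-door decision: hold only if the delay
    is small enough, the crew can make up the time en route, and nobody
    already on board would miss a downstream connection because of the wait."""
    hold_minutes = max(p["minutes_late"] for p in late_passengers)
    if hold_minutes > flight["max_holdable_minutes"]:
        return False  # delay too big to absorb at the gate
    if flight["makeup_minutes"] < hold_minutes:
        return False  # winds and schedule padding can't recover the time
    if flight["tightest_onward_connection"] - hold_minutes < 30:
        return False  # someone on board would misconnect on the other side
    return True

flight = {
    "max_holdable_minutes": 15,        # policy cap on holding the door
    "makeup_minutes": 12,              # recoverable en route (e.g. tailwind)
    "tightest_onward_connection": 95,  # slimmest onward connection on board
}
print(should_hold(flight, [{"minutes_late": 10}]))  # True: hold for Lisa
```

In the real system these inputs would come from live weather and connection feeds; the sketch just shows how a handful of real-time signals can combine into a single hold/no-hold answer, roughly 2,000 times a day by Linda's figure.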
Scott Weber
(gentle music) >> Hello everyone, and welcome back to day two of AWS re:Invent 2021, theCUBE's continuous coverage. My name is Dave Vellante, I'm here with my co-host, David Nicholson. We've got two sets, and we had two remote sets prior to the show. We're running all kinds of activities, and we've got AWS executives, partners, and ecosystem technologists. Scott Weber is here, a director and AWS Partner Ambassador from PwC. Scott, good to see you. >> Nice to meet you guys. Thanks for letting me be here. >> Well, so your expertise is around application modernization. It's a hot theme these days. If you're a company with a lot of legacy debt, you've got a big, complex application portfolio. I would think, especially with the forced march to digital over the last year and a half, two years, now is really the time to start thinking about rationalizing your portfolio, if you haven't already. What are you seeing in this space? >> Definitely, we're seeing the customers that have reached that point. I view modernization as sort of the second wave of cloud that's coming. So you had your first wave, the early adopters that lifted and shifted into the cloud. We still have people looking at getting into the cloud, but for those that went early, now they're saying, "How do I get more out of the cloud? How do I get closer to cloud native?" And that's what we're starting to see around this modernization move: I want to start to utilize those higher-level services from AWS and the cloud providers, I want to get a better return, I want to stop worrying about running infrastructure and hardware. >> So when you think about it, I go all the way back to Y2K. That was like a boondoggle for IT to spend a bunch of dough and do some cool stuff, and then of course the dot-com bubble crashed. But today it's different. It's really about the business impact, the business outcome that you can drive in transforming your digital business.
So how do you, as a technology-agnostic consultant, help a company understand what they should leave alone or sunset, and what they should aggressively migrate? What's the process that you use to do that? >> In some ways we go back, we can reuse sort of those 6Rs that maybe got a customer to the cloud, or as they're on that cloud journey, right? And you really want to focus on where you can optimize ROI. And you're going to come across those things that are going to be like, look, maybe it's a vendor COTS solution. There's not a lot we can do there. You're just going to have to continue down that path, unless we can look to move that to a SaaS service. Maybe the vendor has gone to a SaaS offering. Or we get into looking at: they've done development in house, but that development is still monolithic, running on virtual machines, either in the data center or in AWS. It's a critical system to that business, and maybe it's become fragile. How can we now modernize that? Because that's where there's going to be a great return on investment for that customer, and it's also going to allow business agility for those customers. As we get them to microservices and Lambda and function as a service, the blast radius for changes becomes smaller, which allows the customer to move faster than what they're doing today. So the rationalization becomes: what's driving the business forward? What's critical to the business? But also, what's holding them back? So that the customers can start to move faster. >> So it's a formula of, okay, what's the business value of those applications, essentially? You can kind of rank that. But then there's a cost equation, which is pretty straightforward to figure out, the as-is and the to-be. But then there's speed, like an ongoing time to value from a developer standpoint, and then I guess there's risk. Have you got your crown jewels? Maybe you don't want to touch those yet. Is that kind of your algorithm?
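Dave's closing question sketches a rationalization formula: business value, current-versus-future cost, speed, and risk. Purely as an illustration (this is not PwC's actual methodology; the app names, thresholds, and rules below are invented), that kind of scoring pass over a portfolio might look like:

```python
from dataclasses import dataclass

@dataclass
class App:
    name: str
    business_value: int   # 1-5: how critical the app is to the business
    monthly_cost: float   # current run cost in USD
    is_cots: bool         # vendor off-the-shelf package?
    fragility: int        # 1-5: how much the app holds the business back

def disposition(app: App) -> str:
    """Map an app onto a rough 6R-style disposition (toy rules only)."""
    if app.business_value <= 1:
        return "retire"        # low value: stop paying for it
    if app.is_cots:
        return "repurchase"    # look for the vendor's SaaS offering
    if app.business_value >= 4 and app.fragility >= 4:
        return "refactor"      # critical but fragile: best ROI to modernize
    if app.monthly_cost > 10_000:
        return "replatform"    # trim cost with managed services
    return "rehost"            # lift and shift now, revisit later

portfolio = [
    App("ordering-portal", 5, 45_000, False, 5),
    App("hr-package", 3, 8_000, True, 2),
    App("old-reporting", 1, 2_000, False, 1),
]
plan = {a.name: disposition(a) for a in portfolio}
print(plan)  # ordering-portal -> refactor, hr-package -> repurchase, old-reporting -> retire
```

The real exercise is of course messier (data gravity, compliance, team skills all weigh in), but the shape, rank every app and map it to a disposition, is the same.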
>> It is, and on that sort of cost and value piece, that's where we can really see some interesting things happen, where as we get customers away from licensed OSes and proprietary databases, that return on investment can be huge. So we've helped customers migrate from running .NET applications on top of a typical Microsoft Windows and SQL Server stack all the way to Linux containers, or all the way to serverless if we're going to take all the steps to rewrite. You can drive 60, 70, 80% of the cost of operating that platform out of it, and then you start this flywheel effect of reinvesting that money back into the next project to help the customer move forward. >> And a quick follow-up, but I know you want to jump in. >> Yeah, yeah. >> Why wouldn't a customer that's a Microsoft customer just run that on Azure? Why AWS? >> I mean, that's a good question, and that sort of gets into a lot of philosophical discussion we could talk about for a long time. The fact of the matter is the majority of Windows workloads still run on top of AWS today. I would argue AWS has some pretty superior things in their underlying architecture, their Nitro architecture and things like that. But I think it's also choice. And the whole move of .NET to Linux, Microsoft started that: they put in the ability to run SQL Server on top of Linux. Well, if I run SQL Server on top of Linux, I take out 20% of my costs right there. They put the support in for .NET Core to be able to run on Linux or on containers, but that's to help the developers move faster, that's to help us get to microservices. So that cloud provider choice, I think, becomes a bigger discussion, but a lot of people are choosing AWS because they're not just doing Microsoft workloads. Again, we could get very deep into the trade-offs of why one over the other, but customers are choosing AWS for a lot of these reasons.
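The run-cost math Scott describes, driving 60 to 80% of a platform's operating cost out and reinvesting the savings, reduces to a simple payback calculation. The dollar figures below are hypothetical, picked only to illustrate the flywheel effect he mentions:

```python
def payback_months(rewrite_cost: float, old_monthly: float, savings_pct: float) -> float:
    """Months until cumulative run-rate savings repay the rewrite investment."""
    monthly_savings = old_monthly * savings_pct
    return rewrite_cost / monthly_savings

# Illustrative only: a $500k rewrite of a .NET/SQL Server stack that cuts a
# $60k/month run rate by 70%, the top of the range Scott cites.
months = payback_months(500_000, 60_000, 0.70)
print(round(months, 1))  # about 11.9 months; after that, the savings fund the next project
```

Past the payback point, every month of savings is budget that can be reinvested, which is the flywheel: each modernization funds the next one.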
Diversity and better cloud, better infrastructure. >> Yeah, and philosophical is an interesting way to look at it when it becomes a hostage negotiation. I'm not sure there was a lot of philosophy involved when Windows Server and SQL Server 2008 were going end of support, and people were told, move it to Azure and we'll take care of you; don't move it to Azure, you're on your own. But something on the subject of ROI: ROI is typically measured over time. How do you rectify and address the sort of CIO dilemma, which is that if ROI is delivered fantastically in four years, but the average tenure of a CIO is 2.7 years, how do you address that? What is the sweet spot for timeframes that you're seeing for people to actually implement, when you consider, as was mentioned in the keynote today, that somewhere around 15% of IT spend is in cloud today, which leaves 85% of it on premises? So what do we do about that? >> Yeah, that's a great question. So I like to get small wins. Find a very big pain point for that customer, get them some small wins, and start that flywheel effect going: you saved money here, now can we reinvest it and start to show some wins? But we've engaged in projects where we've completely rewritten a whole application stack that was the core service for a business in a year and a half, and we took them from a run rate of somewhere between $40,000 and $60,000 a month had they been running it in AWS (they were running it in a data center, so that was our estimate), down to less than $5,000 a month to run that application on a serverless platform inside of AWS. >> So when you talk about modernizing an application environment, that's typically not thought of as low-hanging fruit. So does that mean that all the low-hanging fruit has been consumed? Are all the net-new things developed in a cloud-native format, have they already been done? Is this the only frontier for opportunity now? >> No, it's not the only frontier.
I mean, there's a lot of customers that are still just trying to get into the cloud. >> Lots of applications out there? >> Yeah, and you look at things like the mainframe as well. That's, I think, a coming area where customers are finally starting to say, "Enough with the mainframe." We saw in the keynote today a new sort of service offering around helping customers rationalize how to start to do things with the mainframe. But sometimes you can get those easy wins. Like, we find a scalability issue, and we can inject scalability and pull back costs very rapidly, 'cause you run into that scenario where they're provisioned for max capacity that may happen 10% of the year, and now they're vastly overpaying. So we can still get some easy wins with slight tweaks to the platform while we help them rationalize those longer build times. I think the other thing we're starting to see is a shift to CIOs that are coming more from a software background, too, that aren't from the pure infrastructure background. And as we see those software-based CIOs start to come in, they're starting to understand the gains that can be had from making the investment in the software and those upgrades to the software. >> And their tenure is elongating, 'cause "CIO: career is over" was the joke. Now you're losing CIOs because they're going on to bigger and better things. They're getting more options. I mean, they're becoming rock stars again. I want to ask you, just as an aside, about that mainframe-compatible runtime that they announced, 'cause it sounds like you've got some experience in converting mainframes. >> Yeah. >> 'Cause I've always been a skeptic. We've seen this movie before, where people have to freeze code, they've got to freeze code for 18 months, it takes 24 months. But now it's cloud. Adam Selipsky said we can cut migration time, which is critical here, by two-thirds, 'cause that's the key: if you can reduce the time for which you have to freeze the code, or maybe not even freeze the code.
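Scott's point about scalability wins, systems provisioned for a peak that occurs maybe 10% of the year, is easy to see with toy numbers. The instance counts and flat unit cost below are made up; this is not a real AWS pricing model:

```python
def flat_vs_autoscaled(peak_units: int, base_units: int, peak_fraction: float,
                       unit_cost: float) -> tuple[float, float]:
    """Monthly spend when always provisioned for peak vs. scaling with demand."""
    flat = peak_units * unit_cost
    scaled = (peak_fraction * peak_units + (1 - peak_fraction) * base_units) * unit_cost
    return flat, scaled

# Peak demand needs 100 instances, but only 10% of the time; the baseline is 20.
flat, scaled = flat_vs_autoscaled(100, 20, 0.10, 100.0)
print(flat, round(scaled))  # 10000.0 vs roughly 2800: over 70% saved
```

The "slight tweak" Scott describes is exactly this: stop paying for the peak year-round and pay the time-weighted average instead.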
Again, I'm a skeptic, but what are you seeing in practical experience? >> So at PwC, we're seeing a lot of customers start down this path, and the ROI is pretty amazing once you get in and really start to dig into what it can be if you go down this path. And there's a lot of tools out there; there's a gentleman on our team that's a real genius with this, and he's helped multiple customers go down this path. There's tools that can start to do code conversion for you. I mean, we all get a little skeptical on those things 'cause we never know what the machine is going to try to make the code look like, but it's a starting point. But there is more. >> Like a prewash? >> Yeah, (Dave laughs) there's more and more design patterns coming out to help us down those pathways. But it goes back to agility for the business, 'cause a lot of these customers running mainframes today are looking at a six-month release cycle if they want to make any changes to their environment. If we can get them into an agile mindset and to microservices, they can get to two weeks or less for release cycles. So it's a big win for the company overall. Yes, there's risk, but I think you can try to de-risk it as much as you can: you don't take the absolute core, critical piece of that mainframe first. You start to pick away around the edges, and you get comfortable with what you're doing. >> And going back to the concept of ROI, specifically in the mainframe space, there have been some not-so-subtle nudges from the marketplace that changed the dynamics associated with staying on your mainframe. Because if I tell you that the tax to stay on your mainframe is going to triple or quadruple over the next several years, that changes the balance. So you have the old guard in the software business, who will remain nameless, jacking up the prices because they feel like, you know what, "What are you going to do? What are you going to do other than write me a cheque?"
And the answer is, "Well, move," right? >> Yep, it's reached a point where companies are moving. And what I think companies start to see, too, is, when we talk about purpose-driven databases, which Adam was talking about in the keynote today as well, and we've seen it with customers when we've done builds: what's the right database for this data? And now you can start to get things moving even faster, and you unleash new ways of thinking. And I mean, some of the vendors are doing things like that, and the companies aren't happy about it. >> Well, yes, but look, you're talking about Oracle in particular. (group chattering) That's one of them, but Oracle invests in its database, and it's two different theories. Adam today said the right tool for the right job, APIs and primitives, and Oracle takes the kind of Swiss army knife approach. But they do invest, if you have hardcore mission-critical workloads where recovery is everything. There's a risk factor involved there, but if you want to go fast and you're a developer, you're not going to necessarily knock on Oracle's door, you're going to go to AWS. But it gets to my question: having done a lot of TCO analysis, it used to be that labor was always two-thirds of the cost. Now, with automation, especially in Oracle environments, software license costs are the dominant component; it's maybe less true for SQL Server, certainly true for Db2. I remember the early days of flash, we used to tell customers, install flash, you're going to be able to consolidate and reduce your Oracle licenses when they come up. So that was a preferred strategy, but what are you seeing? First of all, is that a correct premise, that software licenses are still a big component, or an increasingly large component, and how do you unshackle from that? >> Yeah, so definitely, software licensing costs for the OSes and for the databases are huge.
I mean, there's numbers out there that, for SQL Server Enterprise, if you can get somebody off SQL Server Enterprise and get them to an open solution like Aurora Postgres or something like that, it's a 90% ROI, and the numbers are similar for Oracle. And I talk to a lot of customers who are like, "But we don't know Postgres," but it's not really that different. It's still data modeling. And when you get to these managed-services platforms like RDS and Aurora, you free up those DBAs to do the higher-value things. The ROI of a DBA is not managing memory and disk and babysitting the servers; it's helping the developers build better data models, and those sorts of things that are higher value. So it is a big thing, and we're seeing customers saying, "Help us reduce this licensing cost and help us be more efficient," because the open platforms now, especially in the relational database area, are on par in a lot of ways with the Oracles and the SQL Servers. So then you start to say, "Well, what am I gaining by paying and being sort of held hostage to these numbers?" So we definitely see customers making this transition. >> I mean, the point about Postgres is a good one, because you're going to get enterprise-class recoverability. But even EDB would say, okay, don't start with your mission-critical core; pick around the edges, just what he's saying, and over time you're going to become more cloud native. And can you get to that point where everything's cloud native, everything is a service? Maybe not 100%, but a large part of your application portfolio can get there, right? >> Yeah, you're going to find those; that goes back to doing that application tiering and evaluation and ROI. So we have a case study that we did with Constellation Brands, where they really needed a B2B-type ordering portal solution. And they looked at sort of the typical vendors and a packaged solution, if you will, a COTS-type solution.
And we proposed doing a full custom solution, soup to nuts, building it natively in AWS. And it was built completely on top of platform services; there were no servers in that environment when we were done. We were using AWS Fargate to run their containers, we were using RDS Postgres, we were using Lambda, and in some places we were using DynamoDB for holding in-flight orders. And so the whole environment is deployable from one CloudFormation template. So it completely changed how we even went through the testing of the thing, 'cause you ran the same CloudFormation template to deploy to a different environment, and you knew you were getting the exact same thing. And so they no longer had to worry about securing the underlying compute: the containers run on top of Fargate, you use a platform service for your databases, and it was a beautiful solution for them. >> Yeah, you get a taste of that and your eyes open up and you say, "Wow, what's possible?" >> Yeah, it's a game changer. >> We heard that from NASDAQ this morning, an amazing story. She said our first Amazon bill was 20 bucks. I bet it's higher now, but the first hit's free, kind of thing. But the point is, when people talk about the AWS bill, et cetera, no question, you should try to optimize that. But at the end of the day, it's about the business value, Scott, isn't it? >> Scott: Yeah, it is. >> Hey, thanks so much for coming on theCUBE. Great perspectives. >> No, thank you guys. I appreciate you having me on. >> Thank you very much. >> Keep it right there, Dave Nicholson and I will be right back. You're watching theCUBE's coverage of AWS re:Invent 2021. (gentle music)
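The single-template pattern in the Constellation Brands story, one CloudFormation template parameterized per environment, can be sketched roughly as below. The parameter names, sizes, and stack naming are hypothetical, and the actual CreateStack call (e.g. via boto3) is left commented out because it needs credentials and a real template body:

```python
def stack_parameters(env: str) -> list[dict]:
    """Build the CloudFormation parameter list for one environment."""
    sizes = {"dev": "small", "test": "small", "prod": "large"}
    return [
        {"ParameterKey": "Environment", "ParameterValue": env},
        {"ParameterKey": "InstanceSize", "ParameterValue": sizes[env]},
    ]

for env in ("dev", "test", "prod"):
    params = stack_parameters(env)
    # The same template body deploys every environment; only parameters differ:
    # boto3.client("cloudformation").create_stack(
    #     StackName=f"ordering-portal-{env}",
    #     TemplateBody=template,
    #     Parameters=params,
    # )
    print(env, params[1]["ParameterValue"])
```

Because every environment comes from the same template, a stack that passes testing is structurally identical to what ships, which is the testing benefit Scott describes.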
Ranga Rajagopalan & Stephen Orban
(Techno music plays in intro) >> We're here with theCUBE covering Commvault Connections 21, and we're going to look at the data protection space and how cloud computing has advanced the way we think about backup, recovery, and protecting our most critical data. We're joined by Ranga Rajagopalan, who is the Vice President of Products at Commvault, and Stephen Orban, who's the General Manager of AWS Marketplace & Control Services. Gents! Welcome to theCUBE. Good to see you. >> Thank you, always a pleasure to see you, Dave. >> Dave, thanks for having us. Great to be here. >> You're very welcome. Stephen, let's start with you. Look, the cloud has become a staple of digital infrastructure. I don't know where we'd be right now without being able to access enterprise services, IT services, remotely. But specifically, how are customers looking at backup and recovery in the cloud? Is it a kind of replacement for existing strategies? Is it another layer of protection? How are they thinking about that? >> Yeah, great question, Dave, and again, thanks for having me. And I think, you know, look, if you look back 15 years, when the founders of AWS had the hypothesis that many enterprises, governments, and developers were going to want access to on-demand, pay-as-you-go IT resources in the cloud, none of us would have been able to predict that it would mature and become the staple that it has today over the last 15 years. But the reality is that a lot of these enterprise customers, many of whom have been doing their own IT infrastructure for the last 10, 20, or more years, do have to figure out how they deal with the change management of moving to the cloud. And while a lot of our customers will initially come to us because they're looking to save money on costs, almost all of them decide to stay and go big because of the speed at which they're able to innovate on behalf of their customers.
And when it comes to storage and backup, that just plays right into where they're headed, and there's a variety of different techniques that customers use. Whether it be, you know, a lift and shift for a particular set of applications or a data center, where they very much look at how they can replace the backup and recovery they have on premises with the cloud, using solutions like what we're partnering with Commvault to do. Or completely re-imagining their architecture for net-new developments, so that they can really move quickly for their customers, completely developing something brand new, where it really is a brand-new replacement and innovation over what they've done in the past. >> Great, thank you, Stephen. Ranga, I want to ask you about the D-word, digital. Look, if you're not a digital business today, you're basically out of business. So my question to you, Ranga, is how have you seen customers change the way they think about data protection during what I call the forced march to digital over the last 18, 19 months? Are customers thinking about data protection differently today? >> Definitely, Dave, and thank you for having me, and Stephen, pleasure to join you on this CUBE interview. First, going back to Stephen's comments, I can't agree more. Almost every business that we talk with today has a cloud-first strategy, a cloud transformation mandate. And, you know, the reality is, back to your digital comment, there are many different paths to the hybrid multi-cloud, and different customers are at different parts of the journey. So as Stephen was saying, most often customers, at least from a data protection perspective, start the conversation thinking, hey, I have all these tapes, can I start using cloud as my air-gapped, long-term retention target? And before they realize it, they start moving their workloads into the cloud, and none of the backup and recovery requirements go away.
So you need to continue protecting workloads in the cloud, which is where cloud-native data protection comes in. And then they start innovating around DR: can I use cloud as my DR site, so that, you know, I don't need to maintain another site? So this is all around us; cloud transformation is all around us. And the real essence of this partnership between AWS and Commvault is essentially to drive and simplify all the paths to the cloud, regardless of whether you're going to use it as a storage target, your production data center, or your DR (disaster recovery) site. >> Yeah. So really, it's about providing that optionality for customers. I talk to a lot of customers who said, hey, our business resilience strategy was really too focused on DR. I've talked to customers at the other end of the spectrum who said, we didn't even have a DR strategy; now we're using the cloud for that. So it's really all over the map, and you want that optionality. So Stephen, >> (Ranga cuts in) >> Go ahead, please. >> And sorry. Ransomware plays a big role in many of these considerations as well, right? Like, it's unfortunately not a question of whether you're going to be hit by ransomware. It's almost become, what do you do when you're hit by ransomware? And the ability to use the cloud scale to immediately bring up the resources, to use the cloud for backups, has become a very popular choice, simply because of the speed with which you can bring the business back to normal operations, the agility and the power that cloud brings to the table. >> Yeah. Ransomware is scary. You don't even need a high school diploma to be a ransomware-ist. You could just go on the dark web and buy ransomware as a service and do bad things. And hopefully you'll end up in jail. Stephen, we know about the success of the AWS Marketplace. You guys are partnering here. I'm interested in that partnership, you know, kind of where it started and how it's evolving. >> Yeah.
And happy to expand on that. So look, when the founders of AWS started AWS, as I said, 15 years ago, we realized very early on that while we were going to be able to provide a number of tools for customers to have on-demand access to compute, storage, networking, and databases, many customers, particularly enterprise and government customers, still use a wide range of tools and solutions from hundreds, if not in some cases thousands, of different partners. I mean, I talk to enterprises who literally use thousands of different vendors to help them deliver those solutions for their customers. So almost 10 years ago (we're almost at our 10-year anniversary for AWS Marketplace), we launched the first instantiation of AWS Marketplace, which allowed builders and customers to find, try, buy, and then deploy third-party software solutions running on Amazon Machine Images, also known as AMIs, natively, right in their AWS cloud accounts, to complement what they were doing in the cloud. And over the last nearly 10 years, we've evolved quite a bit, to the point where we support software in multiple different packaging types, whether it be Amazon Machine Images, containers, machine learning models, and of course SaaS and the rise of software as a service, so customers don't have to manage the software themselves. But we also support data products through the AWS Data Exchange, and professional services for customers who want help integrating the software into their environments. And we now do that across a wide range of procurement options. So what used to be pay-as-you-go Amazon Machine Images now includes multiple different ways to contract directly. The customer can do that directly with the vendor, with their channel partner, or using kind of our public e-commerce capabilities.
And we're super excited that over the last couple of months we've been partnering with Commvault to get their industry-leading backup and recovery solutions listed on AWS Marketplace, which is available for our collective customers now. So not only do they have access to Commvault's awesome solutions to help them protect against ransomware, as we talked about, and to manage their backup and recovery environments, but they can find and deploy that directly, in one click, right into their AWS accounts, and consolidate their billing relationship right on the AWS invoice. And it's been awesome to work with Ranga and the product teams at Commvault to really expose those capabilities, where Commvault's using a lot of different AWS services to provide a really great native experience for our collective customers as they migrate to the cloud. >> Yeah, the Marketplace has been amazing. We've watched it evolve over the past decade, and it's a key characteristic of cloud. Everybody has a cloud today, right? Ah, we're a cloud too. But Marketplace is unique in that it's the power of the ecosystem versus the resources of one. And Ranga, I wonder if, from your perspective, you could talk about the partnership with AWS from your view. And specifically, you've got some hard news; if you could, talk about that as well.
When you look at huge customers like Coca-Cola, who have standardized on AWS and Commvault, that is the scale that they want to operate on. They manage around one through 3,000 snapshots, 1200 easy, two instances across six regions, but with just one resource dedicated for the data management strategy, right? So that's where the real built-in integration comes into play. And we've been extending it to make use of the cloud efficiencies like power management and auto-scale, and so on. Another aspect is our commitment to a radically simple customer experience. And that's, you know, I'm sure Stephen would agree. It's a big mantra at AWS as well. That's really, together, the customer demand that's brought us together to introduce combo into the AWS Marketplace, exactly the way Stephen described it. Now the hot announcement is calmer, backup and recovery is available in AWS Marketplace. So the exact four steps that Stephen mentioned: find, try, buy, and deploy everything simplified to the Marketplace so that our AWS customers can start using our more backup software in less than 20 minutes. A 60 day trial version is included in the product through Marketplace. And, you know, it's a single click buy. We use the cloud formation templates to deploy. So it becomes a super simple approach to protect the AWS workloads. And we protect a lot of them starting from EC2, RDS DynamoDB, DocumentDB, you know, the, the containers, the list just keeps going on. So it becomes a very natural extension for our customers to make it super simple, to start using Commvault data protection for the AWS workloads. >> Well, the Commvault stack is very robust. You have an extremely mature stack. I want to, I'm curious as to how this sort of came about? I mean, it had to be customer driven, I'm sure. When your customers say, hey, we're moving to the cloud, we had a lot of workloads in the cloud. We're a Commvault customer, that intersection between Commvault and AWS customer. 
So again, I presume this was customer driven, but maybe you can give us a little insight and add some color to that, Ranga. >> Everything, you know, in this collaboration has been customer driven. We were earlier talking about the multiple paths to cloud, and a very good example, and Stephen might probably add more color from his own experience at Dow Jones, but I'll bring up a reference: Parsons, who's, you know, a civil engineering leader. They started with a cloud-first mandate saying, we need to start moving all our backups to the cloud, but we worried that bad actors might find it easy to go and access the backups. AWS and Commvault came together with AWS security features, and Commvault brought in its own authorization controls. And now we have moved more than 14 petabytes of backup data into the cloud, and it's set up such that not even the backup administrators can go and touch the backups without multiple levels of authorization, right? So the customer need, whether it is from a security perspective, a performance perspective, or in this case a simplicity perspective, is really what is driving us, and the need came exactly like that. There are many customers who have now standardized on AWS; they want to find everything related to this in the Marketplace. They want to use their existing, you know, AWS contracts and also bring data strategy as part of that. So that's the real driver behind this. Stephen and I were hoping that we could actually announce some of the customers that have actively started using it. You know, many notable customers have been behind this innovation. And Stephen, I don't know if you wanted to add more to that. >> I would just add, Dave, you know, if I look back before I joined AWS seven years ago, I was the CIO at Dow Jones. And I was leading a fairly big cloud migration there over a number of years.
And one of the impetuses for us moving to the cloud in the first place was when Hurricane Sandy hit, we had a real disaster recovery scenario in one of our New Jersey data centers. And we had to act pretty quickly. Commvault was part of that solution. And I remember very clearly, even back then, back in 2013, there being options available to help us accelerate our move to the cloud. And just to reiterate some of the stuff that Ranga was talking about, you know, Commvault's done a great job over the last, more than a decade, taking features from things like EBS, and S3, and EC2, and some of our networking capabilities, and embedding them directly into their services so that customers are able to, you know, more quickly move their backup and recovery workloads to the cloud. So each and every one of those features is a result of, I'm sure, Commvault working backwards from their customer needs, just as we do at AWS. And we're super excited to take that to the next level, to give customers the option to then also buy that right on their AWS invoice on AWS Marketplace. >> Yeah. I mean, we're going to have to leave it there. Stephen, you've mentioned this several times, the early days of AWS. Back then we were talking about gigabytes and terabytes, and now we're talking about petabytes and beyond. Guys, thanks so much. We really appreciate your time and sharing the news with us. >> Dave, thanks for having us. >> All right, keep it right there, more from Commvault Connections 21, you're watching theCUBE.
Venkat Venkataramani, Rockset & Carl Sjogreen, Seesaw | AWS Startup Showcase
(mid tempo digital music) >> Welcome to today's session of theCUBE's presentation of the AWS Startup Showcase. This is New Breakthroughs in DevOps, Data Analytics, and Cloud Management Tools. The segment is featuring Rockset, and we're going to be talking about data analytics. I'm your host, Lisa Martin, and today I'm joined by one of our alumni, Venkat Venkataramani, the co-founder and CEO of Rockset, and Carl Sjogreen, the co-founder and CPO of Seesaw Learning. We're going to be talking about the fast path to real-time analytics at Seesaw. Guys, thanks so much for joining me today. >> Thanks for having us. >> Thank you for having us. >> Carl, let's go ahead and start with you. Give us an overview of Seesaw. >> Yeah, so Seesaw is a platform that brings educators, students, and families together to create engaging learning experiences. We're really focused on elementary-aged students, and have a suite of creative tools and engaging learning activities that helps get their learning and ideas out into the world and share that with family members. >> And this is used by over 10 million teachers and students and family members across 75% of the schools in the US and 150 countries. So you've got a great big global presence. >> Yeah, it's really an honor to serve so many teachers and students and families. >> I can imagine even more so now with remote learning being such a huge focus for millions and millions across the country. Carl, let's go ahead and get the backstory. Let's talk about data. You've got a ton of data on how your product is being used, across millions of data points. Talk to me about the data goals that you set prior to using Rockset. >> Yeah, so, as you can imagine with that many users interacting with Seesaw, we have all sorts of information about how the product is being used, which schools, which districts, what those usage patterns look like.
And before we started working with Rockset, a lot of our data infrastructure was really custom built and cobbled together a bit over the years. We had a bunch of batch jobs processing data, and we were using some tools, like Athena, to make that data visible to our internal customers. But we had a very, sort of, disorganized data infrastructure that, really, as we've grown, we realized was getting in the way of helping our sales and marketing and support and customer success teams really service our customers in the way that we wanted to. >> So operationalizing that data to better serve internal users like sales and marketing, as well as your customers. Give me a picture, Carl, of those key technology challenges that you knew you needed to solve. >> Yeah, well, at the simplest level, just understanding how an individual school or district is using Seesaw, where they're seeing success, where they need help, is a critical question for our customer support teams and frankly for our school and district partners. A lot of what they're asking us for is data about how Seesaw is being used in their school, so that they can help target interventions. They can understand where there is an opportunity to double down on where they are seeing success. >> Now, before you found Rockset, you did consider a more traditional data warehouse approach, but decided against it. Talk to me about the decision. Why was a traditional data warehouse not the right approach? >> Well, one of the key drivers is that we are heavy users of DynamoDB. That's our main data store and has been a tremendous aid in our scaling. Last year we scaled, with the transition to remote learning, most of our metrics by 10X, and Dynamo didn't skip a beat. It was fantastic in that environment.
But when we started really thinking about how to build a data infrastructure on top of it, using a sort of traditional data warehouse, a traditional ETL pipeline, it was going to require a fair amount of work for us to really build that out on our own on top of Dynamo. And one of the key advantages of Rockset was that it was basically plug and play for our Dynamo instance. We turned Rockset on, connected it to our DynamoDB, and were able within hours to start querying that data in ways that we hadn't before. >> Venkat, let's bring you into the conversation. Let's talk about the problems that you're solving for Seesaw and also the complementary relationship that you have with DynamoDB. >> Definitely. I think, Seesaw, big fan of the product. We have two kids in elementary school that are active users, so it's a pleasure to partner with Seesaw here. If you really think about what they're asking for, what Carl's vision was for their data stack, the way we look at it is business observability. They have many customers and they want to make sure that they're doing the right thing and servicing them better. And all of their data is in a very scalable, large-scale NoSQL store like DynamoDB. So it makes it very easy for you to build applications, but it's very, very hard to do analytics on it. Rockset comes with all batteries included, including real-time data connectors, with Amazon DynamoDB. And so literally you can just point Rockset at any of your Dynamo tables, and even though it's a NoSQL store, Rockset will in real time replicate the data and automatically convert it into fast SQL tables for you to do analytics on. And so within one to two seconds of data getting modified or new data arriving in DynamoDB from your application, it's available for query processing in Rockset with full-featured SQL.
And not just that. I think another aspect that was very important for Seesaw is that they didn't just want to do batch analytics. They wanted their analytics to be interactive, because a lot of the time we just say something is wrong. It's good to know that, but oftentimes you have a lot more follow-up questions. Why is it wrong? When did it go wrong? Is it a particular release that we did? Is it something specific to the school district? Are they trying to use some part of the product more than other parts of the product and struggling with it? Or anything like that. It's really, I think it comes down to Seesaw's and Carl's vision of what that data stack should serve, and how we can use that to better serve the customers. And Rockset's indexing technology allows you to get not only real-time in terms of data freshness, but also the interactivity that comes with ad-hoc drilling down and slicing-and-dicing kind of analytics, which is just our bread and butter. And so that is really how I see not only us partnering with Seesaw and allowing them to get the business observability they care about, but also complementing transactional databases that are massively scalable, born in the cloud, like DynamoDB. >> Carl, talk to me about that complementary relationship that Venkat just walked us through and how that is really critical to what you're trying to deliver at Seesaw. >> Yeah, well, just to reiterate what Venkat said, I think we have so much data that any question you ask about it immediately leads to five other questions about it. We have a very seasonal business, as one example. Obviously in the summertime when kids aren't in school, we have very different usage patterns than during this time right now, our critical back-to-school season, versus a steady state, maybe in the middle of the school year.
And so really understanding how data is trending over time, how it compares year over year, what might be driving those things, is something that frankly we just haven't had the tools to really dig into. There's a lot about that that we are still beginning to understand and dig into more. And so this iterative exploration of data is incredibly powerful to expose to our product team, our sales and marketing teams, to really understand where Seesaw's working and where we still have work to do with our customers. And that's so critical to us doing a good job for schools and districts. >> And how long have you been using Rockset, Carl? >> It's about six months now, maybe a little bit longer. >> Okay, so during the pandemic. So talk to me a little bit about the last 18 months, where we saw the massive overnight transition to remote learning, and there's still a lot of places that are in that or a hybrid environment. How critical was it to have Rockset to fuel real-time analytics interactivity, particularly in a very challenging last-18-month time period? >> The last 18 months have been hard for everyone, but I think they have hit teachers and schools maybe harder than anyone. They have been struggling with an overnight transition to remote learning, challenges of returning to the classroom, hybrid learning. Teachers and schools are being asked to stretch in ways they have never been stretched before. And so our real focus last year was in doing whatever we could to help them manage those transitions. And data around student attendance in a remote learning situation, data around which kids were completing lessons and which kids weren't, was really critical data to provide to our customers. And a lot of our data infrastructure had to be built out to support answering those questions in this really crazy time for schools.
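The pattern Venkat and Carl describe, ad-hoc SQL over a Rockset collection that mirrors a DynamoDB table, might look roughly like the sketch below in Python. The collection name, field names, and query are hypothetical assumptions, not Seesaw's actual schema, and the Rockset client call is shown only in outline.

```python
# Sketch: per-school usage analytics over a Rockset collection synced from
# DynamoDB. Collection and field names below are illustrative, not Seesaw's.

PER_SCHOOL_USAGE_SQL = """
SELECT school_id,
       COUNT(*)                AS events,
       COUNT(DISTINCT user_id) AS active_users
FROM   commons.seesaw_events
WHERE  _event_time > CURRENT_TIMESTAMP() - INTERVAL 7 DAY
GROUP BY school_id
ORDER BY events DESC
"""

def summarize_usage(rows):
    """Reduce query results to a {school_id: active_users} map."""
    return {row["school_id"]: row["active_users"] for row in rows}

# With a Rockset client, the rows would come from something like (outline only):
#   rows = rockset_client.sql(query=PER_SCHOOL_USAGE_SQL).results

# Offline, the same reduction works on any list of result dicts:
sample_rows = [
    {"school_id": "sch-1", "events": 120, "active_users": 40},
    {"school_id": "sch-2", "events": 75, "active_users": 18},
]
print(summarize_usage(sample_rows))
```

The design point is the one made in the interview: the data stays in Dynamo for transactions, while interactive SQL like the above runs against the synced collection within seconds of writes.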
>> I want to talk about the data stack, but I'd like to go back to Venkat, 'cause what's interesting about this story is Seesaw is a customer of Rockset, and Venkat is a customer of Seesaw. Talk to me, Venkat, about how this has been helpful in the remote learning that your kids have been going through the last year and a half. >> Absolutely. I have two sons, nine and ten years old, and they are in fourth and fifth grade now. And I still remember when I told them that Seesaw is considering using Rockset for the analytics, they were thrilled, they were overjoyed, because finally they understood what I do for a living. (chuckling) And so that was really amazing. I think it was a fantastic deal, because for the first time I actually understood what kids do at school. I think every week, at the end of the week, we would use Seesaw to just go look at, "Hey, well, let's see what you did last week." And we would see not only the prompts and what the children were doing in the classroom, but also the comments from the educators, and then they comment back. And then we were like, "Hey, this is not how you speak to an educator." So it was really amazing to actually go through that, and so we are very, very big fans of the product. We really look forward to using it, whether it is remote learning or not. We try to use it as a family, me, my wife and the kids, as much as possible. And it's a very constant topic of conversation every week when we are working with the kids and seeing how we can help them. >> So from an observability perspective, it sounds like it's giving parents and teachers that visibility that, really, without it, you don't get. >> That's absolutely correct. I think the product itself is about making connections, giving people more visibility into things that are constantly happening but that you're not in the know about. Like, before Seesaw, I used to ask the kids, "How was school today? What happened in class?" And they'll say, "It was okay."
It would be a very short answer; it wouldn't really have the depth that we are able to get from Seesaw. So, absolutely. And so it's only right that that level of observability is also available for their business teams, the support teams, so that they can also service all the organizations that Seesaw's working with, not only the parents and the educators and the students that are actually using the product. >> Carl, let's talk about that data stack. And then I'm going to open the can on some of those impacts that it's making for your internal folks. We talked about DynamoDB, but give me an audio-visual picture of the data stack. >> Yeah. So, we use DynamoDB as our database of record. We're now in the process of centralizing all of our analytics into Rockset, so that rather than having different batch jobs in different systems querying that data in different ways, we're really setting Rockset up as the source of truth for analytics on top of Dynamo. And then on top of Rockset, we're exposing that data both to internal customers for those interactive, iterative SQL-style queries, but also bridging that data into the other systems our business users use. So Salesforce, for example, is a big internal tool, and we have that data now piped into Salesforce so that a sales rep can run a report on a prospect to reach out to, or a customer that needs help getting started with Seesaw. And it's all plumbed through the Rockset infrastructure. >> From an outcome standpoint, so I mentioned sales and marketing getting that visibility, being able to act on real-time data, how has it impacted sales in the last year and a half? Six months rather, since it's now six months using it. >> Well, I don't know if I can draw a direct line between those things, but it's been a very busy year for Seesaw, as schools have transitioned to remote learning.
And our business is really largely driven by teachers discovering our free product, finding it valuable in their classroom, and then asking their school or district leadership to purchase a school-wide subscription. It's a very bottoms-up sales motion. And so data on where teachers are starting to use Seesaw is the key input into our sales and marketing discussions with schools and districts. And so understanding that data quickly, in real time, is a key part of our sales strategy and a key part of how we grow at Seesaw over time. >> And it sounds like Rockset is empowering those users, the sales and marketing folks, to really fine-tune their interactions with existing customers and prospective customers. And I imagine you on the product side in terms of tuning the product. What are some of the things, Carl, that you've learned in the last six months that have helped you make better decisions on what you want Seesaw to deliver in the future? >> Well, one of the things that I think has been really interesting is how usage patterns have changed between the classroom and remote learning. We saw per-student usage of Seesaw increase dramatically over the past year, and really understanding what that means for how the product needs to evolve, to better meet teacher needs, to help organize that information, since there's now a lot more of it, really helped motivate our product roadmap over the last year. We launched a new progress dashboard that helps teachers get an at-a-glance view of what's happening in their classroom. That was really in direct response to the changing usage patterns that we were able to understand with better insights into data. >> And those insights allow you to pivot and iterate on the product. Venkat, I want to just go back to the AWS relationship for a second. You both talked about the complementary nature of Rockset and DynamoDB. Here we are at the AWS Startup Showcase.
Venkat, just give the audience a little overview of the partnership that you guys have with AWS. >> Rockset fully runs on AWS, so we are a customer of AWS. We are also a partner. There are lots of amazing cloud data products that AWS has, including DynamoDB and AWS Kinesis, ones with which we have built-in integrations. So if you're managing data in AWS, we complement that, and we can provide very, very fast, interactive, real-time analytics on all of your datasets. So the partnership has been wonderful, we're very excited to be in the Startup Showcase, and I hope this continues for years to come. >> Let's talk about the synergies between Rockset and Seesaw for a second. I know we talked about the huge value of real-time analytics, especially in today's world, where we've learned many things in the last year and a half, including that real-time analytics is no longer a nice-to-have for a lot of industries. 'Cause I think, Carl, as you said, if you can't get access to the data, then there's questions we can't ask. Or we can't iterate on operations; if we wait seconds for every query to load, then there's questions we can't ask. Talk to me, Venkat, about how Rockset is benefiting from what you're learning from Seesaw's usage of the technology. >> Absolutely. I mean, if you go to the first part of the question, on why businesses really go after real time, what is the driver here? You might have heard the phrase, the world is going from batch to real-time. What does it really mean? What's the driving factor there? Our take on it is, I think it's about accelerating growth. Seesaw's product is amazing and it'll continue to grow; it'll continue to be a very, very important product in the world. With or without Rockset, that will be true. The way we look at it is, once they have real-time business observability, with that inherent growth that they have, they can reach more people, they can put their product in the hands of more and more people, they can iterate faster.
And at the end of the day, it is really about having this very interesting platform, very interesting architecture, to make a lot more data-driven decisions and iterate much more quickly. And so in batch analytics, if you were able to make, let's say, five decisions a quarter, in real-time analytics you can make five decisions a day. So that's how we look at it. So that is really, I think, the underpinning of why the world is going from batch to real time. And what have we learned from having Seesaw as a customer? I think Seesaw has probably one of the largest DynamoDB installations that we have looked at. I think we're talking about billions and billions of records, even though they have tens of millions of active users. And so I think it has been an incredible partnership working with them closely, and they have had a tremendous amount of input on our product roadmap, and some of that, like role-based access control and other things, has already become part of the product, thanks to the continuous feedback we get from their team. So we're delighted about this partnership, and I am sure there's more input that they have that we cannot wait to incorporate in our roadmap. >> I imagine, Venkat, as well, you as the parent user and your kids, you probably have some input that goes to the Seesaw side. So this seems like a very synergistic relationship. Carl, a couple more questions for you. I'd love to know, here we are kind of back at the back-to-school timeframe. We've got a lot of students coming back, and there's still remote learning. What are some of the things that you're excited about for this next school year that you think Rockset is really going to fuel or power for Seesaw? >> Yeah, well, I think schools are navigating yet another transition now, from a world of remote learning to a world of back to the classroom. But back to the classroom feels very different than it does at any other back-to-school timeframe.
Many of our users are in first or second grade. We serve early elementary age ranges, and some of those students have never been in a classroom before. They are entering second grade never having been at school. And that's hard. That's a hard transition for teachers and schools to make. And so as a partner to those schools, we want to do everything we can to help them manage that transition, in general and with Seesaw in particular. And the more we can understand how they're using Seesaw, where they're struggling with Seesaw as part of that transition, the more we can be a good partner to them and help them really get the most value out of Seesaw in this new world that we're living in, which is sort of like normal, and in many ways not. We are still not back to normal as far as schools are concerned. >> I'm sure, though, the partnership that you provide to the teachers and the students can be a game changer as they're still navigating some very uncertain times. Carl, last question for you. I want you to point folks to where they can go to learn more about Seesaw, and how, for all those parents watching, they might be able to use this with their families. >> Yeah, well, seesaw.me is our website, and you can go to seesaw.me and learn more about Seesaw, and if any of this sounds interesting, ask your teacher, if they're not using Seesaw, to give it a look. >> Seesaw.me, excellent. Venkat, same question for you. Where do you want folks to go to learn more about Rockset and its capabilities? >> Rockset.com is our website. There is a free trial with $300 worth of free trial credits. It's a self-service platform, you don't need to talk to anybody, all the pricing and everything is out there. So if real-time analytics and modernizing your data stack is on your roadmap, go give it a spin. >> Excellent, guys.
Thanks so much for joining me today, talking about real-time analytics, how it's really empowering both the data companies and the users to be able to navigate in challenging waters. Venkat, thank you, Carl, thank you for joining us. >> Thanks everyone. >> Thanks Lisa. >> For my guests, this has been our coverage of the AWS Startup Showcase, New Breakthroughs in DevOps, Data Analytics and Cloud Management Tools. I am Lisa Martin. Thanks for watching. (mid tempo music)
Merritt Baer, AWS | Fortinet Security Summit 2021
>> Narrator: From around the globe, it's theCUBE! Covering Fortinet Security Summit, brought to you by Fortinet. >> And welcome to theCUBE coverage here at the PGA champion-- Fortinet Championship, where we're going to be here for Napa Valley coverage of Fortinet's championship security summit. Fortinet is sponsoring the PGA, but a great guest, Merritt Baer, who's a principal in the Office of the CISO at Amazon Web Services. Great to see you. Thanks for coming on. >> Merritt: Thank you for having me. It's good to be here. >> So Fortinet, uh, big brand now, sponsoring the PGA. Pretty impressive that they're getting out there with the golf. It's very enterprise focused, a lot of action. A lot of customers here. >> Merritt: It seems like it, for sure. >> Bold move. Amazon, Amazon Web Services has become the gold standard in terms of cloud computing, seeing DevOps people refactoring. You've seen the rise of companies like Snowflake building on Amazon. People are moving not only to the cloud, but they're refactoring their business, and security is top of mind for everyone. And obviously the cybersecurity threats that Fortinet helps cover, you guys are partnering with them, it's huge. What is your state of the union for cyber? What's the current situation with the threat landscape? Obviously there's no perimeter in the cloud. More endpoints are coming on board. The Edge is here. 5G, Wavelength with Outposts, a lot happening. >> That was a long question, but I'll try. So I think, you know, as always, business innovation is the driver. And security needs to be woven into that. And so I think increasingly we're seeing security not be a "no" shop, but be an enabler.
And especially in cloud, when we're talking about the way that you do DevOps with security, I know folks don't like the term DevSecOps, but, you know, to be able to do agile methodology and be able to do the short sprints that are really agile and innovative where you can-- So instead of nine-month or, whatever, nine-week timelines, we're talking about short sprints that allow you to elastically scale up and down and be able to innovate really creatively. And to do that, you need to weave in your security, because there's no like, okay, you pass go, you collect $200. Security is not an after-the-fact. So I think as part of that, of course, the perimeter is dead, long live the perimeter, right? It does matter. And we can talk about that a little bit. You know, the term zero trust is really hot right now. We can dig into that if that's of interest. But I think part of this is just the business kind of growing up. And as you alluded to, we're at the start of what I think is an S-curve that is just at the beginning. >> You know, I was really looking forward to re:Inforce this year. It got canceled last year, but the inaugural event was in Boston. I remember covering that. This year it was virtual, but the keynote Stephen gave was interesting, Security Hub at the center of it. And I want to ask you, because I need you to share your view on how security's changed with the cloud, because there are now new things that are there to take advantage of if you're a business or an enterprise. On premises, there's a standard operating procedure. You have the perimeter, et cetera. That's not there anymore, but with the cloud, there are new ways to protect, and Security Hub is one. What are some of the new things that cloud enables for security? >> Well, so just to clarify, perimeters exist logically just like they do physically. So, you know, a VPC, for example, would be a logical perimeter, and that is very relevant, or a VPN.
Now we're talking about a lot of remote work during COVID, for example. But one of the things that I think folks are really interested in with Security Hub is just having that broad visibility, and one of the beauties of cloud is that you get this tactile sense of your estate and you can reason about it. So for example, when you're looking at identity and access management, you can look at something like Access Analyzer, which will, under the hood, be running on a tool that our, our group came up with that is reasoning about the permissions, because you're talking about software layers, you're talking about the computer layer reasoning about security. And another example is Inspector: we have a tool that will tell you, without sending a single packet over the network, what your network reachability is. There's just this ability to do infrastructure as code that then allows you to do security as code. And then that allows for ephemeral and immutable infrastructures so that you could, for example, get back to a known good state. That being said, you know, your web server gets popped and you kill it and you spin up a new one. You haven't solved your problem, right? You need to have some kind of awareness of networking and how principals work. But at the same time, there are a lot of beauties about cloud that you inherit from a security perspective, to be able to work in those top layers. And that's of course the premise of cloud. >> Yeah, infrastructure as code, you mentioned that, it's awesome. And the programmability of it with, with serverless functions, you're starting to see new ways now to spin up resources. How is that changing the paradigm and creating opportunities for better security? Is it more microservices? Are there new things that people can do differently now that they didn't have a year ago or two years ago? Because you're starting to see things like serverless functions are very popular.
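The "security as code" idea Merritt describes can be sketched in a few lines: instead of reading findings in a console, you triage them programmatically. This is a hypothetical illustration only. The finding shape loosely follows the AWS Security Finding Format (where `Severity.Normalized` is a 0-100 score); in a real account the list would come from an API call such as `boto3.client("securityhub").get_findings()`, not the local literal used here.

```python
# Hypothetical "security as code" sketch: triage security findings in code.
# The dicts below imitate (loosely) the AWS Security Finding Format; in
# practice they would come from the Security Hub API, not a literal list.

def high_severity_findings(findings, threshold=70):
    """Return ACTIVE findings at or above a normalized severity threshold."""
    return [
        f for f in findings
        if f.get("RecordState") == "ACTIVE"
        and f.get("Severity", {}).get("Normalized", 0) >= threshold
    ]

findings = [
    {"Title": "S3 bucket public", "RecordState": "ACTIVE",
     "Severity": {"Normalized": 90}},
    {"Title": "Old finding", "RecordState": "ARCHIVED",
     "Severity": {"Normalized": 95}},
    {"Title": "Low-risk port", "RecordState": "ACTIVE",
     "Severity": {"Normalized": 20}},
]

print([f["Title"] for f in high_severity_findings(findings)])  # ['S3 bucket public']
```

The point of writing the filter as code is that it can run on every new finding automatically, which is exactly the "reasoning about security in software layers" Merritt alludes to.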
>> So yes, and yes. I think that it is augmenting the way that we're doing business, but it's especially augmenting the way we do security in terms of automation. So serverless, under the hood, whether it's CloudWatch Events or Config rules, they are all a Lambda function. So that's the same thing that powers your Alexa at home. These are serverless functions and they're really simple. You can program them, you can find them on GitHub, but they are-- one way to really scale your enterprise is to have a lot of automation in place so that you put those decisions in ahead of time. So your gray area of human decision-making is scaled down. So you've got, you know, what you know to be allowable, what you know to be not allowable. And then you increasingly whittle down that center into things that really are novel, truly novel, or high stakes, or both. But the focus on automation is a little bit of a trope for us. We at Amazon like to talk about mechanisms; good intentions are not enough. If it's not someone's job, it's a hope, and hope is not a plan, you know. But creating the actual, you know, computerized version of making it be done iteratively, I think that is the key to scaling a security chain, because as we all know, things can't be manual for long, or you won't be able to grow. >> I love the AWS reference. Mechanisms, one-way doors, raising the bar. These are all kind of internal Amazon, but I got to ask you about the Edge. Okay. There's a lot of action going on with 5G and Wavelength. Okay, and what's interesting is, if the Edge becomes so much more robust, how do you guys see that from a security posture standpoint? What should people be thinking about? Because certainly it's just a distributed Edge point. What's the security posture? How should we be thinking about Edge? >> You know, Edge is a kind of catch-all, right? We're talking about Internet of Things. We're talking about points of contact.
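Merritt's earlier point about putting decisions in ahead of time, so that only the truly novel or high-stakes cases reach a human, can be sketched as a tiny Lambda-style handler. The event shape and the rule set below are invented for illustration; they are not a real AWS event schema or any specific AWS automation.

```python
# Illustrative only: a Lambda-style handler that encodes security decisions
# ahead of time, shrinking the "gray area" that needs a human. The event
# shape and the rules here are hypothetical.

ALLOWED_PORTS = {22, 443}  # ports decided in advance to be allowable

def handler(event, context=None):
    """Classify a firewall-rule-change event: allow, deny, or escalate."""
    port = event.get("port")
    if port in ALLOWED_PORTS:
        return {"decision": "allow"}
    if port is not None and port < 1024:
        # a well-known port that was never allow-listed: auto-deny
        return {"decision": "deny", "reason": f"port {port} not allow-listed"}
    # truly novel or ambiguous: the remaining gray area for humans
    return {"decision": "escalate"}

print(handler({"port": 443})["decision"])   # allow
print(handler({"port": 80})["decision"])    # deny
print(handler({"port": 8080})["decision"])  # escalate
```

Because the allow and deny cases are codified, the automation runs iteratively on every event, and only the `escalate` bucket requires a person, which is the scaling property Merritt describes.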
And a lot of times I think we focus so much on the confidentiality and integrity, but the availability is hugely important when we're talking about security. So one of the things that excites me is that we have so many points of contact and so many availability points at the Edge. So for example, in DynamoDB, the more times you put a call on it, the more available it is because it's fresher, you've already been refreshing it. There are so many elements of this, and our core compute platform, EC2, all runs on Nitro, which is our, our custom hardware. And it's really fascinating, the availability benefits there. Like the best patching is the patching you don't have to do. And there are so many elements that are just so core to that. Greengrass, you know, which is running on FreeRTOS, which is open-source software, for example, is, you know, one element of zero trust in play. And there are so many ways that we can talk about this in different incarnations. And of course that speaks to the breadth and depth of the industries that use cloud. We're talking about automotive, we're talking about manufacturing and agriculture, and there are so many interesting use cases for the ways that we will use IoT. >> Yeah, it's interesting, you mentioned Nitro. We also got the Annapurna acquisition years ago. You got latency at the Edge. You can handle low-latency, high-volume compute with the data. That's pretty powerful. It's a paradigm shift. That's a new dynamic. It's pretty compelling, these new architectures. Most people are scratching their heads going, "Okay, how do I do this? Like, what do I do?" >> No, you're right. So it is a security inheritance that we are extremely calculated about our hardware supply chain. And we build our own custom hardware. We build our own custom silicon. Like, this is not a question.
And you're right in that one of the north stars that we have is that the security properties of our engineering infrastructure are built in. So there just is no button for it to be insecure. You know, like, that is deliberate. And there are elements of the way that Nitro works, running, you know, with zero downtime, being able to be patched while running. There are so many elements of it that are inherently security benefits that folks inherit as a product. >> Right. Well, we're here at the security summit. What are you excited for today? What are the conversations you're having here at the Fortinet security summit? >> Well, it's awesome to just meet folks and connect outside. It's beautiful outside today. I'm going to be giving a talk on securing the cloud journey, and kind of that growth and moving to infrastructure as code and security as code. I'm excited about the opportunity to learn a little bit more about how folks are managing their hybrid environments, because of course, you know, I think sometimes folks perceive AWS as being like this city on a hill where we get it all right. We struggle with the same things. We empathize with the same security work. And we work on that. You know, as a principal in the Office of the CISO, I spend a lot of my time on how we do security, and then a lot of my time talking to customers, and that empathy back and forth is really crucial. >> Yeah. And you've got to be on the bleeding edge and have the empathy. I can't help but notice your AWS crypto shirt. Tell me about the crypto, what's going on there. NFTs coming out, is there an S3 bucket NFT now, I mean. (both laughing) >> Cryptography never goes out of style. >> I know, I'm just, I couldn't help-- We'll go back to the pyramids on that one. >> Yeah, no, this is not an advertisement for cryptocurrency. I'm a fangirl of the AWS crypto team. And as a result of wearing their shirts, occasionally they send me more shirts. And I can't argue with that.
>> Well, love, love, love the crypto. I'm a big fan of crypto, I think crypto is awesome. DeFi is amazing. New applications are going to come out. We think it's going to be pretty compelling. Again, let's get today right. (laughing) >> Well, I don't think it's about, like... So cryptocurrency is just one small iteration of what we're really talking about, which is the idea that math resolves, and the idea that you can have value in your resolution, that the math should resolve. And I think that is a fundamental principle, and end-to-end encryption, I believe, is a universal human right. >> Merritt, thank you for coming on theCUBE. Great, great to have you on. Thanks for sharing that awesome insight. Thanks for coming on. >> Merritt: Thank you. >> Appreciate it. Okay, CUBE coverage here in Napa Valley, our remote set for Fortinet's cybersecurity summit, here as part of their PGA golf Pro-Am tournament happening here in Napa Valley. I'm John Furrier. Thanks for watching.
Venkat Venkataramani and Dhruba Borthakur, Rockset | CUBE Conversation
(bright intro music) >> Welcome to this "Cube Conversation". I'm your host, Lisa Martin. This is part of our third AWS Startup Showcase. And I'm pleased to welcome two gentlemen from Rockset: Venkat Venkataramani is here, the CEO and co-founder, and Dhruba Borthakur, CTO and co-founder. Gentlemen, welcome to the program. >> Thanks for having us. >> Thank you. >> Excited to learn more about Rockset. Venkat, talk to me about Rockset and how it's putting real-time analytics within the reach of every company. >> If you see the Confluent IPO, if you see where the world is going in terms of analytics, you know, when we look at this, real-time analytics is like the last frontier. Everybody wants fast queries on fresh data. Nobody wants to say, "I don't need that. You know, give me slow queries on stale data," right? I think if you see what data warehouses and data lakes have done, especially in the cloud, they've really, really made batch analytics extremely accessible, but real-time analytics still seems too clumsy, too complex, and too expensive for most people. And we are on a mission to make real-time analytics very, very easy and affordable for everybody to be able to take advantage of. So that's what we do. >> But you're right, nobody wants stale data or slower queries. And it seems like one of the things that we learned, Venkat, sticking with you, in the last 18 months of a very strange world that we're living in, is that real-time is no longer a nice-to-have. It's really a differentiator and table stakes for businesses in every industry. How do you make it more affordable and accessible to businesses in so many different industries? >> I think that's a great question. At a very high level, there are two categories of use cases we see. There is one full category of use cases where business teams and business units are demanding almost like business observability.
You know, if you think about one domain that actually understood real-time and made everything work in real-time, it is the DevOps world, you know, metrics and monitoring coming out of all these machines, because they really want to know as soon as something goes wrong; immediately, I want to, you know, be able to dive in and click and see what happened. But now businesses are demanding the same thing, right? Like, a CEO wants to know, "Are we on track to hit our quarterly estimates or not? And tell me now what's happening," because, you know, the larger the company, the more complex their operations dashboards are. And, you know, if you don't give them real-time visibility, the window of opportunity to do something about it disappears. And so businesses are really demanding that. And so that is one big use case we have. And the other thing we're also seeing is that customers are demanding real-time even from the products they are using. So you could be using a SaaS product for sales automation, support automation, marketing automation. Now I don't want to use a product if it doesn't have real-time analytics baked into the product itself. And so all these software companies, you know, providing a SaaS service to their cloud customers and clients, they are also looking at this, because their proof of value really comes from the analytics that they can show within the product. And if that is not interactive and real-time, then they are also going to be left behind. So it's really a huge differentiator. Whether you're building a software product or running a business, real-time observability gives you a window of opportunity, you know, when something goes wrong, you can actually act on it very, very quickly. >> Right, which is absolutely critical. Dhruba, I want to get your take on this.
As the CTO and co-founder, as I introduced you, what were some of the gaps in the market back in 2016 that you saw that really necessitated the development of this technology? >> Yeah, for real-time analytics, the difference compared to what it was earlier is that things used to be a lot of batch processes. The reason being, there was something called MapReduce, and that was a scanning system, kind of an invention from Google, which talked about processing big datasets. And it was about scanning, scanning large datasets to give answers. Whereas for real-time analytics, the new trend is: how can you index these big datasets so that you can answer queries really fast? So this is what Rockset does as well: we have capabilities to index humongous amounts of data cheaply, efficiently, and in a way that is economically feasible for our customers. And that's why queries leverage the index to give fast (indistinct). This is one of the big changes. The other change, obviously, is that it has moved to the cloud, right? A lot of analytics have moved to the cloud. So Rockset is built natively for the cloud, which is why we can scale up and scale down resources when queries come, and we can provide a great (indistinct) for people as far as data latency and query latency, both of these things. So these two trends, I think, are kind of the power behind making people use more real-time analytics. >> Right, and as Venkat was talking about, it's an absolute differentiator for businesses. You know, last year we saw all these quick pivots to survive and ultimately thrive. And we're seeing the businesses now coming out of this that were able to do that and able to pivot to digital, to be successful and to out-compete those who maybe were not as fast.
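Dhruba's scan-versus-index distinction above can be sketched in a few lines. This is a toy illustration of the general idea only, not Rockset's actual converged index: a MapReduce-style system answers each query by scanning every record, while an indexed system pays the cost once at ingest time and then looks matches up directly.

```python
from collections import defaultdict

# Toy illustration of scan vs. index (not Rockset's real converged index).
records = [
    {"id": 1, "city": "Boston"},
    {"id": 2, "city": "Palo Alto"},
    {"id": 3, "city": "Boston"},
]

def query_by_scan(recs, city):
    # MapReduce-style: touch every record on every query -> O(n) per query
    return [r["id"] for r in recs if r["city"] == city]

index = defaultdict(list)  # built once, at ingest time
for r in records:
    index[r["city"]].append(r["id"])

def query_by_index(city):
    # indexed: jump straight to the matching records per query
    return index[city]

print(query_by_scan(records, "Boston"))  # [1, 3]
print(query_by_index("Boston"))          # [1, 3]
```

Both return the same answer; the difference is that the scan cost grows with the dataset on every query, while the index lookup stays cheap, which is what makes fast queries on large, fresh datasets feasible.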
I saw that recently, Venkat: you guys had a major product release a few weeks ago that is making real-time analytics on streaming data sources like Apache Kafka, Amazon Kinesis, Amazon DynamoDB, and data lakes a lot more accessible and affordable. Break down that launch for me: how is it delivering the accessibility and affordability that you talked about before? >> Extremely good question. So we're really excited about that release, which we call SQL-based roll-ups. So what does that do? If you think about real-time analytics, and even teeing off the previous question you asked about the gap in the market: the gap is really that warehouses and lakes are built for batch. You know, they're really good at letting people accumulate huge volumes of data, and once a week an analyst asks a question, generates a report, and everybody looks at it. But with real-time, the data never stops coming. The queries never stop coming. So if I want real-time metrics on all these huge volumes of data coming in, and I drain it into a huge data lake and then do analytics on that, it gets very expensive and very complex very quickly. And so the new release is called SQL-based roll-ups, where simply using SQL, you can define any real-time metric that you want to track across any dimensions you care about. It could be geo, demographic, or other dimensions, and Rockset will automatically maintain all those real-time metrics for you in real-time in a highly accurate fashion. So you never have to doubt whether the metrics are valid, and they will be accurate up to the second. And the best part is you don't have to learn a new language. You can actually use SQL to define those metrics, and Rockset will automatically maintain and scale that for you in the cloud. And that, I think, reduces the barrier.
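To make Venkat's description concrete, here is a toy in-memory version of a roll-up: metrics are pre-aggregated per dimension as events arrive, so a query reads a maintained aggregate instead of scanning raw events. Rockset's real feature is defined declaratively in SQL; the class and field names below are invented purely for illustration.

```python
from collections import defaultdict

# Toy roll-up: maintain pre-aggregated metrics per dimension as events
# stream in. Illustration only; Rockset's SQL-based roll-ups are defined
# in SQL, not client code, and these names are hypothetical.

class RollUp:
    def __init__(self, dimensions):
        self.dimensions = dimensions
        self.counts = defaultdict(int)
        self.sums = defaultdict(float)

    def ingest(self, event):
        # update aggregates incrementally; raw events need not be kept
        key = tuple(event[d] for d in self.dimensions)
        self.counts[key] += 1
        self.sums[key] += event["amount"]

    def metric(self, *key):
        # query time is a cheap lookup, accurate up to the last event
        return {"count": self.counts[key], "total": self.sums[key]}

orders = RollUp(dimensions=["geo"])
orders.ingest({"geo": "US", "amount": 10.0})
orders.ingest({"geo": "US", "amount": 5.0})
orders.ingest({"geo": "EU", "amount": 7.0})
print(orders.metric("US"))  # {'count': 2, 'total': 15.0}
```

The economics Venkat mentions follow from this shape: the expensive work happens once per event at ingest, so the metric stays fresh and cheap to read no matter how often it is queried.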
So, like, if somebody wants to build something real-time, you know, track something for their business in real-time, you have to duct-tape together multiple disparate components and systems that were never meant to work with each other. Now you have a real-time database built for the cloud that fully, you know, supports full-featured SQL. So you can do this in a matter of minutes, which would probably take you days or weeks with alternate technologies. >> That's a dramatic X reduction in time there. I want to mention the Snowflake IPO since you guys mentioned the Confluent IPO. You say that Rockset does for real-time what Snowflake did for batch. Dhruba, I want to get your take on that. Tell me about that. What do you mean by that? >> Yeah, so we see this trend in the market where a lot of analytics, which are very batch, get a lot of value if they move more real-time, right? Like Venkat mentioned, when analytics powers actual products, which need to use analytics to make the product better. So Rockset very much plays in this area. So Rockset is the only solution... I shouldn't say solution. It's a database, a real-time database, which powers these kinds of analytic systems. If you don't use Rockset, then you might be using maybe a warehouse or something, but you cannot get real-time, because there is always a latency of putting data into the warehouse. It could be minutes, it could be hours. And then also you don't get too many people making concurrent queries on the warehouse. So this is another difference for real-time analytics: because it powers applications, the query volume could be large. So that's why you need a real-time database, and not a real-time warehouse or other technologies, for this. And this trend has really caught up, because most people are pretty much into this journey already. You asked me this previous question about what has changed since 2016 as well.
And this is a journey that most enterprises we see are already embarking upon. >> One thing, too, that we're seeing is that more and more applications are becoming data-intensive applications, right? We think of whether it's Instagram or DoorDash or whatnot, or even our banking app; we expect to have the information updated immediately. How do you help, Dhruba, sticking with you, how do you help businesses build and power those data-intensive applications that consumers are demanding? >> That's a great question. And we have both, me and Venkat, seen these data applications at large scale when we were at Facebook earlier. We were both part of the Facebook team. So we saw how real-time was really important for building that kind of a business, that was social media. But now we are taking the same kind of back ends, which can scale to huge volumes of data, to the enterprises as well. Venkat, do you have anything to add? >> Yeah, I think when you're trying to go from batch to real-time, you're 100% spot on that a static report, a static dashboard, actually becomes an application, a data application, and it has to be interactive. So you're not just showing a newspaper where you just get to read. You want to click and deep dive, slice and dice the data to not only understand what happened, but why it happened, and come up with hypotheses to figure out what I want to do with it. So the interactivity is important, and the real-timeliness now becomes important. So the way we think about it is, once you go into real-time analytics, you know, the data never stops coming. That's obvious. Data freshness is important. But the queries never stop coming also, because when your dashboards and metrics are getting up to date in real-time, you really want alerts and anomaly detection to be automatically built in. And so you don't even have to look at the graphs once a week.
When something is off, the system will come and tap on your shoulder and say, "Hey, something is going on." And so that really is a real-time application at that point, because it's constantly looking at the data and querying on your behalf and only alerting you when something, actually, is interesting happening that you might need to look at. So yeah, the whole movement towards data applications and data intensive apps is a huge use case for us. I think most of our customers, I would say, are building a data application in one shape or form or another. >> And if I think of use cases like cutthroat customer 360, you know, as customers and consumers of whatever product or solution we're talking about, we expect that these brands know who we are, know what we've done with them, what we've bought, what to show me next is what I expect whether again, it's my bank or it's Instagram or something else. So that personalization approach is absolutely critical, and I imagine another big game changer, differentiator for the customers that use Rockset. What do you guys think about that? >> Absolutely, personalized recommendation is a huge use case. We see this all where we have, you know, Ritual is one of the customers. We have a case study on that, I think. They want to personalize. They generate offline recommendations for anything that the user is buying, but they want to use behavioral data from the product to personalize that experience and combine the two before they serve anything on the checkout lane, right? We also see in B2B companies, real-time analytics and data applications becoming a very important thing. And we have another customer, Command Alkon, who, you know, they have a supply chain platform for heavy construction and 80% of concrete in North America flows through their platform, for example. And what they want to know in real-time is reporting on how many concrete trucks are arriving at a big construction site, which ones are late and whatnot. 
And the real-time, you know, analytics needs to be accurate and needs to be, you know, up to the second. You know, don't tell me what trucks were coming an hour ago. No, I need this right now. And so even in a B2B platform, we see that very similar trend, where real-time reporting, real-time search, real-time indexing is actually a very, very important piece of the puzzle, and not just for the B2C examples that you gave. And the Instagram comment is also very appropriate, because a hedge fund customer came to us and said, "I have dashboards built on top of, like, Snowflake. They're taking two to five seconds, and in certain parts of my dashboards I actually have 50 or 60 visualizations. You do the math; it takes many minutes to load." And so they said, "Hey, you have some indexing tech. Can you make this faster?" Three weeks later, the queries that would take two to five seconds on a traditional warehouse or a cloud data warehouse came back in 18 milliseconds with Rockset. And it is so fast that they said, you know, "If my internal dashboards are not as fast as Instagram, no one in my company uses them." These are their words. And so really, you know, the speed is really, really important. The scale is really, really important. Data freshness is important. If you combine all of these things and also make it simple for people to access with SQL, that's really the unique value prop that we have at Rockset, which is what our customers love. >> You brought up something interesting, Venkat, that kind of made me think of the employee experience. You know, we always think of the customer 360. The customer experience and the employee experience, in my opinion, are inextricably linked. The employees have to have access to what they need to deliver and help build these great customer relationships.
And as you were saying, you know, employees are expecting databases to be as fast as what they see on Instagram when they're, you know, surfing in their free time. Then adoption, I imagine, gets better, and obviously the benefit from the end user and customer's perspective is that speed. Talk to me a little bit about how Rockset, and I would like to get both of your opinions here, is a facilitator of that employee productivity for your customers. >> This is a great question. In fact, with the same hedge fund customer, I pushed them to go and measure how many times people even look at all the data that they produce. (laughs) How many analysts and investors actually use your dashboards? I asked them to go investigate that. And one of the things that they eventually showed me was there was a huge uptake: their dashboards went from two-to-three-second, you know, lags to 18 milliseconds, and the daily active users for their own internal dashboards went from almost five people to the entire company, you know. So I think you're absolutely spot on. So it really goes back to, you know, really leveraging the data and actually doing something about it. Like, you know, if I ask a question and the system is going to take 20 minutes to answer it, you know, I will probably not ask as many questions as I want to. When it becomes interactive and very, very fast, all of a sudden I not only start with a question; I can ask a follow-up question, and then another follow-up question, and really drive that to, you know, a conclusion, and I can actually act upon it. And this really accelerates things. So even if you, kind of, look at the macro, you hear these phrases: the world is going from batch to real-time. And in my opinion, when I look at this, people want to, you know, accelerate their growth. People want to make faster decisions.
People want to get to "What can I do about this?" and get actionable insights. And that is not really going to come from systems that take 20 minutes to give a response. It's going to come from systems that are interactive and real-time, and that need for acceleration is what's really driving this movement from batch to real-time. And we're very happy to facilitate and accelerate that movement. >> And it really drives the opportunity for your customers to monetize more and more data so that they can actually act on it, as you said, in real-time, and do something about it, whether it's a positive experience or, you know, remediating a challenge. Last question, guys, since we're almost out of time here, but I want to understand: talk to me about the Rockset-AWS partnership and what the value is for your customers. >> Okay, yeah. I'll get to that in a second, but I wanted to add something to your previous question. I think, for all the customers that we see, real-time analytics is addictive. Once they get used to it, they can't go back to the old stuff. So this is what we have found with all our customers. So, yeah, for the AWS question, I think maybe Venkat can answer that better than me. >> Yeah, I mean, we love partnering with AWS. I think they are the world's leader when it comes to public clouds. We have a lot of joint happy customers that are all AWS customers. Rockset is entirely built on top of AWS, and we love that. And there are a lot of integrations that Rockset natively comes with. So if you're already managing your data in AWS, you know, there are no data transfer costs or anything like that involved for you to also, you know, index that data in Rockset and actually build real-time applications and stream the data to Rockset. So I think the partnership goes very, very deep, in terms of: we are an AWS customer, we are a partner, and our go-to-market teams work with them.
And so, yeah, we're very, very happy, you know; like, AWS fanboys here, yeah. >> Excellent, it sounds like a great, synergistic, collaborative relationship, and I love, Dhruba, what you said. This is a great quote: "Real-time analytics is addictive." That sounds to me like a good addiction (all subtly laugh) for businesses in every industry to take up. Guys, it's been a pleasure talking to you. Thank you for joining me, talking to the audience about Rockset, what differentiates you, and how you're helping customers really improve their customer productivity, their employee productivity, and beyond. We appreciate your time. >> Thanks, Lisa. >> Thank you, thanks a lot. >> For my guests, I'm Lisa Martin. You're watching this "Cube Conversation". (bright ending music)
SUMMARY :
And I'm pleased to welcome the reach of every company. And we are on a mission to make, you know, How do you make it more is the DevOps world, you know, that you saw that really the new trend is that how can you index for businesses, you know, And the best part is you don't What do you mean by that? And then also you don't that the consumers are demanding? Venkat, do you have anything to add? that you might need to look at. you know, as customers and And the real-time, you And as you were saying, you know, So it really goes back to, you know, a positive experience or it is, you know, the customers that we see and stream the data to Rockset. and I love, Dhruba, what you said. For my guests, I'm Lisa Martin.
Erez Berkner, Lumigo | CUBE Conversation
(bouncy music) >> Welcome to this Cube Conversation, I'm Lisa Martin. I'm joined by Erez Berkner, the CEO and co-founder of lumigo. Erez, welcome to the program. >> Hey Lisa, thank you for having me. Glad to be here. >> Excellent, we're going to have a great conversation. We're going to be talking about the growing trend of using cloud native and serverless. But before we do, Erez, give our audience an overview of lumigo. >> Excellent, so lumigo is an observability platform. Basically allowing developers, architects, the technology person in the organization, to understand what's going on with his modern cloud, with his serverless, with his cloud native application. So at the end of the day, lumigo, as a SaaS platform, allows you to know what's happening, get visibility, and be able to get to the root cause of issues, many times before they actually hit your production. >> I saw on your website, in terms of speed, getting up and running quickly, in four minutes with four clicks. Tell me how developers do this that quickly. >> Yeah, that's actually a great point. Because in general, when we talk about the modern cloud, people are really fed up with deploying agents, long processes of servers, and more and more we see the trend towards APIs, towards code libraries. At the end of the day, at the heart of lumigo, we built a very strong automation engine based on APIs, based on Lambda layer integration. And this allows a developer to basically connect lumigo via the APIs in a couple of clicks. It doesn't require code changes, deployment of agents, deployment of services. And this is why it's so fast, because it's lightweight. And that's a trend of managed services, of serverless, and lumigo is another stone in that wall. >> Excellent, lightweight, key there. Define serverless, what is considered serverless? >> Mmm, ooh, don't get me involved in a dispute of those definitions. But I can share my view, but this is a... Anyone, I would say, has his own definition.
But the main concept with serverless is, at the end of the day, really, like it says, serverless. You don't deploy a server. You don't rent a server, you don't manage a server, you don't deploy an operating system, you don't patch a server. You don't take care of scalability, of high availability. Basically, all the chores of managing, of maintaining a server, basically go away. Now, they don't really go away. Somebody else is dealing with them. So there is a server, but it's not your server to manage. And that someone is a cloud provider: it's Amazon, it's Microsoft, it's Google, it's IBM. And this is how I view serverless. Basically, a managed service that doesn't require you to deploy or manage a server, and you use it via APIs. And if you think about that, in the past when serverless started, 2015, serverless was function as a service, Lambda; AWS started that. But today, in 2021, serverless, yeah, it's function as a service, it's Lambda, but it's also storage as a service, like S3, and data as a service, like Snowflake, like DynamoDB. And queue as a service, like SNS, like EventBridge, like Kinesis. And even Stripe, payment as a service, and Twilio, and SendGrid. So all these API-based services, that you just consume, they're like Lego pieces that you connect together; you just connect and you go, and you start working and they're up and running. This is how I define serverless today. And that's basically allowing you to run any application today with zero servers. >> That's a great definition, that's nice and clean, and I think the Lego bricks really kind of clicked in my mind when you talked about that. Let's talk now about business critical production applications: what are you seeing in terms of adoption of serverless for those cases? >> That's a great question, because I think that we are at a critical point of time in cloud native, in modern cloud, in the serverless market. And I think it's an evolution.
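Erez's definition above, a function you hand to the platform and invoke via an event, can be sketched in a few lines. This is a hypothetical Lambda-style handler written for illustration, not code from the interview:

```python
# A minimal AWS Lambda-style handler: the cloud provider, not you,
# provisions, scales, and patches the server that runs this function.
# Hypothetical illustration; the event shape is an assumption.

import json

def handler(event, context=None):
    """Return an HTTP-style response for the 'name' field of the event."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Locally you can invoke it directly, the same way Lambda would on an event:
result = handler({"name": "telco"})
```

The same pay-per-invocation, zero-administration idea extends to the other "Lego pieces" he lists, S3, DynamoDB, SNS and so on, all consumed through APIs.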
You know, when we started, again, back in 2015, serverless was just one or two services. But we got to a critical mass of services, including DynamoDB and S3 and Lambda and EventBridge and all the other services, like Step Functions, that basically allow you to build your application based on serverless. And this critical point of the architecture of serverless being mature enough, being wide enough, to allow you to do what you want, to have the confidence running serverless in production, to know that you have the tooling that you used to have in the past to monitor, to debug, to secure, to understand cost, all of this is really coming together this year. We actually see this year, and a bit of the end of last year, but this is what's driving a trend in the industry. I think it's still not known enough to many of the organizations, or not wide enough, or not public enough. But our customers are focused on cloud native and serverless. And we've seen a dramatic change in the last six months. And the main change is organizations that used to play around with serverless, that used to do non-business critical usage of serverless, because it's easy, because it makes sense, because it's fast, all of a sudden they got the confidence to do that with their business critical application in production. And this is a shift that we're seeing. And that goes many times with the technology maturity. You start, you play around with something, it makes sense, it makes sense, you get confidence, and boom! This becomes more and more mainstream technology. And we're at the verge of that. >> In terms of a catalyst for that confidence, do you think that the events, the world events of the last 12 months and this acceleration of digital transformation, have played any part in the maturation of the technology that's giving customers the confidence to adopt serverless? >> Yeah, I think it's fascinating, what we're seeing. Because I think the last year's events really pushed organizations to innovate.
For different reasons: because they don't have the head count, so they need to reduce the maintenance that they do, they need to reduce the developer head count, the DevOps head count, they need to reduce costs. Serverless is running only when it needs to run, so you pay only for what you use. So this is another way that our customers, for example, reduce their cost. So I think beyond the maturity of the architecture, the push forward for optimization, for lower cost or lower usage of engineering resources, really pushed serverless forward. And this paradigm, once it worked for one team, it's viral. It's viral within an organization and across organizations. So this team managed to reduce 50% of the cost, and 70% of the developers that need to maintain the production. Let's duplicate that. And let's do that four times, and five times, and 10 times. And this is the point in time that we're at. So that's a trend and I think it's very much impacted by the world economics. >> Interesting, that trend of virality. Let's dig into, you mentioned a couple of benefits. I heard reduction in total cost of ownership, or costs. Talk to me about the lumigo solution, the technology, and what some of those key benefits are that it is consistently delivering to your customers. >> So I think the basics are that serverless makes a lot of sense, economically and for maintenance. That's why the cloud providers are putting so much effort and power into delivering more and more serverless maturity. One of the challenges that we see for almost any organization adopting the new technology goes back to: we understand the value, but at the end of the day I need to make sure that if something goes wrong in production, I will know about it and I will know how to react and fix it in a matter of minutes. 'Cause that's my service, that's my business. And I know how to do it in a server world, where there's one server or three servers, and everything running in the same server. I have the tools for that.
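The pay-only-when-it-runs economics behind those 50% and 70% figures can be put on the back of an envelope. All prices below are invented round numbers for illustration, not AWS list prices:

```python
# Compare an always-on fleet with per-use billing.
# Rates here are made up purely to show the shape of the calculation.

def always_on_monthly(instances: int, hourly_rate: float) -> float:
    """Servers bill 24/7, whether or not they serve traffic."""
    return instances * hourly_rate * 24 * 30

def per_use_monthly(invocations: int, price_per_million: float) -> float:
    """Pay only for the requests actually served."""
    return invocations / 1_000_000 * price_per_million

servers = always_on_monthly(instances=4, hourly_rate=0.10)                  # ~288/month
functions = per_use_monthly(invocations=30_000_000, price_per_million=3.0)  # 90.0/month
saving = 1 - functions / servers                                            # ~0.69
```

Under these toy rates the per-use bill is roughly a third of the always-on one, which is the kind of arithmetic that makes the model spread virally from team to team.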
And I want to go serverless, I want to go cloud native, but all of a sudden there are dozens of services that I consume via APIs and they're a part of a bigger picture of my application. So I'm lacking, many times, the confidence, the tools, the awareness of: something goes wrong, I'll know about it, and I'll be able to fix it. And this is where lumigo comes in. So we built lumigo from the ground up to be very much focused on the modern cloud, on serverless. And that means two main things that we provide for our customers. One is, I would say one thing: we provide confidence. You can use serverless in production, and you can rest assured that if something goes wrong, you will be the one alerted, and we'll give you all the information to debug it. And we do it by two main things. One is the visibility that we create. Because we're connected to the environment, we alert on things that are relevant to serverless. It's not about CPU, it's not about I/O. It's about concurrency limits, it's about cold starts, it's about timeouts, it's about reaching duration limits. These are the things that we know to alert you about. It's very specific to the serverless services. And it's not a generic metric, it's a serverless metric. So that's number one, visibility: getting alerted whenever something is about to go wrong. But what do you do then? Let's say I have one million invocations a day, and one of them is actually, I have a trigger, something went wrong. And this is where lumigo allows the developers to debug. Basically, you click on a specific issue, and lumigo tells you the entire story of what happened, from the very beginning, an API gateway triggering a Lambda, right into DynamoDB, triggering a Lambda; it tells you the entire story, end to end, of what happened with that specific request, with inputs, with outputs, with environment variables. All the things the developer needs in order to debug, to find the root cause, and then fix it in a matter of minutes.
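The serverless-specific visibility Erez describes, cold starts, timeouts, and concurrency rather than CPU or I/O, boils down to rule checks over invocation records. A sketch with illustrative field names and thresholds, not Lumigo's actual schema:

```python
# Sketch of the serverless-specific checks described above: cold starts,
# timeouts, and concurrency, instead of CPU or I/O.
# Field names and thresholds are illustrative assumptions.

def alerts_for(invocation, timeout_ms=30_000, concurrency_limit=1_000):
    """Return the serverless-style alerts one invocation record triggers."""
    alerts = []
    if invocation.get("init_duration_ms", 0) > 0:   # an init phase ran
        alerts.append("cold_start")
    if invocation["duration_ms"] >= 0.9 * timeout_ms:
        alerts.append("near_timeout")
    if invocation.get("concurrent_executions", 0) >= 0.9 * concurrency_limit:
        alerts.append("near_concurrency_limit")
    return alerts

record = {"duration_ms": 28_500, "init_duration_ms": 420, "concurrent_executions": 950}
print(alerts_for(record))  # -> ['cold_start', 'near_timeout', 'near_concurrency_limit']
```

The point of the example is only that the useful signals are platform limits, not host metrics; a real product would read these values from the provider's invocation telemetry.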
And that's the game-changer that allows those organizations to run serverless with confidence. >> You talk about confidence; it's a word that I hear often when I'm talking with customers of vendors. It's not something to be underestimated. It's incredibly important that technology provide that confidence, especially given the events of the last year and a half that we've seen, where suddenly folks couldn't get into data centers, for example. Talk to me a little bit about some of the customers. I saw from your website some great brand names, but talk to me about a customer that you think really not only has that confidence that lumigo is delivering, but is really changing their business and their approach to modern monitoring with lumigo. >> Yeah, so there are several interesting ones. I'll choose maybe one of the more interesting cases, a company called Medtronic. It's one of the largest medical device companies in the U.S. And it's very interesting because they have an IoT backend. Basically they have medical devices around the world that send IoT information back to their cloud. And they get metrics, they run machine learning on that. And they took a strategic decision to run the system with serverless. Because it can scale automatically, because they can deploy one more million devices and they don't need to change anything, and many, many other benefits of serverless. And we met them back in, end of 2019. They were looking for exactly a solution that allows them to get issues and drill down to analyze those issues. And they were just in the beginning. In the early days they had 20 million invocations, requests, per month. They knew they were going to scale, and they knew that when they scale, they cannot correlate logs and try to understand what happened manually. They need a professional tool. And this is where they started using lumigo. And today, a year and a half later, they reached one billion invocations a month.
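The end-to-end "story" of a request described earlier, an API gateway triggering a Lambda, right into DynamoDB, is at its core spans from many services grouped by a shared request id and ordered in time. A toy sketch; the span fields are hypothetical, not Lumigo's schema:

```python
# The "entire story" of one request: spans emitted by API gateway, Lambda,
# and DynamoDB share a request id; grouping and time-ordering them yields
# the end-to-end trace. Toy sketch with made-up span fields.

from collections import defaultdict

def build_traces(spans):
    """Group spans by request_id; order each trace by start timestamp."""
    traces = defaultdict(list)
    for span in spans:
        traces[span["request_id"]].append(span)
    return {rid: sorted(group, key=lambda s: s["ts"]) for rid, group in traces.items()}

spans = [
    {"request_id": "r1", "ts": 2, "service": "lambda"},
    {"request_id": "r1", "ts": 1, "service": "api-gateway"},
    {"request_id": "r1", "ts": 3, "service": "dynamodb"},
]
story = [s["service"] for s in build_traces(spans)["r1"]]
# story == ["api-gateway", "lambda", "dynamodb"]
```

At Medtronic's scale, a billion invocations a month, the hard part is doing this grouping continuously and cheaply, which is exactly why it needs a dedicated tool rather than manual log correlation.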
Again, the same concept: IoT devices, medical devices, sending metrics and information to the backend for processing. And today, lumigo is monitoring everything in that environment. And it alerts them: you're about to have a problem, or you have an application error, or you have high latency, you have a spike of cost; all of that is covered by lumigo. And the developers, once they get this to Slack, to PagerDuty, they're just able to click on it, and drill down and see, one by one, the requests that triggered that alert. And they can understand, again, the inputs, the outputs, the logs, the return values, everything. I call it debugging heaven. Because it's always there, it's always post-mortem, you don't need to do anything. At the same time you get the visibility and you can fix it, because this is their production, this is their business critical application. >> Debugging heaven, I love that. For developers, that is probably a Nirvana state. I want to wrap up, Erez, just giving our folks in the audience an overview of the relationship that lumigo has with AWS. >> AWS is one of our strongest partners. I think there's a great synergy working with AWS. We've been partners for the last three years. And I think the reason for the... You know, we're still... AWS has thousands, tens of thousands of partners. I think that this partnership is specifically strong because there is a win-win relationship over here. On the one hand, lumigo is very much invested in Amazon. Our customers are mostly Amazon customers, and we are solving, providing confidence for those customers to run serverless in production, and answering a need of the customer. And this is also the win for Amazon. Amazon basically has great, great serverless technology. But the lack of visibility, the lack of confidence, is hindering the adoption.
And Amazon decided to work with lumigo, saying: we'll develop the core, we'll develop the services, we'll develop the serverless architecture, and you can use lumigo for monitoring, for debugging, for everything that you need in order to run that in production. And that's been a very, very strong relationship that just grows as we develop together. And it's been about working together with customers, introducing customers, but also on the technology level. For the audience who sees Amazon announcements on serverless, many times lumigo is a design partner. It's part of the announcement that lumigo was a design partner and the launch partner, and supports the new feature out of the box. This is because we want to get the support out as soon as possible, as soon as new features are released. So that's where we are today. >> Sounds like a very collaborative and symbiotic relationship. Erez, thank you for joining me on the program today, talking to us about some of the trends in serverless, some of the things that are catalyzing adoption, that visibility, that confidence, that lumigo delivers to its customers. We thank you for your time. >> Excellent, thank you very much Lisa. Have a good day. >> You too! For Erez Berkner, I'm Lisa Martin. Thanks for watching this Cube Conversation. (bouncy music)
Danielle Royston & Robin Langdon, Totogi Talk | Cloud City Live 2021
(upbeat music) >> Okay, we're back. We're here on the main stage in Cloud City. I'm John Furrier and Dave Vellante. Normally, we're over there on theCUBE set, but here we've got a special presentation. We'll talk about Totogi, and the new CEO of Totogi is Danielle, who is also the CEO of TelcoDR, Digital Revolution. Great to see you. And of course, Robin Langdon, we interviewed you in theCUBE, CTO of Totogi. This is a main stage conversation because this is the big news. >> Yeah. >> You guys launched there with a hundred million dollar investment. We covered that news a couple weeks ago, and you as the CEO. What's the story? Tell us what is happening with Totogi? Why such a big focus? What's the big push? >> Yeah, I'm really excited about Totogi because I really think this team is working to build public cloud tools for telco the right way. It's everything I've been talking about. I talked about it yesterday in my keynote and this is really the execution of that vision. So, I'm super excited about that. A couple of days ago, Rob and I were talking about the charging system, but there's another product that Totogi introduced to the world and that's the webscale BSS system. So I think we're going to talk about that today. It's going to be great. >> Let's get into actually the charging system, which was great processing here. What is this focus? What is BSS about with cloud? How does the public cloud innovation change the game with this? >> Well, a little bit like charging. I mean, there are maybe, you know, a hundred plus BSS systems out there; why does the world need yet another BSS? And I think one thing is we're coupling up with the public cloud, which gives it that webscale element. Right? We can have a platform. Never do another upgrade again, which I think is really exciting. But I think the really key thing that we're working on is we're building on top of an open API standard. And a lot of vendors talk about their APIs, why is this different?
These are standards developed by TM Forum, right? It's an independent body in our industry. They've been working on these, sorry, open APIs, and all the different vendors signed a manifesto that says, "I pledge. I pledge to support the open APIs", but if you look at the leaderboard, everyone is sub-10, sub-5, right? And so it's kind of like going through the motions, you know, saying it, but not following it up; and we're doing it. >> Wow, so... >> Yeah. >> Dave: Robin, you guys just popped up on the leaderboard. You went from a standing start to, I think, more than 10. >> Yeah. >> I don't think that's ever been done before, has it? >> No, so we were out there. We published 12 APIs and we've got a quote from, you know, TM Forum saying, essentially, I've never seen anyone move so fast to publish. And it's our intent to publish, you know, 50 plus, all of their APIs, by the end of the year. >> So, how were you able to do that? I mean, like, were you holding them back? Just kind of dumping them on one day? This is the nature of the new business, isn't it? >> Yeah, absolutely, and then you think about BSS. It's just, you know, been known for years to be a spaghetti of, you know, applications, you know, disparate data, data being duplicated, systems not talking to each other, lots of different interface types. And it was crying out to be just, you know, solved properly in the cloud. And the public cloud is perfect for this. You know, we can build a model and start, rather than looking at the applications first, you know, let's look at the model, the unified model, and build on those open APIs and then start to, you know, allow people to come in and create an ecosystem of applications all using that same model.
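Because the TM Forum open APIs are plain REST resources, a telco's own developers need nothing vendor-specific to call them. A hedged sketch of building such a request: the host below is invented, and the product-catalog path follows the general TMF URL pattern rather than any vendor's actual endpoint:

```python
# Build a request against a TM Forum-style open API resource.
# The host and path are illustrative; consult the published swagger
# for real endpoints.

def tmf_request(base_url: str, resource: str, token: str, fields=None):
    """Return the URL and headers for a GET on a TMF-style resource."""
    url = f"{base_url.rstrip('/')}/{resource}"
    if fields:  # TMF REST guidelines support attribute selection via ?fields=
        url += "?fields=" + ",".join(fields)
    headers = {"Authorization": f"Bearer {token}", "Accept": "application/json"}
    return url, headers

url, headers = tmf_request(
    "https://api.example-telco.com/tmf-api/productCatalogManagement/v4",
    "productOffering",
    token="TOKEN",
    fields=["name", "lifecycleStatus"],
)
# url == "https://api.example-telco.com/tmf-api/productCatalogManagement/v4/productOffering?fields=name,lifecycleStatus"
```

That uniformity across vendors is the whole point of the leaderboard: any client written against the standard resource shapes should work unchanged.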
What about the public cloud dynamic or innovation component that you guys are leveraging? Take us through a little bit on that, because I think that's a big story here that's under the covers is... >> Yeah. >> What you're capable of doing here. Do you mind explaining? >> Yeah, no, absolutely. So the cloud gives us this true scalability across everything. You know, we can scale to billions of records. So we can hook in, you know, to suck in data from, you know, our on-premise systems anywhere. We have, you know, a product called Devflow, so we used to do that. And it can really allow us to bring that data in, scale-out, use standard term cloud innovations, like Lambda functions and AWS, you know, DynamoDB, and present that, you know, through that open API. So we can use, you know graphQL, you know, present that with rest on top. And so you can then build on top of that. You can take any low code, no code application building tool you like, put that on top and then start building your own ecosystem. You can build inventory systems, CRM, anything you like. >> Well one thing that's really interesting about these projects is they usually take months, years to deploy, right? And what we're doing is we're providing, almost BSS as a service, right? It's an API layer that anyone can go to. Maybe you need to use it for five minutes, five months, five years, right? With the open standard and your own developers can learn how to use this text stack and code to it doesn't require us. And so we're really trying to get away from being an SI, you know, systems integrator or heavy services revenue, and instead build the product that enables the telcos to use their own people, to build the applications that they, they know what they want, and so, here you go. >> It's a platform. >> Yeah. >> It's a platform. >> So, how do you connect to systems on the ground? Like what's the modern approach to doing that? >> Yeah, go for it. 
>> Yeah so, telcos have, you know, a huge amount of data on premise. They have difficulty getting to it. So, as I mentioned before, we had this Devflows product and it has connectors. We have like 30 plus connectors to all the standard sort of billing systems, CRM systems, you know; we can hook into things like Salesforce. And we can either, you know, couple up a real-time interface in there, or we can start to suck data into the cloud and then make it available. So, if they want to start with a nice, easy step and just build slowly, we can just hook in and pull that information out. If there's maybe, you know, an attribute that you want to, you know, use in some application, you can easily get to it. And then, you know, over time you start to build your data into the cloud and then you've got the scale, you know, and all the innovations that brings with it. >> So is Devflow an on-ramp, if you will, for the public cloud, is that the way you were thinking about it? >> Yeah. >> Yeah. Yeah, I mean, I call it the slurper. (group chuckles) Right. I mean, these telcos have, like Robin was saying, spaghetti systems that have been, you know, customized and connected and integrated. I mean, it is a jungle out there of data. They're not going to be able to move this in one step. We just think of like a pile of spaghetti, like the whole bowl. >> Overcooked spaghetti. >> Right, overcooked, the whole bowl comes out and it's really hard to just pull out one noodle; the rest is there and what are you going to do? And so the slurper, right, Devflows, allows you to select which data you want to pull out. It could be one time, you could have it sync. You don't have to do the whole thing and it doesn't disrupt the production environment that's on-premise. But now you're starting to move your data into the public cloud and then, like Robin was saying, you can throw it up against QuickSight. You can throw it up against different Amazon services.
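The "slurper" selectivity Danielle describes, pulling one noodle out of the bowl instead of moving everything, is essentially attribute selection over source records. A toy sketch; the record and field names are made up:

```python
# Pull only the selected attributes out of on-premise records, rather than
# moving the whole "bowl of spaghetti". Record and field names are invented.

def slurp(records, wanted_fields):
    """Yield copies of records containing only the wanted fields."""
    for record in records:
        yield {k: v for k, v in record.items() if k in wanted_fields}

crm_rows = [
    {"subscriber_id": 1, "plan": "prepaid", "ssn": "redacted", "notes": "..."},
    {"subscriber_id": 2, "plan": "postpaid", "ssn": "redacted", "notes": "..."},
]
cloud_copy = list(slurp(crm_rows, {"subscriber_id", "plan"}))
# cloud_copy == [{"subscriber_id": 1, "plan": "prepaid"},
#                {"subscriber_id": 2, "plan": "postpaid"}]
```

A real connector would also handle one-time versus continuous sync and write the result to a cloud store, but the selection step is what keeps the on-premise production system undisturbed.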
You can create new applications. And so it's not this, like, you know, big bang kind of approach. You can start to do it in pieces, and I think that's what the industry needs. >> I was talking about this the other day, when we talked about charging. What a lot of vendors will do is they'll put a wrapper around it, containerize it and then shove it into the public cloud and say, "Okay". >> Check mark. >> Yeah, a checkbox. And it affects how they price, if they price the same way. But we talked a lot about pricing the other day, really pricing like cloud, consumption pricing. How are you pricing in this case? >> Same with the charging system. The BSS system is paid by the use, paid by the API call. So, really excited to introduce, yet again, a free tier. We think we're doing 500 million API calls per month for free. We think this is great for a smaller telco where, like, you're experimenting and just getting to know the system before you, like, go all in and buy. And I think that API pricing is going to go right at the heart of some of these vendors that love to charge by the subscriber or a perpetual license agreement, right? They're not quite moving as a service. And so, yeah. >> Are you saying they're going to be disruptive in the pricing, in terms of lower cost or more consumable? >> And I think it's also an easier on-ramp, right? It's easier to start paying by the use and experimenting. And it's really easy, just like I was talking about with charging, where you're going to get the same great product that you would sell to a tier one at a price that you can afford. And now those smaller two or three guys aren't having to make a trade-off between great technology, but I'm paying through the nose, or sacrifice on the tech, but I can afford it. And so, I think you're going to see this ecosystem of people starting to learn how to code and think in this way. Telcos have already decided that they want to adopt the TM Forum open APIs. They're on all the RFPs.
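Danielle's consumption pricing can be sketched as a one-function bill calculator. The 500 million free calls per month comes from the interview; the overage rate below is invented purely for illustration:

```python
# Consumption pricing as described: the first 500 million API calls per
# month are free (per the interview); the overage rate is a made-up number.

FREE_CALLS_PER_MONTH = 500_000_000

def monthly_bill(calls: int, price_per_million: float = 1.0) -> float:
    """Bill only the calls beyond the free tier, priced per million."""
    billable = max(0, calls - FREE_CALLS_PER_MONTH)
    return billable / 1_000_000 * price_per_million

small_telco = monthly_bill(200_000_000)   # inside the free tier -> 0.0
larger_telco = monthly_bill(800_000_000)  # 300M billable calls -> 300.0
```

Contrast this with per-subscriber or perpetual-license pricing, where the bill is fixed regardless of how much of the system a telco actually exercises.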
Do you support it? Everyone says they support it, but we don't see anyone really doing it. They're not on the leaderboard. >> And there's transparency, because you're pricing by API call, right? Versus the spaghetti, you guys call it, the hairball of what am I paying for? >> Right, you're getting all of this. It's by the subscriber. It's millions and millions of dollars. Oh, and you know, you're going to need to buy a bunch of consulting revenue to make it all work and talk to each other. Pay up, right? And that's what we're living in today. And I'm taking us to the, you know, public cloud future by the API. >> This is the big cloud revolution. Its unbundling has been a really big part of the consumption of technology: paid by the usage, get in, get some value, get some data, understand what it is, double down on it, iterate. >> Put it up with different services that are available that we don't have, but Amazon uses, right? They have call centers up there, they have ML that you may want to use; like, start using it, start coding, start learning about the AWS tech stack. >> So is it available now? >> Yeah. >> Yeah. No, it's available now. We've already published the swagger for the BSS APIs. So, you know, they can come on board, they can get access to all the APIs straight away and start using it. They can load up their favorite REST clients and then start developing. >> So you got a dozen APIs today. Where are we headed? What can we expect? >> All by the end of the year. There's over 50 APIs. You know, the number one guy on the board is at like 21, 22 APIs covered. We'll be 50 plus by the end of the year. And we're just going to blow the doors off. >> The API economy has come to telco. >> Yeah, I mean, it's really BSS Lego pieces, right. Assembling these different components and really opening it up. And I think there's been a lot of power by the vendors to keep it locked down, keep it closed. Yes, we have an API, but you got to use our people to do it.
Here's the hundreds of thousands or millions of dollars that you're going to pay us and keep us in business, and fat and happy, and I'm coming right in on the low end. Right, dropping that price, opening it up. I think telcos are going to love it. >> Well, Mike, you said too, you'll allow the smaller telcos to have the same, actually, better capabilities than the larger telcos, right? Maybe the stack's not as mature or whatever, but they'll get there and they'll get there with a simpler, easier to understand pricing model and way, way faster. >> Yeah. >> All right, and that's where the disruption comes. >> And I think this is where AWS has really done well as a hyperscaler against their competition: they've really gotten to market very quickly with their services. Maybe they're not perfect, but they ship 'em. And they get them out there and they get people using them. They use them internally and they get them out. And I think this is where maybe some of the other hyperscalers hold them back and they wait until they're a little bit more mature. And AWS has won because they've been fast. And I want to sort of copy that feat. >> I think your idea of subscriber love in your keynote, and I think it applies here, because Amazon Web Services has done such a great job of working backwards from the customer. So they'd ship it fast on use cases that they know have been proven through customer interactions. >> Yep. >> They don't just make up new features. And then they iterate. They go, "Okay". >> Start simple, grow on that, learn from the market. What are people using? What are they not using? Iterate, iterate, iterate. >> Okay, so with that in mind, working backwards from your customer, how do you see the feature set evolving for this functionality? How do you see it evolving as a product? >> Yeah, I mean, I think all of the BSS systems today have been designed with manual people on the other side of the screen, right?
And we've seen chat bots take off, we've seen, you know, using chat as support. I think we need to start getting into more automation right? Which is really going to change up telco, right? They have thousands of customer support agents and you're like, "Dude, I just want a SIM, that's all I need". >> Yeah. >> Just like, where do I push a button and send an Uber to my house and drop it off, or an eSIM. And so, speeding up business, empowering the subscriber. We know how to interact, we just went through COVID where we learned about different apps that overnight, you can like order all of your groceries and order all of your food and there it is, and it was contactless and... >> It's funny, you said future of work, which we love that term, "work". Workloads, workforce, you got all these kind of new dynamics going on with cloud enablement and the change is radical. And the value is there. There's value opportunities. >> I mean like, you know, where are the AR/VR applications, right? Where your agent pops. I saw the demo. There's a startup in Austin and they're going to kill me 'cause I can't remember their name. But they had a little on your mobile phone, a little holographic customer support. Like, "How can I help you"? Right. And I'm like, "Where's that", like, imagine you're like, AT&T, you're not like on the phone for like an hour and a half trying to like, figure out what's wrong. And it's like, you know, it knows what's wrong. It understands my needs and so, no one's working on that. We're still working on, keyboards. >> Right, that and chat bot is a great example because it's all AI, and where's the best AI? It's in the cloud because that's where the data is. That's where the best of modeling has been. (chuckles) >> I think your point, it's the scale of data. >> Absolutely. >> And machine learning and AI needs a lot of data points to get really good. I mean, I'm old, I'm 50. I graduated in 1993.
I took an AI class from Nils Nilsson, like the godfather of AI, right? Okay, like that AI, even 10 years ago AI, it's just moving so quickly and it's now super affordable. >> Well, I really want to thank you guys for coming up and sharing that knowledge and insight, congratulations on the product and open APIs. Love open APIs, open source, with some new revolution. Danielle and Robin. Thank you so much. >> Thanks so much. >> Thank you. >> Thank you. >> Congratulations. Thank you everyone for coming. (crowd applauding) (people whooping) Okay, back to you in the studio at Cloud City.
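The per-API-call pricing model discussed above can be contrasted with traditional per-subscriber licensing in a few lines of code. This is a minimal sketch, assuming purely hypothetical prices that are not anyone's actual rate card:

```python
# Illustrative only: compares a flat per-subscriber BSS license with
# usage-based per-API-call pricing. All prices are hypothetical.

def flat_license_cost(subscribers: int, price_per_subscriber: float = 1.50) -> float:
    """Traditional BSS model: pay for every subscriber, whether active or not."""
    return subscribers * price_per_subscriber

def per_call_cost(api_calls: int, price_per_call: float = 0.0001) -> float:
    """Cloud model: pay only for the API calls actually made."""
    return api_calls * price_per_call

# A telco with 1M subscribers, each driving roughly 200 BSS calls a month:
subs = 1_000_000
calls = subs * 200
print(flat_license_cost(subs))  # 1500000.0 per month
print(per_call_cost(calls))     # roughly 20000.0 per month
```

With these made-up numbers usage-based pricing comes out far cheaper, but the point the speakers make is really about transparency: you can see exactly what you are paying for, call by call.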
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Danielle | PERSON | 0.99+ |
Robin Langley | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Mike | PERSON | 0.99+ |
Rob | PERSON | 0.99+ |
five minutes | QUANTITY | 0.99+ |
Telco | ORGANIZATION | 0.99+ |
five months | QUANTITY | 0.99+ |
Robin | PERSON | 0.99+ |
five years | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
TelcoDR | ORGANIZATION | 0.99+ |
1993 | DATE | 0.99+ |
Robin Langdon | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Austin | LOCATION | 0.99+ |
50 | QUANTITY | 0.99+ |
Dave | PERSON | 0.99+ |
millions | QUANTITY | 0.99+ |
thousands | QUANTITY | 0.99+ |
Danielle Royston | PERSON | 0.99+ |
Telcos | ORGANIZATION | 0.99+ |
yesterday | DATE | 0.99+ |
telco | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
Totogi | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
Totogi | PERSON | 0.99+ |
12 APIs | QUANTITY | 0.99+ |
Niels Nielsen | PERSON | 0.99+ |
50 plus | QUANTITY | 0.99+ |
three guys | QUANTITY | 0.98+ |
one time | QUANTITY | 0.98+ |
Cloud City | LOCATION | 0.98+ |
30 plus connectors | QUANTITY | 0.98+ |
Devflo | ORGANIZATION | 0.98+ |
more than 10 | QUANTITY | 0.98+ |
Devflow | TITLE | 0.97+ |
one | QUANTITY | 0.97+ |
one step | QUANTITY | 0.97+ |
Lego | ORGANIZATION | 0.97+ |
Lambda | TITLE | 0.97+ |
millions of dollars | QUANTITY | 0.97+ |
an hour and a half | QUANTITY | 0.96+ |
one noodle | QUANTITY | 0.95+ |
10 years ago | DATE | 0.95+ |
one day | QUANTITY | 0.95+ |
over 50 APIs | QUANTITY | 0.94+ |
22 | QUANTITY | 0.94+ |
hundreds of thousands | QUANTITY | 0.94+ |
Uber | ORGANIZATION | 0.94+ |
billions of records | QUANTITY | 0.92+ |
21 | QUANTITY | 0.92+ |
Digital Revolution | ORGANIZATION | 0.91+ |
22 APIs | QUANTITY | 0.91+ |
hundred million dollar | QUANTITY | 0.9+ |
a dozen APIs | QUANTITY | 0.89+ |
CTO | PERSON | 0.89+ |
A couple of days ago | DATE | 0.88+ |
end of the year | DATE | 0.88+ |
Cloud City | ORGANIZATION | 0.88+ |
one thing | QUANTITY | 0.88+ |
500 million API calls | QUANTITY | 0.86+ |
COVID | TITLE | 0.85+ |
a hundred plus | QUANTITY | 0.82+ |
first | QUANTITY | 0.81+ |
couple weeks ago | DATE | 0.77+ |
Devflows | TITLE | 0.76+ |
DynamoDB | TITLE | 0.75+ |
customer | QUANTITY | 0.75+ |
graphQL | TITLE | 0.74+ |
BSS | ORGANIZATION | 0.73+ |
telcos | ORGANIZATION | 0.72+ |
Cloud City Live 2021 | EVENT | 0.68+ |
BSS | TITLE | 0.67+ |
Danielle Royston & Robin Langdon, Totogi | Cloud City Live 2021
(gentle music) >> Okay, thank you Adam. Thank you everyone for joining us on the main stage here, folks watching, appreciate it. I'm John Furrier, with Dave Vellante, co-hosts of theCube. We're here on the main stage to talk to the two main players of Totogi, Danielle Royston, CEO as of today, the big news. Congratulations. >> Danielle: Yeah. Thank you. >> And Robin Langdon, the CTO of Totogi. >> Robin: Thanks. So big news, CEO news today and $100 million investment. Everyone wants to know where's all the action? Why is this so popular right now? (Danielle chuckles) What's going on? Give us the quick update. >> Yeah, I met the Totogi guys and they have this great product I was really excited about. They're focused purely on telco software and bringing, coupling that with the Public Cloud, which is everything that I talk about, what I've been about for so long. And I really wanted to give them enough funding so they could focus on building great products. A lot of times, telcos, startups, you know they try to get a quick win. They kind of chase the big guys and I really wanted to make sure they were focused on building a great product. #2, I really wanted to show the industry, they had the funding they needed to be a real player. This wasn't like $5 million or a couple million dollars, so that was really important. And then #3, I want to make sure that we could hire great talent and you need money for compensation. And so $100 million it is.
And I think that's where it changes from being a charging engine to become an engagement engine. Telcos know more about us than Google, which is kind of crazy to think about. They know when we wake up, they know what apps we use. If we call or text, if we game or stream and it's time to start using that data to drive a better experience to us. And I think Totogi is enabling that. I'm super excited to do that. >> So Robin, I wonder if you could talk about that a little bit. I mean, maybe we get into the plumbing and I don't want to go too deep in it, but I think it's important because we've seen this movie before where people take their on-prem stacks, they wrap it in containers and they shove it into the Public Cloud and they say, "Hey, we're cloud too." In reading a press release, you guys are taking advantage of things like Amazon Nitro of course, but also Graviton and Graviton2 and eventually 3, which are the underlying capabilities that give you a cloud native advantage. Can you explain that a little bit? >> Yeah, absolutely. I mean, we wanted to build this in the Cloud using all of those great cloud innovations. So Graviton2, DynamoDB and using their infrastructure, just allowing us to be able to scale out. These are all available to us to use and essentially free for us to use. And it's great, so as you say, we're not shoehorning something in that's decades-old technology, wrapping it in some kind of container and pushing it in. Which is just then, you just can't use any of those great innovations.
So we just blew away everyone else out there. And that's really because we could take advantage of all that great AWS technology in there, and on the database side we're using DynamoDB, where we had a huge debate about what kind of database to go and use. There's a lot of people out there who probably get very religious about the kind of database technology that you should be using, and whether it should be SQL, in-memory, object database type technology, but really a single table design gives you that true scalability. You can just horizontally scale that easily, across the whole planet. >> You know, Danielle. Again, I said that we've seen this movie before. There are a lot of parallels in telco with the enterprise. And if you look at enterprise SaaS pricing, a lot of it is very static, kind of lock you in, per seat pricing, kind of an old model. And you're seeing a lot of the modern SaaS companies who are emerging with consumption pricing models. How are you guys thinking about pricing? >> Yeah, I don't know of any other company in telco that's starting to price by usage. And that is a very standard offering with the cloud providers, right? Google we know, Amazon, all those guys have a price by the API, price by the transaction. So we're really excited to offer that to telcos. They've been asking for it for a while, right? Pay for what you need, when you need it, by the use. And so we're really excited to offer that, but I think what's really cool is the idea of a free tier, right? And so I think smaller telcos have a trade-off to make: whether, am I going to buy the best technology and pay through the nose and maybe at an unaffordable level, or do I compromise and buy something more affordable, but not as great. And what's so great about Totogi, it's the same product just priced for what you need. And so I think a CSP below 250,000 subscribers should be able to use Totogi absolutely for free.
And that is, and it's the same product that the big guy would get. So it's not a junior version or scaled back. And so I think that's really exciting. I think we're the only ones that do it. So here we go. >> Love the freemium model. So Robin, maybe you could explain why that's so important in the charging space, because you've got a lot of different options that you want to configure for the consumer. >> Yeah. >> Maybe you could talk about sort of how the old world does that, the old guard and how long it takes and how you're approaching this. >> Yeah so it's, I mean traditionally, charging design, there's as you say, there's lots of different pricing levers you want to be able to move and change to charge different people. And these systems, even if they say they're configurable, normally turn into an IT project where it takes weeks, months, even years to build out the system, you know, marketing can't just go in there and configure the dials and push out your new plans and tariffs. They have to go and create a requirement specification. They hand it down to IT. Those guys go and create a big change project. And by the time they're finished, the market's moved on. They're on to their next plan, their next tariff to go and build. So we wanted to create something that was truly configurable from a marketing standpoint. You know, user-friendly, they can go in there, configure it and be live in minutes, not even days or weeks. >> No IT necessary. >> Robin: No IT necessary. >> So you know, I've been thinking about, John and I talk about this all the time, it's that there's a data play here. And what I think you're doing is actually building a data product. I think there's a new metric emerging in the industry, which is how long does it take me to go from idea to monetization with a data product. And that's what this is. This is a data product >> Yeah. >> for your customers.
>> Absolutely, what Robin was talking about is totally the way the industry works. It's weeks before you have an idea and get it out to the market. And like Robin was mentioning, the market's changed by the time you get it out there, the data's stale. And so we researched every single plan in the world from every single CSP. There is about 30,000 plans in the world, right? The bigger you are, the more plans you have. On average, a tier one telco has 40 to 50 plans. And so how many offers, I mean think about, that's how many phones to buy, plans to buy. And so we're like, let's get some insight on the plans. Let's drive it into a standardization, right? Let's make them, which ones work, which ones don't. And that's, I think you're right. I think it's a data play and putting the power back into the marketer's hands and not with IT. >> So there's a lot of data on-prem. Explain why I can't do this with my on-prem data. >> Oh, well today that, I mean, sorry if you want to jump in. Feel free to jump in, right. But today, the products are designed in a way where they're, perpetually licensed, by the subscriber, rigid systems, not API based. I mean, there might be an API, but you got to pay through the nose to use it or you got to use the provider's people to code against it. They're inflexible. They were written when voice was the primary revenue driver, not data, right? And so they've been shoehorned, right? Like Robin was saying, shoehorned to be able to move into the world that we are now. I mean, when the iPhone came about that introduced apps and data went through the roof and the systems were written for voice, not written for data. >> And that's a good point, if you think about the telco industry, it seems like it could be a glacier that just needs to just break and just like, just get modern because we all have phones. We have apps. We can delete them. And the billing plans, like either nonexistent or it has to be all free. >> Well I mean, I'll ask you. 
Do you know what your billing plan is? Do you know how much data you use on a monthly basis? No one knows. >> I have no clue. >> A lot. >> No one. And so what you do is you buy unlimited. >> Dave: Right. >> You overpay. And so what we're seeing in the plans is that if you actually knew how much you used, you would be able to maybe pay less, which I remember the telcos are not excited to hear that message, but it's a win for the subscriber. And if you could >> I mean it's only >> accurately predict that. >> get lower and lower. I was having a conversation last night at dinner with industry analysts, we're talking about a vehicle e-commerce, commerce in your car as you're driving. You can get that kind of with a 5G. The trend is transactions everywhere, ad-hoc, ephemeral... >> Yeah. >> The new apps are going to demand this kind of subscriber billing. >> Yeah >> Do people get this? Are you guys the only ones kind of like on this? >> No I think people have been talking about it for years. I think there's vendors out there that have been trying to offer this idea of like, build your own plan and all that other stuff but I think it's more than just minutes, text and data. It's starting to really understand what subscribers are using, right? Are you a football fan? Are you a golf fan? Are you a shopper? Are you a concert goer? And couple that with how you use your phone and putting out offers that are really exciting to subscribers so that we love our telco. Like we should be loving our telco. And I don't... I don't know that people talk >> They saved us >> about loving their telco. >> from the pandemic
(Danielle laughing) >> So the big technical issues, you're trying to build in this flexibility so that you can have, we don't know what people are going to configure in the future. Minutes and text messages are given away for free. They're unlimited. Data is where it's at, about charging for apps and about using all that data in the network the telcos have, which is extremely valuable and there's a wealth of information in there that can be monetized and pushed out. And they need a charging system on top that can manage that and we have the flexibility that you don't have to go off and then start creating programs and IT projects that are going to do that. >> Well it's funny Danielle, you say that the telcos might not like that, right? 'Cause you might pay less. But in fact, that is the kind of on-prem mindset because when you have a fixed resource, you say, okay, don't use too much because we have to buy more. Or you overbuy to your point. The cloud mindset is, I'll try it. I'll try some more, I'll try some more. I'm aligning it with business value. Oh, I'm making money. Oh, great. I'm going to keep buying more. And it's very clear. It's transparent where the business value is. So my question is when you think about your charging engine and all this data conversation, is there more than just a charging engine in this platform? >> Well, I think just going back to what Robin was talking about. I think what Totogi is doing differently is that building it on the Public Cloud gives you virtually unlimited resources, right? In a couple of different directions, certainly hardware and capacity and scalability and all those other things, right? But also as Amazon is putting out more and more product, when you build it in this new way, you can take advantage of these new services very, very easily. And that is a different mindset. It's a different way to deploy applications. And I think that's what makes Totogi really different.
You couldn't build Totogi on-premise because you need the infinite scalability. You need the machine learning, you need the AI of Amazon, which they have been investing in for decades, and they now charge you by the API call. And you get to use it like you were saying. Just give it a try, don't like it, stop. And it's just a completely different way of thinking, yeah. >> I have to ask you a question about the Public Cloud, because the theme here in Cloud City is the Public Cloud is driving innovation, which also includes disruption. And the new brands are coming in, old brands are either reinventing themselves or falling away. What is the Public Cloud truly enabling? Is it the scale? Is it the data? Is it the refactoring capability? What is the real driver of the Public Cloud innovation? >> I think the insight that CSPs are going to have is what Jamie Dimon had in banking. Like I think he was pretty famously saying, "I'm never going to use the Public Cloud. Our data is too precious, you know, regulations and all that stuff." But I think the insight they're going to have, and I hopefully, I do a keynote and I mentioned this, which is feature velocity. The ability to put out features in a day or two. Our feature velocity in telco is months. Months, months. >> Seriously? >> Yeah, sometimes years. It's just so slow between big iterations of new capability and to be able to put out new features in minutes or days and be able to outmaneuver your competition is unheard of. So the CSPs that start to get this, it's going to be a real big get, and then they're going to start to.. (Danielle makes swishing sound) >> We just interviewed (Dave speaking indistinctly) a venture capitalist, Dave and I last month. And he's a big investor in Snowflake, on the big deals. He said that the new strategy that's working is you got to be agile with feature acceleration. We just talked about this at lunch and you get data.
And you can dismantle the bad features quickly and double down >> Yup. >> on the winners. >> Ones that are working. So what used to be feature creep now is a benefit if you play it right? >> Danielle: It's feature experimentation. >> That's essentially what you- >> It's experimentation, right? And you're like, that one worked, this one didn't, kill that one, double down on this one, go faster and faster and so feature experimentation, which you can't do in telco, because every time we ask for a feature from your current vendor, it's hundreds of thousands, if not millions of dollars. So you don't experiment. And so yeah- >> You can make features disposable. >> Correct. And I think that we just discovered that on this stage just now. (group chuckling) >> Hey look at this. Digital revolution, DR. Telco DR. >> Yeah. >> Great to have you guys. >> This is super awesome. Thanks so much. >> You guys are amazing. Congratulations. And we're looking forward to the more innovation stories again, get out there, get the momentum. Great stuff. >> Danielle: It's going to be great. >> And awesome. >> Feature experimentation. >> Yeah. >> Hashtag. >> And Dave and I are going to head back over to our Cube set here, here on the main stage. We'll toss it back to the Adam in the studio. Adam, back to you and take it from here.
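Robin's "single table design" remark earlier in the conversation refers to a common DynamoDB modeling pattern in which one table holds every entity type, distinguished by the shape of its keys. Below is a minimal sketch of the idea using an in-memory list as a stand-in for the table; the key names and item layout are illustrative assumptions, not Totogi's actual schema:

```python
# Sketch of DynamoDB-style single-table design: one table holds both
# subscriber profiles and their charging events, told apart by key shape.
# Key names and item layout are illustrative assumptions only.

table = []  # stand-in for the DynamoDB table

def put_subscriber(sub_id: str, plan: str) -> None:
    table.append({"PK": f"SUB#{sub_id}", "SK": "PROFILE", "plan": plan})

def put_charge(sub_id: str, ts: str, mb_used: int) -> None:
    table.append({"PK": f"SUB#{sub_id}", "SK": f"CHARGE#{ts}", "mb": mb_used})

def query(pk: str, sk_prefix: str = "") -> list:
    """Mimics a DynamoDB Query: exact partition key, begins_with on sort key."""
    return [i for i in table if i["PK"] == pk and i["SK"].startswith(sk_prefix)]

put_subscriber("42", plan="unlimited")
put_charge("42", "2021-06-01T10:00", mb_used=120)
put_charge("42", "2021-06-01T11:00", mb_used=80)

# One partition-key lookup returns the profile and all charges together:
print(len(query("SUB#42")))             # 3
print(len(query("SUB#42", "CHARGE#")))  # 2
```

Because a subscriber's profile and charging events share one partition key, a single query retrieves them together, and partitions spread evenly across the fleet, which is what lets this kind of design scale horizontally.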
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave | PERSON | 0.99+ |
Danielle | PERSON | 0.99+ |
40 | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
Jamie Dimon | PERSON | 0.99+ |
Robin | PERSON | 0.99+ |
Adam | PERSON | 0.99+ |
Danielle Royston | PERSON | 0.99+ |
Robin Langdon | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
$5 million | QUANTITY | 0.99+ |
Daniel | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
$100 million | QUANTITY | 0.99+ |
telco | ORGANIZATION | 0.99+ |
Telcos | ORGANIZATION | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Totogi | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
millions of dollars | QUANTITY | 0.99+ |
a day | QUANTITY | 0.99+ |
50 plans | QUANTITY | 0.99+ |
hundreds of thousands | QUANTITY | 0.99+ |
last month | DATE | 0.99+ |
two main players | QUANTITY | 0.98+ |
Totogi | PERSON | 0.98+ |
telcos | ORGANIZATION | 0.98+ |
about 30,000 plans | QUANTITY | 0.98+ |
single table | QUANTITY | 0.97+ |
two | QUANTITY | 0.96+ |
last night | DATE | 0.96+ |
Telco | ORGANIZATION | 0.95+ |
pandemic | EVENT | 0.95+ |
below 250,000 subscribers | QUANTITY | 0.95+ |
decades | QUANTITY | 0.94+ |
DynamoDB | TITLE | 0.92+ |
Cloud City | TITLE | 0.91+ |
Snowflake | TITLE | 0.9+ |
2021 | DATE | 0.86+ |
couple million dollars | QUANTITY | 0.86+ |
Cube | COMMERCIAL_ITEM | 0.84+ |
a million transactions per second | QUANTITY | 0.82+ |
theCube | ORGANIZATION | 0.81+ |
Nipun Agarwal, Oracle | CUBEconversation
(bright upbeat music) >> Hello everyone, and welcome to the special exclusive CUBE Conversation, where we continue our coverage of the trends of the database market. With me is Nipun Agarwal, who's the vice president, MySQL HeatWave and advanced development at Oracle. Nipun, welcome. >> Thank you Dave. >> I love to have technical people on the Cube to educate, debate, inform, and we've extensively covered this market. We were all over the Snowflake IPO and at that time I remember, I challenged organizations: bring your best people. Because I want to better understand what's happening in database. After Oracle kind of won the database wars 20 years ago, database kind of got boring. And then it got really exciting with the big data movement, and all the NoSQL stuff coming out, and Hadoop and blah, blah, blah. And now it's just exploding. You're seeing huge investments from many of your competitors, VCs are trying to get into the action. Meanwhile, as I've said many, many times, your chairman and head of technology, CTO, Larry Ellison, continues to invest to keep Oracle relevant. So it's really been fun to watch and I really appreciate you coming on. >> Sure thing. >> We have written extensively, we talked to a lot of Oracle customers. You've got the leading mission critical database in the world. Everybody from Fortune 100, we evaluated what Gartner said about the operational databases. I think there's not a lot of question there. And we've written about that on Wikibon about your converged databases, and the strategy there, and we're going to get into that. We've covered Autonomous Data Warehouse, Exadata Cloud at Customer, and then we just want to really try to get into your area, which has, kind of, caught our attention recently. And I'm talking about the MySQL Database Service with HeatWave. I love the name, I laugh. It was unveiled, I don't know, a few months ago. So Nipun, let's start the discussion today.
Maybe you can update our viewers on what is HeatWave? What's the overall focus with Oracle? And how does it fit into the Cloud Database Service? >> Sure Dave. So HeatWave is an in-memory query accelerator for the MySQL Database Service for speeding up analytic queries as well as long running complex OLTP queries. And this is all done in the context of a single database which is the MySQL Database Service. Also, all existing MySQL applications or MySQL compatible tools and applications continue to work as is. So there is no change. And with this HeatWave, Oracle is delivering the only MySQL service which provides customers with a single unified platform for both analytic as well as transaction processing workloads. >> Okay, so, we've seen open source databases in the cloud growing very rapidly. I mentioned Snowflake, I think Google's BigQuery gets some mention, we'll talk, we'll maybe talk more about Redshift later on, but what I'm wondering, well let's talk about now, how does MySQL HeatWave service, how does that compare to MySQL-based services from other cloud vendors? I can get MySQL from others. In fact, I think we do. I think we run Wikibon on the LAMP stack. I think it's running on Amazon, but so how does your service compare? >> No other vendor, like, no other vendor offers this differentiated solution with an open source database namely, having a single database, which is optimized both for transactional processing and analytics, right? So the example is like MySQL. A lot of other cloud vendors provide MySQL service but MySQL has been optimized for transaction processing so when customers need to run analytics they need to move the data out of MySQL into some other database for any analytics, right? So we are the only vendor which is now offering this unified solution for both transactional processing and analytics. That's the first point.
Second thing is, most of the vendors out there have taken open source databases and they're basically hosting them in the cloud. Whereas HeatWave has been designed from the ground up for the cloud, and it is 100% compatible with MySQL applications. And the fact that we have designed it from the ground up for the cloud, and spent hundreds of person-years of research and engineering, means that we have a solution, which is very, very scalable, it's very optimized in terms of performance, and it is very inexpensive in terms of the cost. >> Are you saying, well, wait, are you saying that you essentially rewrote MySQL to create HeatWave but at the same time maintained compatibility with existing applications? >> Right. So we enhanced MySQL significantly and we wrote a whole bunch of new code which is brand new code optimized for the cloud in such a manner that yes, it is 100% compatible with all existing MySQL applications. >> What does it mean? And if I'm to optimize for the cloud, I mean, I hear that and I say, okay, it's taking advantage of cloud-native. I hear kind of the buzzwords, cloud-first, cloud-native. What does it specifically mean from a technical standpoint? >> Right. So first, let's talk about performance. What we have done is that we have looked at two aspects. We have worked with shapes like for instance, like, the compute shapes which provide the best performance per dollar. So I'll give you a couple of examples. We have optimized for certain chips. So, HeatWave is an in-memory query accelerator. So the cost of the system is dominated by the cost of memory. So we are working with chips which provide the cheapest cost per terabyte of memory. Secondly, we are using commodity cloud services in such a manner that it's optimized for both performance as well as performance per dollar. So, example is, we are not using any locally-attached SSDs. We use ObjectStore because it's very inexpensive.
And then I guess at some point I will get into the details of the architecture. The system has been really, really designed for massive scalability. So as you add more compute, as you add more servers, the system continues to scale almost perfectly linearly. So this is what I mean in terms of being optimized for the cloud. >> All right, great. >> And furthermore, (indistinct). >> Thank you. No, carry on. >> Over the next few months, you will see a bunch of other announcements where we're adding a whole bunch of machine learning and data driven-based automation which we believe is critical for the cloud. So optimized for performance, optimized for the cloud, and machine learning-based automation which we believe is critical for any good cloud-based service. >> All right, I want to come back and ask you more about the architecture, but you mentioned some of the others taking open source databases and shoving them into the cloud. Let's take the example of AWS. They have a series of specialized data stores for different workloads: Aurora is for OLTP, which I actually think is based on MySQL, and Redshift which is based on ParAccel. And so, and I've asked Amazon about this, and their response is, actually kind of made sense to me. Look, we want the right tool for the right job, we want access to the primitives because when the market changes we can change faster as opposed to, if we put, if we start building bigger and bigger databases with more functionality, it's, we're not as agile. So that kind of made sense to me. I know we, again, we use a lot, we use, I think I said MySQL in Amazon we're using DynamoDB, works, that's cool. We're not huge. And I, we fully admit and we've researched this, when you start to get big that starts to get maybe expensive. But what do you think about that approach and why is your approach better?
>> Right, we believe that there are multiple drawbacks to having different databases or different services, one optimized for transactional processing and one for analytics, and having to ETL between these different services. First of all, it's expensive, because you have to manage different databases. Secondly, it's complex. From an application standpoint, applications now need to understand the semantics of two different databases. It's inefficient, because you have to transfer data via some RPC mechanism from one database to the other. It's not secure, because there are security aspects involved when you're transferring data, and also the identity of users in the two different databases is different. So the approach which has been taken by Amazon and others, we believe, is more costly, complex, inefficient and not secure. Whereas with HeatWave, all the data resides in one database, which is MySQL, and it can run both transaction processing and analytics. So in addition to all the benefits I talked about, customers can also make their decisions in real time, because there is no need to move the data. All the data resides in a single database, so as soon as you make any changes, those changes are visible to customers for queries right away, which is not the case when you have different siloed, specialized databases. >> Okay, there are a lot of ways to skin a cat, and what you just said makes sense. By the way, we were saying before that companies have taken off-the-shelf or open source databases and shoved them into the cloud. I have to give Amazon some props. They actually have done real engineering on Aurora and Redshift, and they've got the engineering capabilities to do that.
But you can see, for example, in Redshift, the way they handle separating compute from storage is maybe not as elegant as some of the other players, like a Snowflake, for example, but they get there; maybe it's a little bit more brute force. So I don't want to just make it sound like they're just hosting off-the-shelf software in the cloud. But is it fair to say that there's like a crossover point? In other words, if I'm smaller, like us, and I'm not doing a bunch of big workloads, it's fine. It's easy, I spin it up. It's cheaper than having to host my own servers. So presumably there's a sweet spot for that approach and a sweet spot for your approach. Is that fair, or do you feel like you can cover a wider spectrum? >> We feel we can cover the entire spectrum, not wider, the entire spectrum. And we have benchmarks published which are actually available on GitHub for anyone to try. You will see that with the approach we have taken with the MySQL Database Service and HeatWave, we are faster and we are cheaper without having to move the data. And the mileage, or the amount of improvement you will get, will surely vary. So if you have less data, the amount of improvement you will get may be, say, 100 times or 500 times at smaller data sizes. If you get to larger data sizes, this improvement amplifies to 1000 times or 10,000 times. And similarly for the cost: if the data size is smaller, the cost advantage you will have is less; maybe MySQL HeatWave is one third the cost. If the data size is larger, the cost advantage amplifies. So to your point, MySQL Database Service with HeatWave is going to be better for all sizes, but the amount of mileage or the amount of benefit you will get increases as the size of the data increases. >> Okay, so you're saying you've got better performance, better cost, better price performance. Let me just push back a little bit on this, because, having been around for a while, I often see these performance and price comparisons.
And what often happens is a vendor will take the latest and greatest, the one they just announced, and they'll compare it to an N-1 or an N-2 running on old hardware. So are you normalizing for that, or is that the game you're playing here? I mean, how can you give us confidence that these are legitimate benchmarks in your GitHub repo? >> Absolutely. I'll give you a bunch of information. But let me preface this by saying that all of our scripts are available in the open in the GitHub repo for anyone to try, and we would welcome feedback. So we have taken, yes, the latest version of MySQL Database Service with HeatWave, we have optimized it, and we have run multiple benchmarks, for instance TPC-H and TPC-DS, right? Because the amount of improvement a query will get depends upon the specific query, it depends upon the predicates, it depends on the selectivity, so we just wanted to use standard benchmarks. So it's not the case that certain classes of queries benefit more; these are standard benchmarks. Similarly, for the other vendors or other services like Redshift, we have run benchmarks on the latest shapes of Redshift, the most optimized configuration which they recommend, running their scripts. So this is not something where, hey, we're just running out of the box. We have optimized Aurora, we have optimized (indistinct) to the best possible extent we can, based on their guidelines, based on their latest release, and those are the numbers we're talking about. >> All right. Please continue. >> Now, for some other vendors, and we'll get to this in the benchmark section, we are comparing with other services, let's say Snowflake. Well, there are issues there; you can't legally run and publish Snowflake benchmark numbers, right? So there, we have looked at a report published by Gigaom.
We are taking the numbers published by the Gigaom report for Snowflake, Google BigQuery and, as you'll see, Azure Synapse. So those, we have not run ourselves. But for AWS Redshift, as well as AWS Aurora, we have run the numbers, and I believe these are the best numbers anyone can get. >> I saw that Gigaom report, and I've got to say, Gigaom, sometimes I'm like, eh, but I've got to say that, I forget the guy's name, he knew what he was talking about. He did a good job, I thought. I was curious as to the workload; I always ask, what's the workload? But I thought that report was pretty detailed. And Snowflake did not look great in that report. They've been marketing the heck out of it, and I forget who sponsored it, but it was sponsored content. Still, I remember seeing that and thinking, hmm. So I think maybe for Snowflake that sweet spot is not that performance; maybe it's the simplicity, and I think that's where they're making their mark. And most of their databases are small, with a lot of read-only stuff, and so they've found a market there. But I want to come back to the architecture and really sort of understand how you've been able to get this range of both performance and cost you talked about. I thought I heard that you're optimizing for the chips, you're using ObjectStore. You've got an architecture that's not using SSDs, it's using ObjectStore. So, is there caching there? I wonder if you could just give us some details of the architecture and tell us how you got to where you are. >> Right, so let me start off saying what kind of numbers we are talking about, just to be clear what the improvements are. So if you take the MySQL Database Service with HeatWave in Oracle Cloud and compare it with a MySQL service in any other cloud, and if you look at smaller data sizes, say data sizes which are about half a terabyte or so, HeatWave is 400 times faster.
And as you get to... >> Sorry. Sorry to interrupt. What are you measuring there? Faster in terms of what? >> Latency. So we take the TPC-H 22 queries, we run them on HeatWave, and we run the same queries on a MySQL service in any other cloud at half a terabyte, and the performance in terms of latency is 400 times faster in HeatWave. >> Thank you. Okay. >> If you go to larger data sizes, say something like 4 TB, there we did two comparisons. One is with AWS Aurora, which is, as you said, they have taken MySQL, they have done a bunch of innovations over there, and they are offering it as a premier service. So on 4 TB TPC-H, MySQL Database Service with HeatWave is 1100 times faster than Aurora. It is three times faster than the fastest shape of Redshift. Redshift comes in different flavors; we're talking about dense compute 2 here, right? And again, looking at the most recommended configuration from Redshift. So 1100 times faster than Aurora, three times faster than Redshift, and at one third the cost. So this is where I really want to point out that it is much faster and much cheaper. One third the cost. And then going back to the Gigaom report, there was a comparison done with Snowflake, Google BigQuery, Redshift, Azure Synapse. I won't go into the numbers here, but HeatWave was faster on both TPC-H as well as TPC-DS across all these products, and cheaper compared to any of these products. So faster and cheaper on both the benchmarks across all these products. Now let's come to, like, what is the technology underneath? >> Great. >> So basically there are three things you're going to see: improved performance, very good scalability, and lower cost. So the first thing is that HeatWave has been optimized for the cloud. And when I say that, we talked about this a bit earlier: one is we are using the cheapest shapes which are available.
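The price-performance claim above combines two quoted figures, a latency speedup and a cost ratio, and they multiply. A minimal sketch of that arithmetic (the figures are the ones quoted in the conversation, not independently verified):

```python
def price_performance(speedup, cost_advantage):
    """Relative price-performance versus a baseline system.

    speedup:        how many times faster (3 -> 3x lower query latency)
    cost_advantage: how many times cheaper (3 -> one third the cost)
    """
    return speedup * cost_advantage

# Figures quoted for 4 TB TPC-H vs. the fastest Redshift shape:
# three times faster at one third the cost.
print(price_performance(3, 3))  # -> 9, i.e. ~9x more query work per dollar
```

This is why "faster and cheaper" compounds: a 3x speedup at equal cost is a 3x price-performance gain, but 3x faster at one third the cost is roughly 9x.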
We're using the cheapest services which are available without having to compromise performance, and then there is this machine learning-based automation. Now, underneath, in terms of the architecture of HeatWave, there are basically, I would say, four key things. First, HeatWave is an in-memory engine; the representation which we have in memory is a hybrid columnar representation which is optimized for vector processing. That's pretty much table stakes these days for anyone who wants to do in-memory analytics, except that it's hybrid columnar and optimized for vector processing. So that's the first thing. The second thing, which starts getting to be novel, is that HeatWave has a massively parallel architecture which is enabled by a massively partitioned architecture. So we read the data from MySQL into the memory of HeatWave and we massively partition this data. As we're reading the data, we're partitioning the data based on the workload, and the size of these partitions is such that each fits in the cache of the underlying processor, and then we're able to consume these partitions really, really fast. So that's the second bit: a massively parallel architecture enabled by a massively partitioned architecture. Then the third thing is that we have developed new state-of-the-art algorithms for distributed query processing. For many of the workloads, we find that joins are the long pole in terms of the amount of time they take. So we at Oracle have developed new algorithms for distributed join processing, and similarly for many other operators. And this is how we're able to process this data, which is in memory, really, really fast.
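The cache-sized partitioning idea described above can be sketched in a few lines: split the data into chunks no larger than a per-core cache budget, so each chunk can be scanned without spilling. This is an illustrative sketch only, not HeatWave code, and all of the sizes are made up:

```python
def partition(rows, row_bytes, cache_bytes):
    """Split rows into partitions sized to fit a CPU cache budget.

    Each partition can then be scanned independently by one core,
    which is what enables the massively parallel processing step.
    """
    rows_per_part = max(1, cache_bytes // row_bytes)
    return [rows[i:i + rows_per_part] for i in range(0, len(rows), rows_per_part)]

# Hypothetical numbers: one million 100-byte rows, a 32 MB cache budget.
parts = partition(list(range(1_000_000)), 100, 32 * 1024 * 1024)
print(len(parts))  # -> 3 partitions of up to 335,544 rows each
```

The key property is that partition size is chosen from the hardware (the cache), not the data, so adding cores or nodes simply means scanning more partitions concurrently.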
And finally, we have designed for scalability: we have designed algorithms such that there's a lot of overlap between compute and communication, which means that as you're sending data across various nodes, and there could be dozens or hundreds of nodes, we're able to overlap the computation time with the communication time, and this is what gives us massive scalability in the cloud. >> Yeah, so, some hard-core database techniques that you've brought to HeatWave; that's impressive. Thank you for that description. Let me ask you, just as a quick aside: MySQL is open source, so HeatWave is what? Is it like open core? Is it open source? >> No, HeatWave is something which has been designed and optimized for the cloud, so it can't be open source; it's not open source. >> It is a service. >> It is a service. That's correct. >> So it's a managed service that I pay Oracle to host for me. Okay. Got it. >> That's right. >> Okay, I wonder if you could talk about some of the use cases that you're seeing for HeatWave, any patterns that you're seeing with customers? >> Sure, so we've had the HeatWave service in limited availability for almost 15 months, and it's been about five months since we have gone GA. And there's a very interesting set of trends we're seeing with our customers. The first one is, we are seeing many migrations from AWS, specifically from Aurora. Similarly, we are seeing many migrations from Azure MySQL, and we're seeing migrations from Google. And the number one reason customers are coming is ease of use, because they have their databases currently siloed, as you were talking about: some optimized for transactional processing, some for analytics. Here, what customers find is that in a single database, they're able to get very good performance, they don't need to move the data around, and they don't need to manage multiple databases. So we are seeing many migrations from these services.
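The compute/communication overlap described in the architecture discussion above is classic pipelining: while one chunk of data is being processed, the next is already in flight. A toy single-machine sketch with a background thread standing in for the network (illustrative only, nothing HeatWave-specific):

```python
import queue
import threading

def stream(chunks, q):
    # "Communication": deliver chunks to the worker as they arrive.
    for chunk in chunks:
        q.put(chunk)
    q.put(None)  # sentinel: no more data

def process_overlapped(chunks):
    q = queue.Queue(maxsize=2)  # small buffer lets delivery run ahead of compute
    threading.Thread(target=stream, args=(chunks, q), daemon=True).start()
    total = 0
    while (chunk := q.get()) is not None:
        total += sum(chunk)  # "compute" on this chunk overlaps the next delivery
    return total

print(process_overlapped([[1, 2], [3, 4], [5]]))  # -> 15
```

When transfer time and compute time per chunk are comparable, this kind of pipelining can nearly halve total wall-clock time versus fetching everything first and computing afterward.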
And the number one reason is reduced complexity and ease of use. The second one is much better performance and reduced cost, right? So that's the first thing; we are very excited and delighted to see the number of migrations we're getting. The second thing we're seeing is that initially, when we announced the service, we were really targeting analytics. But now what we're finding is that many of these customers, for instance those who have been running on Aurora, when they move to MySQL with HeatWave, are finding that many of their OLTP queries as well see significant acceleration with HeatWave. So now customers are moving their entire applications to HeatWave. That's the second trend we're seeing. The third thing, and I think I missed mentioning this earlier, is one of the very key and unique value propositions we provide with the MySQL Database Service with HeatWave: we provide a mechanism where, if customers have their data stored on premises, they can still leverage the HeatWave service by enabling MySQL replication. So they can have their data on premises, they can replicate this data to the Oracle Cloud, and then they can run analytics. This deployment, which we are calling the hybrid deployment, is turning out to be very, very popular, because there are some customers who, for various compliance or regulatory reasons, cannot move or migrate the entire data to the cloud. So this provides them a very good setup where they can continue to run their existing database, and when it comes to getting the benefits of HeatWave for query acceleration, they can set up this replication. >> And can I run that on any available server capacity, or is there an appliance to facilitate that? >> No, this is just standard MySQL replication. So if a customer is running MySQL on premises, they can just turn on this replication.
We have obviously enhanced it to support this inbound replication between on-premises MySQL and the Oracle Cloud, and it's something which can be enabled as long as the source and destination are both MySQL. >> Okay, so I want to come back to this idea of the architecture a little bit. I mean, it's hard for me to go toe to toe with you, I'm not an engineer, but I'm going to try anyway. So you've talked about OLTP queries. I always thought HeatWave was optimized for analytics. So I want to push on this notion, because people think of this as the converged database, and what you're talking about here with HeatWave is sort of the Swiss army knife, which is great, 'cause you got a screwdriver and you got a Phillips and a flathead and some scissors, but maybe they're not as good, not necessarily as good as the purpose-built tool. But you're arguing that this is best of breed for OLTP and best of breed for analytics, both in terms of performance and cost. Am I getting that right, or is this really a Swiss army knife where that flathead is not as good as the big, long screwdriver that I have in my bag? >> Yes, you're getting it right, but I did want to make a clarification. HeatWave is definitely the accelerator for all your queries: all analytic queries and also the long-running, complex transaction processing queries. So yes, HeatWave is the uber query accelerator engine. However, when it comes to transaction processing in terms of your insert statements and delete statements, those are still all done and served by the MySQL database. So all the transactions are still sent to the MySQL database and they're persisted there; it's the queries for which HeatWave is the accelerator. So what you said is correct: for all query acceleration, HeatWave is the engine. >> Makes sense. Okay, so if I'm a MySQL customer and I want to use HeatWave, what do I have to do? Do I have to make changes to my existing applications?
You implied earlier that, no, it just sort of plugs right in. But can you clarify that? >> Yes, there are absolutely no changes which any MySQL or MySQL-compatible application needs to make to take advantage of HeatWave. HeatWave is an in-memory accelerator, and it's completely transparent to the application. We have dozens and dozens of applications which have migrated to HeatWave, and they are seeing the same thing; similarly with tools. So if you look at various tools which work for analytics, like Tableau, Looker, Oracle Analytics Cloud, all of them will work just seamlessly. And this is one of the reasons we had to do a lot of heavy lifting in the MySQL database itself. The MySQL database engineering team has been very actively working on this, and it's because we did that heavy lifting and made enhancements to the MySQL optimizer and the MySQL storage layer that we could do the integration of HeatWave in such a seamless manner. So there is absolutely no change which an application needs to make in order to leverage or benefit from HeatWave. >> You said earlier, Nipun, that you're seeing migrations from, I think you said, Aurora and Google BigQuery, and you might've said Redshift as well. What kind of tooling do you have to facilitate migrations? >> Right, there are multiple ways in which customers may want to do this. So the first tooling we have, as I was talking about with the inbound replication mechanism, is that customers can set up HeatWave in the Oracle Cloud and they can set up replication between their instances in their cloud and HeatWave. The second thing is that we have various kinds of tools to facilitate the data migration, in terms of fast ingestion.
So there are a lot of such customers we are seeing who are migrating, and we have a plethora of tools and applications, in addition to setting up this inbound replication, which is the most seamless way of getting customers started with HeatWave. >> So, I think you mentioned before, I have it in my notes, machine intelligence and machine learning. We've seen with Autonomous Database that it's a big, big deal, obviously. How does HeatWave take advantage of machine intelligence and machine learning? >> Yeah, and I'm probably going to be talking more about this in the future, but what we already have is that HeatWave uses machine learning to intelligently automate many operations. We know that when there's a service being offered in the cloud, customers expect automation, and there are a lot of vendors and a lot of services which do a good job of automation. One of the places where we're going to be very unique is that HeatWave uses machine learning to automate many of these operations, and I'll give you one such example, which is provisioning. Right now with HeatWave, when customers want to determine how many nodes are needed for running their workload, they don't need to make a guess. They invoke a provisioning advisor, and this advisor uses machine learning to sample a very small percentage of the data. We're talking about 0.1% sampling, and it's able to predict, with 95% accuracy, the amount of memory this data is going to take. And based on that, it's able to make a prediction of how many servers are needed. So just this simple operation, the first step of provisioning, is something which is done manually on any other service, whereas with HeatWave, we have a machine learning-based advisor. So this is one example of what we're doing, and in the future, we'll be offering many such innovations as a part of the MySQL Database and HeatWave service.
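The provisioning step described above can be approximated with straightforward arithmetic: sample a tiny fraction of the data, extrapolate total in-memory size, then divide by per-node memory. The 0.1% sampling rate comes from the conversation; everything else in this sketch (row sizes, node memory, the estimator itself) is invented for illustration:

```python
import math
import random

def estimate_nodes(rows, node_memory_bytes, sample_rate=0.001):
    """Estimate node count from a ~0.1% sample of the rows."""
    sample = random.sample(rows, max(1, int(len(rows) * sample_rate)))
    avg_row_bytes = sum(len(r) for r in sample) / len(sample)
    total_bytes = avg_row_bytes * len(rows)
    return math.ceil(total_bytes / node_memory_bytes)

# Hypothetical workload: 100,000 rows of exactly 512 bytes, 16 MB nodes.
rows = [b"x" * 512 for _ in range(100_000)]
print(estimate_nodes(rows, 16 * 1024 * 1024))  # -> 4 nodes (~48.8 MB total)
```

The real advisor presumably models compression and workload characteristics as well; the point of the sketch is only that a small sample plus extrapolation replaces manual guessing.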
>> Well, I've got to say I was a skeptic, but I really appreciate you answering my questions. A lot of people, when you made the acquisition and inherited MySQL, thought you were going to kill it, because they thought it would be competitive with Oracle Database. I'm happy to see that you've invested and figured out a way to say, hey, we can serve our community and continue to be the steward of MySQL. So Nipun, thanks very much for coming to theCUBE. Appreciate your time. >> Sure. Thank you so much for the time, Dave. I appreciate it. >> And thank you for watching, everybody. This is Dave Vellante with another CUBE Conversation. We'll see you next time. (bright upbeat music)
Breaking Analysis: Tech Spending Roars Back in 2021
>> Narrator: From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. >> Tech spending is poised to rebound as the economy reopens in 2021. CIOs and IT buyers expect a 4% increase in 2021 spending based on ETR's latest surveys, and we believe that number will actually be higher, even in the 6 to 7% range. The big drivers are continued fine-tuning of, and investment in, digital strategies, for example cloud, security, AI, data and automation. Application modernization initiatives continue to attract attention, and we also expect more support for work from home demand, for instance laptops, et cetera. And we're even seeing pent-up demand for data center infrastructure. The major risk to this scenario remains the pace of the reopening, of course, no surprise there; however, even if there are speed bumps to the vaccine rollout and achieving herd immunity, we believe tech spending will grow at least two points faster than GDP, which is currently forecast at 4.1%. Hello and welcome to this week's (indistinct) on Cube Insights powered by ETR. In this Breaking Analysis, we want to update you on our latest macro view of the market, and then highlight a few key sectors that we've been watching: namely cloud, with a particular drill down on Microsoft and AWS, security, database, and then we'll look at Dell and VMware as a proxy for the data center. Now here's a look at what IT buyers and CIOs think. This chart shows the latest survey data from ETR, and it compares the December results with the year-earlier survey. Consistent with our earlier reporting, we see a kind of swoosh-like recovery, with a slower first half and accelerating in the second half. And we think that CIOs are being prudently conservative, 'cause if GDP grows at 4% plus, we fully expect tech spending to outperform. Now let's look at the factors that really drive some of our thinking on that.
This is data that we've shown before; it asks buyers if they're initiating any of the following strategies in the coming quarter, in the face of the pandemic. And you can see there's no change in work from home, really no change in business travel, but hiring freezes and freezes on new deployments continue to trend down. New deployments continue to be up, layoffs are trending down, and hiring is also up. So these are all good signs. Now having said that, one part of our scenario assumes workers return, and that the current 75% of employees that work from home will moderate by the second half to around 35%. That's double the historical average, and that large percentage will necessitate continued work from home infrastructure spend, we think, and drive HQ spending in the data center as well. Now the caveat, of course, is that lots of companies are downsizing corporate headquarters, so that could weigh on this dual investment premise that we have, but generally, with the easy compares and these tailwinds, we expect solid growth in this coming year. Now, what sectors are showing growth? Well, the same big four that we've been talking about for 10 months: machine intelligence or AI/ML, RPA and broader automation agendas; these lead the pack along with containers and cloud. You can see these four here above that red dotted line at 40%; that's a 40% net score, which is a measure of spending momentum. Now cloud is the most impressive, because what you see in this chart is spending momentum, or net score, on the vertical axis and market share, or pervasiveness in the data set, on the horizontal axis. Cloud stands out, as it has a large market share and it's got spending velocity tied to it. So, I mean, that is really impressive for that sector. Now, what we want to do here is a quick update on the big three cloud revenue for 2020.
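Net score, which comes up throughout these segments, is ETR's spending-momentum metric: roughly, the percentage of survey respondents increasing spend on a platform minus the percentage decreasing it, with flat spenders counted in the base. A minimal sketch of that calculation as we understand it (the category names follow ETR's survey structure; the sample counts are made up):

```python
def net_score(adopting, increasing, flat, decreasing, replacing):
    """ETR-style net score: (% adding or increasing) - (% decreasing or replacing)."""
    n = adopting + increasing + flat + decreasing + replacing
    return 100 * (adopting + increasing - decreasing - replacing) / n

# Made-up sample of 100 respondents: 20 adopting, 35 increasing spend,
# 30 flat, 10 decreasing, 5 replacing the platform.
print(net_score(20, 35, 30, 10, 5))  # -> 40.0, right at the "elevated" red line
```

This is why the 40% line is a meaningful threshold: well over half of the non-flat respondents have to be on the positive side of the ledger to clear it.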
And so we're looking back at 2020, and this really updates the chart that we showed last week at our CUBE on Cloud event; the only difference is Azure, which Microsoft has now reported. This chart shows IaaS estimates for the big three. We had Microsoft Azure in Q4 at 6.8 billion; it came in at 6.9 billion based on our cloud model. Now the points we previously made on this chart still stand out. AWS is the biggest, and it's growing more slowly, but it throws off more absolute dollars. Azure grew 48% last quarter; we had it slightly lower, and so we've adjusted that, and that's incredible. Azure continues to close that gap on AWS, and we'll see how AWS and Google do when they report next week. We definitely think, based on Microsoft's result, that AWS has upside to these numbers, especially given the Q4 year-end push and the continued transition to cloud, and even Google, we think, can benefit. Now what we want to do is take a closer look at Microsoft and AWS and drill down into those two cloud leaders. So take a look at this graphic. It shows ETR's survey data for net score across Microsoft's portfolio, and we've selected a couple of key areas. Virtually every sector is in the green and has forward momentum relative to the October survey. Power Automate, which is RPA, and Teams are off the chart; Azure itself, which we've reported on, is the linchpin of Microsoft's innovation strategy; serverless, AI, analytics, containers, they all have over 60% net scores. Skype is the only dog, and Microsoft is doing a fabulous job of transitioning its customers from Skype to Teams. I think there are still people using Skype. Yes, I know, it's crazy. Now let's take a look at the AWS portfolio drill down. There's a similar story here for Amazon, and virtually all sectors are well into the 50% net scores or above. Yeah, it's lower than Microsoft, but AWS is still very, very large, so there's across-the-board strength for the company, and that's impressive for a $45 billion cloud company.
Only Chime is lagging behind for AWS, and maybe AWS needs a Teams-like version to migrate folks off of Chime. Although you do see an uptick there relative to the last survey, it's still not burning the house down. Now let's take a look at security. It's a sector that we've highlighted for several quarters, and it's really undergoing massive change. This of course was accelerated by the work from home trend, and this chart ranks the CIO and CSO priorities for security. Here you see identity access management stands out, so this bodes well for the likes of Okta and SailPoint. Of course, endpoint security also ranks highly, and that's good news for companies like CrowdStrike, Forescout, and Carbon Black, which was acquired by VMware. And you can see network security is right there as well; I mean, it's all kind of network security, but Cisco, Palo Alto, Fortinet are some of the names that we follow closely there. And in cloud security, Microsoft, Amazon and Zscaler also stand out. Now, what we want to do is drill in a little bit and take a look at the vendor map for security. This chart shows one of our favorite views: net score, or spending momentum, on the vertical axis and market share on the horizontal.
Note in the upper right, in that little table, that Okta remains the highest net score of all the players we're showing here; SailPoint and CrowdStrike are definitely looming large. Microsoft continues to be impressive because of both its presence, you can see that dot in the upper right there, and its momentum. And for context, we've included some of the legacy names like RSA and McAfee and Symantec; you can see them in the red, as is IBM, and the rest of the pack is solidly in the green. We've said this before: security remains a priority, it's a very strong market, CIOs and CSOs have to spend on it, and they're accelerating that spending. It's a fragmented space with lots of legitimate players, and it's undergoing major change. And with the SolarWinds hack, it's on everyone's radar even more than we've seen with earlier high-profile breaches. We have some other data on that front that we'll share in the future, but in the interest of time, we'll press on here. Now, one of the other sectors that's undergoing significant change is database. So if you take a look at the latest survey data, we're showing that same XY view, and the first thing we call your attention to is Snowflake. We've been reporting on this company for years now, and sharing ETR data for well over a year. The company continues to impress us with spending momentum; in this last survey it increased from 75% last quarter to 83% in the latest survey. This is unbelievable, because having now done this for many, many quarters, these numbers are historically not sustainable, and very rarely do you see that kind of increase from the mid-70s up into the 80s. Now AWS is the other big call out here. This is a company that has become a database powerhouse, and they've done that from a standing start to become a leader in the market.
Google's momentum is also impressive, especially with its technical chops, it gets very, very high marks for things like BigQuery, and so you can see it's got momentum. It does not have the presence in the market, to the right, that for instance AWS and Microsoft have, and that brings me to Microsoft, which is also notable, because it's so large, and look at the momentum, it's got very, very strong spending momentum as well. So look, this database market is seeing dramatically different strategies. Take Amazon for example, it's all about the right tool for the right job, they've got a lot of different data stores with specialized databases for different use cases, Aurora for transaction processing, Redshift for analytics, I want a key value store, hey, some DynamoDB, graph database? You got little Neptune, document database? They've got that, they got a time series database, so a very, very granular portfolio. You got Oracle on the other end of the spectrum. It, along with several others, is converging capabilities, and that's a big trend that we're seeing across the board, into what we sometimes call a mono-database, or one database fits all. Now Microsoft's world kind of largely revolves around SQL and Azure SQL, but it does offer other options. But the big difference between Microsoft and AWS is AWS' approach is really to maximize the granularity and the technical flexibility with fine-grained access to primitives and APIs, that's their philosophy, whereas Microsoft, with Synapse for example, is willing to build that abstraction layer as a means of simplifying the experience. AWS has been reluctant to do this, their approach favors optionality, and their philosophy is, as the market changes, that will give them the ability to move faster. 
Microsoft's philosophy favors really abstracting that complexity. Now that adds overhead, but it does simplify, so these are two very interesting counterpoised strategies that we're watching, and we think there's room for both. There's not necessarily one better than the other, it's just different philosophies and different approaches. Now Snowflake, for its part, is building a data cloud on top of AWS, Google and Azure, so it's another example of adding value by abstracting away the underlying infrastructure complexity, and it obviously seems to be working well, albeit at a much smaller scale at this point. Now let's talk a little bit about some of the on-prem players, the legacy players, and we'll use Dell and VMware as proxies for these markets. So what we're showing here in this chart is Dell's net scores across select parts of its portfolio, and it's a pretty nice picture for Dell, I mean everything but desktop is showing forward momentum relative to previous surveys. Laptops continue to benefit from the remote worker trend, in fact, PCs actually grew this year. If you saw our spot on Intel last week, PC volume had peaked in 2011, and it actually bumped up this year, but it's not really, we don't think, sustainable, but nonetheless it's been a godsend during the pandemic as data center infrastructure has been softer. Dell's cloud is up, and that really comprises a bunch of infrastructure along with some services, so that's showing some strength, and look at both storage and server momentum, they seem to be picking up, and this is really important because these two sectors have been lagging for Dell. But this data supports our pent-up demand premise for on-prem infrastructure, and we'll see if the ETR survey, which is forward-looking, translates into revenue growth for Dell and others like HPE. Now, what about Dell's favorite new toy over at VMware? Let's take a look at that picture for VMware, it's pretty solid. 
VMware Cloud on AWS, we've been reporting on that for several quarters now, it's showing up in the ETR survey and it is, well, somewhat moderating, it's coming down from very high spending momentum, so it's still, we think, very positive. NSX momentum is coming back in the survey, I'm not sure what happened there, but it's been strong. VMware's on-prem cloud with VCF, VMware Cloud Foundation, that's strong. Tanzu was a bit surprising, because containers are very hot overall, so that's something we're watching, it seems to be moderating, maybe the market says, okay, you did great VMware, you're embracing containers, but Tanzu is maybe not the, we'll see, we'll see how that all plays out. I think it's the right strategy for VMware to embrace that container strategy, but remember, everybody said containers are going to kill VMware. Well, VMware rightly, they've embraced cloud with VMware Cloud on AWS, they're embracing containers. So we're seeing much more forward-thinking strategies and management philosophies. Carbon Black, that benefits from the security tailwind, and then the core infrastructure looks good, vSAN, vSphere and VDI. So the big thing that we're watching for VMware is, of course, who's going to be the next CEO. Is it going to be Zane Rowe, who's now the acting CEO? And of course he's been the CFO for years. Who's going to get that job? Will it be Sanjay Poonen? The choice, I think, is going to say much about the direction of VMware going forward, in our view. Succeeding Pat Gelsinger is going to be like following Peyton Manning at QB, but this summer we expect Dell to spin out VMware or do some other kind of restructuring, and restructure both VMware and Dell's balance sheet. It wants to get both companies back to investment grade, and it wants to set a new era in motion. 
Now that financial transaction maybe does call for a CFO who's in favor of such a move and can orchestrate it, but certainly Sanjay Poonen has been a loyal soldier and he's performed very well in his executive roles, not just at VMware, but in previous roles, SAP and others. So in my opinion there's no doubt he's ready and he's earned it, and of course, no offense to Zane Rowe, by the way, he's an outstanding executive too. But the big question for Dell and VMware is what will the future of these two companies look like? They've dominated, VMware especially has dominated the data center for a decade plus, they're responding to cloud and some of these new trends, they've made tons of acquisitions, and Gelsinger has orchestrated TAM expansion. They've still got to get through paying down the debt so they can really double down on an innovation agenda from an R&D perspective, that's been somewhat hamstrung, and to their credit, they've done a great job of navigating through Dell's tendency to take VMware cash and restructure its business to go public, and now to restructure both companies, to do the Pivotal acquisition, et cetera, et cetera, et cetera, and clean up its corporate structure. So it's been a drag on VMware's ability to use its free cash flow for R&D, and again it's been very impressive what it's been able to accomplish there. On the Dell side of the house, its R&D largely has gone to kind of new products, follow-on products, an evolutionary kind of approach, and it would be nice to see Dell be able to really double down on the innovation agenda, especially with the looming edge opportunity. Look, R&D is the lifeblood of a tech company, and there's so many opportunities across the clouds and at the edge, we've talked about this a lot. I haven't talked much, or at all, about IBM, we wrote a piece last year on how IBM's innovation agenda really hinges on its R&D. 
It seems to be continuing to favor dividends and stock buybacks, and that makes it difficult for the company to really invest in its future and grow its promised growth. Ginni Rometty promised growth, that never really happened, Arvind Krishna is now promising growth, hopefully it doesn't fall into the same pattern of missed promises, and my concern there is that with R&D, you can't just flick a switch and pour money in and get a fast return, it takes years to get that. (Dave chuckles) We talked about Intel last week, so similar things going on, but I digress. Look, these guys are going to require, in my view, VMware, Dell, I'll put HPE in there, they're going to require organic investment to get back to growth, so we're watching these factors very, very closely. Okay, got to wrap up here, so we're seeing IT spending growth coming in as high as potentially 7% this year, and it's going to be powered by the same old culprits, cloud, AI, automation, we'll be doing an RPA update soon here, application modernization, and the new work paradigm that we think will force increased investments in digital initiatives. The doubling of the expectation of work from home is significant, and so we see this hybrid world, not just hybrid cloud but hybrid work from home and on-prem, this new digital world, and it's going to require investment in both cloud and on-prem, and we think that's going to lift both boats, but cloud, clearly, the big winner. And we're not by any means suggesting that their growth rates are going to somehow converge, they're not, cloud will continue to outpace on-prem by several hundred basis points throughout the decade, we think. And AWS and Microsoft are in the top division of that cloud bracket. 
Security markets are really shifting, and we continue to like the momentum of companies in identity and endpoint and cloud security, especially the pure plays like CrowdStrike and Okta and SailPoint, and Zscaler and others that we've mentioned over the past several quarters, but CSOs tell us they want to work with the big guys too, because they trust them, especially Palo Alto Networks, Cisco obviously in the mix, their security business continues to outperform the balance of Cisco's portfolio, and these companies, they have the resources to withstand market shifts, and we'll do a deeper drill-down on security soon and update you on other trends and other companies in that space. Now the database world, it continues to heat up. I used to say on theCUBE all the time that a decade and a half ago database was boring, and now database is anything but, and thank you to cloud databases and especially Snowflake, its data cloud vision, its simplicity. We're seeing lots of different ways, though, to skin the cat, and while there's disruption, we believe Oracle's position is solid because it owns mission-critical, that's its stronghold, and we really haven't seen those workloads migrate into the cloud, and frankly, I think it's going to be hard to wrest those away from Oracle. Now, AWS and Microsoft, they continue to be the easy choice for a lot of their customers. Microsoft migrating its software estate, AWS continues to innovate, with a lot of database choices, the right tool for the right job, so there's lots of innovation going on in databases beyond these names as well, and we'll continue to update you on these markets shortly. Now, lastly, it's quite notable how well some of the legacy names have navigated through COVID. Sure, they're not rocketing like many of the work-from-home stocks, but they've been able to thus far survive, and in the example of Dell and VMware, the portfolio diversity has been a blessing. 
The bottom line is the first half of 2021 seems to be shaping up as we expected, momentum for the strongest digital plays, low interest rates helping large established companies hang in there with strong balance sheets and large customer bases. And what will be really interesting to see is what happens coming out of the pandemic. Will the rich get richer? Yeah, well, we think so. But we see the legacy players adjusting their business models, embracing change in the market and steadily moving forward. And we see at least a dozen new players hitting the radar that could become leaders in the coming decade, and as always, we'll be highlighting many of those in our future episodes. Okay, that's it for now. Listen, these episodes, remember, they're all available as podcasts, all you got to do is search for Breaking Analysis Podcast and you'll get them, so please listen, and if you like them, share them. Really, I always appreciate that. I publish weekly on wikibon.com and siliconangle.com, and would really appreciate your comments, as always, on my LinkedIn posts, or you can always DM me @dvellante or email me at david.vellante@siliconangle.com, and tell me what you think is happening out there. Don't forget to check out ETR+ for all the survey action. This is David Vellante, thanks for watching theCUBE Insights powered by ETR. Stay safe, we'll see you next time. (downbeat music)
Rahul Pathak, AWS | AWS re:Invent 2020
>> From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel and AWS. >> Yeah, welcome back to theCUBE's ongoing coverage of AWS re:Invent virtual. theCUBE's gone virtual, along with most events these days, which are all virtual events, and continues to bring our digital coverage of re:Invent. With me is Rahul Pathak, who is the vice president of analytics at AWS. Rahul, it's great to see you again. Welcome. And thanks for joining the program. >> Hey Dave, great to see you too, and always a pleasure. Thanks for having me on. >> You're very welcome. Before we get into your leadership discussion, I want to talk about some of the things that AWS has announced, uh, in the early parts of re:Invent. I want to start with Glue Elastic Views, a very notable announcement allowing people to, you know, essentially share data across different data stores. Maybe tell us a little bit more about Glue Elastic Views, kind of where the name came from and what the implication is. >> Uh, sure. So, yeah, we're really excited about Glue Elastic Views and, you know, as you mentioned, the idea is to make it easy for customers to combine and use data from a variety of different sources and pull them together into one or many targets. And the reason for it is that, you know, we're really seeing customers adopt what we're calling a lake house architecture, which is, uh, at its core a data lake for making sense of data and integrating it across different silos, uh, typically integrated with a data warehouse, and not just that, but also a range of other purpose-built stores like Aurora for relational workloads or DynamoDB for non-relational ones. And while customers typically get a lot of benefit from using purpose-built stores, because you get the best possible functionality, performance, and scale for a given use case, you often want to combine data across them to get a holistic view of what's happening in your business or with your customers. 
And before Glue Elastic Views, customers would have to either use ETL or data integration software, or they'd have to write custom code that could be complex to manage, and it could be error-prone and tough to change. And so, with Elastic Views, you can now use SQL to define a view across multiple data sources, pick one or many targets, and then the system will actually monitor the sources for changes and propagate them into the targets in near real time. And it manages the end-to-end pipeline and can notify operators if anything changes. And so, you know, the components of the name are pretty straightforward. Glue is our serverless ETL and data integration service, and Glue Elastic Views is about data integration. They're views because you can define these virtual tables using SQL, and then elastic because it's serverless and will scale up and down to deal with the propagation of changes. So we're really excited about it, and customers are as well. >> Okay, great. So my understanding is I'm gonna be able to take what's called, in the parlance, materialized views, which in my layperson's terms assumes I'm gonna run a query on the database and take that subset, and then I'm gonna be able to copy that and move it to another data store. And then you're gonna automatically keep track of the changes and keep everything up to date. Is that right? >> Yes, that's exactly right. So you can imagine you had a product catalog, for example, that's being updated in DynamoDB, and you can create a view that will move that to Amazon Elasticsearch Service. You could search through a current version of your catalog, and we will monitor your DynamoDB tables for any changes and make sure those are all propagated in near real time. And all of that is taken care of for our customers as soon as they've defined the view, and the data will just be kept in sync as long as the view's in effect. 
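The propagation model Pathak describes, define a view once, then ship only the changes to the target, can be sketched in a few lines of Python. This is a toy illustration of the concept only, not the Glue Elastic Views API; the class, the dict-based "stores," and the predicate are all invented for the example.

```python
# Toy sketch of change propagation for a materialized view kept in sync
# across stores. A view is defined over a source once; afterwards only
# deltas flow to the target, rather than periodic full ETL reloads.

class MaterializedView:
    def __init__(self, source, predicate, target):
        self.predicate = predicate    # which rows the view selects
        self.target = target          # dict standing in for the target store
        # Initial full load: materialize the currently matching rows.
        for key, row in source.items():
            if predicate(row):
                target[key] = row

    def on_change(self, key, row):
        """Called once per source change; propagates only that delta."""
        if row is None:
            self.target.pop(key, None)   # source row was deleted
        elif self.predicate(row):
            self.target[key] = row       # insert/update flows through
        else:
            self.target.pop(key, None)   # row no longer matches the view

# Source: a product catalog; target: a searchable copy of in-stock items.
catalog = {"p1": {"name": "lamp", "in_stock": True},
           "p2": {"name": "desk", "in_stock": False}}
search_index = {}
view = MaterializedView(catalog, lambda r: r["in_stock"], search_index)

view.on_change("p2", {"name": "desk", "in_stock": True})   # update propagates
view.on_change("p1", None)                                 # delete propagates
print(sorted(search_index))   # ['p2']
```

The point of the sketch is the shape of the work: after the one-time load, each source change costs one small update to the target instead of a full pipeline run.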
>> I could see this being really valuable for a person who's building, I like to think in terms of data services or data products, things that are gonna help me, you know, monetize my business. Maybe, you know, maybe it's as simple as a dashboard, but maybe it's actually a product. You know, it might be some content that I want to develop, and I've got transaction systems, I've got unstructured data, maybe in a NoSQL database, and I wanna actually combine those, build new products, and I want to do that quickly. So take me through what I would have to do. You sort of alluded to it with, you know, a lot of ETL, but take me through in a little bit more detail how I would do that, you know, before this innovation. And maybe you could give us a sense as to what the possibilities are with Glue Elastic Views. 
And so you can define that view and sequel. The view will look across multiple sources, and then you pick your destination and then glue. Elastic views essentially monitors both the source for changes as well as the source and the destination for any any issues like, for example, did the schema changed. The shape of the data change is something briefly unavailable, and it can monitor. All of that can handle any errors, but it can recover from automatically. Or if it can't say someone dropped an important table in the source. That was part of your view. You can actually get alerted and notified to take some action to prevent bad data from getting through your system or to prevent your pipeline from breaking without your knowledge and then the final pieces, the elasticity of it. It will automatically deal with adding more resource is if, for example, say you had a spiky day, Um, in the markets, maybe you're building a financial services application and you needed to add more resource is to process those changes into your targets more quickly. The system would handle that for you. And then, if you're monetizing data services on the back end, you've got a range of options for folks subscribing to those targets. So we've got capabilities like our, uh, Amazon data exchange, where people can exchange and monetize data set. So it allows this and to end flow in a much more straightforward way. It was possible before >>awesome. So a lot of automation, especially if something goes wrong. So something goes wrong. You can automatically recover. And if for whatever reason, you can't what happens? You quite ask the system and and let the operator No. Hey, there's an issue. You gotta go fix it. How does that work? >>Yes, exactly. Right. So if we can recover, say, for example, you can you know that for a short period of time, you can't read the target database. The system will keep trying until it can get through. But say someone dropped a column from your source. 
That was a key part of your ultimate view and destination, you just can't proceed at that point. So the pipeline stops, and then we notify, using an API or an SNS alert, so that programmatic action can be taken. So this effectively provides a really great way to enforce the integrity of data that's going between the sources and the targets. >> All right, make it kindergarten-proof, I love it. So let's talk about another innovation. You guys announced QuickSight Q, uh, kind of speaking to the machine in my natural language, but, but give us some more detail there. What is QuickSight Q, and how do I interact with it? What kind of questions can I ask it? >> So QuickSight Q is essentially a deep learning-based semantic model of your data that allows you to ask natural language questions in your dashboard. So you'll get a search bar in your QuickSight dashboard, and QuickSight is our serverless BI service that makes it really easy to provide rich dashboards to whoever needs them in the organization. And what Q does is it's automatically developing relationships between the entities in your data, and it's able to actually reason about the questions you ask. So unlike earlier natural language systems, where you have to pre-define your models, you have to pre-define all the calculations that you might ask the system to do on your behalf, Q can actually figure it out. So you can say, "Show me the top five categories for sales in California," and it'll look in your data and figure out what that is, and it will present you with how it parsed that question, and it will, inline, in seconds, pop up a dashboard of what you asked and actually automatically try and pick a chart or visualization for that data that makes sense. And you could then start to refine it further and say, "How does this compare to what happened in New York?" and it'll be able to figure out that you're trying to overlay those two data sets and it'll add them. 
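The query a question like "show me the top five categories for sales in California" ultimately resolves to is a filter plus a grouped aggregation. A toy Python sketch of that computation, the part Q figures out from your data rather than requiring you to pre-define it, follows; the data set and field names here are made up, and Q's actual parsing and models are of course far more involved.

```python
# Toy illustration of the computation behind a natural language question:
# "top five categories for sales in California" = filter rows to the
# state, sum sales per category, sort descending, take the first five.

from collections import defaultdict

sales = [
    {"state": "CA", "category": "Electronics", "amount": 1200},
    {"state": "CA", "category": "Furniture",   "amount": 800},
    {"state": "CA", "category": "Electronics", "amount": 300},
    {"state": "NY", "category": "Furniture",   "amount": 950},
    {"state": "CA", "category": "Office",      "amount": 450},
]

def top_categories(rows, state, n=5):
    totals = defaultdict(int)
    for row in rows:
        if row["state"] == state:              # "in California"
            totals[row["category"]] += row["amount"]
    # "top five categories": sort by summed sales, descending
    return sorted(totals.items(), key=lambda kv: -kv[1])[:n]

print(top_categories(sales, "CA"))
# [('Electronics', 1500), ('Furniture', 800), ('Office', 450)]
```

Asking the follow-up "how does this compare to New York" amounts to running the same aggregation with a different filter, `top_categories(sales, "NY")`, and overlaying the two results.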
And unlike other systems, it doesn't need to have all of those things pre-defined. It's able to reason about it because it's building a model of what your data means on the fly, and we've pre-trained it across a variety of different domains, so you can ask a question about sales or HR or any of that. And another great part of Q is that when it presents to you what it's parsed, you're actually able to correct it if it needs it and provide feedback to the system. So, for example, if it got something slightly off, you could actually select from a drop-down, and then it will remember your selection for the next time, and it will get better as you use it. >> I saw a demo in Swami's keynote on December 8, where basically you were able to ask QuickSight Q the same question, but in different ways, you know, like compare California and New York, and then the data comes up, or give me the top, you know, five, and then for California and New York, the same exact data. So is that how I can kind of check and see if the answer that I'm getting back is correct, to ask different questions? I don't have to know the schema, is what you're saying, I don't have to have knowledge of that as the user. I can triangulate from different angles and then look and see if that's correct. Is that how you verify, or are there other ways? 
>>You know what I like about that answer that you just gave, and I wonder if I could get your opinion on this because you're you've been in this business for a while? You work with a lot of customers is if you think about our operational systems, you know things like sales or E r. P systems. We've contextualized them. In other words, the business lines have inject context into the system. I mean, they kind of own it, if you will. They own the data when I put in quotes, but they do. They feel like they're responsible for it. There's not this constant argument because it's their data. It seems to me that if you look back in the last 10 years, ah, lot of the the data architecture has been sort of generis ized. In other words, the experts. Whether it's the data engineer, the quality engineer, they don't really have the business context. But the example that you just gave it the drill down to verify that the answer is correct. It seems to me, just in listening again to Swamis Keynote the other day is that you're really trying to put data in the hands of business users who have the context on the domain knowledge. And that seems to me to be a change in mindset that we're gonna see evolve over the next decade. I wonder if you could give me your thoughts on that change in the data architecture data mindset. >>David, I think you're absolutely right. I mean, we see this across all the customers that we speak with there's there's an increasing desire to get data broadly distributed into the hands of the organization in a well governed and controlled way. But customers want to give data to the folks that know what it means and know how they can take action on it to do something for the business, whether that's finding a new opportunity or looking for efficiencies. 
And I think, you know, we're seeing that increasingly, especially given the unpredictability that we've all gone through in 2020. Customers are realizing that they need to get a lot more agile, and they need to get a lot more data about their business, their customers, because you've got to find ways to adapt quickly. And, you know, that's not gonna change anytime in the future. >> And I've said many times on theCUBE, you know, the technology industry used to be all about the products, and in the last decade it was really platforms, whether it's SaaS platforms or AWS cloud platforms, and it seems like innovation in the coming years, in many respects, is gonna come from the ecosystem and the ability to share data. We've had some examples today. But you hit on, you know, one of the key challenges, of course, which is security and governance. And can you automate that, if you will, and protect, you know, the users from doing things, whether it's data access or corporate edicts for governance and compliance? How are you handling that challenge? >> That's a great question, and it's something that I really emphasized in my leadership session. But, you know, the notion of what customers are doing and what we're seeing is that there's, uh, the lake house architecture concept. So you've got a data lake, purpose-built stores, and customers are looking for easy data movement across those, and so we have things like Glue Elastic Views or some of the other Glue features we announced. But they're also looking for unified governance, and that's why we built AWS Lake Formation. And the idea here is that it can quickly discover and catalog customer data assets and then allow customers to define granular access policies centrally around that data. And once you have defined that, it then sets customers free to give broader access to the data, because they've put the guardrails in place, they've put the protections in place. 
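The guardrail idea described here, define access policy once, centrally, and let every read go through it, can be sketched as a toy pattern in Python. This is an illustration of the pattern only, not AWS Lake Formation's API; the policy structure, table, and user objects are all invented for the example.

```python
# Toy sketch of centralized governance: one policy object defines which
# columns are private and which rows a user may see; every consumer
# reads through the same policy, so broad access stays safe.

POLICY = {
    "private_columns": {"ssn"},                 # columns tagged as private
    "row_filter": lambda row, user: (           # row-based access control
        user["region"] == "all" or row["region"] == user["region"]
    ),
}

def governed_read(table, user, policy=POLICY):
    """Apply the central policy on every read: drop rows the user may
    not see, then strip columns tagged as private."""
    out = []
    for row in table:
        if not policy["row_filter"](row, user):
            continue
        out.append({k: v for k, v in row.items()
                    if k not in policy["private_columns"]})
    return out

employees = [
    {"name": "Ana", "region": "east", "ssn": "111-11-1111"},
    {"name": "Ben", "region": "west", "ssn": "222-22-2222"},
]

analyst = {"region": "east"}   # may only see east-region rows
print(governed_read(employees, analyst))
# [{'name': 'Ana', 'region': 'east'}]
```

Because the policy lives in one place rather than in each consuming application, granting a new analyst access is a policy change, not a code change, which is the "sets customers free" point being made in the conversation.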
So, you know, you can tag columns as being private so nobody can see them, and we announced a couple of new capabilities where you can provide row-based control, so only a certain set of users can see certain rows in the data, whereas a different set of users might only be able to see, you know, a different set. And so, by creating this fine-grained but unified governance model, this actually sets customers free to give broader access to the data, because they know that their policies and compliance requirements are being met, and it gets them out of the way of the analyst, or someone who can actually use the data to drive some value for the business. >> Right, they could really focus on driving value. And I always talk about monetization. However, monetization could be, you know, a generic term, for it could be saving lives, the mission of the business or the organization. I meant to ask you about Q, can customers embed Q, it looks like, into their own apps? >> Yes, absolutely. So one of QuickSight's key strengths is its embeddability, and then it's also serverless, so you can embed it at a really massive scale. And so we see customers, for example, like Blackboard, that's embedding QuickSight dashboards into information it's providing to thousands of educators, to provide data on the effectiveness of online learning, for example, and you could embed Q into that capability. So it's a really cool way to give a broad set of people the ability to ask questions of data without requiring them to be fluent in things like SQL. >> If I can ask you a question, we've talked a little bit about data movement. I think at last year's re:Invent you guys announced RA3, I think it made general availability this year, and I remember Andy speaking about it, talking about, you know, the importance of having big enough pipes when you're moving, you know, data around, and of course you're doing tiering. 
You also announced AQUA, Advanced Query Accelerator, which kind of reduces, bringing the compute to the data, I guess, is how I would think about that, reducing that movement. But then we're talking about, you know, Glue Elastic Views, you're copying and moving data. How are you ensuring, you know, maintaining that maximum performance for your customers? I mean, I know it's an architectural question, but as an analytics professional, you have to be comfortable that that infrastructure is there. So what's AWS's general philosophy in that regard? >> So there's a few ways that we think about this, and you're absolutely right. I think data volumes are going up, and we're seeing customers going from terabytes to petabytes and even people heading into the exabyte range. Uh, there's really a need to deliver performance at scale. And, you know, the reality of customer architectures is that customers will use purpose-built systems for different best-in-class use cases. And, you know, if you're trying to do a one-size-fits-all thing, you're inevitably going to end up compromising somewhere. And so the reality is that customers will have more data, they're gonna want to get it to more people, and they're gonna want their analytics to be fast and cost effective. And so we look at strategies to enable all of this. So, for example, Glue Elastic Views, it's about moving data, but it's about moving data efficiently. So what we do is we allow customers to define a view that represents the subset of their data they care about, and then we only look to move changes as efficiently as possible, so you're reducing the amount of data that needs to get moved and making sure it's focused on the essential. Similarly, with AQUA, what we've done, as you mentioned, is we've taken the compute down to the storage layer, and we're using our Nitro chips to help with things like compression and encryption, and then we have FPGAs inline to allow filtering and aggregation operations. So again, you're trying to quickly and effectively get through as much data as you can, so that you're only sending back what's relevant to the query that's being processed, and that again leads to more performance. If you can avoid reading a byte, you're going to speed up your queries, and that's what AQUA is trying to do. It's trying to push those operations down so that you're really reducing data as close to its origin as possible and focusing on what's essential, and that's what we're applying across our analytics portfolio. I would say one other piece we're focused on with performance is really about innovating across the stack. So you mentioned network performance. You know, we've got 100 gigabits per second of throughput now with the newest instances, and then with things like Graviton2, you're able to drive better price performance for customers for general purpose workloads. So it's really innovating at all layers. >> It's amazing to watch it. I mean, you guys, it's an incredible engineering challenge as you've built this hyper-distributed system that's now, of course, going to the edge. I wanna come back to something you mentioned, and I do wanna hit on your leadership session as well. But you mentioned the one-size-fits-all, uh, system, and I've asked Andy Jassy about this, I've had a discussion with many folks about it, and of course you mentioned the challenges, you're gonna have to make tradeoffs if it's one size fits all. The flip side of that is, okay, it's simple, it's, you know, kind of the Swiss Army knife of databases, for example. But your philosophy at Amazon is you wanna have fine-grained access to the primitives, in case the market changes, you wanna be able to move quickly. So that puts more pressure on you to then simplify. You're not gonna build this big hairball abstraction layer, that's not what you're gonna do. 
Uh, you know, I think about, you know, layers and layers of paint. I live in a very old house. So that's not your approach. So it puts greater pressure on you to constantly listen to your customers, and they're always saying, hey, I want to simplify, simplify, simplify. We certainly again heard that in Swami's presentation the other day, all about, you know, minimizing complexity. So that really is your trade-off. It puts pressure on Amazon engineering to continue to raise the bar on simplification. Is that a fair statement? >> Yeah, I think so. I mean, you know, I think any time we can do work so our customers don't have to, I think that's a win for both of us. Um, you know, because I think we're delivering more value, and it makes it easier for our customers to get value from their data. We absolutely believe in using the right tool for the right job. And you know, you talked about an old house. You're not gonna build or renovate a house with a Swiss Army knife. It's just the wrong tool. It might work for small projects, but you're going to need something more specialized to handle the things that matter. And that's really what we see with that, you know, with that set of capabilities. So we want to provide customers with the best of both worlds. We want to give them purpose-built tools so they don't have to compromise on performance, or scale, or functionality. And then we want to make it easy to use these together, whether it's about data movement or things like federated queries, where you can reach into each of them through a single query and through a unified governance model. So it's all about stitching those together. >> Yeah, so far you've been on the right side of history. I think it serves you and your customers well. I wanna come back to your leadership session. What else can you tell us about, you know, what you covered there?
>> So we've actually had a bunch of innovations on the analytics stack. So some of the highlights are in EMR, which is our managed Spark and Hadoop service. We've been able to achieve 1.7x better performance than open source with our Spark runtime, so we've invested heavily in performance. And now EMR is also available for customers who are running in a containerized environment. So we announced EMR on EKS, and then an integrated development environment and studio for EMR, called EMR Studio. So making it easier both for people at the infrastructure layer to run EMR on their EKS environments and make it available within their organizations, but also simplifying life for data analysts and folks working with data, so they can operate in that studio and not have to mess with the details of the clusters underneath. And then a bunch of innovation in Redshift. We talked about AQUA already, but then we also announced data sharing for Redshift. So this makes it easy for Redshift clusters to share data with other clusters without putting any load on the central producer cluster. And this also speaks to the theme of simplifying getting data from point A to point B, so you could have central producer environments publishing data, which represents the source of truth, into other departments within the organization, and they can query the data, use it, it's always up to date, but it doesn't put any load on the producers. That enables these really powerful data sharing and downstream data monetization capabilities like you've mentioned. In addition, like Swami mentioned in his keynote, Redshift ML, so you can now essentially train and run models that are built in SageMaker and optimized, from within your Redshift clusters. And then we've also automated all of the performance tuning that's possible in Redshift.
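The Redshift ML flow Pathak describes, training a model from a SQL query (with SageMaker handling the training behind the scenes) and then invoking it as an ordinary SQL function, looks roughly like the sketch below. This is illustrative only: the table, column, role, and bucket names are invented, and the exact CREATE MODEL options should be checked against the Redshift ML documentation.

```python
# Sketch of the Redshift ML workflow: compose a CREATE MODEL statement that
# trains on a SQL query, then score new rows by calling the resulting
# function. All identifiers below are hypothetical.

def create_model_sql(model, query, target, function, iam_role, s3_bucket):
    """Compose a Redshift ML CREATE MODEL statement from its parts."""
    return (
        f"CREATE MODEL {model}\n"
        f"FROM ({query})\n"
        f"TARGET {target}\n"
        f"FUNCTION {function}\n"
        f"IAM_ROLE '{iam_role}'\n"
        f"SETTINGS (S3_BUCKET '{s3_bucket}');"
    )

train_sql = create_model_sql(
    model="customer_churn",
    query="SELECT age, plan_type, monthly_spend, churned FROM customer_activity",
    target="churned",
    function="predict_churn",
    iam_role="arn:aws:iam::123456789012:role/RedshiftMLRole",
    s3_bucket="my-redshift-ml-bucket",
)

# Once trained, the model is just a SQL function callable from the cluster:
score_sql = (
    "SELECT customer_id, predict_churn(age, plan_type, monthly_spend) "
    "FROM new_customers;"
)

print(train_sql)
```

The point of the design is in that second statement: inference happens in SQL, inside the warehouse, with no separate serving infrastructure for the analyst to manage.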
So we really invested heavily in price performance, and now we've automated all of the things that make Redshift the best-in-class data warehouse service from a price performance perspective, up to 3x better than others. Customers can just set Redshift to auto, and it'll handle workload management, data compression and data distribution. So making it easier, it's all about performance. And then the other big one was in Lake Formation. We announced three new capabilities. One is transactions, so enabling consistent ACID transactions on data lakes, so you can do things like inserts and updates and deletes. We announced row-based filtering for fine-grained access control and that unified governance model, and then automated storage optimization for data lakes. So for customers that are dealing with unoptimized small files that are coming off streaming systems, for example, Lake Formation can auto-compact those under the covers, and you can get a 7-8x performance boost. It's been a busy year for analytics. >> I'll say that. Hey, great job. Thanks so much for coming back on theCUBE and, you know, sharing the innovations, and, uh, great to see you again. And good luck in the coming year. >> Well, thank you very much. Great to be here. Great to see you. And hope we get to see each other in person again. >> I hope so. All right. And thank you for watching everybody, this is Dave Vellante for theCUBE, we'll be right back right after this short break.
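The small-file auto-compaction mentioned above is easy to illustrate: streaming systems land many tiny files, and per-file open/read overhead then dominates scan time, so compacting them into fewer, larger files speeds queries up. A toy sketch (sizes in MB; the 128 MB target is an arbitrary example, not a Lake Formation setting):

```python
# Toy illustration of small-file compaction: greedily bin-pack many small
# files into compacted files near a target size, shrinking the file count.

def compact(file_sizes_mb, target_mb=128):
    """Pack file sizes into compacted files of at most target_mb each."""
    compacted, current = [], 0
    for size in file_sizes_mb:
        if current + size > target_mb and current > 0:
            compacted.append(current)  # close out the current compacted file
            current = 0
        current += size
    if current:
        compacted.append(current)
    return compacted

small_files = [1] * 1000           # 1,000 one-megabyte files off a stream
big_files = compact(small_files)   # far fewer, near-target-size files

print(len(small_files), "->", len(big_files))  # 1000 -> 8
```

Eight objects to open instead of a thousand is where the performance boost comes from, independent of total bytes scanned.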
Nimrod Vax, BigID | AWS re:Invent 2020 Partner Network Day
>> Announcer: From around the globe, it's theCUBE. With digital coverage of AWS re:Invent 2020. Special coverage sponsored by AWS global partner network. >> Okay, welcome back everyone to theCUBE virtual coverage of re:Invent 2020 virtual. Normally we're in person, this year because of the pandemic we're doing remote interviews and we've got a great coverage here of the APN, Amazon Partner Network experience. I'm your host John Furrier, we are theCUBE virtual. Got a great guest from Tel Aviv remotely calling in and videoing, Nimrod Vax, who is the chief product officer and co-founder of BigID. This is the beautiful thing about remote, you're in Tel Aviv, I'm in Palo Alto, great to see you. We're not in person but thanks for coming on. >> Thank you. Great to see you as well. >> So you guys have had a lot of success at BigID, I've noticed a lot of awards, startup to watch, company to watch, kind of a good market opportunity data, data at scale, identification, as the web evolves beyond web presence identification, authentication is super important. You guys are called BigID. What's the purpose of the company? Why do you exist? What's the value proposition? >> So first of all, best startup to work at based on Glassdoor worldwide, so that's a big achievement too. So look, four years ago we started BigID when we realized that there is a gap in the market between the new demands from organizations in terms of how to protect their personal and sensitive information that they collect about their customers, their employees. The regulations were becoming more strict but the tools that were out there, to the large extent still are there, were not providing to those requirements and organizations have to deal with some of those challenges in manual processes, right? For example, the right to be forgotten. Organizations need to be able to find and delete a person's data if they want to be deleted. That's based on GDPR and later on even CCPA. 
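The "right to be forgotten" workflow Vax describes, correlating records scattered across siloed stores back to one identity and then deleting them, can be sketched in a few lines. The stores and field names below are invented for illustration; the hard part in practice is exactly the correlation step, since each silo keys the same person differently.

```python
# Toy right-to-be-forgotten sketch: two mock data stores key the same person
# by different field names; an erasure request must find and delete both.

crm = [
    {"email": "jane@example.com", "name": "Jane Doe", "plan": "pro"},
    {"email": "sam@example.com", "name": "Sam Lee", "plan": "free"},
]
logs = [
    {"user_email": "jane@example.com", "event": "login"},
    {"user_email": "sam@example.com", "event": "purchase"},
]

def forget(identity_email):
    """Delete every record correlated to the identity, across all stores."""
    crm[:] = [r for r in crm if r["email"] != identity_email]
    logs[:] = [r for r in logs if r["user_email"] != identity_email]

forget("jane@example.com")
print(len(crm), len(logs))  # 1 1
```

A tool that only pattern-matches values can find the records, but without identity correlation it cannot answer the question the regulation asks: whose data is this?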
And organizations have no way of doing it because the tools that were available could not tell them whose data it is that they found. The tools were very siloed. They were looking at either unstructured data and file shares or windows and so forth, or they were looking at databases, there was nothing for Big Data, there was nothing for cloud business applications. And so we identified that there is a gap here and we addressed it by building BigID basically to address those challenges. >> That's great, great stuff. And I remember four years ago when I was banging on the table and saying, you know regulation can stunt innovation because you had the confluence of massive platform shifts combined with the business pressure from society. That's not stopping and it's continuing today. You seeing it globally, whether it's fake news in journalism, to privacy concerns where modern applications, this is not going away. You guys have a great market opportunity. What is the product? What is smallID? What do you guys got right now? How do customers maintain the success as the ground continues to shift under them as platforms become more prevalent, more tools, more platforms, more everything? >> So, I'll start with BigID. What is BigID? So BigID really helps organizations better manage and protect the data that they own. And it does that by connecting to everything you have around structured databases and unstructured file shares, big data, cloud storage, business applications and then providing very deep insight into that data. Cataloging all the data, so you know what data you have where and classifying it so you know what type of data you have. Plus you're analyzing the data to find similar and duplicate data and then correlating them to an identity. Very strong, very broad solution fit for IT organization. We have some of the largest organizations out there, the biggest retailers, the biggest financial services organizations, manufacturing and et cetera. 
What we are seeing is that, with the adoption of cloud and the business success obviously of AWS, there are a lot of organizations that are not as big, that don't have an IT organization, that have a very well functioning DevOps organization, but still have a very big footprint in Amazon and in other kinds of cloud services. And they want to get visibility and they want to do it quickly. And SmallID is really built for that. SmallID is a lightweight version of BigID that is cloud-native, built for your AWS environment. And what it means is that you can quickly install it using CloudFormation templates straight from the AWS Marketplace. Quickly stand up an environment that can scan, discover your assets in your account automatically and give you immediate visibility into your S3 buckets, into your DynamoDB environments, into your EMR clusters, into your Athena databases, and immediately build a full catalog of all the data, so you know what files you have where, what tables, what technical metadata, operational metadata, business metadata, and also classified data information. So you know where you have sensitive information and you can immediately address that and apply controls to that information. >> So this is data discovery. So the use case is, I'm an Amazon partner, I mean we use theCUBE virtual on Amazon, but let's just say hypothetically, we're growing like crazy. Got S3 buckets over here, secure, encrypted and the rest, all that stuff. Things are happening, we're growing like a weed. Do we just deploy SmallID, and how does it work? Is that the use case, SmallID is for AWS and BigID for everything else, or? >> You can start small with SmallID, you get the visibility you need, you can leverage the automation of AWS so that you automatically discover those data sources, connect to them and get visibility. And you can grow into BigID using the same deployment inside AWS.
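The catalog-building scan described here, walk the data sources, record what fields exist where, and flag values that match sensitive-data classifiers, can be sketched as below. The sources and regexes are invented stand-ins, not BigID's actual connectors or classifiers.

```python
# Minimal data-catalog sketch: scan mock sources, classify field values with
# simple regexes, and record one catalog entry per source. Illustrative only.
import re

CLASSIFIERS = {
    "ssn": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
}

sources = {
    "s3://orders/2020.csv": [{"buyer_email": "a@b.com", "total": "19.99"}],
    "dynamodb://users": [{"ssn": "123-45-6789", "nickname": "ace"}],
}

def scan(sources):
    """Return a catalog: each source maps fields to their classifications."""
    catalog = {}
    for name, rows in sources.items():
        fields = {}
        for row in rows:
            for field, value in row.items():
                hits = [c for c, rx in CLASSIFIERS.items() if rx.match(str(value))]
                fields.setdefault(field, set()).update(hits)
        catalog[name] = {f: sorted(h) for f, h in fields.items()}
    return catalog

catalog = scan(sources)
print(catalog)
```

The output is the "full registry" idea in miniature: one place to look up where a given type of data lives, across store types.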
You don't have to switch or migrate, and you use the same container cluster that is running inside your account and automatically scale it up, and then connect to other systems or benefit from the more advanced capabilities that BigID can offer, such as correlation, by connecting to maybe your Salesforce CRM system and getting the ability to correlate to your customer data and understand also whose data it is that you're storing. Connecting to your on-premise mainframe, with the same deployment connecting to your Google Drive or Office 365. But the point is that with SmallID you can really start quickly, small, with a very small team, and get that visibility very quickly. >> Nimrod, I want to ask you a question. What is the definition of cloud-native data discovery? What does that mean to you? >> So cloud-native means that it leverages all the benefits of the cloud. Like it gets all of the automation and visibility that you get in a cloud environment, versus any traditional on-prem environment. So one thing is that BigID is installed directly from your marketplace. So you could browse, find the solution on the AWS Marketplace and purchase it. It gets deployed using CloudFormation templates very easily and very quickly. It runs on Elastic Container Service, so that once it runs you can automatically scale it up and down to increase the scan and the scale capabilities of the solution. It connects automatically behind the scenes into the Security Hub of AWS, so you get those alerts, the policy alerts, fed into your Security Hub. It has integration also directly into the native logging capabilities of AWS, so your existing Datadog or whatever you're using for monitoring can plug into it automatically. That's what we mean by cloud-native. >> And if you're cloud-native you've got to be positioned to take advantage of the data and machine learning in particular. Can you expand on the role of machine learning in your solution?
Customers are leaning in heavily this year, you're seeing more uptake on machine learning, which is basically AI, AI is machine learning, but it's all tied together. ML is big in all the deployments. Can you share your thoughts? >> Yeah, absolutely. So data discovery is a very tough problem, and it has been around for 20 years. And the traditional method of classifying the data, or understanding what type of data you have, has been to look at the pattern of the data. Typically regular expressions, or those types of pattern-matching techniques that look at the data. But sometimes, in order to know what is personal or what is sensitive, it's not enough to look at the pattern of the data. How do you distinguish between a date of birth and any other date? A date of birth is much more sensitive. How do you find country of residency, or how do you identify even a first name from a last name? So for that, you need more advanced, more sophisticated capabilities that go beyond just pattern matching. And BigID has a variety of those techniques, we call that discovery-in-depth. What it means is that, very similar to security-in-depth, where you cannot rely on a single security control to protect your environment, you cannot rely on a single discovery method to truly classify the data. So yes, we have regular expressions, that's the table-stakes basic capability of data classification. But if you want to find data that is more contextual, like a first name, last name, even a phone number, and distinguish between a phone number and just a sequence of numbers, you need more contextual NLP-based discovery, named entity recognition. We're using (indistinct) to extract and find data contextually. We also apply deep learning, CNNs, convolutional neural networks, which is basically deep learning, in order to identify and classify document types. Which is basically being able to distinguish between a resume and an application form. Finding financial records, finding medical records.
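The date-of-birth example makes the limits of pure pattern matching concrete: "1987-06-14" matches a date regex whether it is a birth date or an invoice date, and only context disambiguates. A minimal sketch of that contextual step, using field names as the context (the hint words are invented for illustration; real contextual discovery uses NLP over surrounding text as well):

```python
# Sketch of context-aware classification: a regex says "this is a date",
# and contextual hints decide whether it is sensitive personal data.
import re

DATE_RX = re.compile(r"^\d{4}-\d{2}-\d{2}$")
DOB_HINTS = ("birth", "dob", "born")  # illustrative context keywords

def classify(field_name, value):
    if not DATE_RX.match(value):
        return "not_a_date"
    if any(h in field_name.lower() for h in DOB_HINTS):
        return "date_of_birth"   # sensitive personal data
    return "generic_date"        # far less sensitive

print(classify("date_of_birth", "1987-06-14"))  # date_of_birth
print(classify("invoice_date", "1987-06-14"))   # generic_date
```

Same value, two different classifications: that gap between pattern and meaning is what the NLP and deep-learning layers described above are there to close.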
So our advanced NLP classifiers can find that type of data. The more advanced capabilities that go beyond SmallID into BigID also include cluster analysis, which is an unsupervised machine learning method of finding duplicate and similar data, correlation, and other techniques that are more contextual and need to use machine learning. >> Yeah, and unsupervised, that's a lot harder than supervised. You need to have that ability to get at what you can't see. You've got to get the blind spots identified, and that's really the key observational data you need. This brings up the operational side. You say cluster, I hear governance and security, you mentioned earlier GDPR, this is an operational impact. Can you talk about how it impacts, specifically, the privacy protection and governance side? Because certainly I get the clustering side of it, operationally just great, everyone needs to get that. But now on the business model side, this is where people are spending a lot of time scared and worried, actually. What the hell to do? >> One of the things that we realized very early on when we started with BigID is that everybody needs discovery. You need discovery, and we actually started with privacy. You need discovery en route to mapping your data and applying the privacy controls. You need discovery for security, like we said, right? Find and identify sensitive data and apply controls. And you also need discovery for data enablement. You want to discover the data, you want to enable it, to govern it, to make it accessible to the other parts of your business. So discovery is really a foundation and starting point, and you get there with SmallID. How do you operationalize that? So BigID has the concept of an application framework. Think about it like an app store for data discovery, where you can run applications inside your kind of discovery iPhone in order to run specific (indistinct) use cases. So, how do you operationalize privacy use cases?
We have applications for privacy use cases like subject access requests and data rights fulfillment, right? Under the CCPA, you have the right to request your data, what data is being stored about you. BigID can help you find all that data in the catalog; after we scan and find that information, we can find any individual's data. We have an application also in the privacy space for consent governance, right, under CCPA you have the right to opt out. If you opt out, your data cannot be sold, cannot be used. How do you enforce that? How do you make sure that if someone opted out, that person's data is not being pumped into Glue, into some other system for analytics, into Redshift or Snowflake? BigID can identify a specific person's data and make sure that it's not being used for analytics, and alert if there is a violation. So that's just an example of how you operationalize this knowledge for privacy. And we have more examples also for data enablement and data management. >> There's so much headroom opportunity to build out new functionality, make it programmable. I really appreciate what you guys are doing, totally needed in the industry. I could just see endless opportunities to make this operationally scalable, more programmable, once you kind of get the foundation out there. So congratulations, Nimrod and the whole team. The question I want to ask you, we're here at re:Invent virtual, for three weeks we're here covering Cube action, check out theCUBE experience zone, the partner experience. What is the difference between BigID and, say, Amazon's Macie? Let's think about that. So how do you compare and contrast? In Amazon they say, we love partnering, but we promote our ecosystem. You guys sure have a similar thing. What's the difference? >> There's a big difference. Yes, there is some overlap, because both SmallID and Macie can classify data in S3 buckets. And Macie does a pretty good job at it, right? I'm not arguing about it.
But smallID is not only about scanning for sensitive data in S3. It also scans anything else you have in your AWS environment, like DynamoDB, like EMR, like Athena. We're also adding Redshift soon, Glue and other rare data sources as well. And it's not only about identifying and alerting on sensitive data, it's about building full catalog (indistinct) It's about giving you almost like a full registry of your data in AWS, where you can look up any type of data and see where it's found across structured, unstructured big data repositories that you're handling inside your AWS environment. So it's broader than just for security. Apart from the fact that they're used for privacy, I would say the biggest value of it is by building that catalog and making it accessible for data enablement, enabling your data across the board for other use cases, for analytics in Redshift, for Glue, for data integrations, for various other purposes. We have also integration into Kinesis to be able to scan and let you know which topics, use what type of data. So it's really a very, very robust full-blown catalog of the data that across the board that is dynamic. And also like you mentioned, accessible to APIs. Very much like the AWS tradition. >> Yeah, great stuff. I got to ask you a question while you're here. You're the co-founder and again congratulations on your success. Also the chief product officer of BigID, what's your advice to your colleagues and potentially new friends out there that are watching here? And let's take it from the entrepreneurial perspective. I have an application and I start growing and maybe I have funding, maybe I take a more pragmatic approach versus raising billions of dollars. But as you grow the pressure for AppSec reviews, having all the table stakes features, how do you advise developers or entrepreneurs or even business people, small medium-sized enterprises to prepare? 
Is there a way, is there a playbook to say, rather than looking back saying, oh, I didn't do all the things, I've got to go back and retrofit, get BigID? Is there a playbook that you see that will help companies so they don't get killed with AppSec reviews and privacy compliance reviews? Could be a waste of time. What are your thoughts on all this? >> Well, I think that very early on when we started BigID, our perspective was that we knew that we are a security and privacy company. So we had to take that very seriously upfront and be prepared. Security cannot be an afterthought. It's something that needs to be built in. And from day one we have taken all of the steps that were needed in order to make sure that what we're building is robust and secure. And that includes, obviously, applying all of the code analysis and CI/CD tools that are available for testing your code, whether it's (indistinct), these types of tools, applying and providing penetration testing, and working with best-in-line kind of pen testing companies and white hat hackers that would look at your code. These are kind of the things that, that's what you get funding for, right? >> Yeah. >> And you need to take advantage of that and use them. And then as soon as we got bigger, we also invested in a very, kind of a very strong CSO that comes from the industry, that has a lot of expertise and a lot of credibility. We also have kind of a CSO group. So, each step of funding we've used extensively also to make our kind of security posture a lot more robust and visible. >> Final question for you. When should someone buy BigID? When should they engage? Is it something that people can just download immediately and integrate? Do you have to have, is the go-to-market kind of targeted at the VP level, or is it the... How does someone know when to buy you and download it and use the software? Take us through the use case of how customers engage with you.
>> Yeah, so customers directly have those requirements when they start hitting and having to comply with regulations around privacy and security. So very early on, especially organizations that deal with consumer information get to a point where they need to be accountable for the data that they store about their customers, and they want to be able to know their data and provide the privacy controls they need to their consumers. For our BigID product, this is typically kind of a medium-sized and up company, with an IT organization. For SmallID, this is a good fit for companies that are much smaller, that operate mostly out of, their IT is basically their DevOps teams. And once they have more than 10, 20 data sources in AWS, that's where they start losing count of the data that they have, and they need to get more visibility and be able to control what data is being stored there. Because very quickly you start losing count of data information, even for an organization like BigID, which isn't a big organization, right? We have 200 employees. We are at the point where it's hard to keep track and keep control of all the data that is being stored in all of the different data sources, right? In AWS, in Google Drive, in some of our other sources, right? And that's the point where you need to start thinking about having that visibility. >> Yeah, like all growth plans: dream big, start small and get big. And I think that's a nice pathway. So SmallID gets you going and you lead right into BigID. Great stuff. Final, final question for you while I've got you here. Why the awards? Someone's like, hey, BigID is this cool company, love the founder, love the team, love the value proposition, makes a lot of sense. Why all the awards? >> Look, I think one of the things that was compelling about BigID from the beginning is that we did things differently. Our whole approach to personal data discovery is unique.
And instead of looking at the data, we started by looking at the identities, the people, and finally looking at their data, learning how their data looks and then searching for that information. So that was a very different approach to the traditional approach of data discovery. And we continue to innovate and to look at those problems from a different perspective, so we can offer our customers an alternative to what was done in the past. It's not saying that we don't do the basic stuff, the regexes, the connectivity that is needed. But we always took a slightly different approach to diversify, to offer something slightly different and more comprehensive. And I think that was the thing that really attracted attention from the beginning, with the RSA Innovation Sandbox award that we won in 2018, the Gartner Cool Vendor award that we received, and later on also the other awards. And I think that's the unique aspect of BigID. >> You know, you solve big problems that certainly need solving. We saw this early on, and again, I don't think the problem is going to go away anytime soon. Platforms are emerging, more tools than ever before that converge into platforms, and as the logic changes at the top, all of that's moving underneath. So, congratulations, great insight. >> Thank you very much. >> Thank you. Thank you for coming on theCUBE. Appreciate it, Nimrod. Okay, I'm John Furrier. We are theCUBE virtual here for the partner experience, APN virtual. Thanks for watching. (gentle music)
Shawn Bice, AWS | AWS re:Invent 2020
>> From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel and AWS. >> Welcome back to our coverage here on theCUBE of AWS re:Invent 2020. It's now my pleasure to welcome Shawn Bice to the program, the vice president of databases at AWS. Shawn, good day to you. How are you doing, sir? >> I'm doing great. Thank you for having me. >> You bet. Thanks for carving out time. I know it's been a very busy couple of weeks for the AWS team, and it certainly kicked off with the keynotes today. We heard right away that there are some fairly significant announcements that I know certainly affect your world at AWS. Tell us a little bit about those announcements, and then we'll do a little deeper dive as you go through them. >> Sure. You know, we made three big announcements this morning as it relates to databases. One of them was around Aurora Serverless v2, and you can just think of that as no infrastructure whatsoever to manage: an Aurora serverless offering that can scale from zero to hundreds of thousands of transactions in a fraction of a second, literally with no infrastructure to manage. So it's a really easy way to build applications in the cloud, and we're excited about that. Another big announcement was related to the fact that a lot of our customers today are really using the right tool for the right job. In other words, they're not trying to jam all of their data into one database management system. They're breaking apps down into smaller parts, and they pick the right tool for the right job. And with that context, we announced Glue Elastic Views, which allows you to very easily write a SQL query, and there are a lot of developers that understand SQL. So if I can easily write a SQL query to reach out to the source databases and then materialize that data into a different target, that's a really simple way to build new customer experiences and make the most of the databases you have.
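Glue Elastic Views is a managed AWS service that spans real AWS data stores, so the following is only a toy, hedged sketch of the core idea described here: a SQL query over a source whose result is materialized into a target and refreshed as the source changes. The table names and data are invented, and Python's built-in SQLite stands in for the source and target systems.

```python
import sqlite3

# Toy stand-in for the materialized-view idea behind Glue Elastic Views:
# a SQL query over a "source" table is materialized into a "target" table
# and re-run when the source changes. The real service does this across
# AWS data stores; this sketch only shows the pattern, using SQLite.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?)",
               [("alice", 30.0), ("bob", 20.0), ("alice", 12.5)])

def refresh_materialized_view(conn):
    """Re-run the defining query and rewrite the target table."""
    conn.execute("DROP TABLE IF EXISTS customer_totals")
    conn.execute("""CREATE TABLE customer_totals AS
                    SELECT customer, SUM(amount) AS total
                    FROM orders GROUP BY customer""")

refresh_materialized_view(db)
print(sorted(db.execute("SELECT customer, total FROM customer_totals")))
# [('alice', 42.5), ('bob', 20.0)]
```

In the managed service the refresh is continuous as source data changes; here it is an explicit call to make the mechanism visible.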
>> And then the third big announcement we made today was called Babelfish. Babelfish is really a compatibility layer, a SQL Server compatibility layer, on Aurora PostgreSQL. So if you have a SQL Server application, you've been trying to migrate it to PostgreSQL, and you've been wishing for an easier way to get that done, Babelfish allows you to take your T-SQL, your Microsoft SQL Server application, and connect it to PostgreSQL using your same client drivers, with little to no code change. So that's a big deal for those that are trying to migrate from commercial systems to open source. And then finally, we didn't stop there. As we thought about Babelfish and talked to a lot of customers about it, we decided we are actually open sourcing the technology, so it will be available later in 2021. All the development will be done openly and transparently, hosted on GitHub and licensed under Apache 2.0. So that's kind of one lap around the track, if you will, of the big announcements from today. >> How big is the open source announcement? To me, that's fairly significant, that you're opening up this new opportunity to the entire community, that you're willing to open it up. And I'm sure, I would imagine, this is going to be a very popular destination for a lot of folks. >> Yeah, I think so, too. You know, personally, I'm a believer that every customer can use data to build a foundation for future innovation, and to me, a lot of things start and end with data. As we know, data really is a foundational component of apps as well as systems. And what we've found is that not every customer can plan for every contingency that happens, but what they can do is build a strong foundation. And with a strong foundation, you really stand the best chance to overcome whatever that next unexpected thing is, or to innovate in new ways. With that as a backdrop, we think this open source piece is a big deal.
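The "same client drivers, little to no code change" claim can be pictured concretely: Babelfish speaks the SQL Server wire protocol (TDS, port 1433 by default), so an existing application's connection setup only swaps the server name for the cluster's Babelfish endpoint. This is a hedged sketch; the endpoint, database name, and credentials below are placeholders, not real values.

```python
# Sketch of the "same client drivers" idea behind Babelfish: the app keeps
# its SQL Server ODBC setup, and only the server name changes to the Aurora
# PostgreSQL cluster's Babelfish endpoint. Endpoint, user, and password
# below are placeholders, not real values.

def babelfish_conn_str(server: str, database: str, user: str,
                       password: str, port: int = 1433) -> str:
    """Build the ODBC connection string a SQL Server app would already use;
    Babelfish listens on the TDS port, so only SERVER needs to change."""
    return ("DRIVER={ODBC Driver 17 for SQL Server};"
            f"SERVER={server},{port};DATABASE={database};"
            f"UID={user};PWD={password}")

conn_str = babelfish_conn_str(
    "my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",  # placeholder
    "app_db", "app_user", "app_password")

# With pyodbc installed and a real endpoint, the unchanged client code is:
#   import pyodbc
#   conn = pyodbc.connect(conn_str)
#   conn.cursor().execute("SELECT @@VERSION")  # T-SQL, served by Postgres
```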
>> Why? I'll tell you, you know, it's just us right now. But if I told you the story behind the story: I have met so many customers over the last few years, and John, if you and I were sitting down with them, it kind of sounds like this. You sit down, you talk to somebody, and they'll say things like, hey, we've built years and years and years of application development against SQL Server. We really don't like the punitive commercial licensing, and we're trying to get over to open source, but we need an easier way. And we thought about that long and hard, and the team came up with a wonderful solution for this. But to tell you the truth, as we were building Babelfish and talking to customers, what became really clear with the community, enterprises, ISVs, and SIs is they all basically said, hey, if there was a way we could go and extend this, boy, if this thing supported two more features, that would be awesome. But if it was open source, that would be even better, because then we could take things under our own control. So that's what truly motivated this decision to go open source, and based on the conversations we've had and the decisions we made, we actually think it's really big. It's really big for everybody who has been trying to move off of commercial systems and over to open source. >> Let's talk about transforming your database mindset in general right now, from a client's perspective, especially for somebody who's considering, you know, substantial moves, major reconfigurations of their processes. What's the process that you go through with them to evaluate their needs, to evaluate their capabilities, to evaluate their storage?
All that, you know, comes into play here as you help them get to the end of the rainbow. >> Absolutely. You know, it really depends on who you're talking to, and at this stage of the game, the cloud's been around now for 10 to 14 years, something in that range. So a lot of the early cloud adopters, you know, they've been here and they've been building in a certain way. And you and I know early cloud adopters by way of watching streaming media, ordering rideshare, taking a selfie. We have these great application experiences, and we expect them to work all the time at super low latency; they should always be available. So the single biggest thing we learned from early cloud builders was that there's no such thing as one size fits all; one thing doesn't fit everything at all. That's kind of the way data was, you know, 20 years ago. But today, if you take the learning from these early cloud builders, the journey that we go on with, let's say, a mid-to-late-stage cloud adopter is exciting: if they can start now, today, where early cloud builders have done a bunch of pioneering, they get excited. So what happens is there are usually two kinds of conversations. One is, how do we, you know, we've got all these databases that we self-manage on premises; how do we bring those into the cloud? And then, how do we stop doing undifferentiated heavy lifting? In other words, what they're saying is, we don't want to do patching and backup and monitoring; instead, our precious resources should be working on innovations for the business. So in that context, you and I would end up talking to somebody about moving to fully managed services like RDS, for example. And then the other conversation we have with customers is the one about breaking free, which is, hey, I'm on commercial, I want to move to open source.
And in that context, there are a lot of customers today that will move to the cloud, and when they get there, that's their first step; their second step is to migrate over to open source. And then that third piece is folks that are trying to build these modern apps for the cloud, and in that context, they follow the playbook of these early cloud builders, which is: you take this big app, you break it into smaller parts, and then you pick the right tool for the right job. So that's kind of the conversation that we go through there. And finally, what I would say is, most customers will say to me, what do you mean by picking the right tool for the right job? And the mindset is very different from the one that we all grew up with from 20 years ago. Twenty years ago, you just bought a database platform, and then whatever the business was trying to do, you would try to support that access pattern on that database choice. But today, in the new world that we live in, it really is: let's start with the business use case first, understand the access pattern, and then pick the best optimized database storage for that. So that's kind of how those conversations go. >> You've got what, 14, 15 different database services, you know, like in your tool chest? How has that evolution occurred? Because I'm sure, you know, one begat another, begat another, looking at different capabilities, different needs. So, I mean, kind of walk me through that a little bit, and how you've gotten to the point that you've got 15.
Um, the first one is around relation. Also, relational databases have been around for a long time. It has a certain set of characteristics that people have come to appreciate and understand and, you know, and we provide a set of services that provide fully managed relational services. Let it be for things like Oracle or sequel, server or open source, like Maria DB or my sequel or Post Press and even Aurora, which provides commercial grade performance availability and scale it about 1/10 the cost of commercial. So you know, there's a handful of different services in that context. But there's new services in this key value. And think of a key value access pattern along the lines of you. Imagine. We order you order a ride share and you're trying to track a vehicle every second. So on your phone you can see it moving across your phone. And now imagine if you were building that at our a million people going to do that all at the same time or 10. So in that kind of access pattern, a product like dynamodb is excellent because It's designed for basically unlimited scale, really high throughput. So developer doesn't have toe really worry about a million people. 10 million people are one. This thing can just scale inevitably. Yeah, it's just not an issue. And, you know, I'll give you one other example like, um, in Neptune, which is a graph database. So you and I would know graph databases by way of seeing a product recommendation, for example, Um, and you know, grab the beauty of a graph databases. It's optimized for highly connected data. In other words, as a developer, I can what I can do with a few lines of code and a graph database because it's optimized for all these different relationships. I might try to do that in a different system that I might write 1500 lines of codes and because it was never designed for something like highly connect the data like graph. 
So that's kind of the evolution of how things went: there are just these different categories that have to do with access patterns and data models. And our strategy is simple: in each category, we want to have the very best APIs available for our customers. >> Let's talk about security here for a moment, because you have these tremendous reservoirs now, right, that you've built up in capabilities. You've got new data centers going up every day, it seems, around the country and around the world. Securing data has never been more important, and never more, I guess, on the radar of the bad actors at the same time, because of the value of that data. So if you would, paint the picture in terms of security awareness, the encryption you're now deploying; the stuff that's keeping you up at night, I would think, probably falls into this category a little bit. So let's just take on security and the level of concern, and then what you at AWS are doing about that. >> Yeah. So, you know, when I talk to customers, I always remind people that security is a shared responsibility. Amazon's piece of that is the infrastructure that we build and the processes that we have, from how people can enter a building to what they can do in an environment, the auditing, to the encryption systems that we build. There's the infrastructure responsibility, which we think about every second of every day. And yes, it's one of those things that keeps you up at night, but you have to kind of have this level of paranoia, if you will. There are bad actors everywhere, and that mindset kind of helps you stay focused. And then there's the customer's responsibility, too, in terms of how they think about security.
And what that means is, you know, best practices around how they integrate identity and access management into their solution, how they rotate encryption keys, how they apply encryption, and all the safeguards that you would expect a customer to apply. So together, we work with our customers to ensure that our systems are secure. And the only other thing that I would add is that, kind of in the old world, and I keep bringing up the old world, security was sort of one of those things, like if you go back 20 years ago, that you think about a little bit later in the cycle. And I've met a lot of customers that try to bolt on security, and it never works; it's just hard to bolt it onto an app. But the really nice thing about these fully managed services in the cloud is that they have security built right in. So security, performance, and availability are built right into these fully managed APIs, so a customer doesn't have to think about, well, how do I add this capability onto it? In some sense, it can be as simple as turning a feature on, or something like encryption being turned on by default, where they don't have to do anything. So, you know, it's just a completely different world that we live in today, and we try to improve it every second of every day. >> Well, Shawn, it's nice to know that you're experiencing the paranoia for all your customers. That's very, very gracious of you there. Hey, thanks for the time. I appreciate it. I know you're very busy the next couple of weeks with a number of leadership sessions and intermediate sessions as well at AWS re:Invent. So thanks again for carving a little bit of time for us here today on theCUBE. >> You bet, John. Thank you. I really appreciate it. >> Take care.
Sabina Joseph, AWS & Chris White, Druva | AWS re:Invent 2020
(upbeat music) >> Announcer: From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel, AWS and our community partners. >> Welcome to theCUBE's coverage of AWS re:Invent 2020, the virtual edition. I'm Lisa Martin. I have a couple of guests joining me next to talk about AWS and Druva. From Druva, Chris White is here, the chief revenue officer. Hey Chris, nice to have you on the program. >> Excellent, thanks Lisa. Excited to be here. >> And from AWS, Sabina Joseph joins us. She is the general manager of the Americas technology partners. Sabina, welcome. >> Thank you, Lisa. >> So looking forward to talking to you guys. Unfortunately, we can't be together in a very loud space in Las Vegas, so this will have to do, but I'm excited to be able to talk to you guys today. So Chris, we're going to start with you. Druva and AWS have a longstanding partnership. Talk to us about that and some of the evolution that's going on there. >> Absolutely, yeah. We certainly have; we've had a great long-term partnership. I'm excited to talk to everybody about it today and be here with Sabina and you, Lisa, as well. So, we actually re-architected our entire environment on AWS, 100% on AWS, back in 2013. That enabled us to not only innovate back in 2013, but continue to innovate today and in the future, right. It gives us flexibility on a 100% cloud platform to bring that to our customers, to our partners, and to the market out there, right? In doing so, we're delivering on data protection, disaster recovery, e-discovery, and ransomware protection, right? All of that's being leveraged on the AWS platform, as I said, and that allows uniqueness from a standpoint of resiliency, protection, flexibility, and really future-proofing the environment, not only today, but in the future. And over this time, AWS has been an outstanding partner for Druva. >> Excellent Chris, thank you.
Sabina, you lead the Americas technology partners, and as we mentioned, Druva is an AWS advanced technology partner. Talk to us about the Druva-AWS partnership through the AWS lens, and from your perspective as well. >> Sure, Lisa. So I've had the privilege of working with Druva since 2014, and it has been an amazing journey over the last six and a half years. You know, overall, when we work with partners on technical solutions, we want to not only better architect their solution for AWS, but also take their feedback on the features and capabilities that our mutual customers want to see. So for example, Druva has actually provided feedback to AWS on performance, usability enhancements, security posture, and suggestions on additional features and functionality that we could have in AWS Snowball Edge, DynamoDB, and other services, in fact. And in the same way, we provide feedback and recommendations to Druva, and it really is a unique process of exposing our partners to AWS best practices. When customers use Druva, they are benefiting from the AWS recommended best practices for data durability, security, and compliance. And our engineering teams work very closely together: we collaborate, we have regular meetings, and that really sets the foundation for a very strong solution for our mutual customers. >> So it sounds very symbiotic, with that engineering collaboration and the collaboration across all levels. So now let's talk about some of the things that you're helping customers to do as we all navigate a very different environment this year. Chris, talk to us about how Druva is helping customers navigate some of those big challenges; you talked about ransomware, for example, and this massive pivot to the remote workforce. Chris, (mumbles) got going on there. >> Yeah, absolutely. So one of the things that we've seen consistently, right, is that customers are looking for simplicity.
Customers are looking for cost-effective solutions, and then you couple that with the ability to do it all on a single platform; that's what the combination of Druva and AWS does together, right? And as you mentioned, Lisa, you've got work from home that's increased, right, with the unfortunate events going on across the globe over the last nine months or so. Increased ransomware threats, right? The bad actors tend to take advantage of these situations, unfortunately, and you've got to be working with partners like AWS and Druva, coming together to build that barrier against the bad actors out there. So, right, we've got a double layer of protection based on the partnership with AWS. And then if you look at the rising concerns around governance, right, the complexity of governance: you look at Japan adding some increased complexity to governance, you look at what's going on across the pond with GDPR; there are a number of different areas around compliance and governance that we can better report upon. We built the right solution to support the migration of these customers. And everything I just talked about has just accelerated the need for folks to migrate to the cloud, migrate to AWS, migrate to leveraging Druva solutions. And there's no better time to partner with Druva and AWS, just because of that. >> Something we're all talking about in every key segment we're doing: this acceleration of digital transformation, and customers really having to make quick decisions and pivot their businesses over and over again to get from survival to thriving mode. Sabina, talk to us about how Druva and AWS align on key customer use cases, especially in these turbulent times. >> Yeah, so for us, as you said, Lisa, when we start working with partners, we really focus on making sure that we are aligned on those customer use cases.
And from the very first discussions, we want to ensure that feedback mechanisms are in place to help us understand and improve the services and the solutions. Chris mentioned migrations, right? We have customers who are migrating their applications to AWS and really want to move their data into the cloud. And you know what? This is not a simple problem, because there are large amounts of data and the customer has limited bandwidth. Druva, of course, as they have always been, is an early adopter of AWS Snowball Edge, and has worked closely with us to provide a solution where customers can just order a Snowball Edge directly from AWS. It gets shipped to them, they turn it on, they connect it to the network, and they just start backing up their data to the Snowball Edge. And then once they are done, they can just pack it up and ship it back, and all of this data gets loaded into the Druva solution on AWS. And then for those customers who are running applications locally on AWS Outposts, Druva was once again an early adopter. In fact, at last year's re:Invent they actually tested out AWS Outposts, and they were one of the first launch partners, once again further expanding the data protection options they provide to our mutual customers. >> Well, as that landscape changes so dramatically, it's imperative that customers have data center workloads, AWS workloads, cloud workloads, and endpoints protected, especially as people scattered, right, in the last few months. And also, as we talked about, the ransomware rise: Chris, I saw on Druva's website, one ransomware attack every 11 seconds. And so now you've got to be able to help customers recover and have that resiliency, right? Because it's not about, are we going to get hit; it's a matter of when. How does Druva help facilitate that resiliency? >> Yeah, now that's a great point, Lisa.
And as you look at our joint customer base, we've got thousands of joint customers together, and we continue to see positive business impact because of that. And to your point, it's not if, it's when you get hit, and ultimately you've got to be prepared to recover. Based on the security levels that we jointly have, based on our architecture and also the benefits of the architecture within AWS, we've got a double layer of defense that most companies just can't offer today. So, if we look at that from an example standpoint, transitioning off the specific use case of ransomware, let's really look at Cast, a media company, right? One of the largest media companies out there across the globe: 400 radio stations, 800 TV stations, over a hundred thousand podcasts, 4,000 or 5,000 streams happening on an annual basis. Very active and, candidly, very public, which makes them a target. They really came to us for three key things, right? They were looking for reduced complexity, really reducing their workload internally from a backup and recovery standpoint, really to simplify that backup environment. And they started with Druva focused on the endpoints: how do we protect and manage the endpoints from a data protection standpoint? Ultimately, with the cost savings and the efficiency they saw, they ended up moving on to key workloads, right? So data center workloads that they were backing up and protecting. This all came from a great partnership and relationship with AWS as well. And as we continued to simplify that environment, it allowed them to expand their partnership with AWS. So not only was it a win for the customer, we helped solve those business problems for them. Ultimately, they got a (mumbles) benefit from both Druva and AWS and that partnership.
So, we continue to see that partnership accelerate and evolve, to really look at the entire platform and where we can help them, in addition to the AWS services that they're offering. >> And that was... It sounds like their going to cloud data protection, was that an acceleration of their cloud strategy that they then had to accelerate even further during the last nine months, Chris? >> Yeah, well, the good news for Cast is that, at least from a backup and recovery standpoint, they've been ahead of the curve, right? They were one of those customers that was proactive in driving their cloud journey, and proactive in driving beyond the work from home. It did change the dynamics of how they work and how they act from a work-from-home standpoint, but they were already set up, so they didn't really skip a beat as they continued to drive that. But overall, to your point, Lisa, we've seen an increase and acceleration in companies really moving towards the cloud, right? Which is why that joint migration strategy that Sabina talked about is so important, because it really has accelerated. And for some companies, this has become the safety net, in some ways their DR strategy, to shift to the cloud, a shift that maybe they weren't looking to make until maybe 2022 or 2023. It's all been accelerated. >> Everything has; we've all got whiplash from the acceleration going on.
And the University actually chose Druva, to back up 160 plus virtual machine images, because Druva provided a simple and secure cloud-based backup solution. And in fact, saved them 50% of their data protection costs. Another one is Queensland Brain Institute, which has over 400 researchers who really worked on brain diseases and really finding therapeutic solutions for these brain diseases. As you can imagine, this research generates terabytes critical data that they not only needed protected, but they also wanted to collaborate and get access to this data continuously. They chose Druva and now using Druva solution, they can back up over 1200 plus research papers, residing on their devices, providing global and also reliable access 24 by seven. And I do want to mention, Lisa, right? The pandemic has changed all of humanity as we know it, right? Until we can all find a solution to this. And we've also together had to work to adjust what can we do to work effectively together? We've actually together with Druva shifted all of our day-to-day activities, 200% virtual. And we, but despite all of that, we've maintained regular cadence for our review business and technical roadmap updates and other regular activities. And if I may mention this, right, last month we AWS actually launched the digital workplace competency, clearly enabling customers to find specialized solutions around remote work and secure remote work and Druva, even though we are all in this virtual environment today, Druva was one of the launch partners for this competency. And it was a great fit given the solution that they have to enable the remote work environments securely, and also providing an end-to-end digital workplace in the cloud. >> That's absolutely critical because that's been one of the biggest challenges I think that we've all been through as well as, you know trying to go, do I live at work or do I work from home? 
I'm not sure some of the days, but being able to have that continuity and, you know, your customers being able to access their data 24 by seven, as you said, because there's no point in backing up your data if you can't recover it; being able to allow the continuation of the relationship that you have. I want to move on now to some of the announcements. Chris, you mentioned (actually, Sabina, you did, when you were talking about the University of Manchester) the VMware Ready certification. Chris, Druva just announced a couple of things there. Talk to us about that. >> Thank you. Yeah, Lisa, you're right. There's been a ton of great announcements over the past several months and throughout this entire fiscal year. To touch base on a couple of them: around the AWS digital workplace, we absolutely have certification on AWS around VMware Cloud, both on AWS and on Dell EMC through AWS, in addition to continuing to drive innovation, because of this unique partnership, around powerful security, encryption, and overall security benefits across the board. So that includes AWS GovCloud, that includes HIPAA compliance, includes FedRAMP, as well as SOC 2 Type 2 certifications and protection there. So we're going to continue to drive that innovation. We just recently announced as well that we now have data protection for Kubernetes, a 100% cloud offering, right? One of the most active and growing workloads, around data, around orchestration platforms, right? So, doing that with AWS, as in some of my opening comments, because we built this 100% on AWS, allows us to continue to innovate and be nimble and meet the needs of customers. So whether that be VMware workloads, NAS workloads, or new workloads like Kubernetes, we're always going to be well positioned to address those, not only over time, but on the front end. And as these emerging technologies come out, the nimbleness of our joint partnership just continues to be demonstrated there.
>> And Sabina, I know that AWS has a working backwards approach. Talk to me about how you use that to accomplish all of the things that Chris and you both described over the last six, seven-plus years. >> Yes, so the working backwards process: we use it internally when we build our own services, but we also work through it with our partners, right? It's about putting the customers first and aligning on those use cases. And it all goes back to our Amazon leadership principle on customer obsession: focusing on the customer experience, making sure that we have mechanisms in place to get feedback from the customers and incorporate that into our services and solutions, and also with our partners. One of the nice things about Druva, since I've been working with them since 2014, is their focus on customer obsession. Through this process we've developed a great relationship; Druva, together with our service teams, builds solutions that deliver value by providing a full SaaS service for customers who want to protect their data, not only in AWS, but also in a hybrid architecture model on premises. And this is really critical to us, because our customers want us to work with Druva to solve the pain points, creating maybe a completely new customer experience that makes them happy. And ultimately what we have found together with Druva, and I think Chris would agree with this, is that when we focus on our mutual customers, it leads to a very long-term, successful partnership, as we have today with Druva. >> You talked about that feedback loop from customers in the beginning, but it sounds like that's really intertwined throughout the entire relationship. And certainly from what you guys described in terms of the evolution, the customer successes, and all of the things that have been announced recently, there's a lot of stuff going on. So we'll let you guys get back to work. We appreciate your time, Chris. Thank you for joining me today.
For Chris White and Sabina Joseph, I'm Lisa Martin and you're watching theCUBE. (soft music fades)
SUMMARY :
Chris White of Druva and Sabina Joseph of AWS join Lisa Martin for theCUBE's coverage of AWS re:Invent 2020, the virtual edition. They discuss the evolution of the Druva-AWS partnership, customer successes at the University of Manchester and the Queensland Brain Institute, and recent announcements including VMware certifications, 100% cloud data protection for Kubernetes, and Druva's role as a launch partner for the AWS Digital Workplace Competency.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Chris | PERSON | 0.99+ |
Sabina | PERSON | 0.99+ |
Sabina Joseph | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Lisa | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Chris White | PERSON | 0.99+ |
Queensland Brain Institute | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
400 radio stations | QUANTITY | 0.99+ |
2013 | DATE | 0.99+ |
50% | QUANTITY | 0.99+ |
Chris white | PERSON | 0.99+ |
800 TV stations | QUANTITY | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
2023 | DATE | 0.99+ |
100% | QUANTITY | 0.99+ |
200% | QUANTITY | 0.99+ |
2022 | DATE | 0.99+ |
Druva | TITLE | 0.99+ |
2014 | DATE | 0.99+ |
GDPR | TITLE | 0.99+ |
Corey Quinn, The Duckbill Group | Cloud Native Insights
>> From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders around the globe, these are Cloud Native Insights. Hi, I'm Stu Miniman, the host of Cloud Native Insights. And the thread that we've been pulling on with cloud native is that we need to be able to take advantage of the innovation and agility that cloud and the ecosystem around it can bring, not just the location. It's not just the journey, but how do I take advantage of something today and keep being able to move forward? Happy to welcome back to the program one of our regulars and someone that I've had lots of discussions with about cloud, cloud native, and serverless: Corey Quinn, the Chief Cloud Economist at the Duckbill Group. Corey, always good to see you. Thanks for joining us. >> It is great to see me. And I always love having the opportunity to share my terrible opinions with people who then find themselves tarred by the mere association, and there's certainly no exception to you, too. Thanks for having me back, although I question your judgment. >> Yeah, you know, that was Pandora's box I opened when I was like, hey, Corey, let's try you on video. And if people go out, they can look at your feed; you've spent lots of money on equipment, you have a nice looking setup. I guess you missed that one window of opportunity to get your hair cut in San Francisco during the pandemic. But be that as it may, Corey, why don't you give our audience just the update: you went from a solo operator; first you had a partner and a few other people, and now you've got economists. >> Yes, it comes down to separating out what I'm doing with my nonsense from other people; other people's careers might very well be impacted by an ill-considered tweet of mine. When you start having other cloud economists, you realize, okay, this is no longer just me we're talking about here. It forces a few changes. I was told one day that I would not be the chief economist.
I smiled, shrugged, and put a backlog item in to order new business cards, because it's not like we're going to a lot of events these days. And from my perspective, things continue mostly apace. Bringing other people on now means that there are things that my company does that I'm no longer directly involved with, which is a relief, absolutely. But it's been an interesting ride. It's always strange. The number one thing that people who start businesses say is that if they knew what they were getting into, they'd never do it again. I'm starting to understand that. >> Yeah, well, Corey, as I mentioned, you and I have had lots of discussions about cloud, about multi-cloud, about serverless. Like when you wrote an article arguing that multi-cloud is a worst practice, one of the things underneath is: when I'm using cloud, I should really be able to leverage that cloud. One of the concerns, when you and I did KubeCon and CloudNativeCon, is: does multi-cloud become a least common denominator? And a comment that I heard you say was, if I'm just using cloud and the very basic services of it, you know, why don't I go to an AWS or an Azure, which have hundreds of services? Maybe I could just find something that is, you know, less expensive, because I'm basically thinking of it as my server somewhere else, which, of course, cloud is much more than. So, you work with a lot of very large companies and help them with their bills. What differentiates the companies that get advantage from the cloud versus those that just kind of treat it as another location? >> Largely the stories that they tell themselves internally and how they wind up adapting to cloud. The reason I got into my whole view about why multi-cloud is a worst practice is that if you think of best practices as sensible defaults, I view multi-cloud as a ridiculous default.
Sure, there are cases where it's important, and I'm not suggesting for a second that the people who are deciding to go down that path are necessarily making wrong decisions. But when you're building something from scratch with this idea of taking a single workload and deploying it anywhere, in almost every case it's the wrong decision. Yes, there are going to be some workloads that are better suited to other places. If we're talking about SaaS, including that in the giant wrapper of the cloud definition, then sure, you would be nuts to wind up running on AWS and then decide you're also going to go with CodeCommit instead of GitHub; that's not something sensible people do. What I am suggesting is that the idea of building absolutely every piece of infrastructure in a way that avoids any of the differentiated offerings that your primary cloud provider has is just generally not a great plan. Occasionally you need to, but that's not the common case, and people are believing that it is. >> Well, and I'd like to dig a little deeper on some of those differentiated services out there. There are concerns, but some have said, you know, I think back to the past model: I want to build something and have it live anywhere. But those differentiated services are something that I should be able to get value out of. So do you have any examples, or are there certain services that are your favorites, that you've seen customers use and say, wow, it's something that is effective, it's something that is affordable, and I can get great value out of this because I didn't have to build it? All of these hyperscalers have lots of engineers building lots of cool things, and I want to take advantage of that innovation. >> Sure, that's most of them, if we're being perfectly honest. There are remarkably few services that have no valid use cases for any customer anywhere.
A lot of these solve an awful lot of pain that customers have. DynamoDB is a good example of this; it's one a lot of folks can relate to. It's super fast and charges you for what you use, whether that's on demand or provisioned. Great. You don't have to worry about instances, and you don't have to worry about scaling up or scaling down in the traditional sense. And that's great. The problem is: great, how do I migrate off of this onto something else? Well, that's a good question. And if that is something that you need to at least have a theoretical exodus for, maybe DynamoDB is the wrong service for you to pick as your data store. Personally, if I have to build with a migration in mind on a NoSQL basis, I'll pick MongoDB every time, not because it's any easier to move, but because it's so good at losing data that there'll be remarkably little left to migrate. >> Yeah, Corey, of course, one of the things that you help customers with quite a bit is the financial side of it. And one of the challenges, if I move from my environment to the public cloud, is how do I take advantage not only of the capability of the cloud but the finances of the cloud? I've talked to many customers that say, when you modernize and pull things apart, maybe you start leveraging serverless capabilities, and if I tune things properly, I can have a much more affordable solution; versus, if I just took my stuff and shoved it all in the cloud, kind of a traditional lift and shift, I might not have good economics when I get to the cloud. What do you see along those lines? >> I'd say you're absolutely right with that assessment. If you are looking at hitting break even on your cloud migration in anything less than five years, it's probably wrong. The reason to go to cloud is not to save money.
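(Editor's aside: the pay-for-what-you-use model Corey describes here maps to DynamoDB's on-demand billing mode. A minimal sketch of the parameters you would pass to boto3's `create_table`; the table and attribute names are hypothetical, and the API call itself is not executed here.)

```python
# Parameters for boto3's dynamodb.create_table(**table_spec).
# "PAY_PER_REQUEST" is DynamoDB's on-demand mode: you are billed per
# read/write request instead of provisioning capacity up front.
# Table and attribute names here are hypothetical.
table_spec = {
    "TableName": "loans",
    "AttributeDefinitions": [
        {"AttributeName": "loan_id", "AttributeType": "S"},  # S = string attribute
    ],
    "KeySchema": [
        {"AttributeName": "loan_id", "KeyType": "HASH"},  # partition key
    ],
    "BillingMode": "PAY_PER_REQUEST",  # on-demand; the alternative is "PROVISIONED"
}
```

With boto3 installed and credentials configured, `boto3.client("dynamodb").create_table(**table_spec)` would create the table; switching `BillingMode` to `"PROVISIONED"` is what reintroduces the traditional capacity planning Corey says you get to skip.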
There are edge cases where it makes sense, sure, but by and large you're going to wind up spending longer in the in-between state than you would believe. Eventually you're going to give up and call it hybrid; game over. And at some point, if you stall long enough, you'll find that the cloud talent starts draining out of your company, at which point it's, okay, great, now we're stuck in this scenario, because no one wants to come in and finish a job that's harder than we thought when we landed. But it becomes this story of not being able to forecast what the economics are going to look like in advance, largely because people don't understand where their workloads start and stop, what the failure modes look like, and how that's going to manifest itself in a cloud provider environment. That's why lift and shift is popular. People hate lift and shift; it's a terrible direction to go in. Then again, so are all the directions you can go in as far as migrating. Short of burning it to the ground for insurance money and starting over, you've got to have a way to get from where you are to where you're going. Otherwise migration would be super simple, and people with five weeks of experience and a certification could solve that problem. But how do you take what's existing and migrate it without causing massive outages or cost overruns? It's harder than it looks. >> Well, okay. I remember, Corey, a few years ago, when I talked to customers that were using AWS, a common complaint was: we had to dedicate an engineer just to look at the finances of what's happening. One of the early episodes I did of Cloud Native Insights talked to a company that was embracing this term called FinOps.
We have the finance team and the engineering team not just looking back at the last quarter, but planning and understanding what the engineering impacts are going forward, so that the developers, while they don't need to have all the spreadsheets and everything else, understand what they architect and what the impact will be on the finance side. What are you hearing from your customers out there? What guidance do you give from an organizational standpoint as to how they make sure that their bill doesn't get ridiculous? >> Well, the term FinOps is a bit of a red herring there, because people immediately equate it back to Cloudability, before the Apptio acquisition, where the FinOps Foundation, which vendors were not allowed to join except them, became effectively a marketing exercise that was incredibly poorly executed and sort of poisoned the well. Now the FinOps Foundation has been handed off to the Cloud Native Computing Foundation slash Linux Foundation. Maybe it's going to be rehabilitated, but we'll have to find out. One argument I made for a while was that developers do not need to know what the economic model of the cloud is going to be, and as a general rule I would stand by that. Now, someone at your company needs to be able to have those conversations and understand the ins and outs of various cost models. At some point you hit a level of complexity where bringing in experts to solve specific problems makes sense. But every developer you have does not need to sit through a three-to-five-day course on understanding the economics of the cloud. Most of what they need to know fits on a business card or an index card, something small. The point is: big things cost more than small things. You're not charged for what you use; you're charged for what you forget to turn off. And being able to predict your usage model in advance is important and saves money. Data transfer is weird.
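(Editor's aside: a back-of-the-envelope illustration of the data transfer pricing Corey is alluding to, where inbound transfer is typically free while outbound is billed per GB. The $0.09/GB egress rate below is an illustrative assumption, not a current AWS list price.)

```python
def transfer_cost_usd(gb_in: float, gb_out: float, egress_rate_per_gb: float = 0.09) -> float:
    """Estimate data transfer cost: inbound is free, outbound is billed per GB.

    The default $0.09/GB rate is an illustrative assumption, not a quote;
    real pricing varies by region, service, and volume tier.
    """
    inbound_cost = gb_in * 0.0  # inbound data transfer is generally free
    outbound_cost = gb_out * egress_rate_per_gb
    return round(inbound_cost + outbound_cost, 2)

# Pushing 500 GB into the cloud costs nothing; serving 500 GB out does not.
print(transfer_cost_usd(gb_in=500, gb_out=0))   # 0.0
print(transfer_cost_usd(gb_in=0, gb_out=500))   # 45.0
```

The asymmetry is the architectural point: designs that minimize what leaves the provider's network (and which path it leaves by) can change the bill dramatically without changing the compute at all.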
There are a bunch of edge cases that slice it into ribbons, but inbound data transfer is generally free, and outbound generally costs an arm and a leg, so architect accordingly. But by and large, for most development and product teams, it's: build something and see if it works first. We can always come back later and optimize costs as you wind up maturing the product offering. >> Yeah, Corey, it's some of those sharp edges I love learning about in your newsletter or some of your online activities there, such as when you talked about those egress fees. I know you've got a nice diagram that helps explain: if you do this, it costs a lot of money; if you do this, it costs you a lot less money. You know, even something like serverless in general looks like it should be relatively inexpensive, but if you do something wrong, it could all of a sudden cost you a lot of money. Do you feel that companies are having a better understanding, so that they don't just one month say, oh my God, the CFO called us up because there was a big mistake? You know, where are we along that maturation of cloud being a little bit more predictable? >> Unfortunately, nowhere near where I'd like us to be. The story that I think gets missed is that when your month-over-month spend is 20% higher, finance has a bunch of questions; but if it were somehow 20% lower, they would have those same questions. They're trying to build out predictive models that align. They're not saying you're spending too much money, although by that point it's late in the game; it's instead, help us understand and predict what's happening. Now, serverless is a great story around that, because you can tie charges back to individual transactions, and that's great, except find me a company that's doing that where the resulting bill isn't hilariously inconsequential. A Cloud Guru, before they bought Linux Academy, would get on stage and talk about this at Serverlessconf every year:
They're spending $600 a month in Lambda, and they now have well over 100 employees. Yeah, no one cares about that money. You can trace the flow of capital all you want, but it rounds up to: no one cares. At some point that changes, but there are usually going to be far bigger fish to fry. In their case, I would imagine, given that they stream video, they're probably going to have some data transfer questions that come into play long before we talk about their compute. >> Yeah, um, what else, Corey? When you look at the innovation in the cloud, are there common patterns that you see, things that customers are missing, some of the opportunities there? How do the customers that you talk to, other than reading your newsletter or talking to their systems integrator or partner, how are they keeping up with just the massive amount of change that happens out there? >> Our customers, and AWS employees, follow the newsletter specifically to figure out what's going on. We've long since passed a Rubicon where I can talk incredibly convincingly about services that don't really exist, and Amazon employees won't call me out on the jokes I've worked in there, because who in the world could say for certain that a given service doesn't exist? It's well beyond any one person's ability to keep it all in their head. So what we're increasingly seeing is that even one provider, let alone the rest, is outpacing everyone, and no one is keeping up. And now there's the persistent, ever-growing worry that there's something that just came out that could absolutely change your business for the better, and you'll never know about it because you're too busy trying to keep up with all the other noise. Every release a cloud provider does is important to someone, but none of it is important to everyone. >> Yeah, Corey, that's such a good point. When you've been using tools where you understand a certain way of doing things, how do you know that there's not a much better way of doing it?
So, yeah, I guess the question is, you know, there's so much out there. How do people make sure that they're not getting left behind, or keep up their understanding of what might be able to be used? >> The right answer there, frankly, is to pick a direction and go in it. You can wind up in analysis paralysis very easily, and if you talk about what you've done on the Internet, the number one response you get immediately is someone suggesting an alternate approach you could have taken on day one. There is no one path forward for any of this, and you can second-guess yourself forever. The point is that you have to pick a direction and go in it. Make sure it makes sense, make sure it aligns, talk to people who know what's going on in the space and validate it out. But you're going to come up with a plan; head in that direction. I assure you, you are probably not the only person doing it, unless you're using Route 53 as a database. >> You know, it's an interesting thing, Corey. It used to be said that the best time to start a project was a year ago, but you can't turn back time, so you should start it now. I've been saying for the last few years that the best time to start something would be a year from now, so you could take advantage of the latest things, but you can't wait a year, so you need to start now. So how do you make sure you maintain flexibility but can keep projects moving forward? I think you touched on that with some of the analysis paralysis. Anything else as to how you make sure you're actually making the right bets and not going down some, you know, odd tangent that ends up being a dead end?
In many cases, it's figure out what success looks like. Figure out what failure looks like. And if it isn't working, cut it. Otherwise, you're gonna wind up, went into this thing that you've got to support in perpetuity. One example of that one extreme is AWS. They famously never turn anything off. Google on the other spectrum turns things off as a core competence. Most folks wind up somewhere in the middle, but understand that right now between what? The day I start building this today and the time that this one's of working down the road. Well, great. There's a lot that needs to happen to make sure this is a viable business, and none of that is going to come down to, you know, build it on top of kubernetes. It's going to come down. Is its solving a problem for your customers? Are people they're people in to pay for the enhancement. Anytime you say yes to that project, you're saying no to a bunch of others. Opportunity Cost is a huge thing. >>Yeah, so it's such an important point, Cory. It's so fundamental when you look at what what cloud should enable is, I should be able to try more things. I should be able to fail fast on, and I shouldn't have to think about, you know, some cost nearly as much as I would in the past. We want to give you the final word as you look out in the cloud. Any you know, practices, guidelines, you can give practitioners out there as to make sure that they are taking advantage of the innovation that's available out there on being able to move their company just a little bit faster. >>Sure, by and large, for the practitioners out there, if you're rolling something out that you do not understand, that's usually a red flag. That's been my problem, to be blunt with kubernetes or an awful lot of the use cases that people effectively shove it into. What are you doing? What if the business problem you're trying to solve and you understand all of its different ways that it can fail in the ways that will help you succeed? 
In many cases, it is stupendous overkill for the scale of problem most people are throwing at it. It is not a multi-cloud answer. It is not the way that everyone is going to be doing it, or they'll make fun of your resume. Remember, you need to set aside your own ego in this sense; you need to deliver an outcome. You don't need to improve your own resume at the expense of your employer's business, one would hope. >> Well, Corey, always a pleasure catching up with you. Thanks so much for joining me on Cloud Native Insights. >> Thank you. >> All right, be sure to check out siliconangle.com; if you click on "cloud," there's a whole section for Cloud Native Insights. I'm your host, Stu Miniman, and I look forward to hearing more from you and your cloud native insights. (soft music fades)
SUMMARY :
Stu Miniman interviews Corey Quinn, Chief Cloud Economist at the Duckbill Group, on Cloud Native Insights. They discuss why multi-cloud is a poor default, the value of differentiated services like DynamoDB, the economics of cloud migration and FinOps, and the impossibility of keeping up with the pace of cloud provider releases.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
3 | QUANTITY | 0.99+ |
20% | QUANTITY | 0.99+ |
Corey Quinn | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
San Francisco | LOCATION | 0.99+ |
five weeks | QUANTITY | 0.99+ |
Cory | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Corey | PERSON | 0.99+ |
Pandora | ORGANIZATION | 0.99+ |
Duck Bill Group | ORGANIZATION | 0.99+ |
last quarter | DATE | 0.99+ |
one month | QUANTITY | 0.99+ |
six | QUANTITY | 0.99+ |
less than five years | QUANTITY | 0.99+ |
Cube Studios | ORGANIZATION | 0.99+ |
over 100 employees | QUANTITY | 0.99+ |
Boston | LOCATION | 0.98+ |
ORGANIZATION | 0.98+ | |
5 days | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
single | QUANTITY | 0.98+ |
First | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
hundreds of services | QUANTITY | 0.98+ |
Lennox | ORGANIZATION | 0.98+ |
one provider | QUANTITY | 0.97+ |
Cloud Cloud | ORGANIZATION | 0.97+ |
Lennox Foundation | ORGANIZATION | 0.96+ |
The Duckbill Group | ORGANIZATION | 0.96+ |
Cloud Native Beauty Foundation | ORGANIZATION | 0.96+ |
Dynamodb | ORGANIZATION | 0.96+ |
a year | QUANTITY | 0.95+ |
SAS | ORGANIZATION | 0.95+ |
Cory Quinn | PERSON | 0.95+ |
$600 a month | QUANTITY | 0.95+ |
a year ago | DATE | 0.95+ |
One example | QUANTITY | 0.94+ |
pandemic | EVENT | 0.94+ |
one extreme | QUANTITY | 0.93+ |
Cloud Native Insights | ORGANIZATION | 0.93+ |
day one | QUANTITY | 0.93+ |
Cloud Native | ORGANIZATION | 0.92+ |
first | QUANTITY | 0.89+ |
one window | QUANTITY | 0.88+ |
One argument | QUANTITY | 0.88+ |
one person | QUANTITY | 0.87+ |
Been Ops | ORGANIZATION | 0.85+ |
second | QUANTITY | 0.81+ |
few years ago | DATE | 0.8+ |
much | QUANTITY | 0.79+ |
one day | QUANTITY | 0.78+ |
single workload | QUANTITY | 0.75+ |
k | QUANTITY | 0.72+ |
Lambda | TITLE | 0.72+ |
last few years | DATE | 0.69+ |
egress | ORGANIZATION | 0.68+ |
Keith Cloud | ORGANIZATION | 0.67+ |
Native | ORGANIZATION | 0.62+ |
year | QUANTITY | 0.6+ |
stew Minimum | PERSON | 0.59+ |
a year | DATE | 0.57+ |
Route | TITLE | 0.56+ |
Dynamo DV | ORGANIZATION | 0.54+ |
Rubicon | COMMERCIAL_ITEM | 0.51+ |
Austin | LOCATION | 0.45+ |
53 | ORGANIZATION | 0.28+ |
Joel Lipkin, Four Points Technology & Ryan Hillard, US SBA | AWS Public Sector Awards 2020
>> Announcer: From around the globe, it's theCUBE with digital coverage of AWS Public Sector Partner Awards, brought to you by Amazon Web Services. >> Hi, and welcome back. I'm Stu Miniman. This is theCUBE's coverage of the AWS Public Sector Partner Awards. We're going to be talking about the Customer Obsession Mission award winner. So happy to welcome to the program, first of all, welcoming back Joel Lipkin. He is the chief operating officer of Four Points Technologies, which is the winner of the aforementioned award. And joining him, one of his customers, Ryan Hillard, who is an assistant developer with the United States Small Business Administration, and of course the SBA is an organization that a lot of people in the United States have gotten more familiar with this year. Joel and Ryan, thanks so much for joining us. >> Hi, Stu. >> Hey, Stu; thank you. >> All right, so Ryan, I'm sorry, Joel, as I mentioned, you've been on the program, but maybe just give us a sketch, if you would: Four Points, your role, your partnership with AWS. >> Sure, I'm Joel Lipkin. I'm the chief operating officer at Four Points Technology. Four Points is a value-added technology reseller focused on the federal government, and we've been working with federal customers since 2002. We're a service-disabled veteran-owned small business, and we've been an Amazon partner since 2012. >> Wonderful. Ryan, if you could, obviously, as I mentioned, the SBA is something a lot of people know from the PPP in 2020. If you could, tell us a little bit about your role in your organization, and tee up for us, if you would, the project that Four Points was involved with that you worked on. >> Sure; so I work for the chief information officer, and I don't have this official title, but I am the de facto manager of our Amazon Web Services presence.
This year, we've had a very exciting time with what's been happening in the world. The Paycheck Protection Program and the SBA have been kind of leveraged to help the US economy recover in the face of the pandemic, and a key part of that has been using Amazon Web Services and our partnership with Four Points Technology to launch new applications to address those requirements. >> Wonderful. Joel, maybe connect for us: how long has Four Points been working with the SBA? And start to give us a little bit more about the projects you're working on together, which I understand predated COVID. >> Sure; we've been with SBA for several years now. And SBA was one of the earlier federal agencies that really saw the value in separating their procurement for cloud capacity from the development, implementation, and managed services that they either did internally or used third-party contractors for. So, Four Points came in as a true value-added reseller of cloud to SBA, providing cloud capacity and also Amazon professional services. >> All right; so Ryan, bring us in a little bit to the project that we're talking about here. What was the challenge? What were the goals you were looking to accomplish? Help flesh out a little bit what you're doing there. >> Yeah, so most recently Four Points partnered with us to deliver Lender Gateway. Lender Gateway is an application for small, community-oriented lenders to submit Paycheck Protection loans. Some of these lenders don't have giant established IT departments like big banks do, and they needed an easier way to help their customers. We built that application in six days. I called the Four Points cloud manager on a Saturday and said, help, help, I need two accounts by three o'clock, and Four Points was there for us. We got new accounts set up, we were able to build the application and deploy it literally in a week, and we met the requirements set for us. And that system has now moved billions of dollars of loans.
I don't know the exact amount, but it has done an incredible amount of work, and it wouldn't have been possible without our partnership with Four Points. So we're really excited about that. >> Yeah, if I could drill in there for a second. It's been absolutely unprecedented how fast that amount of money moved through the legislature and out to the end user. Help us understand a little bit: how much were you using AWS technologies and solutions that Four Points had helped you with, and how much of this was kind of net new? You said you built a new application, you had to activate some things fast; help us understand a little bit more. >> Yeah, that's a great question. So we have five major systems in AWS today, and so we're very comfortable with AWS service offerings. What's interesting about Lender Gateway is that it's the first application we've built from scratch in a totally serverless capacity. One of the hard technical requirements of the Paycheck Protection Program is that it has huge amounts of demand. So when we're launching a system, we need to know that that system will not go down, no matter how much traffic it receives or how many requests it has to handle. So we leaned on services like AWS Lambda, S3, DynamoDB, all of their serverless offerings, to make sure that under no circumstances could this application fail. And it never did. We never even actually saw performance degradation. So a massive success from my perspective as the program manager. >> Well, that's wonderful. Joel, of course, you talk about scalability, you talk about uptime. Those are really the promises the public cloud has brought. Ryan did a good job of teeing up some of the services from AWS, but help us understand architecturally how you helped put that together and the various pieces underneath. >> Yes, Stu, it's interesting. Four Points is really focused on delivering capacity.
Our delivery model is very much built around giving our customers, like Ryan, full control over their cloud environments, so that they can use them as transparently as though they were working with Amazon directly. They have access to all of the 200+ services that AWS has. They also have direct access to billing and usage information that lets them really optimize things. So this is sort of a perfect example of how well that works, because SBA and Ryan knew their requirements better than anyone, and they were able to leverage exactly the right AWS tools without having to apply to use them. It was as though they were working directly with AWS and the AWS environment on the technology side. And I will say SBA has been really a leader in using a variety of AWS services beyond standard compute and storage, not just in a test environment, but in a live, very robust, really large environment. >> Yeah, right, and I was excited to hear about your Lambda usage, how you're building with the serverless architecture there. Could you just bring us through a little bit how you ramped up on that, any tools or community solutions that you were leveraging to make sure you understood that, and any lessons you learned along the way as you were building that application and rolling it out? >> Yeah, that's a great question. I think one of the mistakes that I see program managers make all the time is thinking that they can migrate a workload to the cloud and keep it architecturally the same way it was. And what they quickly find out is that their old architecture, which ran in their on-premises data center, might actually be more expensive in the cloud than it was in their data center. And so when you're thinking about migrating a workload, you really need to come in with the assumption that you will actually be redesigning that workload and building the system in cloud-native technology.
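As a concrete illustration of the kind of cloud-native redesign Ryan describes, here is a minimal sketch of a serverless intake handler in the style of Lender Gateway: an API Gateway-style event comes in, gets validated, and is written to a DynamoDB-style table. This is hypothetical code, not the SBA's actual implementation; the field names and table interface are invented, and the table object is injected so the logic runs without AWS access.

```python
import json

def handle_loan_submission(event, table):
    """Validate a loan submission from an API Gateway-style event
    and persist it to a DynamoDB-style table.

    `table` is any object with a put_item(Item=...) method, e.g. a
    boto3 Table in production or a stub in tests (an assumption for
    illustration, not the SBA's real schema)."""
    body = json.loads(event["body"])
    required = ("lender_id", "borrower_name", "amount")
    missing = [field for field in required if field not in body]
    if missing:
        # Reject incomplete submissions before touching storage.
        return {"statusCode": 400, "body": json.dumps({"missing": missing})}
    table.put_item(Item=body)
    return {"statusCode": 201, "body": json.dumps({"status": "received"})}
```

Because a Lambda-hosted handler like this bills per invocation, it costs nothing while idle and scales out automatically under load, which is the property Ryan leans on.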
You know, the concept of Lambda is so powerful, but it didn't exist 20 years ago, when some of these systems and applications were being written, and now being able to leverage Lambda to use only exactly the compute you need means you can literally pay pennies on the dollar. One of the interesting things about the PPP program and everything happening in the world is that our main website, sba.gov, is now serving a hundred or a thousand times more traffic daily than it was used to. But because we lean on serverless technology like Lambda, we have scaled non-linearly in terms of cost. So we're only paying two or three times more than we used to pay per month, but we're doing a hundred or a thousand times more work. That's a win; that's a huge victory for cloud technology, in my opinion. >> Yeah, and on that point, I think the other thing that SBA did really amazingly well was take advantage of, first, reserved instances. But I think it was the day that Amazon announced savings plans as a cost control mechanism that Ryan and SBA were on them. They were our first customer to use savings plans, and I think they were probably the first customer in the federal space to use them. So it's not just using the technology smartly; it's using the cost control tools really well also. >> Yeah, so Stu, I wanted to jump in here just because I'm so glad Joel brought that up. I was describing how workloads need to morph and transform as they move from legacy setups into more cloud-native ones. Well, we were the first federal agency to buy savings plans. And for folks who don't know, savings plans essentially make your reserved instances fungible across services. So if you had a workload that was running on EC2 before, now, instead of buying a reserved instance at a certain instance size, in a certain family, you can instead buy a savings plan.
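Ryan's numbers above, roughly a hundredfold jump in traffic for only a two-to-threefold jump in monthly cost, make sense if you model the bill as a mostly fixed platform component plus a small pay-per-use serverless component. The figures in this toy model are invented purely to illustrate the shape of that curve; they are not actual SBA spend or AWS pricing.

```python
def monthly_cost(requests, fixed=10_000.0, per_million_requests=3.85):
    """Toy cost model: a fixed platform baseline plus a small
    per-request serverless charge (all figures invented)."""
    return fixed + (requests / 1_000_000) * per_million_requests

baseline = monthly_cost(40_000_000)     # a normal month: 10,154.0
surge = monthly_cost(4_000_000_000)     # 100x the traffic: 25,400.0
ratio = surge / baseline                # ~2.5x the cost
```

The sublinear total comes from the fixed part dominating the bill: the variable serverless charge grows a hundredfold, but it starts so small that the overall invoice only roughly doubles.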
And when your workload is ready to be moved from EC2 to something a little bit more containerized or cloud native, like Fargate or Lambda, you don't actually forego your reserved instance. I see program managers get into this weird spot where they bought reserved instances, so they feel like they need to use them for a whole year, so they don't upgrade their system until their reserved instances expire. And that's really the tail wagging the dog. We were very excited about savings plans. I think we bought them four days after they came out, and they have enabled us to do things like be very ambitious with how we rethink our systems and how we rebuild them. And I'm so glad you brought that up, Joel, because it's been such a key thing over this last year. >> Yeah, it's been a really interesting discussion point I've been having the last few years: the role between developers and that finance piece. So, Ryan, who is it that advises you on this? Is there somebody on the finance team from the SBA? Is it Four Points? You know, being aware of savings plans, it was something that was announced at re:Invent, but it takes a while for that to trickle down, and oftentimes developers don't need to think about, or think that they don't need to think about, the financial implications of how they're architecting things. So how does that communication and decision making happen? >> That's such a great question. I think it goes back to how Four Points is customer obsessed. One of our favorite things about using a small business reseller like Four Points, instead of dealing directly with our cloud service provider, is that Four Points provides us a service where every quarter they do an independent assessment of our systems, how much we're spending, and what that looks like from a service breakdown. And then we get that perspective and that opinion, and we enrich it with our conversations with our AWS account manager and with our finance people.
But having that third-party, independent person come in and say, "Hey, this is what we think," has been so powerful, because Joel and Dana and team have always had observations that nobody else has had. And those kinds of insights are nice to have when you have people who are suspicious of a vendor telling you to buy more things with them, because they're the vendor. >> From the lessons you've learned there, any final advice that you'd give to your peers out there? And how will you take what you've learned working on this project to other things, either in the SBA or in talking with your peers in other organizations? >> So I have two big things. One is: go use a small business reseller. I would be remiss if I didn't use this opportunity to tell you, as a member of the US Small Business Administration, that there are some really, really great service providers out there. They are part of our programs, like Four Points, and they can help you achieve that balance between trusting your cloud service provider and having that third-party entity that can come in and call foul and also call Yahtzee, that is, recognize good things and recognize bad things. So that would be number one. And then number two is that moving to the cloud is so often sold as a technology project, and it's really like 20% technology and 80% culture and workforce change. So be honest with yourselves and your executive teams that this isn't a technology project; this is a "we're going to change how we do business" project, and a "we're going to change the culture of this organization" kind of project. >> All right, and Joel, I'll let you have the final word on lessons learned here and also about Four Points, and congratulations again, the Customer Obsession Mission award winner. >> Great, thanks, Stu. We're so appreciative to Amazon for their recognition, and to Ryan and SBA for giving us the opportunity to support such an important program.
We are a small business, and we are very much focused on delivering what our customers need in the cloud. It's just such a tremendous feeling to be able to work on a program like this that has such payoff for the whole country. >> All right. Well, Joel and Ryan, thank you so much for sharing your updates on such an important project this year. Thanks so much. >> Thank you, Stu. >> Thanks. >> Stay with us for more coverage from the AWS Public Sector Partner Awards. I'm Stu Miniman, and thank you for watching theCUBE.
Jared Bell T-Rex Solutions & Michael Thieme US Census Bureau | AWS Public Sector Partner Awards 2020
>> Narrator: From around the globe, it's theCUBE with digital coverage of AWS Public Sector Partner Awards brought to you by Amazon Web Services. >> Hi, and welcome back. I'm Stu Miniman, and we're here at the AWS Public Sector Partner Awards, really enjoying this. We get to talk to some of the diverse ecosystem, and they've all brought on their customers, some really phenomenal case studies. Happy to welcome to the program two first-time guests. First of all, we have Jared Bell. He's the Chief Engineer of Self Response Operational Readiness at T-Rex Solutions, and T-Rex is the winner of the most customer-obsessed mission-based award in Fed Civ. So Jared, congratulations to you and the T-Rex team. And also joining him, his customer, Michael Thieme. He's the Assistant Director for the Decennial Census Program systems and contracts for the US Census Bureau. Thank you so much, both of you, for joining us. >> Good to be here. >> All right, Jared, if we could start with you. As I said, you're an award winner, you sit in the Fed Civ space, and you've brought us to the Census Bureau; most people understand the importance of that government program, and every 10 years we've been hearing the TV and radio ads talking about it. But Jared, if you could just give us a thumbnail of T-Rex and what you do in the AWS ecosystem. >> So yeah, again, my name's Jared Bell, and I work for T-Rex Solutions. T-Rex is a mid-tier federal IT contracting company in Southern Maryland, recently graduated from HUBZone status. T-Rex really focuses on four key areas: infrastructure and Cloud modernization, cybersecurity and active cyber defense, big data management and analytics, and overall enterprise system integration. And so we've been, you know, an AWS partner for quite some time now, and with decennial, you know, we got to really exercise a lot of the bells and whistles that are out there and really put it all to the test.
>> All right. Well, Michael, you know, so many people in IT, we talk about the peaks and valleys that we have; not too many companies or organizations can say, well, we know exactly that 10-year spike of activity that we're going to have. I know there's lots of work that goes on beyond that, but tell us a little bit about your role inside the Census Bureau and what's under your purview. >> Yes, the Census Bureau actually does hundreds of surveys every year, but the decennial census is sort of our main flagship activity. And I am the Assistant Director, under our Associate Director, for the IT and for the contracts for the decennial census. >> Wonderful. And if you could, tell us a little bit about the project that you're working on that eventually pulled T-Rex in. >> Sure. This is the 2020 census, and the challenge of the 2020 census is that we've done the census since 1790 in the United States. It's a pillar, a foundation of our democracy, and this was the most technologically advanced census we've ever done. Actually, up until 2020, we have done our censuses mostly by pen, paper, and pencil. And this is a census where we opened up the internet for people to respond from home. We can have people respond on the phone, and people can respond with an iPhone or an Android device. We tried to make it as easy as possible and as secure as possible for people to respond to the census where they were; we wanted to meet the respondent where they were. >> All right. So Jared, I'd love you to chime in here, 'cause hearing about, you know, the technology adoption, how much was already in the plans there? Where did T-Rex intersect with this census activity? >> Yeah. So, you know, census deserves a lot of credit for their kind of innovative approach with this technical integrator contract, which T-Rex was fortunate enough to win. When we came in, you know, we were just wrapping up the 2018 test.
We really only had 18 months to go from start to, you know, a live operational test to prepare for 2020. And it was really exciting to be brought in on such a large, mission-critical project; this is one of the largest federal IT projects in the Cloud to date. And so, you know, when we came in, we had to really bring together a whole lot of solutions. I mean, the internet self response, which is what we're going to talk about today, was one of the major components, but we really had a lot of other activities that we had to engage in. You know, we had to design and prepare an IT solution to support 260 field offices, 16,000 field staff, and 400,000 mobile devices and users that were going to go out and knock on doors for enumeration. So it was really a big effort that we were honored to be a part of. And on top of that, T-Rex actually brought to the table a lot of its past experience with cybersecurity and active cyber defense. Also, you know, because of the importance of all this data, we had the role in security all throughout, and I think T-Rex was prepared for that and did a great job. And then, overall, I think that, not necessarily directly to your question, but, you know, one of the things that we were able to do to make ourselves successful, and to really engage with the Census Bureau and be effective with our stakeholders, was that we really built a culture of decennial within the technical integrator. You know, we had brown bags and working sessions to really teach the team the importance of the decennial, not just as a career move, but also as an important activity for our country.
And so I think that that really helped the team, you know, internalize that mission and really drove kind of our dedication to the census mission and really made us effective and again, a lot of the T-Rex leadership had a lot of experience there from past decennials and so they really brought that mindset to the team and I think it really paid off. >> Michael, if you could bring us inside a little a bit the project, you know, 18 months, obviously you have a specific deadline you need to hit, for that help us understand kind of the architectural considerations that you had there, any concerns that you had and I have to imagine that just the global activities, the impacts of COVID-19 has impacted some of the end stage, if you will, activities here in 2020. >> Absolutely. Yeah. The decennial census is, I believe a very unique IT problem. We have essentially 10 months out of the decade that we have to scale up to gigantic and then scale back down to run the rest of the Census Bureau's activities. But our project, you know, every year ending in zero, April 1st is census day. Now April 1st continued to be census day in 2020, but we also had COVID essentially taking over virtually everything in this country and in fact in the world. So, the way that we set up to do the census with the Cloud and with the IT approach and modernization that we took, actually, frankly, very luckily enabled us to kind of get through this whole thing. Now, we haven't had, Jared discussed a little bit the fact that we're here to talk about our internet self response, we haven't had one second of downtime for our response. We've taken 77 million. I think even more than 78 million responses from households, out of the 140 million households in the United States, we've gotten 77 million people to respond on our internet site without one second of downtime, a good user experience, a good supportability, but the project has always been the same. 
It's just this time we're actually doing it with much more technology, and hopefully the way that the Cloud has supported us will prove to be really effective for the COVID-19 situation. Because we've had changes in our plans and differences in timeframes, we were actually not even going into the field; we're just starting to go into the field these next few weeks, where we would have almost been coming out of the field at this time. So that flexibility, that expandability, that elasticity that being in the Cloud gives all of our IT capabilities was really valuable this time. >> Well, Jared, I'm wondering if you can comment on that. All of the things that Michael just said, you know, seem like the spotlight pieces that I look to Cloud for: being able to scale on demand, being able to use what I need when I need it and then dial things down when I don't, and especially, you know, wanting to limit how many people actually need to get involved. So help us understand a little bit, you know, what AWS services underneath were supporting this, and anything else around the Cloud deployment. >> Sure, yeah. Michael is spot on. I mean, the cloud is tailor-made for our operation and activity here. You know, I think all told, we used over 30 of the AWS FedRAMP solutions in standing up our environment across all those 52 systems of systems that we were working with. You know, just to name a few: for internet self response alone, you're relying heavily on auto scaling groups and elastic load balancers, and we relied a lot on Lambda functions and DynamoDB. We were one of the first adopters of DynamoDB global tables, which we used for session persistence across regions. And then on top of that, you know, the data was all flowing down into RDS databases, and from there to the census data lake, which was built on EMR and Elasticsearch capabilities. And that's just to name a couple.
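The session persistence Jared mentions, DynamoDB global tables keeping respondent sessions available across regions, might be wrapped in something like the sketch below. This is illustrative only: the key schema and TTL handling are assumptions, not the census system's actual design, and the table object is injected so the logic runs without AWS. In production it would be a boto3 Table backed by a multi-region global table, so a session written in one region can be read from any other.

```python
import time
import uuid

class SessionStore:
    """Minimal session store over a DynamoDB-style key-value table."""

    def __init__(self, table, ttl_seconds=1800):
        self.table = table
        self.ttl_seconds = ttl_seconds

    def create(self, data):
        # With a global table, this write replicates to every region,
        # so any region can serve the respondent's next request.
        session_id = str(uuid.uuid4())
        item = {"session_id": session_id,
                "expires": int(time.time()) + self.ttl_seconds}
        item.update(data)
        self.table.put_item(Item=item)
        return session_id

    def get(self, session_id):
        response = self.table.get_item(Key={"session_id": session_id})
        item = response.get("Item")
        if item is not None and item["expires"] > time.time():
            return item
        return None  # unknown or expired session
```

Expiring sessions with a stored timestamp mirrors DynamoDB's native TTL feature; either way, stale partial responses age out without any cleanup process to operate.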
I mean, you know, we ran the gamut of AWS services to make all this work, and they really helped us accelerate. And as Michael said, you know, we stood this up expecting to be working together in a war room, watching everything hand in hand, and because of the way we were able to architect it in partnership with AWS, when we all had to go out and stay at home, the infrastructure remained rock solid. We didn't have to worry about, you know, being hands-on with the equipment, and, again, the ability to automate and integrate with those solutions, CloudFormation and things like that, really let us keep a small, agile team of, you know, DevSecOps folks there to handle the deployments. And we were doing full-scale deployments with, you know, one or two people in the middle of the night without any problems. So it really streamlined things for us and helped us keep things tight, for sure. >> Michael, I'm curious about what kind of training your team needed to go through to take advantage of this solution. So from bringing it up to the ripple effect, as you said, you're only now starting to look at who would go into the field and who uses devices and the like, so help us understand really the human aspect of undergoing this technology. >> Sure. Now, the census always has to ramp up this sort of immediate workforce. We hire, we actually processed over 3 million people; I think 3.9 million people applied to work for the Census Bureau. And each decade we have to come up with a training program, and actually training sites all over the country, and the IT to support those. Now, again, modernization for the 2020 census didn't only involve things like our internet self response; it also involved our training.
We have all online training now. We used to have what we called verbatim training, where we had individual teachers all over the country, in places like libraries, essentially reading text exactly the same way, over and over again, to the people that we trained. But now it's all electronic, and this goes to the COVID situation as well: it allows us to bring only three people in at a time to do training. We essentially get them started with the device that we have them use when they're knocking on doors, then they go home and do the training, and then come back to work with us, all with a minimal-human-contact sort of model. And even though we designed it differently, the way that we set up the technology this time allowed us to change that design very quickly, get people trained, and not essentially stop the census. We essentially had to slow it down, because we weren't sure exactly when it was going to be safe to go knocking door to door, but we were able to do the training, and all of that worked and continues to work phenomenally. >> Wonderful. Jared, I wonder if you've got any lessons learned from working with the census group that might be applicable to the broader customers out there? >> Oh, sure. Well, working with the census, you know, it was really a great group to work with. I mean, it's one of the few groups I've worked with who have such a clear vision and understanding of what they want their final outcome to be. I think again, you know, for us, the internalization of the decennial mission, right? It's so big, it's so important. I think that because we adopted it early on, we felt that we were true partners with census. We had a lot of credibility with our counterparts, and I think that they understood that we were in it with them together, and that was really important.
I would also say that, you know, because we're talking about the GovCloud solutions that we worked with, we also engaged heavily with the AWS engineering group, and in partnership with them, you know, we relied on the infrastructure event management services they offer, which was able to give us a lot of great insight into our architecture and our systems and monitoring, to really make us feel like we were ready for the big show when the time came. So, you know, I think for me, another lesson learned there was that, you know, Cloud providers like AWS are not just a vendor, they're a partner, and I think that, going forward, we'll continue to engage with those partners early and often. >> Michael, the question I have for you is, you know, what would you say to your peers? What lessons have you learned, and how much of what you've done for the census do you think will be applicable to all those other surveys that you do in between the big 10-year surveys? >> All right. I think we have actually set a good milestone for the rest of the Census Bureau, in that the modernization that the 2020 census has allowed, since it is our flagship, really is something that we hope we can continue through the decade and into the next census, as a matter of fact. But I think one of the big lessons learned I wanted to talk about was that we have always struggled with disaster recovery. And one of the things that having the Cloud, and our partners in the Cloud, has helped us do is essentially take advantage of the resilience of the Cloud. So there are data centers all over the country; if we ever had downtime somewhere, we knew that we were going to be able to stay up. For the decennial census, we've never had the budget to pay for persistent disaster recovery, and the Cloud essentially gives us that kind of capability. Jared talked a lot about security.
I think we have taken our security posture to a whole different level, something that allowed us to, as I said before, keep our internet self response free of hacks and breaches through this whole process, and through a much longer process than we even intended to keep it open. So there's a lot here that I think we want to bring into the next decade, a lot that we want to continue, and we want the census to essentially stay as modern as it has become for 2020. >> Well, I will tell you, personally, Michael, I did take the census online. It was really easy to do, and I'll definitely recommend, if you haven't already, everybody listening out there, it's so important that you participate in the census so that they have complete data. So, Michael, Jared, thank you so much. Jared, congratulations to your team for winning the award, and, you know, such a great customer. Michael, thank you so much for what you and your team are doing. We appreciate all that's being done, especially in these challenging times. >> Thank you, and thanks for doing the census. >> All right, and stay tuned for more coverage of the AWS Public Sector Partner Awards. I'm Stu Miniman, and thank you for watching theCUBE. (upbeat music)
Breaking Analysis: Emerging Tech sees Notable Decline post Covid-19
>> Announcer: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE conversation. >> As you may recall, coming into the second part of 2019, we reported, based on ETR survey data, that there was a narrowing of spending on emerging tech and an unplugging of a lot of legacy systems. This was really because people were going from experimentation into operationalizing their digital initiatives. When COVID hit, conventional wisdom suggested that there would be a flight to safety. Now, interestingly, we reported with Eric Bradley, based on one of the Venns, that a lot of CIOs were still experimenting with emerging vendors. But this was very anecdotal. Today, we have more data, fresh data, from the ETR Emerging Technology Study on private companies, which really does suggest that there's a notable decline in experimentation, and that's affecting emerging technology vendors. Hi, everybody, this is Dave Vellante, and welcome to this week's Wikibon CUBE Insights, powered by ETR. Once again, Sagar Kadakia is joining us. Sagar is the Director of Research at ETR. Sagar, good to see you. Thanks for coming on. >> Good to see you again. Thanks for having me, Dave.
We ask CIOs about awareness and planned evaluations, so think of this as pre-spend, right. So that's a major differentiator from the TSIS. That, and this study really focuses on private emerging providers. We're really only focused on those really emerging private companies, say, like your Series B to Series G or H, whatever it may be, so, two big differences within those studies. And then today what we're really going to look at is the results from the Emerging Technology Study. Just a couple of quick things here. We had 811 CIOs participate, which represents about 380 billion in annual IT spend, so the results from this study matter. We had almost 75 Fortune 100s take it. So, again, we're really measuring how private emerging providers are doing in the largest organizations. And so today we're going to be reviewing notable sectors, but largely this survey tracks roughly 356 private technologies and frameworks. >> All right, guys, bring up the pie chart, the next slide. Now, Sagar, this is sort of a snapshot here, and it basically says that 44% of CIOs agree that COVID has decreased the organization's evaluation and utilization of emerging tech, despite what I mentioned, Eric Bradley's VENN, which suggested one CIO in particular said, "Hey, I always pick somebody in the lower left of the magic quadrant." But, again, this is a static view. I know we have some other data, but take us through this, and how this compares to other surveys that you've done. >> No problem. So let's start with the high level takeaways. And I'll actually kind of get into the point that Eric was debating, 'cause that point is true. It's just really how you kind of slice and dice the data to get to that. So, what you're looking at here, and what the overall takeaway from the Emerging Technology Study was, is, you know, you are going to see notable declines in POCs, or proof-of-concepts, and evaluations because of COVID-19. 
Even though we had been communicating for quite some time, you know, the last few months, that there's increasing pressure for companies to further digitize with COVID-19, there are IT budget constraints. There is a huge pivot in IT resources towards supporting remote employees, a decrease in risk tolerance, and so that's why what you're seeing here is a rather notable number of CIOs, 44%, who said that they are decreasing their organization's evaluation and utilization of private emerging providers. So that is notable. >> Now, as you pointed out, you guys run this survey a couple of times a year. So now let's look at the time series. Guys, if you bring up the next chart. We can see how the sentiment has changed since last year. And, of course, we're isolating here on some of the larger companies. So, take us through what this data means. >> No problem. So, how do we quantify what we just saw in the prior slide? We saw 44% of CIOs indicating that they are going to be decreasing their evaluations. But what exactly does that mean? We can pretty much determine that by looking at a lot of the data that we captured through our Emerging Technology Study. There's a lot going on in this slide, but I'll walk you through it. What you're looking at here is Fortune 1000 organizations, so we've really isolated the data to those organizations that matter. So, let's start with the teal, kind of green line first, because I think it's a little bit easier to understand. What you're looking at is Fortune 1000 evaluations, both planned and current, okay? And you're looking at a time series, one year ago and six months ago. So, two of the answer options that we provide CIOs in this survey, right, think about the survey as a grid, where you have seven answer options going horizontally, and then 300-plus vendors and technologies going vertically. 
For any given vendor, they can essentially indicate one of these options, two of them being I am currently evaluating them or I plan to evaluate them in six months. So what you're looking at here is effectively the aggregate number, or the average number, of Fortune 1000 evaluations. So if you look at May 2019, all the way on the left of that chart, that 24% roughly means that a quarter of selections made by Fortune 1000 respondents in the survey were plan to evaluate or currently evaluating. If you fast-forward six months, to the middle of the chart, November '19, it's roughly the same, one in four technologies that Fortune 1000 respondents selected, they indicated that I plan to or am currently evaluating them. But now look at that big drop off going into May 2020, the 17%, right? So now only one out of every six selections that they made was an evaluation. So a very notable drop. And then if you look at the blue line, this is another answer option that we provided CIOs: I'm aware of the technology but I have no plans to evaluate. So this answer option essentially tracks awareness levels. If you look at the last six months, look at that big uptick from 44% to over 50%, right? So now, essentially one out of every two private technologies that a CIO is aware of, they have no plans to evaluate. So this is going to have an impact on the general landscape, when we think about those private emerging providers. But there is one caveat, and, Dave, this is what you mentioned earlier, this is what Eric was talking about. The providers that are doing well are the ones that are work-from-home aligned. And so, just like a few years ago, when we were really analyzing results based on are you cloud-native or are you cloud-aligned, because those technologies were going to do the best, what we're seeing in the emerging space is now the same thing. 
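As an aside, the arithmetic behind that teal line is straightforward to sketch. The following Python toy uses invented answer strings and made-up selections, not ETR's actual survey schema, but it shows how a grid of per-vendor answers rolls up into the evaluation percentages Sagar is quoting:

```python
from collections import Counter

# Each respondent marks one answer option per vendor, so the survey grid is
# effectively a list of (respondent, vendor, answer) selections.
EVALUATING = {"plan to evaluate", "currently evaluating"}

def evaluation_rate(selections):
    """Share of all selections that are planned or current evaluations."""
    counts = Counter(answer for _, _, answer in selections)
    total = sum(counts.values())
    evaluating = sum(counts[a] for a in EVALUATING)
    return evaluating / total if total else 0.0

# Toy data: 4 of the 6 selections below are evaluations.
toy = [
    ("cio1", "vendorA", "currently evaluating"),
    ("cio1", "vendorB", "aware, no plans to evaluate"),
    ("cio2", "vendorA", "plan to evaluate"),
    ("cio2", "vendorB", "currently evaluating"),
    ("cio3", "vendorA", "plan to evaluate"),
    ("cio3", "vendorB", "replacing"),
]
print(round(evaluation_rate(toy), 2))  # -> 0.67
```

By this measure, the drop Sagar describes is a fall in `evaluation_rate` from roughly 0.24 in May 2019 to roughly 0.17 in May 2020 across Fortune 1000 selections.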
Those emerging providers that enable organizations to maintain productivity for their employees, essentially allowing their employees to work remotely, those emerging providers are still doing well. And that is probably the second biggest takeaway from this study. >> So now what we're seeing here is this flight to perceived safety, which, to your point, Sagar, doesn't necessarily mean good news for all enterprise tech vendors, but certainly for those that are positioned for the work-from-home pivot. So now let's take a look at a couple of sectors. We'll start with information security. We've reported for years about how the perimeter's been broken down, and that more spend was going to shift from inside the moat to a distributed network, and that's clearly what's happened as a result of COVID. Guys, if you bring up the next chart. Sagar, you take us through this. >> No problem. And as you can imagine, I think the big theme here is zero trust. So, a couple of things here. And let me just explain this chart a little bit, because we're going to be going through a couple of these. What you're seeing on the X-axis here is effectively what we're classifying as near term growth opportunity from all customers. The way we measure that, effectively, is we look at all the evaluations, current evaluations, planned evaluations, and we look at people who have evaluated and plan to utilize these vendors. The more indications you get on that, the more to the top right you're going to be. The more indications you get around I'm aware of but I don't plan to evaluate, or I'm replacing this early-stage vendor, the further down and to the left you're going to be. So, on the X-axis you have near term growth opportunity from all customers, and on the Y-axis you have near term growth opportunity from, really, the biggest shops in the world, your Global 2000, your Forbes Private 225, like Cargill, as an example, and then, of course, your federal agencies. 
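The positioning logic Sagar describes can be sketched as a simple scoring function. This Python toy is purely illustrative; the answer options, the equal weighting, and the example data are assumptions, not ETR's published methodology:

```python
# Hypothetical scoring of "near term growth opportunity": positive
# indications (evaluations, planned use) push a vendor up and to the right;
# negative ones (aware-but-no-plans, replacing) pull it down and to the left.
POSITIVE = {"currently evaluating", "plan to evaluate", "evaluated, plan to utilize"}
NEGATIVE = {"aware, no plans to evaluate", "replacing"}

def opportunity_score(answers):
    """Net positive indications as a share of all indications for one vendor."""
    pos = sum(1 for a in answers if a in POSITIVE)
    neg = sum(1 for a in answers if a in NEGATIVE)
    total = len(answers)
    return (pos - neg) / total if total else 0.0

def position(vendor_answers):
    """(x, y) chart position: x from all respondents, y from the largest shops only."""
    x = opportunity_score([a for a, _ in vendor_answers])
    y = opportunity_score([a for a, is_large in vendor_answers if is_large])
    return x, y

# Toy vendor: strong with large shops, mixed overall.
toy = [
    ("currently evaluating", True),
    ("plan to evaluate", True),
    ("aware, no plans to evaluate", False),
    ("replacing", False),
    ("currently evaluating", False),
]
print(position(toy))  # -> (0.2, 1.0)
```

A vendor like this would plot middling on the X-axis but high on the Y-axis, which is the "strong in the Global 2000" pattern the chart is designed to surface.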
So you really want to be positioned up and to the right here. So, the big takeaway here is zero trust. So, just a couple of things on this slide when we think about zero trust. As organizations accelerate their cloud and SaaS spend because of COVID-19, and, you know, what we were talking about earlier, Dave, remote work becomes the new normal, that perimeter security approach is losing appeal, because the perimeter's less defined, right? Apps and data are increasingly being stored in the cloud. That, and employees are working remotely from everywhere, and they're accessing all of these items. And so what we're seeing now is a big move into zero trust. So, if we look at that chart again, what you're going to see in that upper right quadrant are a lot of identity and access management players. And look at the bifurcation in general. This is what we were talking about earlier in terms of the landscape not doing well. Most security vendors are in that red area, you know, in the middle to the bottom. But if you look at the top right, what are you seeing here? UnifyID, Auth0, WSO2, right, all identity and access management players. These are critical in your zero trust approach, and this is one of the few areas where we are seeing upticks. You also see here BitSight, Lucideus. So that's going to be security assessment. You're seeing Vectra and Netskope and Darktrace, and a few others here, in cloud security and IDPS, Intrusion Detection and Prevention Systems. So, very few sectors are seeing an uptick, very few security sectors actually look pretty good, based on opportunities that are coming. But, essentially, all of them are in that work-from-home aligned security stack, so to speak. >> Right, and of course, as we know, as we've been reporting, buyers have options, from both established companies and these emerging companies that are public, Okta, CrowdStrike, Zscaler. 
We've seen the work-from-home pivot benefit those guys, but even Palo Alto Networks, even Cisco. I asked (other speaker drowns out speech) last week, I said, "Hey, what about this pivot to work from home? What about this zero trust?" And he said, "Look, the reality is, yes, a big part of our portfolio is exposed to that traditional infrastructure, but we have options for zero trust as well." So, from a buyer's standpoint, that perceived flight to safety, you have a lot of established vendors, and that clearly is showing up in your data. Now, the other sector that we want to talk about is database. We've been reporting a lot on database, data warehouse. So, why don't you take us through the next graphic here, if you would. >> Sagar: No problem. So, our theme here is that Snowflake is really separating itself from the pack, and, again, you can see that here. Private database and data warehousing vendors really continue to impact a lot of their public peers, and Snowflake is leading the way. We expect Snowflake to gain momentum in the next few years. And, look, there's some rumors that they're IPOing soon. And so when we think about that set-up, we like it, because as organizations transition away from hybrid cloud architectures to 100% or near-100% public cloud, Snowflake is really going to benefit. So they look good. DataStax looks pretty good, right, that's resiliency, redundancy across data centers, so we kind of like them as well. Redis Labs and MariaDB look pretty good here on the opportunity side, but we are seeing a little bit of churn, so I think Snowflake and DataStax are probably our two favorites here. And again, when you think about Snowflake, we continue to think more pervasive vendors, like Teradata and Cloudera, and some of the other larger database firms, are going to continue seeing wallet and market share losses due to some of these emerging providers. >> Yeah. 
If you could just keep that slide up for a second, I would point out, in many ways Snowflake is kind of a safer bet, you know, we talk about flight to safety, because they're well-funded, they're established. You can go from zero to Snowflake very quickly, that's sort of their mantra, if you will. But I want to point out and recognize that it is somewhat oranges and tangerines here, Snowflake being an analytical database. You take MariaDB, for instance, I look at that, anyway, as relational and operational. And then you mentioned DataStax. I would say Couchbase, Redis Labs, Aerospike. Cockroach is really a... key-value store. You've got some non-relational databases in there. But we're looking at the entire sector of databases, which has become a really interesting market. But again, some of those established players are going to do very well, and I would put Snowflake on that cusp. As you pointed out, Bloomberg broke the story, I think last week, that they were contemplating an IPO, which we've known for a while. >> Yeah. And just one last thing on that. We do like some of the more pervasive players, right. Obviously, AWS, all their products, Redshift and DynamoDB. Microsoft looks really good. It's just really some of the other legacy ones, like the Teradatas, the Oracles, the Hadoops, right, that are going to be impacted. And so the cloud providers look really good. >> So, the last decade has really brought forth this whole notion of DevOps, infrastructure as code, the whole API economy. And that's the piece we want to jump into now. And there are some real stand-outs here, you know, despite the early data that we showed you, where CIOs are less prone to look at emerging vendors. There are some, for instance, if you bring up the next chart, guys, like Hashi, that really are standing out, aren't they? >> That's right, Dave. So, again, what you're seeing here is you're seeing that bifurcation that we were talking about earlier. 
There are a lot of infrastructure software vendors that are not positioned well, but if you look at the ones at the top right that are positioned well... We have two kinds of things on here, starting with infrastructure automation. We think a winner here is emerging with Terraform. Look all the way up to the right, how well-positioned they are, how many opportunities they're getting. And for the second straight survey now, Terraform is leading among their peers, Chef, Puppet, SaltStack. And they're leading their peers in so many different categories, notably on allocating more spend, which is obviously very important. For Chef, Puppet and SaltStack, which you can see a little bit below, probably a little bit higher than the middle, we are seeing some elevated churn levels. And so, really, Terraform looks like they're kind of separating themselves. And we've got this great quote from a CIO just a few months ago, on why Terraform is likely pulling away, and I'll read it out here quickly. "The Terraform tool creates an entire infrastructure in a box. Unlike vendors that use procedural languages, like Ansible and Chef, it will show you the infrastructure in the way you want it to be. You don't have to worry about the things that happen underneath." I know some companies where you can put your entire Amazon infrastructure through Terraform. If Amazon disappears, if your availability drops, load balancers, RDS, everything, you just run Terraform and everything will be created in 10 to 15 minutes. So that shows you the power of Terraform and why we think it's ranked better than some of the other vendors. >> Yeah, I think that really does sum it up. And, actually, guys, if you don't mind bringing that chart back up again. So, to point out, Mitchell Hashimoto of Hashi, really, I believe I'm correct, talking to Stu about this a little bit, he sort of led the Terraform project, which is an open source project, and, to your point, very easy to deploy. 
Chef, Puppet, Salt, they were largely disrupted by cloud, because they're designed to automate deployment largely on-prem and for DevOps, and now Terraform sort of packages everything up into a platform. So, Hashi actually makes money, and you'll see it on this slide, on things like Vault, which is kind of their security play. You see GitLab on here. That's really application tooling to deploy code. You see Docker containers, you know, Docker, really all about open source, and they've had great adoption. Docker's challenge has always been monetization. You see Turbonomic on here, which is application resource management. You can't go too deep on these things, but it's pretty deep within this sector. We are comparing different types of companies, but just to give you a sense as to where the momentum is. All right, let's wrap here. So maybe some final thoughts, Sagar, on the Emerging Technology Study, and then what we can expect in the coming month here, on the update in the Technology Spending Intention Study, please. >> Yeah, no problem. One last thing on the zero trust side that has been a big issue that we didn't get to cover is VPN spend. Our data is pointing out that, yes, even though VPN spend did increase the last few months because of remote work, we actually think that people are going to move away from that as they move onto zero trust. So just one last point on that. Just in terms of overall thoughts, you know, again, as we covered, you can see how bifurcated all these spaces are. Really, if we were to go sector by sector by sector, right, storage and blockchain and ML/AI and all that stuff, you would see there's a few or maybe one or two vendors doing well, and the majority of vendors are not seeing as many opportunities. And so, again, are you work-from-home aligned? Are you the best vendor of all the other emerging providers? And if you fit those two criteria then you will continue seeing POCs and evaluations. 
And if you don't fit that criteria, unfortunately, you're going to see fewer opportunities. So I think that's really the big takeaway on that. And then, just in terms of next steps, we're already transitioning now to our next Technology Spending Intention Survey. That launched last week. And so, again, we're going to start getting a feel for how CIOs are spending in 2H-20, right, so, for the back half of the year. And our question changes a little bit. We ask them, "How do you plan on spending in the back half of the year versus how you actually spent in the first half of the year, or 1H-20?" So, we're kind of tightening the screws, so to speak, and really getting an idea of what spend is going to look like in the back half, and we're also going to get some updates as it relates to budget impacts from COVID-19, as well as how vendor relationships have changed, as well as business impacts, like layoffs and furloughs, and all that stuff. So we have a tremendous amount of data that's going to be coming in the next few weeks, and it should really prepare us for what to see over the summer and into the fall. >> Yeah, very excited, Sagar, to see that. I just wanted to double down on what you said about changes in networking. We've reported with you guys on MPLS networks shifting to SD-WAN. But even VPN and SD-WAN are being called into question as the internet becomes the new private network. And so lots of changes there. And again, very excited to see updated data post-COVID, as we exit this isolation economy. Really want to point out to folks that this is not a snapshot survey, right? This is an ongoing exercise that ETR runs, and we're grateful for our partnership with you guys. Check out ETR.plus, that's the ETR website. I publish weekly on Wikibon.com and SiliconANGLE.com. Sagar, thanks so much for coming on. Once again, great to have you. >> Thank you so much for having me, Dave. I really appreciate it, as always. 
>> And thank you for watching this episode of theCUBE Insights, powered by ETR. This is Dave Vellante. We'll see you next time. (gentle music)