Breaking Analysis: Databricks faces critical strategic decisions…here’s why


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Spark became a top level Apache project in 2014, and then shortly thereafter, burst onto the big data scene. Spark, along with the cloud, transformed and in many ways, disrupted the big data market. Databricks optimized its tech stack for Spark and took advantage of the cloud to really cleverly deliver a managed service that has become a leading AI and data platform among data scientists and data engineers. However, emerging customer data requirements are shifting into a direction that will cause modern data platform players generally and Databricks, specifically, we think, to make some key directional decisions and perhaps even reinvent themselves. Hello and welcome to this week's wikibon theCUBE Insights, powered by ETR. In this Breaking Analysis, we're going to do a deep dive into Databricks. We'll explore its current impressive market momentum. We're going to use some ETR survey data to show that, and then we'll lay out how customer data requirements are changing and what the ideal data platform will look like in the midterm future. We'll then evaluate core elements of the Databricks portfolio against that vision, and then we'll close with some strategic decisions that we think the company faces. And to do so, we welcome in our good friend, George Gilbert, former equities analyst, market analyst, and current Principal at TechAlpha Partners. George, good to see you. Thanks for coming on. >> Good to see you, Dave. >> All right, let me set this up. We're going to start by taking a look at where Databricks sits in the market in terms of how customers perceive the company and what it's momentum looks like. And this chart that we're showing here is data from ETS, the emerging technology survey of private companies. The N is 1,421. What we did is we cut the data on three sectors, analytics, database-data warehouse, and AI/ML. The vertical axis is a measure of customer sentiment, which evaluates an IT decision maker's awareness of the firm and the likelihood of engaging and/or purchase intent. The horizontal axis shows mindshare in the dataset, and we've highlighted Databricks, which has been a consistent high performer in this survey over the last several quarters. And as we, by the way, just as aside as we previously reported, OpenAI, which burst onto the scene this past quarter, leads all names, but Databricks is still prominent. You can see that the ETR shows some open source tools for reference, but as far as firms go, Databricks is very impressively positioned. Now, let's see how they stack up to some mainstream cohorts in the data space, against some bigger companies and sometimes public companies. This chart shows net score on the vertical axis, which is a measure of spending momentum and pervasiveness in the data set is on the horizontal axis. You can see that chart insert in the upper right, that informs how the dots are plotted, and net score against shared N. And that red dotted line at 40% indicates a highly elevated net score, anything above that we think is really, really impressive. And here we're just comparing Databricks with Snowflake, Cloudera, and Oracle. And that squiggly line leading to Databricks shows their path since 2021 by quarter. And you can see it's performing extremely well, maintaining an elevated net score and net range. 
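A quick note on the metric itself: ETR's precise survey methodology isn't spelled out in this episode, but net score is commonly described as the share of respondents adding or increasing spend on a platform minus the share decreasing or replacing it. Treating that description as an assumption, here is a minimal sketch of how such a spending-momentum metric could be computed:

```python
# Hypothetical illustration only: ETR's exact methodology isn't detailed in this
# episode. A commonly cited approximation of net score is the percentage of
# respondents adding or increasing spend minus the percentage decreasing or
# replacing it; flat spenders net out to zero.

def net_score(adopting: int, increasing: int, flat: int, decreasing: int, replacing: int) -> float:
    """Return a net score-style spending momentum metric as a percentage."""
    total = adopting + increasing + flat + decreasing + replacing
    if total == 0:
        raise ValueError("no survey responses")
    positive = adopting + increasing
    negative = decreasing + replacing
    return 100.0 * (positive - negative) / total

# Example with made-up response counts:
print(net_score(adopting=20, increasing=45, flat=25, decreasing=7, replacing=3))  # 55.0
```

Under that rough definition, a reading above the 40% dotted line in the charts implies far more accounts expanding spend on a platform than cutting it.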
Now it's comparable in the vertical axis to Snowflake, and it consistently is moving to the right and gaining share. Now, why did we choose to show Cloudera and Oracle? The reason is that Cloudera got the whole big data era started and was disrupted by Spark, and of course the cloud, Spark and Databricks. And Oracle, in many ways, was the target of early big data players like Cloudera. Take a listen to Cloudera's CEO at the time, Mike Olson. This is back in 2010, first year of theCUBE, play the clip. >> Look, back in the day, if you had a data problem, if you needed to run business analytics, you wrote the biggest check you could to Sun Microsystems, and you bought a great big, single box, central server, and any money that was left over, you handed to Oracle for database licenses and you installed that database on that box, and that was where you went for data. That was your temple of information. >> Okay? So Mike Olson implied that monolithic model was too expensive and inflexible, and Cloudera set out to fix that. But the best laid plans, as they say, George, what do you make of the data that we just shared? >> So where Databricks really came up out of sort of Cloudera's tailpipe was that they took big data processing, made it coherent, made it a managed service so it could run in the cloud. So it relieved customers of the operational burden. Where they're really strong, where their traditional meat and potatoes or bread and butter is, is predictive and prescriptive analytics: building and training and serving machine learning models. They've tried to move into traditional business intelligence, the more traditional descriptive and diagnostic analytics, but they're less mature there. So what that means is, the reason you see Databricks and Snowflake kind of side by side is there are many, many accounts that have both: Snowflake for business intelligence, Databricks for AI and machine learning. Where Snowflake, I'm sorry, where Databricks also did really well was in core data engineering, refining the data, the old ETL process, which kind of turned into ELT, where you load it into the analytic repository in raw form and refine it. And so people have really used both, and each is trying to get into the other. >> Yeah, absolutely. We've reported on this quite a bit. Snowflake, kind of moving into the domain of Databricks and vice versa. And the last bit of ETR evidence that we want to share in terms of the company's momentum comes from ETR's Round Tables. They're run by Erik Bradley, along with former Gartner analyst and, George, your colleague back at Gartner, Daren Brabham. And what we're going to show here is some direct quotes of IT pros in those Round Tables. There's a data science head and a CIO as well. Just to make a few call outs here, we won't spend too much time on it, but starting at the top, like all of us, we can't talk about Databricks without mentioning Snowflake. Those two get us excited. The second comment zeros in on the flexibility and the robustness of Databricks from a data warehouse perspective. And then the last point is, despite competition from cloud players, Databricks has reinvented itself a couple of times over the years. And George, we're going to lay out today a scenario that perhaps calls for Databricks to do that once again. >> Their big opportunity, and their big challenge for every tech company, is managing a technology transition. The transition that we're talking about is something that's been bubbling up, but it's really epochal. 
First time in 60 years, we're moving from an application-centric view of the world to a data-centric view, because decisions are becoming more important than automating processes. So let me let you sort of develop that. >> Yeah, so let's talk about that here. We're going to put up some bullets on precisely that point and the changing sort of customer environment. So you've got IT stacks shifting, as George just said, from application-centric silos to data-centric stacks, where the priority is shifting from automating processes to automating decisions. You know, look at RPA, there's still a lot of automation going on, but the focus on application centricity, and the data locked into those apps, that's changing. Data has historically been on the outskirts in silos, but organizations, you think of Amazon, think Uber, Airbnb, they're putting data at the core, and logic is increasingly being embedded in the data instead of the reverse. In other words, today, the data's locked inside the app, which is why you need to extract that data and stick it into a data warehouse. The point, George, is we're putting forth this new vision for how data is going to be used. And you've used this Uber example to underscore the future state. Please explain. >> Okay, so this is hopefully an example everyone can relate to. The idea is first, you're automating things that are happening in the real world and decisions that make those things happen autonomously without humans in the loop all the time. So to use the Uber example, on your phone, you call a car, you call a driver. Automatically, the Uber app then looks at what drivers are in the vicinity, what drivers are free, matches one, calculates an ETA to you, calculates a price, calculates an ETA to your destination, and then directs the driver once they're there. The point of this is that that cannot happen in an application-centric world very easily, because all these little apps, the drivers, the riders, the routes, the fares, those call on data locked up in many different apps, but they have to sit on a layer that makes it all coherent. >> But George, so if Uber's doing this, doesn't this tech already exist? Isn't there a tech platform that does this already? >> Yes, and the mission of the entire tech industry is to build services that make it possible to compose and operate similar platforms and tools, but with the skills of mainstream developers in mainstream corporations, not the rocket scientists at Uber and Amazon. >> Okay, so we're talking about horizontally scaling across the industry, and actually giving a lot more organizations access to this technology. So by way of review, let's summarize the trend that's going on today in terms of the modern data stack that is propelling the likes of Databricks and Snowflake, which we just showed you in the ETR data, and which really is a tailwind for them. So the trend is toward this common repository for analytic data. That could be multiple virtual data warehouses inside of Snowflake, but you're in that Snowflake environment, or Lakehouses from Databricks, or multiple data lakes. And we've talked about what JP Morgan Chase is doing with the data mesh and gluing data lakes together, you've got various public clouds playing in this game, and then the data is annotated to have a common meaning. In other words, there's a semantic layer that enables applications to talk to the data elements and know that they have common and coherent meaning. 
So George, the good news is this approach is more effective than the legacy monolithic models that Mike Olson was talking about, so what's the problem with this in your view? >> So today's data platforms added immense value 'cause they connected the data that was previously locked up in these monolithic apps or on all these different microservices, and that supported traditional BI and AI/ML use cases. But now if we want to build apps like Uber or Amazon.com, where they've got essentially an autonomously running supply chain and e-commerce app where humans only care for and feed it, but the thing itself is figuring out what to buy, when to buy, where to deploy it, when to ship it, we need a semantic layer on top of the data. So that, as you were saying, the data that's coming from all those different apps is integrated, not just connected, but means the same thing. And the issue is whenever you add a new layer to a stack to support new applications, there are implications for the already existing layers, like can they support the new layer and its use cases? So for instance, if you add a semantic layer that embeds app logic with the data rather than vice versa, which is what we've been talking about, and the reverse has been the case for 60 years, then the new data layer faces challenges in that the way you manage that data, the way you analyze that data, is not supported by today's tools. >> Okay, so actually Alex, bring me up that last slide if you would. I mean, you're basically saying at the bottom here, today's repositories don't really do joins at scale. The future is you're talking about hundreds or thousands or millions of data connections, and today's systems, we're talking about, I don't know, 6, 8, 10 joins, and that is the fundamental problem you're saying, is a new data era coming and existing systems won't be able to handle it? >> Yeah, one way of thinking about it is that even though we call them relational databases, when we actually want to do lots of joins or when we want to analyze data from lots of different tables, we created a whole new industry for analytic databases where you sort of munge the data together into fewer tables. So you didn't have to do as many joins because the joins are difficult and slow. And when you're going to arbitrarily join thousands, hundreds of thousands or across millions of elements, you need a new type of database. We have them, they're called graph databases, but to query them, you go back to the prerelational era in terms of their usability. >> Okay, so we're going to come back to that and talk about how you get around that problem. But let's first lay out what we think the ideal data platform of the future looks like. And again, we're going to come back to this Uber example. In this graphic that George put together, awesome. We got three layers. The application layer is where the data products reside. The example here is drivers, rides, maps, routes, ETA, et cetera. The digital version of what we were talking about in the previous slide, people, places and things. The next layer is the data layer, that breaks down the silos and connects the data elements through semantics and everything is coherent. And then at the bottom layer, the legacy operational systems feed that data layer. George, explain what's different here, the graph database element, you talk about the relational query capabilities, and why can't I just throw memory at solving this problem? 
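To make the joins-at-scale point concrete before going further, here is a minimal sketch of how relationship questions turn into joins in a SQL engine. The rider/trip/driver schema is purely illustrative, not an actual Uber or Databricks schema; the point is that each additional hop in the question adds another join, and the argument above is that a semantic layer over digital twins implies arbitrarily deep traversals that traditional relational engines and flattened analytic schemas were never designed for.

```python
# Hypothetical schema for the Uber-style example in the discussion. Each "hop"
# in the business question becomes another join in SQL.

three_hop_query = """
SELECT r.rider_id, d.driver_id, c.city_name
FROM riders r
JOIN trips   t ON t.rider_id  = r.rider_id       -- hop 1: rider -> trip
JOIN drivers d ON d.driver_id = t.driver_id      -- hop 2: trip -> driver
JOIN cities  c ON c.city_id   = d.home_city_id   -- hop 3: driver -> home city
WHERE c.city_name = 'Palo Alto'
"""

# This could be submitted through spark.sql(three_hop_query) on a lakehouse or
# via any SQL warehouse client. Three joins is easy; the argument in the episode
# is that a semantic layer implies arbitrary traversals (dozens, thousands, or
# more hops), which is where relational engines strain and where graph stores
# express the traversal naturally, at the cost of the declarative ease of SQL
# that George describes.
print(three_hop_query)
```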
>> Some of the graph databases do throw memory at the problem and maybe without naming names, some of them live entirely in memory. And what you're dealing with is a prerelational in-memory database system where you navigate between elements, and the issue with that is we've had SQL for 50 years, so we don't have to navigate, we can say what we want without how to get it. That's the core of the problem. >> Okay. So if I may, I just want to drill into this a little bit. So you're talking about the expressiveness of a graph. Alex, if you'd bring that back out, the fourth bullet, expressiveness of a graph database with the relational ease of query. Can you explain what you mean by that? >> Yeah, so graphs are great because when you can describe anything with a graph, that's why they're becoming so popular. Expressive means you can represent anything easily. They're conducive to, you might say, in a world where we now want like the metaverse, like with a 3D world, and I don't mean the Facebook metaverse, I mean like the business metaverse when we want to capture data about everything, but we want it in context, we want to build a set of digital twins that represent everything going on in the world. And Uber is a tiny example of that. Uber built a graph to represent all the drivers and riders and maps and routes. But what you need out of a database isn't just a way to store stuff and update stuff. You need to be able to ask questions of it, you need to be able to query it. And if you go back to prerelational days, you had to know how to find your way to the data. It's sort of like when you give directions to someone and they didn't have a GPS system and a mapping system, you had to give them turn by turn directions. Whereas when you have a GPS and a mapping system, which is like the relational thing, you just say where you want to go, and it spits out the turn by turn directions, which let's say, the car might follow or whoever you're directing would follow. But the point is, it's much easier in a relational database to say, "I just want to get these results. You figure out how to get it." The graph database, they have not taken over the world because in some ways, it's taking a 50 year leap backwards. >> Alright, got it. Okay. Let's take a look at how the current Databricks offerings map to that ideal state that we just laid out. So to do that, we put together this chart that looks at the key elements of the Databricks portfolio, the core capability, the weakness, and the threat that may loom. Start with the Delta Lake, that's the storage layer, which is great for files and tables. It's got true separation of compute and storage, I want you to double click on that George, as independent elements, but it's weaker for the type of low latency ingest that we see coming in the future. And some of the threats highlighted here. AWS could add transactional tables to S3, Iceberg adoption is picking up and could accelerate, that could disrupt Databricks. George, add some color here please? >> Okay, so this is the sort of a classic competitive forces where you want to look at, so what are customers demanding? What's competitive pressure? What are substitutes? Even what your suppliers might be pushing. Here, Delta Lake is at its core, a set of transactional tables that sit on an object store. So think of it in a database system, this is the storage engine. So since S3 has been getting stronger for 15 years, you could see a scenario where they add transactional tables. 
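For readers who want to picture what "transactional tables that sit on an object store" means in practice, here is a minimal PySpark sketch of writing and reading a Delta table. It assumes a Spark environment with the Delta Lake (delta-spark) package configured, as on a Databricks cluster; the bucket path, table, and column names are illustrative.

```python
# Minimal sketch of Delta Lake as transactional tables layered over an object store.
# Assumes a Spark cluster with delta-spark configured and credentials for the
# storage location; the path and column names are illustrative only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-sketch").getOrCreate()

rides = spark.createDataFrame(
    [(1, "driver_17", 12.50), (2, "driver_42", 8.75)],
    ["ride_id", "driver_id", "fare"],
)

# Writes become ACID transactions over plain object storage (S3, ADLS, GCS).
rides.write.format("delta").mode("overwrite").save("s3://example-bucket/bronze/rides")

# Readers see a consistent snapshot, and the same files are directly readable by
# the machine learning tool chain, which is the ecosystem point made above.
df = spark.read.format("delta").load("s3://example-bucket/bronze/rides")
df.show()
```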
We have an open source alternative in Iceberg, which Snowflake and others support. But at the same time, Databricks has built an ecosystem out of tools, their own and others, that read and write to Delta tables, and that's what makes the Delta Lake an ecosystem. So they have a catalog, the whole machine learning tool chain talks directly to the data here. That was their great advantage, because in the past with Snowflake, you had to pull all the data out of the database before the machine learning tools could work with it, that was a major shortcoming. They fixed that. But the point here is that even before we get to the semantic layer, the core foundation is under threat. >> Yep. Got it. Okay. We got a lot of ground to cover. So we're going to take a look at the Spark Execution Engine next. Think of that as the refinery that runs really efficient batch processing. That's kind of what disrupted Hadoop in a large way, but it's not Python friendly, and that's an issue because the data science and the data engineering crowd are moving in that direction, and/or they're using DBT. George, we had Tristan Handy on at Supercloud, really interesting discussion that you and I did. Explain why this is an issue for Databricks? >> So once the data lake was in place, what people did was they refined their data in batch, and Spark has always had streaming support and it's gotten better. The underlying storage, as we've talked about, is an issue. But basically they took raw data, then they refined it into tables that were like customers and products and partners. And then they refined that again into what was like gold artifacts, which might be business intelligence metrics or dashboards, which were collections of metrics. But they were running it on the Spark Execution Engine, which is a Java-based engine, or it's running on a Java-based virtual machine, which means all the data scientists and the data engineers who want to work with Python are really working in sort of oil and water. Like if you get an error in Python, you can't tell whether the problem is in Python or whether it's in Spark. There's just an impedance mismatch between the two. And then at the same time, the whole world is now gravitating towards DBT because it's a very nice and simple way to compose these data processing pipelines, and people are using either SQL in DBT or Python in DBT, and that kind of is a substitute for doing it all in Spark. So it's under threat even before we get to that semantic layer, and it so happens that DBT itself is becoming the authoring environment for the semantic layer with business intelligence metrics. But again, this is the second element that's under direct substitution and competitive threat. >> Okay, let's now move down to the third element, which is Photon. Photon is Databricks' BI Lakehouse, which has integration with the Databricks tooling, which is very rich, it's newer. And it's also not well suited for high concurrency and low latency use cases, which we think are going to increasingly become the norm over time. George, the call out threat here is customers want to connect everything to a semantic layer. Explain your thinking here and why this is a potential threat to Databricks? >> Okay, so two issues here. What you were touching on, which is the high concurrency, low latency, when people are running like thousands of dashboards and data is streaming in, that's a problem, because a SQL data warehouse, the query engine, something like that matures over five to 10 years. 
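On the DBT point, dbt supports both SQL and Python models, which is part of why it appeals to the Python-first crowd George describes. Below is a hedged sketch of a dbt Python model as it might appear in a project targeting Databricks, where dbt.ref() hands back a PySpark DataFrame; the model and column names are illustrative, not taken from any real project.

```python
# Hedged sketch of a dbt "Python model" (a file such as models/silver_rides.py
# inside a dbt project). Model and column names are illustrative. On a
# Databricks target, dbt.ref() returns a PySpark DataFrame and `session` is the
# SparkSession, so the same pipeline can mix SQL models and Python models.

def model(dbt, session):
    dbt.config(materialized="table")

    rides = dbt.ref("bronze_rides")            # upstream model, defined in SQL or Python
    cleaned = rides.filter(rides.fare > 0)     # refine raw data into a "silver" table

    return cleaned                             # dbt materializes the result in the lakehouse
```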
It's one of these things, the joke that Andy Jassy makes just in general, he's really talking about Azure, but there's no compression algorithm for experience. The Snowflake guys started more than five years earlier, and for a bunch of reasons, that lead is not something that Databricks can shrink. They'll always be behind. So that's why Snowflake has transactional tables now, and we can get into that in another show. But the key point is, so near term, it's struggling to keep up with the use cases that are core to business intelligence, which is highly concurrent, lots of users doing interactive query. But then when you get to a semantic layer, that's when you need to be able to query data that might have thousands or tens of thousands or hundreds of thousands of joins. And a SQL query engine, a traditional SQL query engine, is just not built for that. That's the core problem of traditional relational databases. >> Now this is a quick aside. We always talk about Snowflake and Databricks in sort of the same context. We're not necessarily saying that Snowflake is in a position to tackle all these problems. We'll deal with that separately. So we don't mean to imply that, but we're just sort of laying out some of the things that Snowflake, or rather Databricks customers we think, need to be thinking about and having conversations with Databricks about, and we hope to have them as well. We'll come back to that in terms of sort of strategic options. But finally, when we come back to the table, we have Databricks' AI/ML Tool Chain, which has been an awesome capability for the data science crowd. It's comprehensive, it's a one-stop shop solution, but the kicker here is that it's optimized for supervised model building. And the concern is that foundational models like GPT could cannibalize the current Databricks tooling, but George, can't Databricks, like other software companies, integrate foundation model capabilities into its platform? >> Okay, so the sound bite answer to that is sure, IBM 3270 terminals could call out to a graphical user interface when they're running on the XT terminal, but they're not exactly good citizens in that world. The core issue is Databricks has this wonderful end-to-end tool chain for training, deploying, monitoring, running inference on supervised models. But the paradigm there is the customer builds and trains and deploys each model for each feature or application. In a world of foundation models, which are pre-trained and unsupervised, the entire tool chain is different. So it's not like Databricks can junk everything they've done and start over with all their engineers. They have to keep maintaining what they've done in the old world, but they have to build something new that's optimized for the new world. It's a classic technology transition, and their mentality appears to be, "Oh, we'll support the new stuff from our old stuff." Which is suboptimal, and as we'll talk about, their biggest patron and the company that put them on the map, Microsoft, really stopped working on their old stuff three years ago so that they could build a new tool chain optimized for this new world. >> Yeah, and so let's sort of close with what we think the options are and decisions that Databricks has for its future architecture. They're smart people. I mean, we've had Ali Ghodsi on many times, super impressive. I think they've got to be keenly aware of the limitations and what's going on with foundation models. But at any rate, here in this chart, we lay out sort of three scenarios. 
One is re-architect the platform by incrementally adopting new technologies. And example might be to layer a graph query engine on top of its stack. They could license key technologies like graph database, they could get aggressive on M&A and buy-in, relational knowledge graphs, semantic technologies, vector database technologies. George, as David Floyer always says, "A lot of ways to skin a cat." We've seen companies like, even think about EMC maintained its relevance through M&A for many, many years. George, give us your thought on each of these strategic options? >> Okay, I find this question the most challenging 'cause remember, I used to be an equity research analyst. I worked for Frank Quattrone, we were one of the top tech shops in the banking industry, although this is 20 years ago. But the M&A team was the top team in the industry and everyone wanted them on their side. And I remember going to meetings with these CEOs, where Frank and the bankers would say, "You want us for your M&A work because we can do better." And they really could do better. But in software, it's not like with EMC in hardware because with hardware, it's easier to connect different boxes. With software, the whole point of a software company is to integrate and architect the components so they fit together and reinforce each other, and that makes M&A harder. You can do it, but it takes a long time to fit the pieces together. Let me give you examples. If they put a graph query engine, let's say something like TinkerPop, on top of, I don't even know if it's possible, but let's say they put it on top of Delta Lake, then you have this graph query engine talking to their storage layer, Delta Lake. But if you want to do analysis, you got to put the data in Photon, which is not really ideal for highly connected data. If you license a graph database, then most of your data is in the Delta Lake and how do you sync it with the graph database? If you do sync it, you've got data in two places, which kind of defeats the purpose of having a unified repository. I find this semantic layer option in number three actually more promising, because that's something that you can layer on top of the storage layer that you have already. You just have to figure out then how to have your query engines talk to that. What I'm trying to highlight is, it's easy as an analyst to say, "You can buy this company or license that technology." But the really hard work is making it all work together and that is where the challenge is. >> Yeah, and well look, I thank you for laying that out. We've seen it, certainly Microsoft and Oracle. I guess you might argue that well, Microsoft had a monopoly in its desktop software and was able to throw off cash for a decade plus while it's stock was going sideways. Oracle had won the database wars and had amazing margins and cash flow to be able to do that. Databricks isn't even gone public yet, but I want to close with some of the players to watch. Alex, if you'd bring that back up, number four here. AWS, we talked about some of their options with S3 and it's not just AWS, it's blob storage, object storage. Microsoft, as you sort of alluded to, was an early go-to market channel for Databricks. We didn't address that really. So maybe in the closing comments we can. Google obviously, Snowflake of course, we're going to dissect their options in future Breaking Analysis. Dbt labs, where do they fit? Bob Muglia's company, Relational.ai, why are these players to watch George, in your opinion? 
>> So everyone is trying to assemble and integrate the pieces that would make building data applications and data products easy. And the critical part isn't just assembling a bunch of pieces, which is traditionally what AWS did. It's a Unix ethos, which is we give you the tools, you put 'em together, 'cause you then have the maximum choice and maximum power. So what the hyperscalers are doing is they're taking their key value stores, in the case of AWS it's DynamoDB, in the case of Azure it's Cosmos DB, and each is putting a graph query engine on top of those. So they have a unified storage and graph database engine, like all the data would be collected in the key value store. Then you have a graph database, and that's how they're going to be presenting a foundation for building these data apps. Dbt labs is putting a semantic layer on top of data lakes and data warehouses, and as we'll talk about, I'm sure in the future, that makes it easier to swap out the underlying data platform or swap in new ones for specialized use cases. Snowflake, what they're doing, they're so strong in data management, and with their transactional tables, what they're trying to do is take in the operational data that used to be in the province of many state stores like MongoDB and say, "If you manage that data with us, it'll be connected to your analytic data without having to send it through a pipeline." And that's hugely valuable. Relational.ai is the wildcard, 'cause what they're trying to do, it's almost like a holy grail, where you're trying to take the expressiveness of connecting all your data in a graph but making it as easy to query as you've always had it in a SQL database or, I should say, in a relational database. And if they do that, it's sort of like, it'll be as easy to program these data apps as a spreadsheet was compared to procedural languages, like BASIC or Pascal. Those are the implications of Relational.ai. >> Yeah, and again, we talked before, why can't you just throw this all in memory? We're talking, in that example, about really getting down to differences in how you lay the data out on disk, in really a new database architecture, correct? >> Yes. And that's why it's not clear that you could take a data lake, or even a Snowflake, and put a relational knowledge graph on those. You could potentially put a graph database, but it'll be compromised, because to really do what Relational.ai has done, which is the ease of relational on top of the power of graph, you actually need to change how you're storing your data on disk or even in memory. So you can't, in other words, it's not like, oh, we can add graph support to Snowflake, 'cause if you did that, you'd have to change, or in your data lake, you'd have to change, how the data is physically laid out. And then that would break all the tools that talk to that currently. >> What, in your estimation, is the timeframe where this becomes critical for a Databricks and potentially Snowflake and others? I mentioned earlier midterm, are we talking three to five years here? Are we talking end of decade? What's your radar say? >> I think something surprising is going on that's going to sort of come up the tailpipe and take everyone by storm. 
All the hype around business intelligence metrics, which is what we used to put in our dashboards where bookings, billings, revenue, customer, those things, those were the key artifacts that used to live in definitions in your BI tools, and DBT has basically created a standard for defining those so they live in your data pipeline or they're defined in their data pipeline and executed in the data warehouse or data lake in a shared way, so that all tools can use them. This sounds like a digression, it's not. All this stuff about data mesh, data fabric, all that's going on is we need a semantic layer and the business intelligence metrics are defining common semantics for your data. And I think we're going to find by the end of this year, that metrics are how we annotate all our analytic data to start adding common semantics to it. And we're going to find this semantic layer, it's not three to five years off, it's going to be staring us in the face by the end of this year. >> Interesting. And of course SVB today was shut down. We're seeing serious tech headwinds, and oftentimes in these sort of downturns or flat turns, which feels like this could be going on for a while, we emerge with a lot of new players and a lot of new technology. George, we got to leave it there. Thank you to George Gilbert for excellent insights and input for today's episode. I want to thank Alex Myerson who's on production and manages the podcast, of course Ken Schiffman as well. Kristin Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our EIC over at Siliconangle.com, he does some great editing. Remember all these episodes, they're available as podcasts. Wherever you listen, all you got to do is search Breaking Analysis Podcast, we publish each week on wikibon.com and siliconangle.com, or you can email me at David.Vellante@siliconangle.com, or DM me @DVellante. Comment on our LinkedIn post, and please do check out ETR.ai, great survey data, enterprise tech focus, phenomenal. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, and we'll see you next time on Breaking Analysis.

Published Date : Mar 10 2023


Breaking Analysis: MWC 2023 goes beyond consumer & deep into enterprise tech


 

>> From theCUBE Studios in Palo Alto in Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. >> While never really meant to be a consumer tech event, the rapid ascendancy of smartphones sucked much of the air out of Mobile World Congress over the years, now MWC. And while the device manufacturers continue to have a major presence at the show, the maturity of intelligent devices, longer life cycles, and the disaggregation of the network stack, have put enterprise technologies front and center in the telco business. Semiconductor manufacturers, network equipment players, infrastructure companies, cloud vendors, software providers, and a spate of startups are eyeing the trillion dollar plus communications industry as one of the next big things to watch this decade. Hello, and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis, we bring you part two of our ongoing coverage of MWC '23, with some new data on enterprise players specifically in large telco environments, a brief glimpse at some of the pre-announcement news and corresponding themes ahead of MWC, and some of the key announcement areas we'll be watching at the show on theCUBE. Now, last week we shared some ETR data that showed how traditional enterprise tech players were performing, specifically within the telecoms vertical. Here's a new look at that data from ETR, which isolates the same companies, but cuts the data for what ETR calls large telco. The N in this cut is 196, down from 288 last week when we included all company sizes in the dataset. Now remember the two dimensions here, on the y-axis is net score, or spending momentum, and on the x-axis is pervasiveness in the data set. The table insert in the upper left informs how the dots and companies are plotted, and that red dotted line, the horizontal line at 40%, that indicates a highly elevated net score. Now while the data are not dramatically different in terms of relative positioning, there are a couple of changes at the margin. So just going down the list and focusing on net score. Azure is comparable, but slightly lower in this sector in the large telco than it was overall. Google Cloud comes in at number two, and basically swapped places with AWS, which drops slightly in the large telco relative to overall telco. Snowflake is also slightly down by one percentage point, but maintains its position. Remember Snowflake, overall, its net score is much, much higher when measuring across all verticals. Snowflake comes down in telco, and relative to overall, a little bit down in large telco, but it's making some moves to attack this market that we'll talk about in a moment. Next are Red Hat OpenStack and Databricks. About the same in large tech telco as they were an overall telco. Then there's Dell next that has a big presence at MWC and is getting serious about driving 16G adoption, and new servers, and edge servers, and other partnerships. Cisco and Red Hat OpenShift basically swapped spots when moving from all telco to large telco, as Cisco drops and Red Hat bumps up a bit. And VMware dropped about four percentage points in large telco. Accenture moved up dramatically, about nine percentage points in big telco, large telco relative to all telco. HPE dropped a couple of percentage points. Oracle stayed about the same. And IBM surprisingly dropped by about five points. 
So look, I understand not a ton of change in terms of spending momentum in the large sector versus telco overall, but some deltas. The bottom line for enterprise players is one, they're just getting started in this new disruption journey that they're on as the stack disaggregates. Two, all these players have experience in delivering horizontal solutions, but now working with partners and identifying big problems to be solved, and three, many of these companies are generally not the fastest moving firms relative to smaller disruptive disruptors. Now, cloud has been an exception in fairness. But the good news for the legacy infrastructure and IT companies is that the telco transformation and the 5G buildout is going to take years. So it's moving at a pace that is very favorable to many of these companies. Okay, so looking at just some of the pre-announcement highlights that have hit the wire this week, I want to give you a glimpse of the diversity of innovation that is occurring in the telecommunication space. You got semiconductor manufacturers, device makers, network equipment players, carriers, cloud vendors, enterprise tech companies, software companies, startups. Now we've included, you'll see in this list, we've included OpeRAN, that logo, because there's so much buzz around the topic and we're going to come back to that. But suffice it to say, there's no way we can cover all the announcements from the 2000 plus exhibitors at the show. So we're going to cherry pick here and make a few call outs. Hewlett Packard Enterprise announced an acquisition of an Italian private cellular network company called AthoNet. Zeus Kerravala wrote about it on SiliconANGLE if you want more details. Now interestingly, HPE has a partnership with Solana, which also does private 5G. But according to Zeus, Solona is more of an out-of-the-box solution, whereas AthoNet is designed for the core and requires more integration. And as you'll see in a moment, there's going to be a lot of talk at the show about private network. There's going to be a lot of news there from other competitors, and we're going to be watching that closely. And while many are concerned about the P5G, private 5G, encroaching on wifi, Kerravala doesn't see it that way. Rather, he feels that these private networks are really designed for more industrial, and you know mission critical environments, like factories, and warehouses that are run by robots, et cetera. 'Cause these can justify the increased expense of private networks. Whereas wifi remains a very low cost and flexible option for, you know, whatever offices and homes. Now, over to Dell. Dell announced its intent to go hard after opening up the telco network with the announcement that in the second half of this year it's going to begin shipping its infrastructure blocks for Red Hat. Remember it's like kind of the converged infrastructure for telco with a more open ecosystem and sort of more flexible, you know, more mature engineered system. Dell has also announced a range of PowerEdge servers for a variety of use cases. A big wide line bringing forth its 16G portfolio and aiming squarely at the telco space. Dell also announced, here we go, a private wireless offering with airspan, and Expedo, and a solution with AthoNet, the company HPE announced it was purchasing. So I guess Dell and HPE are now partnering up in the private wireless space, and yes, hell is freezing over folks. We'll see where that relationship goes in the mid- to long-term. 
Dell also announced new lab and certification capabilities, which we said last week was going to be critical for the further adoption of open ecosystem technology. So props to Dell for, you know, putting real emphasis and investment in that. AWS also made a number of announcements in this space including private wireless solutions and associated managed services. AWS named Deutsche Telekom, Orange, T-Mobile, Telefonica, and some others as partners. And AWS announced the stepped up partnership, specifically with T-Mobile, to bring AWS services to T-Mobile's network portfolio. Snowflake, back to Snowflake, announced its telecom data cloud. Remember we showed the data earlier, it's Snowflake not as strong in the telco sector, but they're continuing to move toward this go-to market alignment within key industries, realigning their go-to market by vertical. It also announced that AT&T, and a number of other partners, are collaborating to break down data silos specifically in telco. Look, essentially, this is Snowflake taking its core value prop to the telco vertical and forming key partnerships that resonate in the space. So think simplification, breaking down silos, data sharing, eventually data monetization. Samsung previewed its future capability to allow smartphones to access satellite services, something Apple has previously done. AMD, Intel, Marvell, Qualcomm, are all in the act, all the semiconductor players. Qualcomm for example, announced along with Telefonica, and Erickson, a 5G millimeter network that will be showcased in Spain at the event this coming week using Qualcomm Snapdragon chipset platform, based on none other than Arm technology. Of course, Arm we said is going to dominate the edge, and is is clearly doing so. It's got the volume advantage over, you know, traditional Intel, you know, X86 architectures. And it's no surprise that Microsoft is touting its open AI relationship. You're going to hear a lot of AI talk at this conference as is AI is now, you know, is the now topic. All right, we could go on and on and on. There's just so much going on at Mobile World Congress or MWC, that we just wanted to give you a glimpse of some of the highlights that we've been watching. Which brings us to the key topics and issues that we'll be exploring at MWC next week. We touched on some of this last week. A big topic of conversation will of course be, you know, 5G. Is it ever going to become real? Is it, is anybody ever going to make money at 5G? There's so much excitement around and anticipation around 5G. It has not lived up to the hype, but that's because the rollout, as we've previous reported, is going to take years. And part of that rollout is going to rely on the disaggregation of the hardened telco stack, as we reported last week and in previous Breaking Analysis episodes. OpenRAN is a big component of that evolution. You know, as our RAN intelligent controllers, RICs, which essentially the brain of OpenRAN, if you will. Now as we build out 5G networks at massive scale and accommodate unprecedented volumes of data and apply compute-hungry AI to all this data, the issue of energy efficiency is going to be front and center. It has to be. Not only is it a, you know, hot political issue, the reality is that improving power efficiency is compulsory or the whole vision of telco's future is going to come crashing down. So chip manufacturers, equipment makers, cloud providers, everybody is going to be doubling down and clicking on this topic. Let's talk about AI. 
AI as we said, it is the hot topic right now, but it is happening not only in consumer, with things like ChatGPT. And think about the theme of this Breaking Analysis in the enterprise, AI in the enterprise cannot be ChatGPT. It cannot be error prone the way ChatGPT is. It has to be clean, reliable, governed, accurate. It's got to be ethical. It's got to be trusted. Okay, we're going to have Zeus Kerravala on the show next week and definitely want to get his take on private networks and how they're going to impact wifi. You know, will private networks cannibalize wifi? If not, why not? He wrote about this again on SiliconANGLE if you want more details, and we're going to unpack that on theCUBE this week. And finally, as always we'll be following the data flows to understand where and how telcos, cloud players, startups, software companies, disruptors, legacy companies, end customers, how are they going to make money from new data opportunities? 'Cause we often say in theCUBE, don't ever bet against data. All right, that's a wrap for today. Remember theCUBE is going to be on location at MWC 2023 next week. We got a great set. We're in the walkway in between halls four and five, right in Congress Square, stand CS-60. Look for us, we got a full schedule. If you got a great story or you have news, stop by. We're going to try to get you on the program. I'll be there with Lisa Martin, co-hosting, David Nicholson as well, and the entire CUBE crew, so don't forget to come by and see us. I want to thank Alex Myerson, who's on production and manages the podcast, and Ken Schiffman, as well, in our Boston studio. Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our editor-in-chief over at SiliconANGLE.com. He does some great editing. Thank you. All right, remember all these episodes they are available as podcasts wherever you listen. All you got to do is search Breaking Analysis podcasts. I publish each week on Wikibon.com and SiliconANGLE.com. All the video content is available on demand at theCUBE.net, or you can email me directly if you want to get in touch David.Vellante@SiliconANGLE.com or DM me @DVellante, or comment on our LinkedIn posts. And please do check out ETR.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching. We'll see you next week at Mobile World Congress '23, MWC '23, or next time on Breaking Analysis. (bright music)

Published Date : Feb 25 2023


Breaking Analysis: MWC 2023 highlights telco transformation & the future of business


 

>> From the Cube Studios in Palo Alto in Boston, bringing you data-driven insights from The Cube and ETR. This is "Breaking Analysis" with Dave Vellante. >> The world's leading telcos are trying to shed the stigma of being monopolies lacking innovation. Telcos have been great at operational efficiency and connectivity and living off of transmission, and the costs and expenses or revenue associated with that transmission. But in a world beyond telephone poles and basic wireless and mobile services, how will telcos modernize and become more agile and monetize new opportunities brought about by 5G and private wireless and a spate of new innovations and infrastructure, cloud data and apps? Hello, and welcome to this week's Wikibon CUBE Insights powered by ETR. In this breaking analysis and ahead of Mobile World Congress or now, MWC23, we explore the evolution of the telco business and how the industry is in many ways, mimicking transformations that took place decades ago in enterprise IT. We'll model some of the traditional enterprise vendors using ETR data and investigate how they're faring in the telecommunications sector, and we'll pose some of the key issues facing the industry this decade. First, let's take a look at what the GSMA has in store for MWC23. GSMA is the host of what used to be called Mobile World Congress. They've set the theme for this year's event as "Velocity" and they've rebranded MWC to reflect the fact that mobile technology is only one part of the story. MWC has become one of the world's premier events highlighting innovations not only in Telco, mobile and 5G, but the collision between cloud, infrastructure, apps, private networks, smart industries, machine intelligence, and AI, and more. MWC comprises an enormous ecosystem of service providers, technology companies, and firms from virtually every industry including sports and entertainment. And as well, GSMA, along with its venue partner at the Fira Barcelona, have placed a major emphasis on sustainability and public and private partnerships. Virtually every industry will be represented at the event because every industry is impacted by the trends and opportunities in this space. GSMA has said it expects 80,000 attendees at MWC this year, not quite back to 2019 levels, but trending in that direction. Of course, attendance from Chinese participants has historically been very high at the show, and obviously the continued travel issues from that region are affecting the overall attendance, but still very strong. And despite these concerns, Huawei, the giant Chinese technology company. has the largest physical presence of any exhibitor at the show. And finally, GSMA estimates that more than $300 million in economic benefit will result from the event which takes place at the end of February and early March. And The Cube will be back at MWC this year with a major presence thanks to our anchor sponsor, Dell Technologies and other supporters of our content program, including Enterprise Web, ArcaOS, VMware, Snowflake, Cisco, AWS, and others. And one of the areas we're interested in exploring is the evolution of the telco stack. It's a topic that's often talked about and one that we've observed taking place in the 1990s when the vertically integrated IBM mainframe monopoly gave way to a disintegrated and horizontal industry structure. And in many ways, the same thing is happening today in telecommunications, which is shown on the left-hand side of this diagram. 
Historically, telcos have relied on a hardened, integrated, and incredibly reliable, and secure set of hardware and software services that have been fully vetted and tested, and certified, and relied upon for decades. And at the top of that stack on the left are the crown jewels of the telco stack, the operational support systems and the business support systems. For the OSS, we're talking about things like network management, network operations, service delivery, quality of service, fulfillment assurance, and things like that. For the BSS systems, these refer to customer-facing elements of the stack, like revenue, order management, what products they sell, billing, and customer service. And what we're seeing is telcos have been really good at operational efficiency and making money off of transport and connectivity, but they've lacked the innovation in services and applications. They own the pipes and that works well, but others, be the over-the-top content companies, or private network providers and increasingly, cloud providers have been able to bypass the telcos, reach around them, if you will, and drive innovation. And so, the right-most diagram speaks to the need to disaggregate pieces of the stack. And while the similarities to the 1990s in enterprise IT are greater than the differences, there are things that are different. For example, the granularity of hardware infrastructure will not likely be as high where competition occurred back in the 90s at every layer of the value chain with very little infrastructure integration. That of course changed in the 2010s with converged infrastructure and hyper-converged and also software defined. So, that's one difference. And the advent of cloud, containers, microservices, and AI, none of that was really a major factor in the disintegration of legacy IT. And that probably means that disruptors can move even faster than did the likes of Intel and Microsoft, Oracle, Cisco, and the Seagates of the 1990s. As well, while many of the products and services will come from traditional enterprise IT names like Dell, HPE, Cisco, Red Hat, VMware, AWS, Microsoft, Google, et cetera, many of the names are going to be different and come from traditional network equipment providers. These are names like Ericsson and Huawei, and Nokia, and other names, like Wind River, and Rakuten, and Dish Networks. And there are enormous opportunities in data to help telecom companies and their competitors go beyond telemetry data into more advanced analytics and data monetization. There's also going to be an entirely new set of apps based on the workloads and use cases ranging from hospitals, sports arenas, race tracks, shipping ports, you name it. Virtually every vertical will participate in this transformation as the industry evolves its focus toward innovation, agility, and open ecosystems. Now remember, this is not a binary state. There are going to be greenfield companies disrupting the apple cart, but the incumbent telcos are going to have to continue to ensure newer systems work with their legacy infrastructure, in their OSS and BSS existing systems. And as we know, this is not going to be an overnight task. Integration is a difficult thing, transformations, migrations. So that's what makes this all so interesting because others can come in with Greenfield and potentially disrupt. There'll be interesting partnerships and ecosystems will form and coalitions will also form. 
Now, we mentioned that several traditional enterprise companies are or will be playing in this space. Now, ETR doesn't have a ton of data on specific telecom equipment and software providers, but it does have some interesting data that we cut for this Breaking Analysis. What we're showing here in this graphic is some of the names that we've followed over the years and how they're faring. Specifically, we did the cut within the telco sector. So the Y-axis here shows net score or spending velocity. And the horizontal axis shows the presence or pervasiveness in the data set. And that table insert in the upper left informs as to how the dots are plotted, you know, the two columns there, net score and the Ns. And that red dotted line, that horizontal line at 40%, that is an indicator of a highly elevated level. Anything above that, we consider quite outstanding. And what we'll do now is we'll comment on some of the cohorts and share with you how they're doing in telecommunications, in that sector, that vertical, relative to their position overall in the data set. Let's start with the public cloud players. They're prominent in every industry, and telecommunications is no exception, and it's quite an interesting cohort here. On the one hand, they can help telecommunication firms modernize and become more agile by eliminating the heavy lifting, you know, the data center costs, and delivering the cloud value prop. At the same time, public cloud players are bringing their services to the edge, building out their own global networks, and are a disruptive force to traditional telcos. All right, let's talk about Azure first. Its net score in telco is basically identical to its overall average. AWS's net score is higher in telco by just a few percentage points. Google Cloud Platform is eight percentage points higher in telco with a 53% net score. So all three hyperscalers have an equal or stronger presence in telco than their average overall. Okay, let's look at the traditional enterprise hardware and software infrastructure cohort: Dell, Cisco, HPE, Red Hat, VMware, and Oracle. We've highlighted these in this chart just as sort of indicators or proxies. Dell's net score is 10 percentage points higher in telco than its overall average. Interesting. Cisco's is a bit higher. HPE's is actually lower by about nine percentage points in the ETR survey, and VMware's is lower by about four percentage points. Now, Red Hat is really interesting. OpenStack, as we've previously reported, is popular with telcos who want to build out their own private cloud. And the data shows that Red Hat OpenStack's net score is 15 percentage points higher in the telco sector than its overall average. OpenShift, on the other hand, has a net score that's four percentage points lower in telco than its overall average. So this to us speaks to the pace of adoption of microservices and containers. You know, it's going to happen, but it's going to happen more slowly. Finally, Oracle's spending momentum is somewhat lower in the sector than its average, despite the firm having a decent telco business. IBM and Accenture, heavy services companies, are both lower in this sector than their average. And real quickly, Snowflake's net score is much lower, by about 12 percentage points, relative to its very high average net score of 62%. But we look for them to be a player in this space as telcos need to modernize their analytics stack and share data in a governed manner.
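As an editor's aside, the net score arithmetic referenced throughout this segment can be illustrated with a short sketch. The formula below follows the way ETR's methodology is commonly described, the share of accounts adopting or increasing spend minus the share decreasing or replacing, and the survey response counts used here are invented purely for illustration.

```python
# Illustrative only: hypothetical survey responses for one vendor within a sector cut.
# Net score is commonly described as (% adopting + % increasing spend)
# minus (% decreasing spend + % replacing); flat spenders count in N but not in the net.

def net_score(adopting: int, increasing: int, flat: int, decreasing: int, replacing: int) -> float:
    """Return net score as a percentage of all respondents in the cut."""
    n = adopting + increasing + flat + decreasing + replacing
    positive = adopting + increasing
    negative = decreasing + replacing
    return 100.0 * (positive - negative) / n

# Hypothetical telco-sector cut vs. a hypothetical overall cut for the same vendor.
telco = net_score(adopting=8, increasing=30, flat=50, decreasing=7, replacing=5)     # 26.0
overall = net_score(adopting=10, increasing=38, flat=42, decreasing=6, replacing=4)  # 38.0

print(f"telco net score:   {telco:.1f}%")
print(f"overall net score: {overall:.1f}%")
print(f"difference:        {telco - overall:+.1f} percentage points")
```

Statements like "Dell is 10 points higher in telco" are simply this difference computed on the sector cut versus the full survey.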
Databricks' net score is also much lower than its average, by about 13 points. And similarly, I would expect them to be a player as open architectures and cloud gain steam in telco. All right, let's close out now on what we're going to be talking about at MWC23 and some of the key issues that we'll be unpacking. We've talked about stack disaggregation in this Breaking Analysis, but the key here will be the pace at which it will reach the operational efficiency and reliability of closed stacks. Telcos, you know, in large part, are engineering-heavy firms, and much of their work takes place, kind of in the basement, in the dark. It's not really a big public hype machine, and they tend to move slowly and cautiously. While they understand the importance of agility, they're going to be careful because, you know, it's in their DNA. At the same time, if they don't move fast enough, they're going to get hurt and disrupted by competitors. So that's going to be a topic of conversation, and we'll be looking for proof points. And the other comment I'll make is around integration. Telcos, because of their conservatism, will benefit from better testing, and those firms that can innovate on the testing front, that have labs and certifications and innovate at that level with an ecosystem, are going to be in a better position. Because open sometimes means wild west. So the more players like Dell, HPE, Cisco, Red Hat, et cetera, that do that, align with their ecosystems, and provide those resources, the faster adoption is going to go. So we'll be looking for, you know, who's actually doing that. Open RAN, or Radio Access Networks, fits in this discussion because O-RAN is an emerging network architecture. It essentially enables the use of open technologies from an ecosystem, and over time, O-RAN is going to be open, but a lot of questions remain as to when it will be able to deliver the operational efficiency of traditional RAN. Got some interesting dynamics going on. Rakuten is a company that's working hard on this problem, really focusing on operational efficiency. Then you've got Dish Network. They're also embracing O-RAN, but they're coming at it more from the service innovation angle. So that's something that we'll be monitoring and unpacking. We're going to look at cloud as a disruptor. On the one hand, cloud can help drive agility, as we said earlier, and optionality and innovation for incumbent telcos. But the flip side is it's also going to do the same for startups trying to disrupt, and cloud attracts startups. While some of the telcos are actually embracing the cloud, many are being cautious. So that's going to be an interesting topic of discussion. And there's private wireless networks and 5G, and hyperlocal private networks being deployed, you know, at the edge. This idea of open edge is also a really hot topic, and this trend is going to accelerate. You know, the importance here is that the use cases are going to be widely varied. The needs of a hospital are going to be different from those of a sports venue, which are different from a remote drilling location in energy, or a concert venue. Things like real-time AI inference and data flows are going to bring new services and monetization opportunities. And many firms are going to be bypassing traditional telecommunications networks to build these out. Satellites as well. In this decade, you're going to look down at Google Earth and you're going to see real-time imagery.
You know, today you see snapshots, and so there's lots of innovation going on in that space. So how is this going to disrupt industries and traditional industry structures? Now, as always, we'll be looking at the data angles, right? 'Cause it's in theCUBE's DNA to follow the data and what opportunities and risks data brings. theCUBE is going to be on location at MWC23 at the end of the month. We've got a great set. We're in the walkway between halls four and five, right in Congress Square, at booth CS60, or stand CS60 as they call it there. We have a full schedule. I'm going to be there with Lisa Martin, Dave Nicholson, and the entire CUBE crew, so don't forget to stop by. All right, that's a wrap. I want to thank Alex Myerson, who's on production and manages the podcast, and Ken Schiffman as well. Kristin Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our editor-in-chief over at SiliconANGLE, does some great stuff for us. Thank you all. Remember, all these episodes are available as podcasts. Wherever you listen, just search "Breaking Analysis" podcast. I publish each week on wikibon.com and siliconangle.com. And all the video content is available on demand at thecube.net. You can email me directly at david.vellante@siliconangle.com. You can DM me at dvellante or comment on my LinkedIn posts. Please do check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching and we'll see you at Mobile World Congress, and/or next time on "Breaking Analysis." (bright music) (bright music fades)

Published Date : Feb 18 2023


Daren Brabham & Erik Bradley | What the Spending Data Tells us About Supercloud


 

(gentle synth music) (music ends) >> Welcome back to Supercloud 2, an open industry collaboration between technologists, consultants, analysts, and of course practitioners to help shape the future of cloud. At this event, one of the key areas we're exploring is the intersection of cloud and data, and how building value on top of hyperscale clouds and across clouds is evolving, a concept of course we call "Supercloud". And we're pleased to welcome our friends from Enterprise Technology Research, Erik Bradley and Darren Brabham. Guys, thanks for joining us, great to see you. We love to bring the data into these conversations. >> Thank you for having us, Dave, I appreciate it. >> Yeah, thanks. >> You bet. And so, let me do the setup on what is Supercloud. It's a concept that we floated before re:Invent 2021, based on the idea that cloud infrastructure is becoming ubiquitous, incredibly powerful, but there's a lack of standards across the big three clouds. That creates friction. So over a period of time, you know, the better part of a year, we defined a set of essential elements and deployment models for so-called supercloud, which create this common experience for specific cloud services that, of course, again, span multiple clouds and even on-premise data. So Erik, with that as background, I wonder if you could add your general thoughts on the term supercloud, maybe play proxy for the CIO community, 'cause you do these round tables, you talk to these guys all the time, you gather a lot of amazing information from senior IT DMs that complement your survey. So what are your thoughts on the term and the concept? >> Yeah, sure. I'll even go back to last year when you and I did our predictions panel, right? And we threw it out there. And to your point, you know, there are some haters. Anytime you throw out a new term, "Is it marketing buzz? Is it worth it? Why are you even doing it?" But you know, from my own perspective, and then also speaking to the IT DMs that we interview on a regular basis, this is just a natural evolution. It's something that's inevitable in enterprise tech, right? The internet was not built for what it has become. It was never intended to be the underlying infrastructure of our daily lives and work. The cloud also was not built to be what it's become. But where we're at now is, we have to figure out what the cloud is and what it needs to be to be scalable, resilient, secure, and have the governance wrapped around it. And to me that's what supercloud is. It's a way to define, operationally, what the next generation, the continued iteration and evolution of the cloud, needs to be. And that's what the supercloud means to me. And whether you want to call it metacloud or supercloud, it doesn't matter. The point is that we're trying to define the next layer, the next future of work, which is inevitable in enterprise tech. Now, from the IT DM perspective, I have two interesting call outs. One is from basically a senior developer in IT architecture and DevSecOps who says he uses the term all the time. And the reason he uses the term is because multi-cloud has a stigma attached to it when he is talking to his business executives. (David chuckles) The stigma is because it's complex and it's expensive. So he switched to supercloud to better explain to his business executives and his CFO and his CIO what he's trying to do. And we can get into more later about what it means to him.
But the inverse of that, of course, is that a good CSO friend of mine at a very large enterprise says the concern with supercloud is the reduction of complexity. And I'll explain: he believes anything that takes the requirement of specific expertise out of the equation, even a little bit, as a CSO worries him. So as you said, David, there are always two sides to the coin, but I do believe supercloud is a relevant term, and it is necessary because the cloud is continuing to be defined. >> You know, that's really interesting too, 'cause you know, Darren, we use Snowflake a lot as an example, sort of an early supercloud, and you think from a security standpoint, we've always pushed Amazon and, "Are you ever going to kind of abstract the complexity away from all these primitives?" and their position has always been, "Look, if we produce these primitives, and offer these primitives, we can move as the market moves. When you abstract, then it becomes harder to peel the layers." But Darren, from a data standpoint, like I say, we use Snowflake a lot. I think of like Tim Berners-Lee when Web 2.0 came out, he said, "Well this is what the internet was always supposed to be." So in a way, you know, supercloud is maybe what multi-cloud was supposed to be. But I mean, you think about data sharing, Darren, across clouds, it's always been a challenge. Snowflake is always, you know, obviously trying to solve that problem, as are others. But what are your thoughts on the concept? >> Yeah, I think the concept fits, right? It is reflective of, it's a paradigm shift, right? Things, as a pendulum, have swung back and forth between needing to piece together a bunch of different tools that have specific unique use cases and are best in breed in what they do, and then focusing on the duct tape that holds 'em all together and all the engineering complexity and skill. It shifted from that end of the pendulum all the way back to, "Let's streamline this, let's simplify it. Maybe we have budget crunches and we need to consolidate tools or eliminate tools." And so then you kind of see this back and forth over time. And with data and analytics, for instance, a lot of organizations were trying to bring the data closer to the business. That's where we saw self-service analytics coming in. And tools like Snowflake, what they did was they helped point to different databases, they helped unify data, and organize it in a single place that was, you know, in a sense neutral, away from a single cloud vendor or a single database, and allowed the business to kind of be more flexible in how it brought stuff together and provided it out to the business units. So Snowflake was an example of one of those times where we pulled back from the granular, multiple points of the spear, back to a simple way to do things. And I think Snowflake has continued to kind of keep that mantle to a degree, and we see other tools trying to do that, but that's all it is. It's a paradigm shift back to this kind of meta abstraction layer that kind of simplifies what is the reality, that you need a complex, multi-use case, multi-region way of doing business. And it sort of reflects the reality of that.
(laughs) Others like Walmart are saying, "Hey, you know, we actually are building an abstraction layer ourselves, take advantage of it." So my general question to both of you is, is this a concept, is the lack of standards across clouds, you know, really a problem, you know, or is supercloud a solution looking for a problem? Or do you hear from practitioners that "No, this is really an issue, we have to bring together a set of standards to sort of unify our cloud estates." >> Allow me to answer that at a higher level, and then we're going to hand it over to Dr. Brabham because he is a little bit more detailed on the real-time streaming analytics use cases, which I think is where we're going to get to. But to answer that question, it really depends on the size and the complexity of your business. At the very large enterprise, Dave, yes, a hundred percent. This needs to happen. There is complexity, not only in the compute and actually deploying the applications, but in the governance and the security around them. But for lower-end, you know, business use cases, and for smaller businesses, it's a little less necessary. You certainly don't need to have all of these. Some of the things that come to mind from the interviews that Darren and I have done are, you know, financial services; if you're doing real-time trading, anything that has real-time data metrics involved in your transactions is going to be necessary. And another use case that we hear about is in online travel agencies. So I think it is very relevant, the complexity does need to be solved, and I'll allow Darren to explain a little bit more about how that's used from an analytics perspective. >> Yeah, go for it. >> Yeah, exactly. I mean, I think any modern, you know, multinational company that's going to have a footprint in the US and Europe, in China, or works in different areas like manufacturing, where you're probably going to have on-prem instances that will stay on-prem forever, for various performance reasons, you have these complicated governance and security and regulatory issues. So inherently, I think, large multinational companies and/or companies that are in certain areas like finance or, you know, online e-commerce, or things that need real-time data, they inherently are going to have a very complex environment that's going to need to be managed in some kind of cleaner way. You know, they're looking for one door to open, one pane of glass to look at, one thing to do to manage these multiple points. And streaming's a good example of that. I mean, not every organization has a real-time streaming use case, and may not ever, but a lot of organizations do, a lot of industries do. And so there's this need to use, you know, they want to use open-source tools, they want to use Apache Kafka for instance. They want to use different megacloud vendors' offerings, like Google Pub/Sub or, you know, Amazon Kinesis Firehose. They have all these different pieces they want to use for different use cases at different stages of maturity or proof of concept, you name it. They're going to have to have this complexity. And I think that's why we're seeing this need to have sort of this supercloud concept, to juggle all this, to wrangle all of it. 'Cause the reality is, it's complex and you have to simplify it somehow. >> Great, thank you guys. All right, let's bring up the graphic and take a look.
Anybody who follows "Breaking Analysis," which is co-branded theCUBE Insights powered by ETR, knows we like to bring data to the table. ETR does amazing survey work every quarter, 1,200-plus to 1,500 practitioners that answer a number of questions. The vertical axis here is net score, which is ETR's proprietary methodology, a measure of spending momentum, spending velocity. And the horizontal axis here is overlap, which is the presence or pervasiveness in the dataset. The table insert on the bottom right shows you how the dots are plotted, the net score and then the Ns in the survey. And what we've done is we've plotted a bunch of the so-called supercloud suspects. Let's start in the upper right, the cloud platforms. Without these hyperscale clouds, you can't have a supercloud. And as always, Azure and AWS, up and to the right; it's amazing we're talking about, you know, an 80-plus billion dollar company in AWS. Azure's business, if you just look at the IaaS, is in the 50 billion range, I mean it's just amazing to me the net scores here. Anything above 40% we consider highly elevated. And you got Azure and you got Snowflake, Databricks, HashiCorp, we'll get to them. And you got AWS, you know, right up there at that size, it's quite amazing. With really big Ns as well, you know, 700-plus Ns in the survey. So, you know, kind of half the survey actually has these platforms. So my question to you guys is, what are you seeing in terms of cloud adoption within the big three cloud players? I wonder if you could comment, maybe Erik, you could start. >> Yeah, sure. Now we're talking data, now I'm happy. So yeah, we'll get into some of it. Right now, the January 2023 TSIS is approaching 1,500 survey respondents. One caveat, it's not closed yet, it will close on Friday, but with an N that big, we are well past statistical significance. We also recently did a cloud survey, and there's a couple of key points on that I want to get into before we get into individual vendors. What we're seeing here is that annual spend on cloud infrastructure is expected to grow at almost a 70% CAGR over the next three years. The percentage of workloads on cloud infrastructure is expected to grow to over 70% over three years as well. And as you mentioned, Azure and AWS are still dominant. However, we're seeing some share shift spreading around a little bit. Now to get into the individual vendors you mentioned, yes, Azure is still number one, AWS is number two. What we're seeing, which is incredibly interesting, Cloudflare is number three. It's actually beating GCP. That's the first time we've seen it. What I do want to state is this is on net score only, which is our measure of spending intentions. When you talk about actual pervasion in the enterprise, it's not even close. But from a spending velocity intention point of view, Cloudflare is now number three, above GCP, and even Salesforce is creeping up to be at GCP's level. So what we're seeing here is a continued domination by Azure and AWS, but some of these other players maybe might fit into your moniker. And I definitely want to talk about Cloudflare more in a bit, but I'm going to stop there. But what we're seeing is some of these other players that fit into your supercloud moniker are starting to creep up, Dave. >> Yeah, I just want to clarify.
So as you also know, we track IaaS and PaaS revenue and we try to extract it. AWS reports in its quarterly earnings, you know, just IaaS and PaaS; they don't have a SaaS play, a little bit maybe, whereas Microsoft and Google include their applications, and so we extract those out, and if you do that, AWS is bigger. But in the surveys, you know, customers see cloud, and SaaS to them is cloud. So that's one of the reasons why you see, you know, Microsoft as larger in pervasion. If you bring up that survey again, Alex, the survey results, you see them further to the right and they have higher spending momentum, which is consistent with what you see in the earnings calls. Now, it's interesting about Cloudflare, because the CEO of Cloudflare, and Cloudflare itself, actually uses the term supercloud, basically saying, "Hey, we're building a new type of internet." So what are your thoughts? Do you have additional information on Cloudflare, Erik, that you want to share? I mean, you've seen them pop up. I mean this is a really interesting company that is pretty forward-thinking and vocal about how it's disrupting the industry. >> Sure, we've been tracking 'em for a long time, even from the disruption of just a traditional CDN, where they took on Akamai, and what they're doing. But for me, the definition of a true supercloud provider can't just be one instance. You have to have multiple. So it's not just the cloud, it's the networking aspect on top of it, it's also security. And to me, Cloudflare is the only one that has all of it. They actually have the ability to offer all of those things. Whereas you look at some of the other names, they're still piggybacking on the infrastructure or platform as a service of the hyperscalers. Cloudflare does not need to; they actually have the cloud, the networking, and the security all themselves. So to me that lends credibility to their own internal usage of that moniker supercloud. And also, again, just what we're seeing right here, that their net score is now creeping above GCP, really does state it. And then just one real last thing: one of the other things we do in our surveys is we track adoption and replacement reasoning. And when you look at Cloudflare's adoption rate, which is extremely high, it's based on technical capabilities, the breadth of their feature set; it's also based on what we call the ability to avoid stack alignment. So those are, again, really supporting reasons that make Cloudflare a top candidate for your moniker of supercloud. >> And they've also announced an object store (chuckles) and a database. So, you know, that's going to be, it takes a while as you well know, to get database adoption going, but you know, they're ambitious and going for it. All right, let's bring the chart back up, and I want to focus Darren in on the ecosystem now. And really, we've identified Snowflake and Databricks, it's always fun to talk about those guys, and there are a number of other, you know, data platforms out there, but we use those two as really proxies for leaders. We got a bunch of the backup guys, the data protection folks, Rubrik, Cohesity, and Veeam. They're sort of in a cluster, although Rubrik, you know, is ahead of those guys in terms of spending momentum. And then VMware Tanzu and Red Hat as sort of the cross-cloud platforms. But I want to focus, Darren, on the data piece of it. We're seeing a lot of activity around data sharing, governed data sharing.
Databricks is using Delta Sharing as their sort of play, and Snowflake's is sort of this walled garden, like the app store. What are your thoughts on, you know, in the context of Supercloud, cross-cloud capabilities for the data platforms? >> Yeah, good question. You know, I think Databricks is an interesting player because they have made some interesting moves with their data lakehouse technology. So they're trying to kind of complicate, or not complicate, they're trying to take away the complications of, you know, the downsides of data warehousing and data lakes, and trying to find that middle ground, where you have the benefits of a managed, governed, you know, data warehouse environment, but you have sort of the lower cost, you know, capability of a data lake. And so, you know, Databricks has become really attractive, especially to data scientists, right? We've been tracking them in the AI and machine learning sector for quite some time here at ETR. It's attractive for a data scientist because it looks and acts like a lake, but can have some managed capabilities like a warehouse. So it's kind of the best of both worlds. So in some ways I think you've seen sort of a data science driver for the adoption of Databricks that has now become a little bit more mainstream across the business. Snowflake, maybe the other direction, you know, it's a cloud data warehouse that, you know, is starting to expand its capabilities and add on new things; Streamlit is a good example in the analytics space, with apps. So you see these tools starting to branch and creep out a bit, but they offer that sort of neutrality, right? We heard one IT decision maker we recently interviewed refer to Snowflake and Databricks as the quote unquote Switzerland of what they do. And so there's this desirability from an organization to find these tools that can solve the complex, multi-headed use case of data and analytics, which every business unit needs in different ways, and figure out a way to do that, an elegant way that's governed and centrally managed, that federated kind of best of both worlds that you get by bringing the data close to the business while having a central governed instance. So these tools are incredibly powerful, and I think there's only going to be room for growth for those two especially. I think they're going to expand and do different things and maybe, you know, join forces with others, and a lot of the power of what they do well is trying to define these connections and find these partnerships with other vendors, and try to be seen as the nice add-on to your existing environment that plays nicely with everyone. So I think that's where those two tools are going, but they certainly fit this sort of label of, you know, trying to be that supercloud neutral, you know, layer that unites everything. >> Yeah, and if you bring the graphic back up, please, there's obviously big data plays in each of the cloud platforms, you know, Microsoft, big database player, AWS has, you know, 11, 12, 15 data stores. And of course, you know, BigQuery and other, you know, data platforms within Google. But you know, I'm not sure the big cloud guys are going to go hard after so-called supercloud, cross-cloud services. Although we see Oracle getting in bed with Microsoft and Azure, with a database service that is cross-cloud, certainly Google with Anthos, and you know, you never say never with AWS.
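Since governed, cross-cloud data sharing keeps coming up here, this is a minimal sketch of what consuming a governed share can look like from the consumer side. It assumes the open-source delta-sharing Python client and a provider-issued profile file; the profile path and the share, schema, and table names are hypothetical placeholders, not real endpoints.

```python
# Minimal consumer-side sketch using the open delta-sharing protocol client.
# pip install delta-sharing   (assumes the provider has issued a *.share profile file)
import delta_sharing

# Hypothetical profile file and table coordinates supplied by the data provider.
profile = "config.share"
table_url = f"{profile}#sales_share.retail.orders"

# List what the provider has chosen to expose to this recipient.
client = delta_sharing.SharingClient(profile)
for table in client.list_all_tables():
    print(table)

# Load one shared table into pandas without copying it into our own cloud account.
orders = delta_sharing.load_as_pandas(table_url)
print(orders.head())
```

The point of the sketch is the governance model: the provider controls what shows up in the share, and the consumer reads it in place, across cloud boundaries, rather than standing up yet another copy.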
I guess what I would say, guys, and I'll leave you with this, is that, you know, just like all players today are cloud players, I feel like anybody in the business, or most companies, are going to be so-called supercloud players. In other words, they're going to have a cross-cloud strategy, they're going to try to build connections if they're coming from on-prem like a Dell or an HPE, you know, or Pure, or, you know, many of these other companies, Cohesity is another one. They're going to try to connect to their on-prem estates, of course, and create a consistent experience. It's natural that they're going to have sort of some consistency across clouds. You know, the big question is, what does that spectrum look like? I think on the one hand you're going to have some, you know, maybe some rudimentary, you know, instances of supercloud, or maybe they just run on the individual clouds, versus where Snowflake and others, and even beyond that, are trying to go with a single global instance, basically building out what I would think of as their own cloud, and importantly their own ecosystem. I'll give you guys the last thought. Maybe you could each give us, you know, closing thoughts. Maybe Darren, you could start, and Erik, you could bring us home on just this entire topic, the future of cloud and data. >> Yeah, I mean I think, you know, there are two points to make on that. This question of these, I guess what we'll call legacy on-prem players, these mega vendors that have been around a long time, have big on-prem footprints, and a lot of people have them for that reason. I think it's foolish to assume that a company, especially a large, mature, multinational company that's been around a long time, can just uproot and leave on-premises entirely, full scale. There will almost always be an on-prem footprint from any company that was not, you know, natively born in the cloud after 2010, right? I just don't think that's reasonable anytime soon. I think there are some industries that need on-prem, things like, you know, industrial manufacturing and so on. So I don't think on-prem is going away, and I think vendors that are going to, you know, go very cloud forward, very big on the cloud, if they neglect having at least decent connectors to on-prem legacy vendors, they're going to miss out. So I think that's something that these players need to keep in mind: that they continue to reach back to some of these players that have big footprints on-prem, and make sure that those integrations are seamless and work well, or else their customers will always have a multi-cloud or hybrid experience. And then I think a second point here about the future is, you know, we talk about the three big, you know, cloud providers, Google, Microsoft, and AWS, as sort of the opposite of, or different from, this new supercloud paradigm that's emerging. But I want to kind of point out that they will always try to make a play to become that, and I think, you know, we'll certainly see someone like Microsoft trying to expand their licensing and expand how they play in order to become that supercloud provider for folks. So I also don't want to downplay them. I think you're going to see those three big players continue to move, and take over what players like Cloudflare are doing, and try to, you know, cut them off before they get too big. So, keep an eye on them as well.
>> Great points. I mean, I think you're right on the first point: if you're Dell, HPE, Cisco, IBM, your strategy should be to make your on-prem estate as cloud-like as possible and, you know, make those differences as minimal as possible. And you know, if you're a customer, then the business case for moving off of that is going to be weak. And I think you're right. I think the cloud guys, if this is a real problem, the cloud guys are going to play in there, and they're going to make some money at it. Erik, bring us home please. >> Yeah, I'm going to revert back to our data on the macro side. So to kind of support this concept of a supercloud right now, you know, Dave, you and I know we track overall spending, and what we're seeing right now is total yearly spending growth is expected to only be 4.6%. We ended 2022 at 5%, even though the year began at almost eight and a half. So this is clearly declining, and in that environment, we're seeing the top two strategies to reduce spend are actually vendor consolidation, with 36% of our respondents saying they're actively seeking a way to reduce their number of vendors and consolidate into one. That's obviously supporting a supercloud type of play. Number two is reducing excess cloud resources. So when I look at both of those combined with the drop in overall spending, I think you're on the right thread here, Dave. You know, the overall macro view that we're seeing in the data supports this happening. And if I can, real quick, a couple of names we did not touch on that I do think deserve to be in this conversation. One is HashiCorp. HashiCorp is the number one player in our infrastructure sector, with a 56% net score. It does multiple things within infrastructure and it is completely agnostic to your environment. And if we're also speaking about something that's just a singular feature, we would look at Rubrik for data backup, storage, and recovery. They're not going to offer you your full cloud or your networking, of course, but if you are looking for backup, recovery, and storage, Rubrik is also number one in that sector with a 53% net score. Two other names that deserve to be in this conversation as we watch it move and evolve. >> Great, thank you for bringing that up. Yeah, we had both of those guys in the chart and I failed to focus in on HashiCorp. And clearly a supercloud enabler. All right guys, we got to go. Thank you so much for joining us, appreciate it. Let's keep this conversation going. >> Always enjoy talking to you, Dave, thanks. >> Yeah, thanks for having us. >> All right, keep it right there for more content from Supercloud 2. This is Dave Vellante for John Furrier and the entire CUBE team. We'll be right back. (gentle synth music) (music fades)

Published Date : Feb 17 2023


Breaking Analysis: Google's Point of View on Confidential Computing


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Confidential computing is a technology that aims to enhance data privacy and security by providing encrypted computation on sensitive data and isolating data from apps in a fenced-off enclave during processing. The concept of confidential computing is gaining popularity, especially in the cloud computing space, where sensitive data is often stored and of course processed. However, there are some who view confidential computing as an unnecessary technology and a marketing ploy by cloud providers aimed at calming customers who are cloud-phobic. Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, we revisit the notion of confidential computing, and to do so, we'll invite two Google experts to the show. But before we get there, let's summarize briefly. There's not a ton of ETR data on the topic of confidential computing. I mean, it's a technology that's deeply embedded into silicon and computing architectures. But at the highest level, security remains the number one priority being addressed by IT decision makers in the coming year, as shown here. And this data is pretty much across the board, by industry, by region, by size of company. I mean, we dug into it, and the only slight deviation from the mean is in financial services. The second and third most cited priorities, cloud migration and analytics, are noticeably closer to cybersecurity in financial services than in other sectors, likely because financial services has always been hyper security-conscious, but security is still a clear number one priority in that sector. The idea behind confidential computing is to better address threat models for data in execution. Protecting data at rest and data in transit has long been a focus of security approaches, but more recently, silicon manufacturers have introduced architectures that separate data and applications from the host system. Arm, Intel, AMD, Nvidia, and other suppliers are all on board, as are the big cloud players. Now the argument against confidential computing is that it narrowly focuses on memory encryption and it doesn't solve the biggest problems in security. Multiple system images, updates, different services, and the entire code flow aren't directly addressed by memory encryption; rather, to truly attack these problems, many believe that OSs need to be re-engineered with the attacker and hacker in mind. There are so many variables, and at the end of the day, critics say the emphasis on confidential computing made by cloud providers is overstated and largely hype. This tweet from security researcher Rodrigo Branco sums up the sentiment of many skeptics. He says, "Confidential computing is mostly a marketing campaign for memory encryption. It's not driving the industry towards the hard open problems. It is selling an illusion." Okay. Nonetheless, encrypting data in use and fencing off key components of the system isn't a bad thing, especially if it comes with the package essentially for free. There has been a lack of standardization and interoperability between different confidential computing approaches. But the Confidential Computing Consortium was established in 2019, ostensibly to accelerate the market and influence standards.
Notably, AWS is not part of the consortium, likely because the politics of the consortium were a conundrum for AWS; the base technology defined by the consortium is seen as limiting by AWS. This is my guess, not AWS's words, but I think joining the consortium would validate a definition which AWS isn't aligned with. And two, it's got a lead with its Annapurna acquisition. It was way ahead with Arm integration, and so it probably doesn't feel the need to validate its competitors. Anyway, one of the premier members of the Confidential Computing Consortium is Google, along with many high-profile names including Arm, Intel, Meta, Red Hat, Microsoft, and others. And we're pleased to welcome two experts on confidential computing from Google to unpack the topic. Nelly Porter is head of product for GCP confidential computing and encryption, and Dr. Patricia Florissi is the technical director for the office of the CTO at Google Cloud. Welcome Nelly and Patricia, great to have you. >> Great to be here. >> Thank you so much for having us. >> You're very welcome. Nelly, why don't you start, and then Patricia, you can weigh in. Just tell the audience a little bit about each of your roles at Google Cloud. >> So I'll start. I own a lot of interesting activities in Google, and again, security, or infrastructure security, is what I usually own. And we are talking about encryption, and when encryption and confidential computing are part of the portfolio, an additional area that I contribute to, together with my team, for Google and our customers, is secure software supply chain. Because you need to trust your software. Does it operate in your confidential environment? Having an end-to-end story about whether you believe that your software and your environment are doing what you expect, that's my role. >> Got it. Okay. Patricia? >> Well, I am a technical director in the office of the CTO, OCTO for short, in Google Cloud. And we are a global team. We include former CTOs like myself and senior technologists from large corporations, institutions, and a lot of successful startups as well. And we have two main goals. First, we walk side by side with some of our largest, most strategic customers, and we help them solve complex engineering and technical problems. And second, we advise Google and Google Cloud engineering, product management, and tech leadership on emerging trends and technologies to guide the trajectory of our business. We are a unique group, I think, because we have created this collaborative culture with our customers. And within OCTO, I spend a lot of time collaborating with customers and the industry at large on technologies that can address privacy, security, and sovereignty of data in general. >> Excellent. Thank you for that, both of you. Let's get into it. So Nelly, what is confidential computing? From Google's perspective, how do you define it? >> Confidential computing is a tool, and it's still one of the tools in our toolbox. And confidential computing is a way we help our customers complete this very interesting end-to-end lifecycle of their data. When customers bring data to the cloud, they want to protect it as they ingest it to the cloud, and they protect it at rest when they store data in the cloud. But what was missing for many, many years is the ability for us to continue protecting our customers' data and workloads while they are running them.
And again, because data is not brought to the cloud to sit in some huge graveyard, we need to ensure that this data is actually indexed. Again, there are insights driven and drawn from this data. You have to process this data, and confidential computing is here to help. Now we have end-to-end protection of our customers' data when they bring their workloads and data to the cloud, thanks to confidential computing. >> Thank you for that. Okay, we're going to get into the architecture a bit, but before we do, Patricia, why do you think this topic of confidential computing is such an important technology? Can you explain? Do you think it's transformative for customers, and if so, why? >> Yeah, I would maybe like to use one thought, one way, one intuition behind why confidential computing matters, because at the end of the day, it reduces more and more the customer's trust boundary and the attack surface. It's about reducing that periphery, the boundary within which the customer needs to worry about trust and safety. And in a way, it's a natural progression that you're using encryption to secure and protect the data. In the same way that we are encrypting data in transit and at rest, now we are also encrypting data while in use. And among other benefits, I would say one of the most transformative ones is that organizations will be able to collaborate with each other and retain the confidentiality of the data. And that is across industries; even though it's highly focused on, I wouldn't say highly focused, but very beneficial for highly regulated industries, it applies to all industries. And if you look at financing, for example, where banks are trying to detect fraud, and specifically double financing, where a customer is actually trying to get financing on an asset, let's say a boat or a house, and then goes to another bank and gets another loan on that same asset. Now banks would be able to collaborate and detect fraud while preserving confidentiality and privacy of the data. >> Interesting. And I want to understand that a little bit more, but I'm going to push you a little bit on this, Nelly, if I can, because there's a narrative out there that says confidential computing is a marketing ploy, I talked about this upfront, by cloud providers that are just trying to placate people who are scared of the cloud. And I'm presuming you don't agree with that, but I'd like you to weigh in here. The argument is confidential computing is just memory encryption and it doesn't address many other problems. It is over-hyped by cloud providers. What do you say to that line of thinking? >> I absolutely disagree, as you can imagine, with this statement, but most importantly, we are mixing multiple concepts, I guess. And exactly as Patricia said, we need to look at the end-to-end story, not just the mechanism of how confidential computing executes and protects a customer's data, but why it's so critically important. Because what confidential computing was able to do, in addition to isolating our tenants in multi-tenant environments, is allow the cloud provider to offer additional, stronger isolation. It's called cryptographic isolation. It's why customers will have more trust, both in the other tenants running on the same host, and also in us, because they don't need to worry about threats and malicious attempts to penetrate the environment.
So what confidential computing is helping us offer our customers is stronger isolation between tenants in this multi-tenant environment, but also, incredibly important, stronger isolation of our customers, the tenants, from us. We are also writing code; we, as software providers, will also make mistakes or have some zero-days. Sometimes, again, introduced by us, sometimes introduced by our adversaries. But what I'm trying to say is, by creating this cryptographic layer of isolation between us and our tenants, and amongst those tenants, we're really providing meaningful security to our customers and eliminating some of the worries that they have running in multi-tenant spaces, or even collaborating together on this very sensitive data, knowing that this particular protection is available to them. >> Okay, thank you. Appreciate that. And I think malicious code is often a threat model missed in these narratives. Operator access, yeah, maybe I trust my cloud provider, but if I can fence off your access even better, I'll sleep better at night. Separating code from the data, everybody, Arm, Intel, AMD, Nvidia, and others, they're all doing it. I wonder, Nelly, if we could stay with you and bring up the slide on the architecture. What's architecturally different with confidential computing versus how operating systems and VMs have worked traditionally? We're showing a slide here with some VMs, maybe you could take us through that. >> Absolutely. And Dave, the whole idea for Google, and now the industry's way of dealing with confidential computing, is to ensure that three main properties are actually preserved. Customers don't need to change the code. They can operate on those VMs exactly as they would with normal, non-confidential VMs, to give them this opportunity of lift and shift, of not changing their apps, while performing with very, very, very low latency and scaling as any cloud can, something that Google actually pioneered in confidential computing. I think we need to open up and explain how this magic was actually done. And as I said, the whole entire system has to change to be able to provide this magic. And I would start with this concept of root of trust, where we ensure that this machine, the whole entire host, has an integrity guarantee, meaning nobody is changing my code at the lowest level of the system. And we introduced this in 2017; it's called Titan. It was our specific ASIC, again, on each and every system, on every single motherboard that we have, that ensures that your low-level firmware, your actual system code, your kernel, the most powerful system, is properly configured and not changed, not tampered with. We do it for everybody, confidential computing included. But for confidential computing, what we had to change is we bring in AMD, or, again, future silicon vendors, and we have to trust their firmware, their way of dealing with our confidential environments. And that's why we have an obligation to validate integrity, not only of our software and our firmware, but also the firmware and software of our vendors, the silicon vendors. So when we are booting this machine, as you can see, we validate that the integrity of the whole system is in place. It means nobody is touching it, nobody is changing it, nobody is modifying it. But then we have this concept of the AMD secure processor, a special ASIC that generates a key for every single VM that our customers will run, or every single node in Kubernetes, or every single worker thread in our Hadoop or Spark capability. We offer all of that.
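To make the "no application changes" point concrete, here is a minimal sketch of requesting an AMD SEV-based Confidential VM on Compute Engine with the google-cloud-compute Python client. The project, zone, and image values are placeholders, and the field names follow my reading of the public Compute API (a confidential_instance_config flag, an N2D machine type, and terminate-on-maintenance scheduling), so treat this as a sketch to verify against current docs rather than a definitive recipe.

```python
# Sketch: request a Confidential VM (AMD SEV) on Google Compute Engine.
# pip install google-cloud-compute   -- project/zone/image below are placeholders.
from google.cloud import compute_v1

def create_confidential_vm(project: str, zone: str, name: str) -> None:
    instance = compute_v1.Instance()
    instance.name = name
    # Confidential VMs require an AMD-based machine type such as N2D.
    instance.machine_type = f"zones/{zone}/machineTypes/n2d-standard-2"

    # The flag that asks the platform for memory encryption with per-VM keys.
    instance.confidential_instance_config = compute_v1.ConfidentialInstanceConfig(
        enable_confidential_compute=True
    )
    # Confidential VMs cannot live-migrate, so terminate on host maintenance.
    instance.scheduling = compute_v1.Scheduling(on_host_maintenance="TERMINATE")

    boot_disk = compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            source_image="projects/ubuntu-os-cloud/global/images/family/ubuntu-2204-lts"
        ),
    )
    instance.disks = [boot_disk]
    instance.network_interfaces = [compute_v1.NetworkInterface(network="global/networks/default")]

    # Returns a long-running operation; waiting on it is omitted for brevity.
    compute_v1.InstancesClient().insert(
        project=project, zone=zone, instance_resource=instance
    )

create_confidential_vm("my-project", "us-central1-a", "demo-confidential-vm")
```

The workload inside the guest is unchanged; the only differences from an ordinary VM request are the machine family, the confidential flag, and the maintenance policy.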
And those keys are not available to us. They're the best keys ever in the encryption space, because when we are talking about encryption, the first question that I receive all the time is, where's the key? Who has access to the key? Because if you have access to the key, then it doesn't matter whether you encrypted or not. But the reason confidential computing provides such revolutionary technology is that we, the cloud providers, don't have access to the keys. They sit in the hardware and are handled by the memory controller. And it means when hypervisors, which also know about these wonderful things, say, "I need to get access to the memory that this particular VM is trying to access," they do not decrypt the data; they don't have access to the key, because those keys are random, ephemeral, and per VM, but most importantly, held in hardware and not exportable. And it means you now have this very interesting guarantee that customers or cloud providers will not be able to get access to your memory. And again, as you can see, our customers don't need to change their applications. Their VMs run exactly as they should run, and when you're running in the VM, you actually see your memory in the clear; it's not encrypted. But God forbid somebody tries to do it from outside of my confidential box, no, no, no, no, no, they would not be able to do it. That's exactly what this combination of multiple hardware pieces and software pieces has to do. So the OS is also modified. And the OS is modified in such a way as to provide integrity. It means even the OS that you're running in your VM box is not modifiable, and you, as the customer, can verify that. But the most interesting thing, I guess, is how to ensure the super performance of this environment, because you can imagine, Dave, that encryption means additional performance cost, additional time, additional latency. So we were able to mitigate all of that by providing an incredibly interesting capability in the OS itself. So our customers get no changes needed, fantastic performance, and scale as they would expect from cloud providers like Google. >> Okay, thank you. Excellent. Appreciate that explanation. So, again, the narrative on this as well: you've already given me guarantees as a cloud provider that you don't have access to my data, but this gives another level of assurance. Key management, as they say, is key. Now humans aren't managing the keys, the machines are managing them. So Patricia, my question to you is, in addition to, let's go pre-confidential computing days, what are the sort of new guarantees that these hardware-based technologies are going to provide to customers? >> So if I am a customer, I am saying I now have a full guarantee of confidentiality and integrity of the data and of the code. So if you look at code and data confidentiality, the customer cares, and they want to know whether their systems are protected from outside or unauthorized access, and as we covered with Nelly, they are. Confidential computing actually ensures that the application and data internals remain secret, right? The code is actually looking at the data; only the memory is decrypting the data, with a key that is ephemeral, per VM, and generated on demand. Then you have the second point, where you have code and data integrity, and now customers want to know whether their data was corrupted, tampered with, or impacted by outside actors. And what confidential computing ensures is that application internals are not tampered with.
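As a purely conceptual aside, and not how SEV or TDX hardware actually implements any of this, the toy below illustrates the per-VM key argument: a "memory page" is encrypted with a key that only a simulated memory-controller object holds, so a "hypervisor" reading the raw bytes gets ciphertext. The class and variable names are invented for the example.

```python
# Toy model only: real SEV/TDX key handling happens in silicon, not in Python.
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class ToyMemoryController:
    """Holds a per-VM key that is never exported outside this object."""
    def __init__(self) -> None:
        self._vm_keys: dict[str, bytes] = {}

    def launch_vm(self, vm_id: str) -> None:
        self._vm_keys[vm_id] = AESGCM.generate_key(bit_length=256)  # ephemeral, per VM

    def write_page(self, vm_id: str, plaintext: bytes) -> bytes:
        nonce = os.urandom(12)
        ciphertext = AESGCM(self._vm_keys[vm_id]).encrypt(nonce, plaintext, None)
        return nonce + ciphertext  # what actually lands in DRAM

    def read_page(self, vm_id: str, stored: bytes) -> bytes:
        nonce, ciphertext = stored[:12], stored[12:]
        return AESGCM(self._vm_keys[vm_id]).decrypt(nonce, ciphertext, None)

controller = ToyMemoryController()
controller.launch_vm("guest-1")
page = controller.write_page("guest-1", b"customer secrets in use")

# The guest, going through the controller, sees plaintext.
print(controller.read_page("guest-1", page))
# A hypervisor scraping DRAM directly only sees ciphertext bytes.
print(page.hex()[:48], "...")
```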
So the application, the workload as we call it, that is processing the data has also not been tampered with and preserves its integrity. I would also say that this is all verifiable. You have attestation, and this attestation actually generates a log trail, and the log trail provides proof that integrity was preserved. And I think it also offers a guarantee of what we call sealing, this idea that the secrets have been preserved and not tampered with: confidentiality and integrity of code and data. >> Got it. Okay, thank you. Nelly, you mentioned, I think I heard you say, that for applications it's transparent, you don't have to change the application, it essentially comes for free. And we showed some various parts of the stack before. I'm curious as to what's affected, but really more importantly, what is specifically Google's value add? How do partners participate in this ecosystem, or maybe said another way, how does Google ensure the compatibility of confidential computing with existing systems and applications? >> A fantastic question, by the way. And it's a very difficult and definitely complicated world, because to be able to provide these guarantees, a lot of work was done by the community. Google very much operates in the open, so again, for our operating systems we work with the OS repositories and OS vendors to ensure that all the capabilities we need are part of the kernels, part of the releases, and available for customers to understand and even explore, if they have fun exploring a lot of code. We have also, together with our silicon vendors, modified the host kernel to support this capability, and that means working with this community to ensure that all of those patches are there. We also worked with every single silicon vendor, as you've seen, and that's where I feel Google contributed quite a bit: we moved our industry, our community, our vendors to understand the value of easy-to-use confidential computing and of removing barriers. And now, I don't know if you noticed, Intel is following the lead and also announcing their Trust Domain Extensions, a very similar architecture. And no surprise, it's again a lot of work done with our partners to convince them, work with them and make this capability available. The same with Arm: this year, actually last year, Arm announced their future design for confidential computing, called the Confidential Computing Architecture, and it's also influenced very heavily by similar ideas from Google and the industry overall. There's also a lot of work we are doing in the Confidential Computing Consortium, for example, simply to mention one, to ensure interop, as you mentioned, between the different confidential environments of cloud providers. They want to ensure that they can attest to each other, because when you're communicating with different environments, you need to trust them. And if it's running on different cloud providers, you need to ensure that you can trust your receiver when you are sharing your sensitive data, workloads or secrets with them. So we come together as a community, and we have this attestation SIG, again a community-based effort, to build, influence and work with Arm and every other cloud provider to ensure that we can interop. It means it doesn't matter where confidential workloads are hosted, they can exchange data in a secure, verifiable way that is controlled by customers.
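The attestation flow Nelly and Patricia describe can be sketched roughly as follows. This is a simplified stand-in, using an HMAC in place of a vendor-rooted signing chain and hypothetical workload names, to show the verify-then-trust shape rather than any real attestation API.

```python
# Minimal sketch of attestation: an enclave/VM produces a signed "quote" over
# its measurement, and a relying party checks the signature and compares the
# measurement to an allow-list before trusting the environment.
import hashlib
import hmac

HARDWARE_KEY = b"stand-in-for-a-vendor-rooted-signing-key"   # hypothetical
APPROVED_WORKLOADS = {hashlib.sha256(b"analytics-workload-v3").hexdigest()}

def produce_quote(workload_image: bytes) -> dict:
    measurement = hashlib.sha256(workload_image).hexdigest()
    signature = hmac.new(HARDWARE_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return {"measurement": measurement, "signature": signature}

def verify_quote(quote: dict) -> bool:
    expected = hmac.new(HARDWARE_KEY, quote["measurement"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, quote["signature"]):
        return False                                  # not signed by trusted hardware
    return quote["measurement"] in APPROVED_WORKLOADS # running the expected workload

good = produce_quote(b"analytics-workload-v3")
bad = produce_quote(b"analytics-workload-v3-tampered")
print("trusted partner environment:", verify_quote(good))   # True
print("tampered environment:       ", verify_quote(bad))    # False
```

A receiving party would run a check like this before exchanging sensitive data with another confidential environment, which is exactly the interop problem the attestation SIG is working on.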
And to do that, we need to continue what we are doing: working in the open, again, and contributing our ideas and the ideas of our partners, so that confidential computing becomes what we believe it has to become, a utility. It doesn't need to be so special, but that's what we want it to become. >> Thank you for that explanation. Let's talk about data sovereignty, because when you think about data sharing, you think about sharing across the ecosystem and different regions, and then of course data sovereignty comes up. Typically, public policy lags the technology industry, and sometimes that's problematic. I know there's a lot of discussion about exceptions, but Patricia, we have a graphic on data sovereignty. I'm interested in how confidential computing ensures that data sovereignty and privacy edicts are adhered to, even if they're maybe out of alignment with the pace of technology. One of the frequent examples is when you delete data, can you actually prove that data is deleted with a hundred percent certainty? You've got to prove that, and there are a lot of other issues. So looking at this slide, maybe you could take us through your thinking on data sovereignty. >> Perfect. So for us, data sovereignty is only one of the three pillars of digital sovereignty, and I don't want to give the impression that confidential computing addresses it all. That's why I want to step back and say, hey, digital sovereignty includes data sovereignty, where we are giving you full control and ownership of the location, encryption and access to your data. It includes operational sovereignty, where the goal is to give our Google Cloud customers full visibility and control over the provider's operations, right? So if there are any updates to the hardware or software stack, any operations, there is full transparency, full visibility. And then the third pillar is software sovereignty, where the customer wants to ensure that they can run their workloads without dependency on the provider's software. This is often referred to as survivability: that you can actually survive if you are untethered from the cloud, and that you can use open source. Now let's take a deep dive on data sovereignty, which by the way is one of my favorite topics. We typically focus on saying, hey, we need to care about data residency. We care where the data resides, because where the data is at rest or in processing, it typically abides by the regulations of the jurisdiction where it resides. Others say, hey, let's focus on data protection. We want to ensure the confidentiality, integrity and availability of the data, and confidential computing is at the heart of that data protection. But there is yet another element that people typically don't talk about when discussing data sovereignty, which is the element of user control. And here, Dave, it's about what happens to the data when I give you access to it. This reminds me of security two decades ago, even a decade ago, where we started the security movement by putting in firewall protections and logging access. But once you were in, you were able to do everything you wanted with the data. An insider had access to all the infrastructure, the data and the code. And that's similar, because with data sovereignty we care about where the data resides and who is operating on the data.
But the moment that the data is being processed, I need to trust that the processing of the data will abide by user control, by the policies that I put in place for how my data is going to be used. And if you look at a lot of the regulation today, and a lot of the initiatives around the International Data Spaces Association, IDSA, and Gaia-X, there is a movement toward saying that the two parties, the provider of the data and the receiver of the data, are going to agree on a contract that describes what my data can be used for. The challenge is to ensure that once the data crosses boundaries, it will be used for the purposes that were intended and specified in the contract. And if you actually bring together, and this is the exciting part, confidential computing together with policy enforcement, now the policy enforcement can guarantee that the data is only processed within the confines of a confidential computing environment, that the workload is cryptographically verified to be the workload that was meant to process the data, and that the data will only be used while abiding by the confidentiality and integrity safety of the confidential computing environment. And that's why we believe confidential computing is one necessary and essential technology that will allow us to ensure data sovereignty, especially when it comes to user control. >> Thank you for that. I mean, it was a deep dive, brief, but really detailed, so I appreciate that, especially the verification of the enforcement. Last question. I met you two because, as part of my year-end predictions post, you guys sent in some predictions and I wasn't able to get to them in that post, so I'm thrilled that you were able to make the time to come on the program. How widespread do you think the adoption of confidential computing will be in '23, and what does the maturity curve look like this decade, in your opinion? Maybe each of you could give us a brief answer. >> So my prediction is that in five to seven years, as I said at the start, it'll become a utility. It'll become like TLS. Ten years ago we couldn't believe that websites would have certificates and that we would support encrypted traffic; now we do, and it's become ubiquitous. That's exactly where confidential computing is heading. I don't know that we're there yet, and it'll take a few years of maturity for us, but we will be there. >> Thank you. And Patricia, what's your prediction? >> I will double down on that and say, hey, in the very near future, you will not be able to afford not having it. I believe that as digital sovereignty becomes ever more top of mind with sovereign states, and also for multinational organizations and for organizations that want to collaborate with each other, confidential computing will become the norm. It'll become the default, if I may say, mode of operation. I like the comparison that today, if you talk to young technologists, it's inconceivable to think that at some point in history, and I happened to be alive then, we had data at rest that was not encrypted and data in transit that was not encrypted. I think it will be just as inconceivable, at some point in the near future, to have unencrypted data while in use. >> And plus, I think the beauty of this industry is that because there's so much competition, this essentially comes for free. I want to thank you both for spending some time on Breaking Analysis. There's so much more we could cover.
I hope you'll come back to share the progress that you're making in this area, and we can double-click on some of these topics. Really appreciate your time. >> Anytime. >> Thank you so much. >> In summary, while confidential computing is being touted by the cloud players as a promising technology for enhancing data privacy and security, there are also those, as we said, who remain skeptical. The truth probably lies somewhere in between, and it will depend on the specific implementation and the use case as to how effective confidential computing will be. Look, as with any new tech, it's important to carefully evaluate the potential benefits and drawbacks, and make informed decisions based on the specific requirements of the situation and the constraints of each individual customer. But the bottom line is that silicon manufacturers are working with cloud providers and other system companies to include confidential computing in their architectures. Competition, in our view, will moderate price hikes, and at the end of the day, this is under-the-covers technology that essentially will come for free. So we'll take it. I want to thank our guests today, Nelly and Patricia from Google, and thanks to Alex Myerson, who's on production and manages the podcast. Ken Schiffman as well, out of our Boston studio. Kristin Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our editor-in-chief over at siliconangle.com; he does some great editing for us. Thank you all. Remember, all these episodes are available as podcasts. Wherever you listen, just search Breaking Analysis podcast. I publish each week on wikibon.com and siliconangle.com, where you can get all the news. If you want to get in touch, you can email me at david.vellante@siliconangle.com or DM me @DVellante, and you can also comment on my LinkedIn posts. Definitely check out etr.ai for the best survey data in the enterprise tech business. I know we didn't hit on a lot today, but there's some amazing data and it's always being updated, so check that out. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching and we'll see you next time on Breaking Analysis. (upbeat music)
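One idea from this episode worth making concrete is the pairing Patricia described of a data-sharing contract with attestation. The sketch below is a rough illustration under assumed names (DataContract, release_data), not any real policy engine; it just shows data being released only into an attested environment and only for a contracted purpose.

```python
# Rough sketch of policy enforcement in front of a confidential environment:
# data is released only when (a) the receiving environment attested
# successfully and (b) the declared purpose matches the contract.
from dataclasses import dataclass

@dataclass
class DataContract:
    provider: str
    receiver: str
    allowed_purposes: set          # e.g. {"fraud-detection"}

def release_data(contract: DataContract, attestation_ok: bool, purpose: str):
    """Policy enforcement point: gate data release on attestation and purpose."""
    if not attestation_ok:
        return None, "denied: environment failed attestation"
    if purpose not in contract.allowed_purposes:
        return None, f"denied: '{purpose}' not permitted by contract"
    return {"records": "...sensitive dataset..."}, "released into attested environment"

contract = DataContract("bank-a", "bank-b", {"fraud-detection"})
print(release_data(contract, attestation_ok=True,  purpose="fraud-detection"))
print(release_data(contract, attestation_ok=True,  purpose="ad-targeting"))
print(release_data(contract, attestation_ok=False, purpose="fraud-detection"))
```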

Published Date : Feb 11 2023



Breaking Analysis: Cloud players sound a cautious tone for 2023


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> The unraveling of market enthusiasm continued in Q4 of 2022 with the earnings reports from the US hyperscalers, the big three now all in. As we said earlier this year, even the cloud isn't immune from the macro headwinds, and the cracks in the armor that we saw in the data we shared last summer are playing out into 2023. For the most part, actuals are disappointing relative to expectations, including our own. It turns out that our estimates for the big three hyperscalers' revenue missed by 1.2 billion, or 2.7% lower than we had forecast from even our most recent November estimates. And we expect continued decelerating growth rates for the hyperscalers through the summer of 2023, and we don't think that's going to abate until comparisons get easier. Hello and welcome to this week's Wikibon Cube Insights, powered by ETR. In this Breaking Analysis, we share our view of what's happening in cloud markets, not just for the hyperscalers but for other firms that have hitched a ride on the cloud. And we'll share new ETR data that shows why these trends are playing out, the tactics that customers are employing to deal with their cost challenges, and how long the pain is likely to last. You know, riding the cloud wave is a two-edged sword. Let's look at the players that have gone all in on, or are exposed to, both the positive and negative trends of cloud. Look, the cloud has been a huge tailwind for so many companies, like Snowflake and Databricks, Workday, Salesforce, Mongo's move with Atlas, Red Hat's cloud strategy with OpenShift and so forth. And you know, the flip side is, because cloud is elastic, what comes up can also go down very easily. Here's an XY graphic from ETR that shows spending momentum, or net score, on the vertical axis and market presence in the dataset, also called overlap, on the horizontal axis. This is data from the January 2023 survey, and the red dotted lines show the positions of several companies that we've highlighted going back to January 2021. So let's unpack this a bit, starting with the big three hyperscalers. The first point is that AWS and Azure continue to solidify their moat relative to Google Cloud Platform. We're going to get into this in a moment, but Azure and AWS revenues are five to six times that of GCP for IaaS, and at those deltas, Google should be gaining ground much faster than the big two. The second point on Google is, notice the red line on GCP relative to its starting point. While it appears to be gaining ground on the horizontal axis, its net score is now below that of AWS and Azure in the survey. So despite its significantly smaller size, it's just not keeping pace with the leaders in terms of market momentum. Now looking at AWS and Microsoft, what we see is basically AWS holding serve. As we know, both Google and Microsoft benefit from including SaaS in their cloud numbers. So the fact that AWS hasn't seen huge downward momentum relative to its January 2021 position is one positive in the data. And both companies are well above that magic 40% line on the Y-axis; anything above 40% we consider to be highly elevated. But the fact remains that they're down, as are most of the names on this chart. So let's take a closer look. I want to start with Snowflake and Databricks. Snowflake, as we reported several quarters back, came down to Earth; it was up in the 80% range on the Y-axis here.
And it's still highly elevated, in the 60% range, and it continues to move to the right, which is positive, but as we'll address in a moment, its customers can dial down consumption just as in any cloud. Now, Databricks is really interesting. It's not a public company; it never made it to IPO during the tech bubble, so we don't have the same level of transparency that we do with other companies that did make it through. But look at how much more prominent it is on the X-axis relative to January 2021, and its net score has basically held up over that period of time. So that's a real positive for Databricks. Next, look at Workday and Salesforce. They've held up relatively well, both inching to the right and generally holding their net scores. Same for Mongo, which is the brown dot above the label that says Elastic; it gets a little crowded there, and Elastic is actually the blue dot above it. But generally, SaaS is harder to dial down, Workday, Salesforce, Oracle's SaaS and others, because commitments have been made in advance; they're kind of locked in. Now, one of the discussions from last summer was whether Mongo is less discretionary than analytics, i.e. Snowflake. It's an interesting debate, but maybe Snowflake customers, you know, they're also generally committed to a dollar amount, so over time the spending is going to be there. But in the short term, yeah, maybe Snowflake customers can dial down. Now, that highlighted dotted red line, the bolded one, is Datadog, and you can see it's made major strides on the X-axis, but its net score has decelerated quite dramatically. OpenShift's momentum in the survey has dropped, although IBM just announced that OpenShift has a billion-dollar ARR, and I suspect what's happening there is that IBM Consulting is bundling OpenShift into its modernization projects. It's got that sort of captive base, if you will, and as such it's probably not as top of mind to the respondents, but I'll bet you the developers are certainly aware of it. Now, the other really notable callout here is Cloudflare; we've reported on them earlier. Cloudflare's net score has held up really well since January of 2021. It really hasn't seen the downdraft of some of these others, but it's making major, major moves to the right, gaining market presence. We really like how Cloudflare is performing. And the last comment is on Oracle, which as you can see, despite its much, much lower net score, continues to gain ground in the market and thrive from a profitability standpoint. But the data pretty clearly shows that there's a downdraft in the market. Okay, so what's happening here? Let's dig deeper into this data. Here's a graphic from the most recent ETR drill-down asking customers that said they were going to cut spending what techniques they're using to do so. Now, as we've previously reported, consolidating redundant vendors is by far the most cited approach, but there are two key points we want to make here. One is that reducing excess cloud resources, as you can see in the bars, is the second most cited technique, and it's up from the previous polling period. The second point we're not showing directly, but we've got some red callouts there: reducing cloud costs jumps to 29% and 28% respectively in financial services and tech/telco, and it's much closer to second place; it's basically neck and neck with consolidating redundant vendors in those two industries. So they're being really aggressive about optimizing cloud costs.
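As a reminder of the methodology, net score can be approximated from survey responses as the share of accounts adding or increasing spend minus the share decreasing or replacing. The sketch below uses made-up responses and a simplified version of ETR's formula, just to show how the metric behaves.

```python
# Simplified net-score-style spending momentum metric: the percentage of
# accounts adopting or increasing spend minus the percentage decreasing or
# replacing. An approximation for illustration, not ETR's exact formula.
from collections import Counter

responses = [              # hypothetical survey answers for one vendor
    "adopting", "increasing", "increasing", "flat", "flat",
    "increasing", "decreasing", "flat", "replacing", "increasing",
]

def net_score(answers):
    counts = Counter(answers)
    positive = counts["adopting"] + counts["increasing"]
    negative = counts["decreasing"] + counts["replacing"]
    return 100.0 * (positive - negative) / len(answers)

print(f"net score: {net_score(responses):.0f}%")   # 50% positive - 20% negative = 30%
```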
Okay, so as we said, cloud is great 'cause you can dial it up, but it's just as easy to dial down. We've identified six factors that customers tell us are affecting their cloud consumption, and there are probably more; if you've got more, we'd love to hear them, but these are the ones that are fairly prominent and have hit our radar. First, rising mortgage rates mean banks are processing fewer loans, which means less cloud. Second, the crypto crash means less trading activity, and that means fewer cloud resources. Third, lower ad spend has led companies to reduce not only their ad buying but also how frequently they run their analytics and their calculations, and they're also often using less data, maybe compressing the corpus down to a shorter time period. Also very prominent, down at the bottom left, is using lower-cost compute instances, for example Graviton from AWS or AMD chips, and tiering storage to cheaper S3 or deep archive tiers. And finally, optimizing based on better pricing plans. So customers, smaller companies in particular, are moving off on-demand, and larger companies that were experimenting with on-demand are moving to spot pricing, reserved instances or optimized savings plans. That all lowers cost, and that means less cloud resource consumption and less cloud revenue. Now, in the days when everything was on-prem, what would CFOs do? They would freeze CapEx, and IT pros would have to try to do more with less, and often that meant a lot of manual tasks. With the cloud, it's much easier to move things around. It still takes some thinking and some effort, but it's dramatically simpler to do, so you can get those savings a lot faster. Now, of course, the other huge factor is you can cut or you can freeze. This graphic shows data from a recent ETR survey with 159 respondents, and you can see the meaningful uptick in hiring freezes, freezing new IT deployments, and layoffs. As we've been reporting, this has been trending up since earlier last year. And note the callout: this is especially prominent in the retail sector; all three of these techniques jump up in retail, and that's a bit of a concern, because oftentimes consumer spending helps the economy make a softer landing out of a pullback. This is a potential canary in the coal mine: if retail firms are pulling back, it's because consumers aren't spending as much, so we're keeping a close eye on that. So let's boil this down to the market data and what it all means. In this graphic we show our estimates for Q4 IaaS revenues compared to the "actual" IaaS revenues. And we say quote-unquote because AWS is the only one that reports clean IaaS revenue; Azure and GCP don't report actuals. Why would they? Because it would make them look even smaller relative to AWS. Rather, they bury the figures in overall cloud, which includes G-Suite for Google and all the Microsoft SaaS, and then they give us little tidbits: in Microsoft's case, they give Azure growth rates, and Google gives kind of relative growth of GCP. So we use survey data and other data to try to really pinpoint it, and we've been covering this for, I don't know, five or six years, ever since the cloud really became a thing. But looking at the data, we had AWS growing at 25% this quarter and it came in at 20%. So a significant decline relative to our expectations.
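Before digging further into the actuals, here's a rough sketch of why those instance and pricing-plan levers matter. All of the rates and discounts below are hypothetical round numbers, not any provider's actual pricing.

```python
# Hypothetical workload priced on demand versus a 1-year commitment and a
# committed-plus-spot mix. Real savings vary by instance, region and provider.
hours_per_month = 730
fleet_size = 40                          # hypothetical steady-state instances

on_demand_rate = 0.40                    # $/instance-hour, made up
committed_discount = 0.35                # ~35% off for a 1-year commitment
spot_discount = 0.65                     # ~65% off for interruptible capacity

def monthly_cost(rate, instances=fleet_size):
    return rate * instances * hours_per_month

on_demand = monthly_cost(on_demand_rate)
committed = monthly_cost(on_demand_rate * (1 - committed_discount))
mixed = monthly_cost(on_demand_rate * (1 - committed_discount), instances=30) + \
        monthly_cost(on_demand_rate * (1 - spot_discount), instances=10)

for label, cost in [("all on demand", on_demand), ("1-yr committed", committed),
                    ("70/30 committed + spot mix", mixed)]:
    print(f"{label:>28}: ${cost:,.0f}/month")
```

Even at these made-up rates, the same steady-state fleet costs roughly 40% less once commitments and some spot capacity are in the mix, which is exactly the kind of optimization that shows up as lower cloud revenue.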
AWS announced that it exited December, actually, sorry, its January data showed about a mid-teens, 15% growth rate. So that's something we're watching. Azure was two points off our forecast, coming in at 38% growth. It said it exited December in the 35% growth range and that it's expecting five points of deceleration off of that, so think 30% for Azure. GCP came in three points off our expectation, coming in at 35%, and Alibaba has yet to report, but we've shaved a bit off that forecast based on some survey data, and you know what, maybe 9% is even still not enough. Now, for the year, the big four hyperscalers generated almost 160 billion dollars of revenue, but that was 7 billion lower than what we expected coming into 2022. For 2023, we're expecting 21% growth for a total of 193.3 billion. And while that's significantly lower than historical expectations, it's still four to five times the overall spending forecast of between 4 and 5% for the total market that we just shared with you in our predictions post. We think AWS is going to come in at around 93 billion this year, with Azure closing in at over 71 billion. Again, we're talking IaaS here. Now, despite Amazon focusing investors on the fact that AWS's absolute dollar growth is still larger than its competitors', by our estimates Azure will come in at more than 75% of AWS's forecasted revenue. That's a significant milestone. AWS's operating margins, by the way, declined significantly this past quarter, dropping from 30% of revenue a year earlier to 24%. Now, that's still extremely healthy, and we've seen wild fluctuations like this before, so I don't get too freaked out about that. But I'll say this: Microsoft has a marginal cost advantage relative to AWS because, one, it has a captive cloud on which to run its massive software estate, so it can just throw software at its own cloud, and two, software marginal economics. Despite AWS's awesomeness and high degrees of automation, software is just a better business. Now, the upshot for AWS is the ecosystem. AWS is essentially, in our view, positioning very smartly as a platform for data partners like Snowflake and Databricks, security partners like CrowdStrike and Okta and Palo Alto and many others, and SaaS companies. Microsoft, by contrast, is more competitive, even though AWS does have competitive products. Now, of course Amazon's competitive to retail companies, so that's another factor, but generally speaking, for tech players, Amazon has a really thriving ecosystem that is a secret weapon in our view. AWS is happy to spin the meter with its partners even though it sells competitive products, more so in our view than other cloud players. Microsoft, of course, don't forget, is hyping OpenAI and ChatGPT now. We reported last week in our predictions post how OpenAI has shot up in terms of market sentiment in ETR's emerging technology company surveys, and people are moving to Azure to get OpenAI and ChatGPT. That is an interesting lever. Amazon, in our view, has to have a response. They have lots of AI and they're going to have to make some moves there. Meanwhile, Google is emphasizing itself as an AI-first company. In fact, Google spent at least five minutes of continuous dialogue, nonstop, on its AI chops during its latest earnings call. So that's an area that we're watching very closely as the buzz around large language models continues. All right, let's wrap up with some assumptions for 2023.
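A quick aside before the 2023 assumptions: here's the arithmetic behind those forecast figures, a minimal check using only the numbers cited above. The small gap versus the stated 193.3 billion simply comes from rounding the 2022 base to 160 billion.

```python
# Sanity check on the hyperscaler math cited above; all inputs are the stated figures.

iaas_2022_total = 160.0   # ~$160B in big-four IaaS revenue for 2022 (stated above)
growth_2023 = 0.21        # 21% expected growth for 2023

forecast_2023 = iaas_2022_total * (1 + growth_2023)
print(f"2023 big-four IaaS forecast: ~${forecast_2023:.1f}B")   # ~$193.6B, in line with the ~$193.3B cited

aws_2023, azure_2023 = 93.0, 71.0   # AWS and Azure IaaS forecasts (stated above)
print(f"Azure as a share of AWS: {azure_2023 / aws_2023:.0%}")  # ~76%, i.e. more than 75% of AWS
```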
We think SaaS players are going to continue to be sticky. They're going to be somewhat insulated from all these downdrafts because they're so tied in; customers make the commitment up front, so you've got the lock-in. Now, having said that, we do expect some backlash over time on the onerous and generally customer-unfriendly pricing models of most large SaaS companies, but that's going to play out over a longer period of time. Now, for cloud generally and the hyperscalers specifically, we do expect accelerating growth rates into Q3, but the amplitude of the demand swings from this rubber band economy we expect to continue to compress and become more predictable throughout the year. Estimates are coming down, and we think CEOs are going to be more cautious about hiring and spending when the market snaps back, and as such we expect a perhaps more orderly return to growth, which we think will slightly accelerate in Q4 as comps get easier. Now, of course, the big risk to these scenarios is the economy: the Fed, consumer spending, inflation, supply chain, energy prices, wars, geopolitics, China relations, all the usual stuff. But as always, with our partners at ETR and the Cube community, we're here for you. We have the data and we'll be the first to report when we see a change at the margin. Okay, that's a wrap for today. I want to thank Alex Morrison, who's on production and manages the podcast, and Ken Schiffman as well, out of our Boston studio, getting this up on LinkedIn Live. Thank you for that. Kristen Martin and Cheryl Knight also help get the word out on social media and in our newsletters. And Rob Hof is our Editor-in-Chief over at siliconangle.com. He does some great editing for us. Thank you all. Remember, all these episodes are available as podcasts. Wherever you listen, just search Breaking Analysis podcast. I publish each week on wikibon.com and at siliconangle.com, where you can see all the data. If you want to get in touch, just email me at david.vellante@siliconangle.com or DM me @dvellante. If you've got something interesting, I'll respond; if you don't hear back, it's either because I'm swamped or it's just not tickling me. You can comment on our LinkedIn posts as well. And please check out ETR.ai for the best survey data in the enterprise tech business. This is Dave Vellante for the Cube Insights powered by ETR. Thanks for watching and we'll see you next time on Breaking Analysis. (gentle upbeat music)

Published Date : Feb 4 2023

Breaking Analysis: Enterprise Technology Predictions 2023


 

(upbeat music beginning) >> From the Cube Studios in Palo Alto and Boston, bringing you data-driven insights from the Cube and ETR, this is "Breaking Analysis" with Dave Vellante. >> Making predictions about the future of enterprise tech is more challenging if you strive to lay down forecasts that are measurable. In other words, if you make a prediction, you should be able to look back a year later and say, with some degree of certainty, whether the prediction came true or not, with evidence to back that up. Hello and welcome to this week's Wikibon Cube Insights, powered by ETR. In this breaking analysis, we aim to do just that, with predictions about the macro IT spending environment, cost optimization, security, lots to talk about there, generative AI, cloud, and of course supercloud, blockchain adoption, data platforms, including commentary on Databricks, snowflake, and other key players, automation, events, and we may even have some bonus predictions around quantum computing, and perhaps some other areas. To make all this happen, we welcome back, for the third year in a row, my colleague and friend Eric Bradley from ETR. Eric, thanks for all you do for the community, and thanks for being part of this program. Again. >> I wouldn't miss it for the world. I always enjoy this one. Dave, good to see you. >> Yeah, so let me bring up this next slide and show you, actually come back to me if you would. I got to show the audience this. These are the inbounds that we got from PR firms starting in October around predictions. They know we do prediction posts. And so they'll send literally thousands and thousands of predictions from hundreds of experts in the industry, technologists, consultants, et cetera. And if you bring up the slide I can show you sort of the pattern that developed here. 40% of these thousands of predictions were from cyber. You had AI and data. If you combine those, it's still not close to cyber. Cost optimization was a big thing. Of course, cloud, some on DevOps, and software. Digital... Digital transformation got, you know, some lip service and SaaS. And then there was other, it's kind of around 2%. So quite remarkable, when you think about the focus on cyber, Eric. >> Yeah, there's two reasons why I think it makes sense, though. One, the cybersecurity companies have a lot of cash, so therefore the PR firms might be working a little bit harder for them than some of their other clients. (laughs) And then secondly, as you know, for multiple years now, when we do our macro survey, we ask, "What's your number one spending priority?" And again, it's security. It just isn't going anywhere. It just stays at the top. So I'm actually not that surprised by that little pie chart there, but I was shocked that SaaS was only 5%. You know, going back 10 years ago, that would've been the only thing anyone was talking about. >> Yeah. So true. All right, let's get into it. First prediction, we always start with kind of tech spending. Number one is tech spending increases between four and 5%. ETR has currently got it at 4.6% coming into 2023. This has been a consistently downward trend all year. We started, you know, much, much higher as we've been reporting. Bottom line is the fed is still in control. They're going to ease up on tightening, is the expectation, they're going to shoot for a soft landing. But you know, my feeling is this slingshot economy is going to continue, and it's going to continue to confound, whether it's supply chains or spending. 
The, the interesting thing about the ETR data, Eric, and I want you to comment on this, the largest companies are the most aggressive to cut. They're laying off, smaller firms are spending faster. They're actually growing at a much larger, faster rate as are companies in EMEA. And that's a surprise. That's outpacing the US and APAC. Chime in on this, Eric. >> Yeah, I was surprised on all of that. First on the higher level spending, we are definitely seeing it coming down, but the interesting thing here is headlines are making it worse. The huge research shop recently said 0% growth. We're coming in at 4.6%. And just so everyone knows, this is not us guessing, we asked 1,525 IT decision-makers what their budget growth will be, and they came in at 4.6%. Now there's a huge disparity, as you mentioned. The Fortune 500, global 2000, barely at 2% growth, but small, it's at 7%. So we're at a situation right now where the smaller companies are still playing a little bit of catch up on digital transformation, and they're spending money. The largest companies that have the most to lose from a recession are being more trepidatious, obviously. So they're playing a "Wait and see." And I hope we don't talk ourselves into a recession. Certainly the headlines and some of their research shops are helping it along. But another interesting comment here is, you know, energy and utilities used to be called an orphan and widow stock group, right? They are spending more than anyone, more than financials insurance, more than retail consumer. So right now it's being driven by mid, small, and energy and utilities. They're all spending like gangbusters, like nothing's happening. And it's the rest of everyone else that's being very cautious. >> Yeah, so very unpredictable right now. All right, let's go to number two. Cost optimization remains a major theme in 2023. We've been reporting on this. You've, we've shown a chart here. What's the primary method that your organization plans to use? You asked this question of those individuals that cited that they were going to reduce their spend and- >> Mhm. >> consolidating redundant vendors, you know, still leads the way, you know, far behind, cloud optimization is second, but it, but cloud continues to outpace legacy on-prem spending, no doubt. Somebody, it was, the guy's name was Alexander Feiglstorfer from Storyblok, sent in a prediction, said "All in one becomes extinct." Now, generally I would say I disagree with that because, you know, as we know over the years, suites tend to win out over, you know, individual, you know, point products. But I think what's going to happen is all in one is going to remain the norm for these larger companies that are cutting back. They want to consolidate redundant vendors, and the smaller companies are going to stick with that best of breed and be more aggressive and try to compete more effectively. What's your take on that? >> Yeah, I'm seeing much more consolidation in vendors, but also consolidation in functionality. We're seeing people building out new functionality, whether it's, we're going to talk about this later, so I don't want to steal too much of our thunder right now, but data and security also, we're seeing a functionality creep. So I think there's further consolidation happening here. I think niche solutions are going to be less likely, and platform solutions are going to be more likely in a spending environment where you want to reduce your vendors. You want to have one bill to pay, not 10. 
Another thing on this slide, real quick if I can before I move on, is we had a bunch of people write in and some of the answer options that aren't on this graph but did get cited a lot, unfortunately, is the obvious reduction in staff, hiring freezes, and delaying hardware, were three of the top write-ins. And another one was offshore outsourcing. So in addition to what we're seeing here, there were a lot of write-in options, and I just thought it would be important to state that, but essentially the cost optimization is by and far the highest one, and it's growing. So it's actually increased in our citations over the last year. >> And yeah, specifically consolidating redundant vendors. And so I actually thank you for bringing that other up, 'cause I had asked you, Eric, is there any evidence that repatriation is going on and we don't see it in the numbers, we don't see it even in the other, there was, I think very little or no mention of cloud repatriation, even though it might be happening in this in a smattering. >> Not a single mention, not one single mention. I went through it for you. Yep. Not one write-in. >> All right, let's move on. Number three, security leads M&A in 2023. Now you might say, "Oh, well that's a layup," but let me set this up Eric, because I didn't really do a great job with the slide. I hid the, what you've done, because you basically took, this is from the emerging technology survey with 1,181 responses from November. And what we did is we took Palo Alto and looked at the overlap in Palo Alto Networks accounts with these vendors that were showing on this chart. And Eric, I'm going to ask you to explain why we put a circle around OneTrust, but let me just set it up, and then have you comment on the slide and take, give us more detail. We're seeing private company valuations are off, you know, 10 to 40%. We saw a sneak, do a down round, but pretty good actually only down 12%. We've seen much higher down rounds. Palo Alto Networks we think is going to get busy. Again, they're an inquisitive company, they've been sort of quiet lately, and we think CrowdStrike, Cisco, Microsoft, Zscaler, we're predicting all of those will make some acquisitions and we're thinking that the targets are somewhere in this mess of security taxonomy. Other thing we're predicting AI meets cyber big time in 2023, we're going to probably going to see some acquisitions of those companies that are leaning into AI. We've seen some of that with Palo Alto. And then, you know, your comment to me, Eric, was "The RSA conference is going to be insane, hopping mad, "crazy this April," (Eric laughing) but give us your take on this data, and why the red circle around OneTrust? Take us back to that slide if you would, Alex. >> Sure. There's a few things here. First, let me explain what we're looking at. So because we separate the public companies and the private companies into two separate surveys, this allows us the ability to cross-reference that data. So what we're doing here is in our public survey, the tesis, everyone who cited some spending with Palo Alto, meaning they're a Palo Alto customer, we then cross-reference that with the private tech companies. Who also are they spending with? So what you're seeing here is an overlap. These companies that we have circled are doing the best in Palo Alto's accounts. Now, Palo Alto went and bought Twistlock a few years ago, which this data slide predicted, to be quite honest. And so I don't know if they necessarily are going to go after Snyk. Snyk, sorry. 
They already have something in that space. What they do need, however, is more on the authentication space. So I'm looking at OneTrust, with a 45% overlap in their overall net sentiment. That is a company that's already existing in their accounts and could be very synergistic to them. BeyondTrust as well, authentication identity. This is something that Palo needs to do to move more down that zero trust path. Now why did I pick Palo first? Because usually they're very inquisitive. They've been a little quiet lately. Secondly, if you look at the backdrop in the markets, the IPO freeze isn't going to last forever. Sooner or later, the IPO markets are going to open up, and some of these private companies are going to tap into public equity. In the meantime, however, cash funding on the private side is drying up. If they need another round, they're not going to get it, and they're certainly not going to get it at the valuations they were getting. So we're seeing valuations maybe come down where they're a touch more attractive, and Palo knows this isn't going to last forever. Cisco knows that, CrowdStrike, Zscaler, all these companies that are trying to make a push to become that vendor that you're consolidating in, around, they have a chance now, they have a window where they need to go make some acquisitions. And that's why I believe leading up to RSA, we're going to see some movement. I think it's going to pretty, a really exciting time in security right now. >> Awesome. Thank you. Great explanation. All right, let's go on the next one. Number four is, it relates to security. Let's stay there. Zero trust moves from hype to reality in 2023. Now again, you might say, "Oh yeah, that's a layup." A lot of these inbounds that we got are very, you know, kind of self-serving, but we always try to put some meat in the bone. So first thing we do is we pull out some commentary from, Eric, your roundtable, your insights roundtable. And we have a CISO from a global hospitality firm says, "For me that's the highest priority." He's talking about zero trust because it's the best ROI, it's the most forward-looking, and it enables a lot of the business transformation activities that we want to do. CISOs tell me that they actually can drive forward transformation projects that have zero trust, and because they can accelerate them, because they don't have to go through the hurdle of, you know, getting, making sure that it's secure. Second comment, zero trust closes that last mile where once you're authenticated, they open up the resource to you in a zero trust way. That's a CISO of a, and a managing director of a cyber risk services enterprise. Your thoughts on this? >> I can be here all day, so I'm going to try to be quick on this one. This is not a fluff piece on this one. There's a couple of other reasons this is happening. One, the board finally gets it. Zero trust at first was just a marketing hype term. Now the board understands it, and that's why CISOs are able to push through it. And what they finally did was redefine what it means. Zero trust simply means moving away from hardware security, moving towards software-defined security, with authentication as its base. The board finally gets that, and now they understand that this is necessary and it's being moved forward. The other reason it's happening now is hybrid work is here to stay. We weren't really sure at first, large companies were still trying to push people back to the office, and it's going to happen. 
The pendulum will swing back, but hybrid work's not going anywhere. By basically on our own data, we're seeing that 69% of companies expect remote and hybrid to be permanent, with only 30% permanent in office. Zero trust works for a hybrid environment. So all of that is the reason why this is happening right now. And going back to our previous prediction, this is why we're picking Palo, this is why we're picking Zscaler to make these acquisitions. Palo Alto needs to be better on the authentication side, and so does Zscaler. They're both fantastic on zero trust network access, but they need the authentication software defined aspect, and that's why we think this is going to happen. One last thing, in that CISO round table, I also had somebody say, "Listen, Zscaler is incredible. "They're doing incredibly well pervading the enterprise, "but their pricing's getting a little high," and they actually think Palo Alto is well-suited to start taking some of that share, if Palo can make one move. >> Yeah, Palo Alto's consolidation story is very strong. Here's my question and challenge. Do you and me, so I'm always hardcore about, okay, you've got to have evidence. I want to look back at these things a year from now and say, "Did we get it right? Yes or no?" If we got it wrong, we'll tell you we got it wrong. So how are we going to measure this? I'd say a couple things, and you can chime in. One is just the number of vendors talking about it. That's, but the marketing always leads the reality. So the second part of that is we got to get evidence from the buying community. Can you help us with that? >> (laughs) Luckily, that's what I do. I have a data company that asks thousands of IT decision-makers what they're adopting and what they're increasing spend on, as well as what they're decreasing spend on and what they're replacing. So I have snapshots in time over the last 11 years where I can go ahead and compare and contrast whether this adoption is happening or not. So come back to me in 12 months and I'll let you know. >> Now, you know, I will. Okay, let's bring up the next one. Number five, generative AI hits where the Metaverse missed. Of course everybody's talking about ChatGPT, we just wrote last week in a breaking analysis with John Furrier and Sarjeet Joha our take on that. We think 2023 does mark a pivot point as natural language processing really infiltrates enterprise tech just as Amazon turned the data center into an API. We think going forward, you're going to be interacting with technology through natural language, through English commands or other, you know, foreign language commands, and investors are lining up, all the VCs are getting excited about creating something competitive to ChatGPT, according to (indistinct) a hundred million dollars gets you a seat at the table, gets you into the game. (laughing) That's before you have to start doing promotion. But he thinks that's what it takes to actually create a clone or something equivalent. We've seen stuff from, you know, the head of Facebook's, you know, AI saying, "Oh, it's really not that sophisticated, ChatGPT, "it's kind of like IBM Watson, it's great engineering, "but you know, we've got more advanced technology." We know Google's working on some really interesting stuff. But here's the thing. ETR just launched this survey for the February survey. It's in the field now. We circle open AI in this category. They weren't even in the survey, Eric, last quarter. 
So 52% of the ETR survey respondents indicated a positive sentiment toward open AI. I added up all the sort of different bars, we could double click on that. And then I got this inbound from Scott Stevenson of Deep Graham. He said "AI is recession-proof." I don't know if that's the case, but it's a good quote. So bring this back up and take us through this. Explain this chart for us, if you would. >> First of all, I like Scott's quote better than the Facebook one. I think that's some sour grapes. Meta just spent an insane amount of money on the Metaverse and that's a dud. Microsoft just spent money on open AI and it is hot, undoubtedly hot. We've only been in the field with our current ETS survey for a week. So my caveat is it's preliminary data, but I don't care if it's preliminary data. (laughing) We're getting a sneak peek here at what is the number one net sentiment and mindshare leader in the entire machine-learning AI sector within a week. It's beating Data- >> 600. 600 in. >> It's beating Databricks. And we all know Databricks is a huge established enterprise company, not only in machine-learning AI, but it's in the top 10 in the entire survey. We have over 400 vendors in this survey. It's number eight overall, already. In a week. This is not hype. This is real. And I could go on the NLP stuff for a while. Not only here are we seeing it in open AI and machine-learning and AI, but we're seeing NLP in security. It's huge in email security. It's completely transforming that area. It's one of the reasons I thought Palo might take Abnormal out. They're doing such a great job with NLP in this email side, and also in the data prep tools. NLP is going to take out data prep tools. If we have time, I'll discuss that later. But yeah, this is, to me this is a no-brainer, and we're already seeing it in the data. >> Yeah, John Furrier called, you know, the ChatGPT introduction. He said it reminded him of the Netscape moment, when we all first saw Netscape Navigator and went, "Wow, it really could be transformative." All right, number six, the cloud expands to supercloud as edge computing accelerates and CloudFlare is a big winner in 2023. We've reported obviously on cloud, multi-cloud, supercloud and CloudFlare, basically saying what multi-cloud should have been. We pulled this quote from Atif Kahn, who is the founder and CTO of Alkira, thanks, one of the inbounds, thank you. "In 2023, highly distributed IT environments "will become more the norm "as organizations increasingly deploy hybrid cloud, "multi-cloud and edge settings..." Eric, from one of your round tables, "If my sources from edge computing are coming "from the cloud, that means I have my workloads "running in the cloud. "There is no one better than CloudFlare," That's a senior director of IT architecture at a huge financial firm. And then your analysis shows CloudFlare really growing in pervasion, that sort of market presence in the dataset, dramatically, to near 20%, leading, I think you had told me that they're even ahead of Google Cloud in terms of momentum right now. >> That was probably the biggest shock to me in our January 2023 tesis, which covers the public companies in the cloud computing sector. CloudFlare has now overtaken GCP in overall spending, and I was shocked by that. It's already extremely pervasive in networking, of course, for the edge networking side, and also in security. 
This is the number one leader in SaaSi, web access firewall, DDoS, bot protection, by your definition of supercloud, which we just did a couple of weeks ago, and I really enjoyed that by the way Dave, I think CloudFlare is the one that fits your definition best, because it's bringing all of these aspects together, and most importantly, it's cloud agnostic. It does not need to rely on Azure or AWS to do this. It has its own cloud. So I just think it's, when we look at your definition of supercloud, CloudFlare is the poster child. >> You know, what's interesting about that too, is a lot of people are poo-pooing CloudFlare, "Ah, it's, you know, really kind of not that sophisticated." "You don't have as many tools," but to your point, you're can have those tools in the cloud, Cloudflare's doing serverless on steroids, trying to keep things really simple, doing a phenomenal job at, you know, various locations around the world. And they're definitely one to watch. Somebody put them on my radar (laughing) a while ago and said, "Dave, you got to do a breaking analysis on CloudFlare." And so I want to thank that person. I can't really name them, 'cause they work inside of a giant hyperscaler. But- (Eric laughing) (Dave chuckling) >> Real quickly, if I can from a competitive perspective too, who else is there? They've already taken share from Akamai, and Fastly is their really only other direct comp, and they're not there. And these guys are in poll position and they're the only game in town right now. I just, I don't see it slowing down. >> I thought one of your comments from your roundtable I was reading, one of the folks said, you know, CloudFlare, if my workloads are in the cloud, they are, you know, dominant, they said not as strong with on-prem. And so Akamai is doing better there. I'm like, "Okay, where would you want to be?" (laughing) >> Yeah, which one of those two would you rather be? >> Right? Anyway, all right, let's move on. Number seven, blockchain continues to look for a home in the enterprise, but devs will slowly begin to adopt in 2023. You know, blockchains have got a lot of buzz, obviously crypto is, you know, the killer app for blockchain. Senior IT architect in financial services from your, one of your insight roundtables said quote, "For enterprises to adopt a new technology, "there have to be proven turnkey solutions. "My experience in talking with my peers are, "blockchain is still an open-source component "where you have to build around it." Now I want to thank Ravi Mayuram, who's the CTO of Couchbase sent in, you know, one of the predictions, he said, "DevOps will adopt blockchain, specifically Ethereum." And he referenced actually in his email to me, Solidity, which is the programming language for Ethereum, "will be in every DevOps pro's playbook, "mirroring the boom in machine-learning. "Newer programming languages like Solidity "will enter the toolkits of devs." His point there, you know, Solidity for those of you don't know, you know, Bitcoin is not programmable. Solidity, you know, came out and that was their whole shtick, and they've been improving that, and so forth. But it, Eric, it's true, it really hasn't found its home despite, you know, the potential for smart contracts. IBM's pushing it, VMware has had announcements, and others, really hasn't found its way in the enterprise yet. >> Yeah, and I got to be honest, I don't think it's going to, either. 
So when we did our top trends series, this was basically chosen as an anti-prediction, I would guess, that it just continues to not gain hold. And the reason why was that first comment, right? It's very much a niche solution that requires a ton of custom work around it. You can't just plug and play it. And at the end of the day, let's be very real what this technology is, it's a database ledger, and we already have database ledgers in the enterprise. So why is this a priority to move to a different database ledger? It's going to be very niche cases. I like the CTO comment from Couchbase about it being adopted by DevOps. I agree with that, but it has to be a DevOps in a very specific use case, and a very sophisticated use case in financial services, most likely. And that's not across the entire enterprise. So I just think it's still going to struggle to get its foothold for a little bit longer, if ever. >> Great, thanks. Okay, let's move on. Number eight, AWS Databricks, Google Snowflake lead the data charge with Microsoft. Keeping it simple. So let's unpack this a little bit. This is the shared accounts peer position for, I pulled data platforms in for analytics, machine-learning and AI and database. So I could grab all these accounts or these vendors and see how they compare in those three sectors. Analytics, machine-learning and database. Snowflake and Databricks, you know, they're on a crash course, as you and I have talked about. They're battling to be the single source of truth in analytics. They're, there's going to be a big focus. They're already started. It's going to be accelerated in 2023 on open formats. Iceberg, Python, you know, they're all the rage. We heard about Iceberg at Snowflake Summit, last summer or last June. Not a lot of people had heard of it, but of course the Databricks crowd, who knows it well. A lot of other open source tooling. There's a company called DBT Labs, which you're going to talk about in a minute. George Gilbert put them on our radar. We just had Tristan Handy, the CEO of DBT labs, on at supercloud last week. They are a new disruptor in data that's, they're essentially making, they're API-ifying, if you will, KPIs inside the data warehouse and dramatically simplifying that whole data pipeline. So really, you know, the ETL guys should be shaking in their boots with them. Coming back to the slide. Google really remains focused on BigQuery adoption. Customers have complained to me that they would like to use Snowflake with Google's AI tools, but they're being forced to go to BigQuery. I got to ask Google about that. AWS continues to stitch together its bespoke data stores, that's gone down that "Right tool for the right job" path. David Foyer two years ago said, "AWS absolutely is going to have to solve that problem." We saw them start to do it in, at Reinvent, bringing together NoETL between Aurora and Redshift, and really trying to simplify those worlds. There's going to be more of that. And then Microsoft, they're just making it cheap and easy to use their stuff, you know, despite some of the complaints that we hear in the community, you know, about things like Cosmos, but Eric, your take? >> Yeah, my concern here is that Snowflake and Databricks are fighting each other, and it's allowing AWS and Microsoft to kind of catch up against them, and I don't know if that's the right move for either of those two companies individually, Azure and AWS are building out functionality. Are they as good? No they're not. 
The other thing to remember too is that AWS and Azure get paid anyway, because both Databricks and Snowflake run on top of 'em. So (laughing) they're basically collecting their toll, while these two fight it out with each other, and they build out functionality. I think they need to stop focusing on each other, a little bit, and think about the overall strategy. Now for Databricks, we know they came out first as a machine-learning AI tool. They were known better for that spot, and now they're really trying to play catch-up on that data storage compute spot, and inversely for Snowflake, they were killing it with the compute separation from storage, and now they're trying to get into the MLAI spot. I actually wouldn't be surprised to see them make some sort of acquisition. Frank Slootman has been a little bit quiet, in my opinion there. The other thing to mention is your comment about DBT Labs. If we look at our emerging technology survey, last survey when this came out, DBT labs, number one leader in that data integration space, I'm going to just pull it up real quickly. It looks like they had a 33% overall net sentiment to lead data analytics integration. So they are clearly growing, it's fourth straight survey consecutively that they've grown. The other name we're seeing there a little bit is Cribl, but DBT labs is by far the number one player in this space. >> All right. Okay, cool. Moving on, let's go to number nine. With Automation mixer resurgence in 2023, we're showing again data. The x axis is overlap or presence in the dataset, and the vertical axis is shared net score. Net score is a measure of spending momentum. As always, you've seen UI path and Microsoft Power Automate up until the right, that red line, that 40% line is generally considered elevated. UI path is really separating, creating some distance from Automation Anywhere, they, you know, previous quarters they were much closer. Microsoft Power Automate came on the scene in a big way, they loom large with this "Good enough" approach. I will say this, I, somebody sent me a results of a (indistinct) survey, which showed UiPath actually had more mentions than Power Automate, which was surprising, but I think that's not been the case in the ETR data set. We're definitely seeing a shift from back office to front soft office kind of workloads. Having said that, software testing is emerging as a mainstream use case, we're seeing ML and AI become embedded in end-to-end automations, and low-code is serving the line of business. And so this, we think, is going to increasingly have appeal to organizations in the coming year, who want to automate as much as possible and not necessarily, we've seen a lot of layoffs in tech, and people... You're going to have to fill the gaps with automation. That's a trend that's going to continue. >> Yep, agreed. At first that comment about Microsoft Power Automate having less citations than UiPath, that's shocking to me. I'm looking at my chart right here where Microsoft Power Automate was cited by over 60% of our entire survey takers, and UiPath at around 38%. Now don't get me wrong, 38% pervasion's fantastic, but you know you're not going to beat an entrenched Microsoft. So I don't really know where that comment came from. So UiPath, looking at it alone, it's doing incredibly well. It had a huge rebound in its net score this last survey. It had dropped going through the back half of 2022, but we saw a big spike in the last one. So it's got a net score of over 55%. 
A lot of people citing adoption and increasing. So that's really what you want to see for a name like this. The problem is that just Microsoft is doing its playbook. At the end of the day, I'm going to do a POC, why am I going to pay more for UiPath, or even take on another separate bill, when we know everyone's consolidating vendors, if my license already includes Microsoft Power Automate? It might not be perfect, it might not be as good, but what I'm hearing all the time is it's good enough, and I really don't want another invoice. >> Right. So how does UiPath, you know, and Automation Anywhere, how do they compete with that? Well, the way they compete with it is they got to have a better product. They got a product that's 10 times better. You know, they- >> Right. >> they're not going to compete based on where the lowest cost, Microsoft's got that locked up, or where the easiest to, you know, Microsoft basically give it away for free, and that's their playbook. So that's, you know, up to UiPath. UiPath brought on Rob Ensslin, I've interviewed him. Very, very capable individual, is now Co-CEO. So he's kind of bringing that adult supervision in, and really tightening up the go to market. So, you know, we know this company has been a rocket ship, and so getting some control on that and really getting focused like a laser, you know, could be good things ahead there for that company. Okay. >> One of the problems, if I could real quick Dave, is what the use cases are. When we first came out with RPA, everyone was super excited about like, "No, UiPath is going to be great for super powerful "projects, use cases." That's not what RPA is being used for. As you mentioned, it's being used for mundane tasks, so it's not automating complex things, which I think UiPath was built for. So if you were going to get UiPath, and choose that over Microsoft, it's going to be 'cause you're doing it for more powerful use case, where it is better. But the problem is that's not where the enterprise is using it. The enterprise are using this for base rote tasks, and simply, Microsoft Power Automate can do that. >> Yeah, it's interesting. I've had people on theCube that are both Microsoft Power Automate customers and UiPath customers, and I've asked them, "Well you know, "how do you differentiate between the two?" And they've said to me, "Look, our users and personal productivity users, "they like Power Automate, "they can use it themselves, and you know, "it doesn't take a lot of, you know, support on our end." The flip side is you could do that with UiPath, but like you said, there's more of a focus now on end-to-end enterprise automation and building out those capabilities. So it's increasingly a value play, and that's going to be obviously the challenge going forward. Okay, my last one, and then I think you've got some bonus ones. Number 10, hybrid events are the new category. Look it, if I can get a thousand inbounds that are largely self-serving, I can do my own here, 'cause we're in the events business. (Eric chuckling) Here's the prediction though, and this is a trend we're seeing, the number of physical events is going to dramatically increase. That might surprise people, but most of the big giant events are going to get smaller. The exception is AWS with Reinvent, I think Snowflake's going to continue to grow. 
So there are examples of physical events that are growing, but generally, most of the big ones are getting smaller, and there are going to be many more smaller, intimate regional events and road shows. These micro-events are going to be stitched together. Digital is becoming a first class citizen, so people really have to get their digital acts together, and brands are prioritizing earned media and beginning to build their own news networks, going direct to their customers. So that's a trend we see, and we're right in the middle of it, Eric. You mentioned RSA; I think that's perhaps going to be one of those crazy ones that continues to grow. It's shrunk, and then, you know, 'cause last year- >> Yeah, it did shrink. >> Right, it was the last one before the pandemic, and then they sort of made another run at it last year. It was smaller but it was very vibrant, and I think this year's going to be huge. Mobile World Congress is another one; we're going to be there end of Feb. That's obviously a big, big show, but in general, the brands and the technology vendors, even Oracle, are going to scale down. I don't know about Salesforce. We'll see. You had a couple of bonus predictions. Quantum and maybe some others? Bring us home. >> Yeah, sure. I got a few more. I think we touched upon one, but I definitely think the data prep tools are facing extinction, unfortunately; you know, Talend and Informatica are some of those names. The problem there is that the BI tools are already folding data prep in. An example of that is Tableau Prep Builder, and in addition, advanced NLP is being worked in as well. ThoughtSpot and Intelius both often cite that as a selling point, Tableau has Ask Data, Qlik has Insight Bot, so you don't have to really be intelligent on data prep anymore. A regular business user can just self-query, using either the search bar or even just speaking what it needs, and these tools are kind of doing the data prep for them. I don't think that's an out-in-left-field type of prediction, but the time is nigh. The other one I would also state is that I think knowledge graphs are going to break through this year. Neo4j in our survey is growing in pervasion and mindshare, so more and more people are citing it, AWS Neptune's getting its act together, and we're seeing that spending intentions are growing there. TigerGraph is also growing in our survey sample. I just think the time is now for knowledge graphs to break through. And if I had to do one more, I'd say real-time streaming analytics moves downstream from the very, very rich big enterprises; more people are actually going to be moving towards real-time streaming, again, because the data prep tools and the data pipelines have gotten easier to use, and I think the ROI on real-time streaming is obviously there. So those are three that didn't make the cut, but I thought deserved an honorable mention. >> Yeah, I'm glad you did. Several weeks ago, we did an analyst prediction roundtable, if you will, a cube session power panel with a number of data analysts, and, you know, streaming, real-time streaming, was top of mind. So glad you brought that up. Eric, as always, thank you very much. I appreciate the time you put in beforehand. I know it's been crazy, because you guys are wrapping up, you know, the last quarter survey in- >> Been a nuts three weeks for us. (laughing) >> job. I love the fact that you're doing the ETS survey now; I think it's quarterly now, right? Is that right? >> Yep. >> Yep. So that's phenomenal. >> Four times a year. I'll be happy to jump on with you when we get that done. I know you were really impressed with that last time. >> It's unbelievable. This is so much data at ETR. Okay. Hey, that's a wrap. Thanks again. >> Take care Dave. Good seeing you. >> All right, many thanks to our team here. Alex Myerson is on production and manages the podcast for us. Ken Schiffman as well is a critical component of our East Coast studio. Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our editor-in-chief. He's at siliconangle.com and he does just great editing for us. Thank you all. Remember, all these episodes are available as podcasts wherever you listen, and the podcast is doing great. Just search "Breaking Analysis podcast." Really appreciate you guys listening. I publish each week on wikibon.com and siliconangle.com, or you can email me directly if you want to get in touch, david.vellante@siliconangle.com. That's how I got all these. I really appreciate it. I went through every single one with a yellow highlighter. It took some time, (laughing) but I appreciate it. You can DM me at dvellante, or comment on our LinkedIn post, and please check out etr.ai. Its data is amazing, the best survey data in the enterprise tech business. This is Dave Vellante for theCube Insights, powered by ETR. Thanks for watching, and we'll see you next time on "Breaking Analysis." (upbeat music beginning) (upbeat music ending)

Published Date : Jan 29 2023

Breaking Analysis: ChatGPT Won't Give OpenAI First Mover Advantage


 

>> From theCUBE Studios in Palo Alto in Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> OpenAI The company, and ChatGPT have taken the world by storm. Microsoft reportedly is investing an additional 10 billion dollars into the company. But in our view, while the hype around ChatGPT is justified, we don't believe OpenAI will lock up the market with its first mover advantage. Rather, we believe that success in this market will be directly proportional to the quality and quantity of data that a technology company has at its disposal, and the compute power that it could deploy to run its system. Hello and welcome to this week's Wikibon CUBE insights, powered by ETR. In this Breaking Analysis, we unpack the excitement around ChatGPT, and debate the premise that the company's early entry into the space may not confer winner take all advantage to OpenAI. And to do so, we welcome CUBE collaborator, alum, Sarbjeet Johal, (chuckles) and John Furrier, co-host of the Cube. Great to see you Sarbjeet, John. Really appreciate you guys coming to the program. >> Great to be on. >> Okay, so what is ChatGPT? Well, actually we asked ChatGPT, what is ChatGPT? So here's what it said. ChatGPT is a state-of-the-art language model developed by OpenAI that can generate human-like text. It could be fine tuned for a variety of language tasks, such as conversation, summarization, and language translation. So I asked it, give it to me in 50 words or less. How did it do? Anything to add? >> Yeah, think it did good. It's large language model, like previous models, but it started applying the transformers sort of mechanism to focus on what prompt you have given it to itself. And then also the what answer it gave you in the first, sort of, one sentence or two sentences, and then introspect on itself, like what I have already said to you. And so just work on that. So it it's self sort of focus if you will. It does, the transformers help the large language models to do that. >> So to your point, it's a large language model, and GPT stands for generative pre-trained transformer. >> And if you put the definition back up there again, if you put it back up on the screen, let's see it back up. Okay, it actually missed the large, word large. So one of the problems with ChatGPT, it's not always accurate. It's actually a large language model, and it says state of the art language model. And if you look at Google, Google has dominated AI for many times and they're well known as being the best at this. And apparently Google has their own large language model, LLM, in play and have been holding it back to release because of backlash on the accuracy. Like just in that example you showed is a great point. They got almost right, but they missed the key word. >> You know what's funny about that John, is I had previously asked it in my prompt to give me it in less than a hundred words, and it was too long, I said I was too long for Breaking Analysis, and there it went into the fact that it's a large language model. So it largely, it gave me a really different answer the, for both times. So, but it's still pretty amazing for those of you who haven't played with it yet. And one of the best examples that I saw was Ben Charrington from This Week In ML AI podcast. And I stumbled on this thanks to Brian Gracely, who was listening to one of his Cloudcasts. 
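To unpack Sarbjeet's point above about the transformer mechanism "focusing" on the prompt: what he's describing is attention. Below is a toy sketch of scaled dot-product attention in NumPy. It's illustrative only; production systems like ChatGPT stack many multi-head attention layers with learned projection weights at vastly larger scale, so treat this as the core idea rather than how OpenAI actually implements it.

```python
# Toy sketch of scaled dot-product attention, the "focus" mechanism referenced above.
# Illustrative only; real large language models use stacked, multi-head attention
# with learned weights over billions of parameters.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V"""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                             # how strongly each token attends to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)              # softmax over the key positions
    return weights @ V, weights

# Three hypothetical token embeddings of width 4; in a real model these would come
# from learned projections of the prompt tokens.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))
_, attn = scaled_dot_product_attention(tokens, tokens, tokens)
print(np.round(attn, 2))   # each row sums to 1: the model's "focus" distribution over the prompt
```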
Basically what Ben did is he took, he prompted ChatGPT to interview ChatGPT, and he simply gave the system the prompts, and then he ran the questions and answers into this avatar builder and sped it up 2X so it didn't sound like a machine. And voila, it was amazing. So John is ChatGPT going to take over as a cube host? >> Well, I was thinking, we get the questions in advance sometimes from PR people. We should actually just plug it in ChatGPT, add it to our notes, and saying, "Is this good enough for you? Let's ask the real question." So I think, you know, I think there's a lot of heavy lifting that gets done. I think the ChatGPT is a phenomenal revolution. I think it highlights the use case. Like that example we showed earlier. It gets most of it right. So it's directionally correct and it feels like it's an answer, but it's not a hundred percent accurate. And I think that's where people are seeing value in it. Writing marketing, copy, brainstorming, guest list, gift list for somebody. Write me some lyrics to a song. Give me a thesis about healthcare policy in the United States. It'll do a bang up job, and then you got to go in and you can massage it. So we're going to do three quarters of the work. That's why plagiarism and schools are kind of freaking out. And that's why Microsoft put 10 billion in, because why wouldn't this be a feature of Word, or the OS to help it do stuff on behalf of the user. So linguistically it's a beautiful thing. You can input a string and get a good answer. It's not a search result. >> And we're going to get your take on on Microsoft and, but it kind of levels the playing- but ChatGPT writes better than I do, Sarbjeet, and I know you have some good examples too. You mentioned the Reed Hastings example. >> Yeah, I was listening to Reed Hastings fireside chat with ChatGPT, and the answers were coming as sort of voice, in the voice format. And it was amazing what, he was having very sort of philosophy kind of talk with the ChatGPT, the longer sentences, like he was going on, like, just like we are talking, he was talking for like almost two minutes and then ChatGPT was answering. It was not one sentence question, and then a lot of answers from ChatGPT and yeah, you're right. I, this is our ability. I've been thinking deep about this since yesterday, we talked about, like, we want to do this segment. The data is fed into the data model. It can be the current data as well, but I think that, like, models like ChatGPT, other companies will have those too. They can, they're democratizing the intelligence, but they're not creating intelligence yet, definitely yet I can say that. They will give you all the finite answers. Like, okay, how do you do this for loop in Java, versus, you know, C sharp, and as a programmer you can do that, in, but they can't tell you that, how to write a new algorithm or write a new search algorithm for you. They cannot create a secretive code for you to- >> Not yet. >> Have competitive advantage. >> Not yet, not yet. >> but you- >> Can Google do that today? >> No one really can. The reasoning side of the data is, we talked about at our Supercloud event, with Zhamak Dehghani who's was CEO of, now of Nextdata. This next wave of data intelligence is going to come from entrepreneurs that are probably cross discipline, computer science and some other discipline. But they're going to be new things, for example, data, metadata, and data. It's hard to do reasoning like a human being, so that needs more data to train itself. 
So I think the first gen of this training module for the large language model they have is a corpus of text. Lot of that's why blog posts are, but the facts are wrong and sometimes out of context, because that contextual reasoning takes time, it takes intelligence. So machines need to become intelligent, and so therefore they need to be trained. So you're going to start to see, I think, a lot of acceleration on training the data sets. And again, it's only as good as the data you can get. And again, proprietary data sets will be a huge winner. Anyone who's got a large corpus of content, proprietary content like theCUBE or SiliconANGLE as a publisher will benefit from this. Large FinTech companies, anyone with large proprietary data will probably be a big winner on this generative AI wave, because it just, it will eat that up, and turn that back into something better. So I think there's going to be a lot of interesting things to look at here. And certainly productivity's going to be off the charts for vanilla and the internet is going to get swarmed with vanilla content. So if you're in the content business, and you're an original content producer of any kind, you're going to be not vanilla, so you're going to be better. So I think there's so much at play Dave (indistinct). >> I think the playing field has been risen, so we- >> Risen and leveled? >> Yeah, and leveled to certain extent. So it's now like that few people as consumers, as consumers of AI, we will have a advantage and others cannot have that advantage. So it will be democratized. That's, I'm sure about that. But if you take the example of calculator, when the calculator came in, and a lot of people are, "Oh, people can't do math anymore because calculator is there." right? So it's a similar sort of moment, just like a calculator for the next level. But, again- >> I see it more like open source, Sarbjeet, because like if you think about what ChatGPT's doing, you do a query and it comes from somewhere the value of a post from ChatGPT is just a reuse of AI. The original content accent will be come from a human. So if I lay out a paragraph from ChatGPT, did some heavy lifting on some facts, I check the facts, save me about maybe- >> Yeah, it's productive. >> An hour writing, and then I write a killer two, three sentences of, like, sharp original thinking or critical analysis. I then took that body of work, open source content, and then laid something on top of it. >> And Sarbjeet's example is a good one, because like if the calculator kids don't do math as well anymore, the slide rule, remember we had slide rules as kids, remember we first started using Waze, you know, we were this minority and you had an advantage over other drivers. Now Waze is like, you know, social traffic, you know, navigation, everybody had, you know- >> All the back roads are crowded. >> They're car crowded. (group laughs) Exactly. All right, let's, let's move on. What about this notion that futurist Ray Amara put forth and really Amara's Law that we're showing here, it's, the law is we, you know, "We tend to overestimate the effect of technology in the short run and underestimate it in the long run." Is that the case, do you think, with ChatGPT? What do you think Sarbjeet? >> I think that's true actually. There's a lot of, >> We don't debate this. >> There's a lot of awe, like when people see the results from ChatGPT, they say what, what the heck? Like, it can do this? 
But then if you use it more and more and more, and I ask the set of similar question, not the same question, and it gives you like same answer. It's like reading from the same bucket of text in, the interior read (indistinct) where the ChatGPT, you will see that in some couple of segments. It's very, it sounds so boring that the ChatGPT is coming out the same two sentences every time. So it is kind of good, but it's not as good as people think it is right now. But we will have, go through this, you know, hype sort of cycle and get realistic with it. And then in the long term, I think it's a great thing in the short term, it's not something which will (indistinct) >> What's your counter point? You're saying it's not. >> I, no I think the question was, it's hyped up in the short term and not it's underestimated long term. That's what I think what he said, quote. >> Yes, yeah. That's what he said. >> Okay, I think that's wrong with this, because this is a unique, ChatGPT is a unique kind of impact and it's very generational. People have been comparing it, I have been comparing to the internet, like the web, web browser Mosaic and Netscape, right, Navigator. I mean, I clearly still remember the days seeing Navigator for the first time, wow. And there weren't not many sites you could go to, everyone typed in, you know, cars.com, you know. >> That (indistinct) wasn't that overestimated, the overhyped at the beginning and underestimated. >> No, it was, it was underestimated long run, people thought. >> But that Amara's law. >> That's what is. >> No, they said overestimated? >> Overestimated near term underestimated- overhyped near term, underestimated long term. I got, right I mean? >> Well, I, yeah okay, so I would then agree, okay then- >> We were off the charts about the internet in the early days, and it actually exceeded our expectations. >> Well there were people who were, like, poo-pooing it early on. So when the browser came out, people were like, "Oh, the web's a toy for kids." I mean, in 1995 the web was a joke, right? So '96, you had online populations growing, so you had structural changes going on around the browser, internet population. And then that replaced other things, direct mail, other business activities that were once analog then went to the web, kind of read only as you, as we always talk about. So I think that's a moment where the hype long term, the smart money, and the smart industry experts all get the long term. And in this case, there's more poo-pooing in the short term. "Ah, it's not a big deal, it's just AI." I've heard many people poo-pooing ChatGPT, and a lot of smart people saying, "No this is next gen, this is different and it's only going to get better." So I think people are estimating a big long game on this one. >> So you're saying it's bifurcated. There's those who say- >> Yes. >> Okay, all right, let's get to the heart of the premise, and possibly the debate for today's episode. Will OpenAI's early entry into the market confer sustainable competitive advantage for the company. And if you look at the history of tech, the technology industry, it's kind of littered with first mover failures. Altair, IBM, Tandy, Commodore, they and Apple even, they were really early in the PC game. They took a backseat to Dell who came in the scene years later with a better business model. Netscape, you were just talking about, was all the rage in Silicon Valley, with the first browser, drove up all the housing prices out here. 
AltaVista was the first search engine to really, you know, index full text. >> Owned by Dell, I mean DEC. >> Owned by Digital. >> Yeah, Digital Equipment >> Compaq bought it. And of course as an aside, Digital, they wanted to showcase their hardware, right? Their super computer stuff. And then so Friendster and MySpace, they came before Facebook. The iPhone certainly wasn't the first mobile device. So lots of failed examples, but there are some recent successes like AWS and cloud. >> You could say smartphone. So I mean. >> Well I know, and you can, we can parse this so we'll debate it. Now Twitter, you could argue, had first mover advantage. You kind of gave me that one John. Bitcoin and crypto clearly had first mover advantage, and sustaining that. Guys, will OpenAI make it to the list on the right with ChatGPT, what do you think? >> I think categorically as a company, it probably won't, but as a category, I think what they're doing will, so OpenAI as a company, they get funding, there's power dynamics involved. Microsoft put a billion dollars in early on, then they just pony it up. Now they're reporting 10 billion more. So, like, if the browsers, Microsoft had competitive advantage over Netscape, and used monopoly power, and convicted by the Department of Justice for killing Netscape with their monopoly, Netscape should have had won that battle, but Microsoft killed it. In this case, Microsoft's not killing it, they're buying into it. So I think the embrace extend Microsoft power here makes OpenAI vulnerable for that one vendor solution. So the AI as a company might not make the list, but the category of what this is, large language model AI, is probably will be on the right hand side. >> Okay, we're going to come back to the government intervention and maybe do some comparisons, but what are your thoughts on this premise here? That, it will basically set- put forth the premise that it, that ChatGPT, its early entry into the market will not confer competitive advantage to >> For OpenAI. >> To Open- Yeah, do you agree with that? >> I agree with that actually. It, because Google has been at it, and they have been holding back, as John said because of the scrutiny from the Fed, right, so- >> And privacy too. >> And the privacy and the accuracy as well. But I think Sam Altman and the company on those guys, right? They have put this in a hasty way out there, you know, because it makes mistakes, and there are a lot of questions around the, sort of, where the content is coming from. You saw that as your example, it just stole the content, and without your permission, you know? >> Yeah. So as quick this aside- >> And it codes on people's behalf and the, those codes are wrong. So there's a lot of, sort of, false information it's putting out there. So it's a very vulnerable thing to do what Sam Altman- >> So even though it'll get better, others will compete. >> So look, just side note, a term which Reid Hoffman used a little bit. Like he said, it's experimental launch, like, you know, it's- >> It's pretty damn good. >> It is clever because according to Sam- >> It's more than clever. It's good. >> It's awesome, if you haven't used it. I mean you write- you read what it writes and you go, "This thing writes so well, it writes so much better than you." >> The human emotion drives that too. I think that's a big thing. But- >> I Want to add one more- >> Make your last point. >> Last one. Okay. So, but he's still holding back. He's conducting quite a few interviews. 
If you want to get the gist of it, there's an interview with StrictlyVC interview from yesterday with Sam Altman. Listen to that one it's an eye opening what they want- where they want to take it. But my last one I want to make it on this point is that Satya Nadella yesterday did an interview with Wall Street Journal. I think he was doing- >> You were not impressed. >> I was not impressed because he was pushing it too much. So Sam Altman's holding back so there's less backlash. >> Got 10 billion reasons to push. >> I think he's almost- >> Microsoft just laid off 10000 people. Hey ChatGPT, find me a job. You know like. (group laughs) >> He's overselling it to an extent that I think it will backfire on Microsoft. And he's over promising a lot of stuff right now, I think. I don't know why he's very jittery about all these things. And he did the same thing during Ignite as well. So he said, "Oh, this AI will write code for you and this and that." Like you called him out- >> The hyperbole- >> During your- >> from Satya Nadella, he's got a lot of hyperbole. (group talks over each other) >> All right, Let's, go ahead. >> Well, can I weigh in on the whole- >> Yeah, sure. >> Microsoft thing on whether OpenAI, here's the take on this. I think it's more like the browser moment to me, because I could relate to that experience with ChatG, personally, emotionally, when I saw that, and I remember vividly- >> You mean that aha moment (indistinct). >> Like this is obviously the future. Anything else in the old world is dead, website's going to be everywhere. It was just instant dot connection for me. And a lot of other smart people who saw this. Lot of people by the way, didn't see it. Someone said the web's a toy. At the company I was worked for at the time, Hewlett Packard, they like, they could have been in, they had invented HTML, and so like all this stuff was, like, they just passed, the web was just being passed over. But at that time, the browser got better, more websites came on board. So the structural advantage there was online web usage was growing, online user population. So that was growing exponentially with the rise of the Netscape browser. So OpenAI could stay on the right side of your list as durable, if they leverage the category that they're creating, can get the scale. And if they can get the scale, just like Twitter, that failed so many times that they still hung around. So it was a product that was always successful, right? So I mean, it should have- >> You're right, it was terrible, we kept coming back. >> The fail whale, but it still grew. So OpenAI has that moment. They could do it if Microsoft doesn't meddle too much with too much power as a vendor. They could be the Netscape Navigator, without the anti-competitive behavior of somebody else. So to me, they have the pole position. So they have an opportunity. So if not, if they don't execute, then there's opportunity. There's not a lot of barriers to entry, vis-a-vis say the CapEx of say a cloud company like AWS. You can't replicate that, Many have tried, but I think you can replicate OpenAI. >> And we're going to talk about that. Okay, so real quick, I want to bring in some ETR data. This isn't an ETR heavy segment, only because this so new, you know, they haven't coverage yet, but they do cover AI. So basically what we're seeing here is a slide on the vertical axis's net score, which is a measure of spending momentum, and in the horizontal axis's is presence in the dataset. Think of it as, like, market presence. 
And in the insert right there, you can see how the dots are plotted, the two columns. And so, but the key point here that we want to make, there's a bunch of companies on the left, is he like, you know, DataRobot and C3 AI and some others, but the big whales, Google, AWS, Microsoft, are really dominant in this market. So that's really the key takeaway that, can we- >> I notice IBM is way low. >> Yeah, IBM's low, and actually bring that back up and you, but then you see Oracle who actually is injecting. So I guess that's the other point is, you're not necessarily going to go buy AI, and you know, build your own AI, you're going to, it's going to be there and, it, Salesforce is going to embed it into its platform, the SaaS companies, and you're going to purchase AI. You're not necessarily going to build it. But some companies obviously are. >> I mean to quote IBM's general manager Rob Thomas, "You can't have AI with IA." information architecture and David Flynn- >> You can't Have AI without IA >> without, you can't have AI without IA. You can't have, if you have an Information Architecture, you then can power AI. Yesterday David Flynn, with Hammersmith, was on our Supercloud. He was pointing out that the relationship of storage, where you store things, also impacts the data and stressablity, and Zhamak from Nextdata, she was pointing out that same thing. So the data problem factors into all this too, Dave. >> So you got the big cloud and internet giants, they're all poised to go after this opportunity. Microsoft is investing up to 10 billion. Google's code red, which was, you know, the headline in the New York Times. Of course Apple is there and several alternatives in the market today. Guys like Chinchilla, Bloom, and there's a company Jasper and several others, and then Lena Khan looms large and the government's around the world, EU, US, China, all taking notice before the market really is coalesced around a single player. You know, John, you mentioned Netscape, they kind of really, the US government was way late to that game. It was kind of game over. And Netscape, I remember Barksdale was like, "Eh, we're going to be selling software in the enterprise anyway." and then, pshew, the company just dissipated. So, but it looks like the US government, especially with Lena Khan, they're changing the definition of antitrust and what the cause is to go after people, and they're really much more aggressive. It's only what, two years ago that (indistinct). >> Yeah, the problem I have with the federal oversight is this, they're always like late to the game, and they're slow to catch up. So in other words, they're working on stuff that should have been solved a year and a half, two years ago around some of the social networks hiding behind some of the rules around open web back in the days, and I think- >> But they're like 15 years late to that. >> Yeah, and now they got this new thing on top of it. So like, I just worry about them getting their fingers. >> But there's only two years, you know, OpenAI. >> No, but the thing (indistinct). >> No, they're still fighting other battles. But the problem with government is that they're going to label Big Tech as like a evil thing like Pharma, it's like smoke- >> You know Lena Khan wants to kill Big Tech, there's no question. >> So I think Big Tech is getting a very seriously bad rap. And I think anything that the government does that shades darkness on tech, is politically motivated in most cases. 
You can almost look at everything, and my 80 20 rule is in play here. 80% of the government activity around tech is bullshit, it's politically motivated, and the 20% is probably relevant, but off the mark and not organized. >> Well market forces have always been the determining factor of success. The governments, you know, have been pretty much failed. I mean you look at IBM's antitrust, that, what did that do? The market ultimately beat them. You look at Microsoft back in the day, right? Windows 95 was peaking, the government came in. But you know, like you said, they missed the web, right, and >> so they were hanging on- >> There's nobody in government >> to Windows. >> that actually knows- >> And so, you, I think you're right. It's market forces that are going to determine this. But Sarbjeet, what do you make of Microsoft's big bet here, you weren't impressed with with Nadella. How do you think, where are they going to apply it? Is this going to be a Hail Mary for Bing, or is it going to be applied elsewhere? What do you think. >> They are saying that they will, sort of, weave this into their products, office products, productivity and also to write code as well, developer productivity as well. That's a big play for them. But coming back to your antitrust sort of comments, right? I believe the, your comment was like, oh, fed was late 10 years or 15 years earlier, but now they're two years. But things are moving very fast now as compared to they used to move. >> So two years is like 10 Years. >> Yeah, two years is like 10 years. Just want to make that point. (Dave laughs) This thing is going like wildfire. Any new tech which comes in that I think they're going against distribution channels. Lina Khan has commented time and again that the marketplace model is that she wants to have some grip on. Cloud marketplaces are a kind of monopolistic kind of way. >> I don't, I don't see this, I don't see a Chat AI. >> You told me it's not Bing, you had an interesting comment. >> No, no. First of all, this is great from Microsoft. If you're Microsoft- >> Why? >> Because Microsoft doesn't have the AI chops that Google has, right? Google is got so much core competency on how they run their search, how they run their backends, their cloud, even though they don't get a lot of cloud market share in the enterprise, they got a kick ass cloud cause they needed one. >> Totally. >> They've invented SRE. I mean Google's development and engineering chops are off the scales, right? Amazon's got some good chops, but Google's got like 10 times more chops than AWS in my opinion. Cloud's a whole different story. Microsoft gets AI, they get a playbook, they get a product they can render into, the not only Bing, productivity software, helping people write papers, PowerPoint, also don't forget the cloud AI can super help. We had this conversation on our Supercloud event, where AI's going to do a lot of the heavy lifting around understanding observability and managing service meshes, to managing microservices, to turning on and off applications, and or maybe writing code in real time. So there's a plethora of use cases for Microsoft to deploy this. combined with their R and D budgets, they can then turbocharge more research, build on it. So I think this gives them a car in the game, Google may have pole position with AI, but this puts Microsoft right in the game, and they already have a lot of stuff going on. But this just, I mean everything gets lifted up. Security, cloud, productivity suite, everything. 
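To make John's point about rendering this into products concrete, here is a minimal sketch of what calling a hosted completion endpoint from application code might look like. The endpoint shape follows OpenAI's public completions API, but the model name, prompt, and helper function are illustrative assumptions, not a description of how Microsoft or anyone else actually wires it in.

```python
# Minimal sketch: calling a hosted large language model from application code.
# The endpoint is OpenAI's public /v1/completions; the model, prompt, and
# helper are illustrative assumptions, not any vendor's real integration.
import os
import requests

API_URL = "https://api.openai.com/v1/completions"
API_KEY = os.environ["OPENAI_API_KEY"]  # assumed to be set by the caller


def draft_text(prompt: str, max_tokens: int = 256) -> str:
    """Send a prompt to the completion endpoint and return the generated text."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "text-davinci-003",  # model choice is an assumption
            "prompt": prompt,
            "max_tokens": max_tokens,
            "temperature": 0.7,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["text"].strip()


if __name__ == "__main__":
    # The "productivity" use case from the discussion: draft copy that a human
    # then reviews and massages, since the output is only directionally correct.
    print(draft_text("Write a three-bullet outline for a slide on cloud cost optimization."))
```

The point of the sketch is how thin the integration layer is: a product team can add generation features with a single HTTP call, which is why folding this into Bing, Word, or cloud tooling is more a packaging question than a research question.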
>> What's under the hood at Google, and why aren't they talking about it? I mean they got to be freaked out about this. No? Or do they have kind of a magic bullet? >> I think they have the, they have the chops definitely. Magic bullet, I don't know where they are, as compared to the ChatGPT 3 or 4 models. Like they, but if you look at the online sort of activity and the videos put out there from Google folks, Google technology folks, that's account you should look at if you are looking there, they have put all these distinctions what ChatGPT 3 has used, they have been talking about for a while as well. So it's not like it's a secret thing that you cannot replicate. As you said earlier, like in the beginning of this segment, that anybody who has more data and the capacity to process that data, which Google has both, I think they will win this. >> Obviously living in Palo Alto where the Google founders are, and Google's headquarters next town over we have- >> We're so close to them. We have inside information on some of the thinking and that hasn't been reported by any outlet yet. And that is, is that, from what I'm hearing from my sources, is Google has it, they don't want to release it for many reasons. One is it might screw up their search monopoly, one, two, they're worried about the accuracy, 'cause Google will get sued. 'Cause a lot of people are jamming on this ChatGPT as, "Oh it does everything for me." when it's clearly not a hundred percent accurate all the time. >> So Lina Kahn is looming, and so Google's like be careful. >> Yeah so Google's just like, this is the third, could be a third rail. >> But the first thing you said is a concern. >> Well no. >> The disruptive (indistinct) >> What they will do is do a Waymo kind of thing, where they spin out a separate company. >> They're doing that. >> The discussions happening, they're going to spin out the separate company and put it over there, and saying, "This is AI, got search over there, don't touch that search, 'cause that's where all the revenue is." (chuckles) >> So, okay, so that's how they deal with the Clay Christensen dilemma. What's the business model here? I mean it's not advertising, right? Is it to charge you for a query? What, how do you make money at this? >> It's a good question, I mean my thinking is, first of all, it's cool to type stuff in and see a paper get written, or write a blog post, or gimme a marketing slogan for this or that or write some code. I think the API side of the business will be critical. And I think Howie Xu, I know you're going to reference some of his comments yesterday on Supercloud, I think this brings a whole 'nother user interface into technology consumption. I think the business model, not yet clear, but it will probably be some sort of either API and developer environment or just a straight up free consumer product, with some sort of freemium backend thing for business. >> And he was saying too, it's natural language is the way in which you're going to interact with these systems. >> I think it's APIs, it's APIs, APIs, APIs, because these people who are cooking up these models, and it takes a lot of compute power to train these and to, for inference as well. Somebody did the analysis on the how many cents a Google search costs to Google, and how many cents the ChatGPT query costs. It's, you know, 100x or something on that. You can take a look at that. >> A 100x on which side? >> You're saying two orders of magnitude more expensive for ChatGPT >> Much more, yeah. >> Than for Google. 
>> It's very expensive. >> So Google's got the data, they got the infrastructure and they got, you're saying they got the cost (indistinct) >> No actually it's a simple query as well, but they are trying to put together the answers, and they're going through a lot more data versus index data already, you know. >> Let me clarify, you're saying that Google's version of ChatGPT is more efficient? >> No, I'm, I'm saying Google search results. >> Ah, search results. >> What are used to today, but cheaper. >> But that, does that, is that going to confer advantage to Google's large language (indistinct)? >> It will, because there were deep science (indistinct). >> Google, I don't think Google search is doing a large language model on their search, it's keyword search. You know, what's the weather in Santa Cruz? Or how, what's the weather going to be? Or you know, how do I find this? Now they have done a smart job of doing some things with those queries, auto complete, re direct navigation. But it's, it's not entity. It's not like, "Hey, what's Dave Vellante thinking this week in Breaking Analysis?" ChatGPT might get that, because it'll get your Breaking Analysis, it'll synthesize it. There'll be some, maybe some clips. It'll be like, you know, I mean. >> Well I got to tell you, I asked ChatGPT to, like, I said, I'm going to enter a transcript of a discussion I had with Nir Zuk, the CTO of Palo Alto Networks, And I want you to write a 750 word blog. I never input the transcript. It wrote a 750 word blog. It attributed quotes to him, and it just pulled a bunch of stuff that, and said, okay, here it is. It talked about Supercloud, it defined Supercloud. >> It's made, it makes you- >> Wow, But it was a big lie. It was fraudulent, but still, blew me away. >> Again, vanilla content and non accurate content. So we are going to see a surge of misinformation on steroids, but I call it the vanilla content. Wow, that's just so boring, (indistinct). >> There's so many dangers. >> Make your point, cause we got to, almost out of time. >> Okay, so the consumption, like how do you consume this thing. As humans, we are consuming it and we are, like, getting a nicely, like, surprisingly shocked, you know, wow, that's cool. It's going to increase productivity and all that stuff, right? And on the danger side as well, the bad actors can take hold of it and create fake content and we have the fake sort of intelligence, if you go out there. So that's one thing. The second thing is, we are as humans are consuming this as language. Like we read that, we listen to it, whatever format we consume that is, but the ultimate usage of that will be when the machines can take that output from likes of ChatGPT, and do actions based on that. The robots can work, the robot can paint your house, we were talking about, right? Right now we can't do that. >> Data apps. >> So the data has to be ingested by the machines. It has to be digestible by the machines. And the machines cannot digest unorganized data right now, we will get better on the ingestion side as well. So we are getting better. >> Data, reasoning, insights, and action. >> I like that mall, paint my house. >> So, okay- >> By the way, that means drones that'll come in. Spray painting your house. >> Hey, it wasn't too long ago that robots couldn't climb stairs, as I like to point out. Okay, and of course it's no surprise the venture capitalists are lining up to eat at the trough, as I'd like to say. 
Let's hear, you'd referenced this earlier, John, let's hear what AI expert Howie Xu said at the Supercloud event, about what it takes to clone ChatGPT. Please, play the clip. >> So one of the VCs actually asked me the other day, right? "Hey, how much money do I need to spend, invest to get a, you know, another shot to the openAI sort of the level." You know, I did a (indistinct) >> Line up. >> A hundred million dollar is the order of magnitude that I came up with, right? You know, not a billion, not 10 million, right? So a hundred- >> Guys a hundred million dollars, that's an astoundingly low figure. What do you make of it? >> I was in an interview with, I was interviewing, I think he said hundred million or so, but in the hundreds of millions, not a billion right? >> You were trying to get him up, you were like "Hundreds of millions." >> Well I think, I- >> He's like, eh, not 10, not a billion. >> Well first of all, Howie Xu's an expert machine learning. He's at Zscaler, he's a machine learning AI guy. But he comes from VMware, he's got his technology pedigrees really off the chart. Great friend of theCUBE and kind of like a CUBE analyst for us. And he's smart. He's right. I think the barriers to entry from a dollar standpoint are lower than say the CapEx required to compete with AWS. Clearly, the CapEx spending to build all the tech for the run a cloud. >> And you don't need a huge sales force. >> And in some case apps too, it's the same thing. But I think it's not that hard. >> But am I right about that? You don't need a huge sales force either. It's, what, you know >> If the product's good, it will sell, this is a new era. The better mouse trap will win. This is the new economics in software, right? So- >> Because you look at the amount of money Lacework, and Snyk, Snowflake, Databrooks. Look at the amount of money they've raised. I mean it's like a billion dollars before they get to IPO or more. 'Cause they need promotion, they need go to market. You don't need (indistinct) >> OpenAI's been working on this for multiple five years plus it's, hasn't, wasn't born yesterday. Took a lot of years to get going. And Sam is depositioning all the success, because he's trying to manage expectations, To your point Sarbjeet, earlier. It's like, yeah, he's trying to "Whoa, whoa, settle down everybody, (Dave laughs) it's not that great." because he doesn't want to fall into that, you know, hero and then get taken down, so. >> It may take a 100 million or 150 or 200 million to train the model. But to, for the inference to, yeah to for the inference machine, It will take a lot more, I believe. >> Give it, so imagine, >> Because- >> Go ahead, sorry. >> Go ahead. But because it consumes a lot more compute cycles and it's certain level of storage and everything, right, which they already have. So I think to compute is different. To frame the model is a different cost. But to run the business is different, because I think 100 million can go into just fighting the Fed. >> Well there's a flywheel too. >> Oh that's (indistinct) >> (indistinct) >> We are running the business, right? >> It's an interesting number, but it's also kind of, like, context to it. So here, a hundred million spend it, you get there, but you got to factor in the fact that the ways companies win these days is critical mass scale, hitting a flywheel. If they can keep that flywheel of the value that they got going on and get better, you can almost imagine a marketplace where, hey, we have proprietary data, we're SiliconANGLE in theCUBE. 
We have proprietary content, CUBE videos, transcripts. Well wouldn't it be great if someone in a marketplace could sell a module for us, right? We buy that, Amazon's thing and things like that. So if they can get a marketplace going where you can apply to data sets that may be proprietary, you can start to see this become bigger. And so I think the key barriers to entry is going to be success. I'll give you an example, Reddit. Reddit is successful and it's hard to copy, not because of the software. >> They built the moat. >> Because you can, buy Reddit open source software and try To compete. >> They built the moat with their community. >> Their community, their scale, their user expectation. Twitter, we referenced earlier, that thing should have gone under the first two years, but there was such a great emotional product. People would tolerate the fail whale. And then, you know, well that was a whole 'nother thing. >> Then a plane landed in (John laughs) the Hudson and it was over. >> I think verticals, a lot of verticals will build applications using these models like for lawyers, for doctors, for scientists, for content creators, for- >> So you'll have many hundreds of millions of dollars investments that are going to be seeping out. If, all right, we got to wrap, if you had to put odds on it that that OpenAI is going to be the leader, maybe not a winner take all leader, but like you look at like Amazon and cloud, they're not winner take all, these aren't necessarily winner take all markets. It's not necessarily a zero sum game, but let's call it winner take most. What odds would you give that open AI 10 years from now will be in that position. >> If I'm 0 to 10 kind of thing? >> Yeah, it's like horse race, 3 to 1, 2 to 1, even money, 10 to 1, 50 to 1. >> Maybe 2 to 1, >> 2 to 1, that's pretty low odds. That's basically saying they're the favorite, they're the front runner. Would you agree with that? >> I'd say 4 to 1. >> Yeah, I was going to say I'm like a 5 to 1, 7 to 1 type of person, 'cause I'm a skeptic with, you know, there's so much competition, but- >> I think they're definitely the leader. I mean you got to say, I mean. >> Oh there's no question. There's no question about it. >> The question is can they execute? >> They're not Friendster, is what you're saying. >> They're not Friendster and they're more like Twitter and Reddit where they have momentum. If they can execute on the product side, and if they don't stumble on that, they will continue to have the lead. >> If they say stay neutral, as Sam is, has been saying, that, hey, Microsoft is one of our partners, if you look at their company model, how they have structured the company, then they're going to pay back to the investors, like Microsoft is the biggest one, up to certain, like by certain number of years, they're going to pay back from all the money they make, and after that, they're going to give the money back to the public, to the, I don't know who they give it to, like non-profit or something. (indistinct) >> Okay, the odds are dropping. (group talks over each other) That's a good point though >> Actually they might have done that to fend off the criticism of this. But it's really interesting to see the model they have adopted. 
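Looping back to Howie Xu's hundred-million-dollar figure for a moment, a rough back-of-envelope helps frame what the raw compute slice of a number like that might look like. Every input below is an assumption (a GPT-3-class model, a current accelerator's published peak throughput, guessed utilization and rental pricing), so treat the output as an order-of-magnitude sanity check, not anyone's actual budget.

```python
# Back-of-envelope training-compute estimate; every input is an assumption.
# Uses the common ~6 * parameters * tokens approximation for training FLOPs.

params = 175e9                 # GPT-3-class parameter count (assumption)
tokens = 300e9                 # training tokens (assumption)
train_flops = 6 * params * tokens          # ~3.15e23 floating point operations

peak_flops_per_gpu = 312e12    # A100 BF16 peak throughput, per the spec sheet
utilization = 0.30             # assumed fraction of peak actually sustained
price_per_gpu_hour = 2.00      # assumed cloud rental price in dollars

gpu_hours = train_flops / (peak_flops_per_gpu * utilization) / 3600
compute_cost = gpu_hours * price_per_gpu_hour

print(f"GPU-hours: {gpu_hours:,.0f}")             # roughly 0.9 million GPU-hours
print(f"Raw compute cost: ${compute_cost:,.0f}")  # low single-digit millions
```

On these assumptions, a single training run pencils out to low single-digit millions of dollars in raw GPU time, which suggests most of a program measured in the hundreds of millions goes to everything around that final run: experimentation, failed runs, data acquisition, fine-tuning and alignment work, people, and above all serving inference at scale, the ongoing cost Sarbjeet flags.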
>> The wildcard in all this, My last word on this is that, if there's a developer shift in how developers and data can come together again, we have conferences around the future of data, Supercloud and meshs versus, you know, how the data world, coding with data, how that evolves will also dictate, 'cause a wild card could be a shift in the landscape around how developers are using either machine learning or AI like techniques to code into their apps, so. >> That's fantastic insight. I can't thank you enough for your time, on the heels of Supercloud 2, really appreciate it. All right, thanks to John and Sarbjeet for the outstanding conversation today. Special thanks to the Palo Alto studio team. My goodness, Anderson, this great backdrop. You guys got it all out here, I'm jealous. And Noah, really appreciate it, Chuck, Andrew Frick and Cameron, Andrew Frick switching, Cameron on the video lake, great job. And Alex Myerson, he's on production, manages the podcast for us, Ken Schiffman as well. Kristen Martin and Cheryl Knight help get the word out on social media and our newsletters. Rob Hof is our editor-in-chief over at SiliconANGLE, does some great editing, thanks to all. Remember, all these episodes are available as podcasts. All you got to do is search Breaking Analysis podcast, wherever you listen. Publish each week on wikibon.com and siliconangle.com. Want to get in touch, email me directly, david.vellante@siliconangle.com or DM me at dvellante, or comment on our LinkedIn post. And by all means, check out etr.ai. They got really great survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, We'll see you next time on Breaking Analysis. (electronic music)

Published Date : Jan 20 2023

Breaking Analysis: Supercloud2 Explores Cloud Practitioner Realities & the Future of Data Apps


 

>> Narrator: From theCUBE Studios in Palo Alto and Boston bringing you data-driven insights from theCUBE and ETR. This is breaking analysis with Dave Vellante >> Enterprise tech practitioners, like most of us they want to make their lives easier so they can focus on delivering more value to their businesses. And to do so, they want to tap best of breed services in the public cloud, but at the same time connect their on-prem intellectual property to emerging applications which drive top line revenue and bottom line profits. But creating a consistent experience across clouds and on-prem estates has been an elusive capability for most organizations, forcing trade-offs and injecting friction into the system. The need to create seamless experiences is clear and the technology industry is starting to respond with platforms, architectures, and visions of what we've called the Supercloud. Hello and welcome to this week's Wikibon Cube Insights powered by ETR. In this breaking analysis we give you a preview of Supercloud 2, the second event of its kind that we've had on the topic. Yes, folks that's right Supercloud 2 is here. As of this recording, it's just about four days away 33 guests, 21 sessions, combining live discussions and fireside chats from theCUBE's Palo Alto Studio with prerecorded conversations on the future of cloud and data. You can register for free at supercloud.world. And we are super excited about the Supercloud 2 lineup of guests whereas Supercloud 22 in August, was all about refining the definition of Supercloud testing its technical feasibility and understanding various deployment models. Supercloud 2 features practitioners, technologists and analysts discussing what customers need with real-world examples of Supercloud and will expose thinking around a new breed of cross-cloud apps, data apps, if you will that change the way machines and humans interact with each other. Now the example we'd use if you think about applications today, say a CRM system, sales reps, what are they doing? They're entering data into opportunities they're choosing products they're importing contacts, et cetera. And sure the machine can then take all that data and spit out a forecast by rep, by region, by product, et cetera. But today's applications are largely about filling in forms and or codifying processes. In the future, the Supercloud community sees a new breed of applications emerging where data resides on different clouds, in different data storages, databases, Lakehouse, et cetera. And the machine uses AI to inspect the e-commerce system the inventory data, supply chain information and other systems, and puts together a plan without any human intervention whatsoever. Think about a system that orchestrates people, places and things like an Uber for business. So at Supercloud 2, you'll hear about this vision along with some of today's challenges facing practitioners. Zhamak Dehghani, the founder of Data Mesh is a headliner. Kit Colbert also is headlining. He laid out at the first Supercloud an initial architecture for what that's going to look like. That was last August. And he's going to present his most current thinking on the topic. Veronika Durgin of Sachs will be featured and talk about data sharing across clouds and you know what she needs in the future. One of the main highlights of Supercloud 2 is a dive into Walmart's Supercloud. Other featured practitioners include Western Union Ionis Pharmaceuticals, Warner Media. 
We've got deep, deep technology dives with folks like Bob Muglia, David Flynn Tristan Handy of DBT Labs, Nir Zuk, the founder of Palo Alto Networks focused on security. Thomas Hazel, who's going to talk about a new type of database for Supercloud. It's several analysts including Keith Townsend Maribel Lopez, George Gilbert, Sanjeev Mohan and so many more guests, we don't have time to list them all. They're all up on supercloud.world with a full agenda, so you can check that out. Now let's take a look at some of the things that we're exploring in more detail starting with the Walmart Cloud native platform, they call it WCNP. We definitely see this as a Supercloud and we dig into it with Jack Greenfield. He's the head of architecture at Walmart. Here's a quote from Jack. "WCNP is an implementation of Kubernetes for the Walmart ecosystem. We've taken Kubernetes off the shelf as open source." By the way, they do the same thing with OpenStack. "And we have integrated it with a number of foundational services that provide other aspects of our computational environment. Kubernetes off the shelf doesn't do everything." And so what Walmart chose to do, they took a do-it-yourself approach to build a Supercloud for a variety of reasons that Jack will explain, along with Walmart's so-called triplet architecture connecting on-prem, Azure and GCP. No surprise, there's no Amazon at Walmart for obvious reasons. And what they do is they create a common experience for devs across clouds. Jack is going to talk about how Walmart is evolving its Supercloud in the future. You don't want to miss that. Now, next, let's take a look at how Veronica Durgin of SAKS thinks about data sharing across clouds. Data sharing we think is a potential killer use case for Supercloud. In fact, let's hear it in Veronica's own words. Please play the clip. >> How do we talk to each other? And more importantly, how do we data share? You know, I work with data, you know this is what I do. So if you know I want to get data from a company that's using, say Google, how do we share it in a smooth way where it doesn't have to be this crazy I don't know, SFTP file moving? So that's where I think Supercloud comes to me in my mind, is like practical applications. How do we create that mesh, that network that we can easily share data with each other? >> Now data mesh is a possible architectural approach that will enable more facile data sharing and the monetization of data products. You'll hear Zhamak Dehghani live in studio talking about what standards are missing to make this vision a reality across the Supercloud. Now one of the other things that we're really excited about is digging deeper into the right approach for Supercloud adoption. And we're going to share a preview of a debate that's going on right now in the community. Bob Muglia, former CEO of Snowflake and Microsoft Exec was kind enough to spend some time looking at the community's supercloud definition and he felt that it needed to be simplified. So in near real time he came up with the following definition that we're showing here. I'll read it. "A Supercloud is a platform that provides programmatically consistent services hosted on heterogeneous cloud providers." So not only did Bob simplify the initial definition he's stressed that the Supercloud is a platform versus an architecture implying that the platform provider eg Snowflake, VMware, Databricks, Cohesity, et cetera is responsible for determining the architecture. 
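To ground Bob's wording, here is a minimal sketch of what "programmatically consistent services hosted on heterogeneous cloud providers" could look like at the code level: one interface, with cloud-specific implementations hidden behind it. The class and method names are invented for illustration, and real platforms such as Snowflake or VMware obviously do this very differently; only the underlying SDK calls (boto3 for S3, google-cloud-storage for GCS) are real.

```python
# Sketch of a "programmatically consistent" object-store service across clouds.
# The abstraction and its names are illustrative assumptions; only the
# underlying SDK calls (boto3, google-cloud-storage) are real.
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """One consistent programmatic surface, regardless of which cloud hosts it."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class S3Store(ObjectStore):
    """AWS implementation behind the consistent interface."""

    def __init__(self, bucket: str):
        import boto3
        self._client = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._client.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._client.get_object(Bucket=self._bucket, Key=key)["Body"].read()


class GCSStore(ObjectStore):
    """Google Cloud implementation behind the same interface."""

    def __init__(self, bucket: str):
        from google.cloud import storage
        self._bucket = storage.Client().bucket(bucket)

    def put(self, key: str, data: bytes) -> None:
        self._bucket.blob(key).upload_from_string(data)

    def get(self, key: str) -> bytes:
        return self._bucket.blob(key).download_as_bytes()


def store_for(provider: str, bucket: str) -> ObjectStore:
    """Application code asks for a store; the platform decides how that maps to a cloud."""
    return {"aws": S3Store, "gcp": GCSStore}[provider](bucket)
```

Note that the platform owner, not the application developer, decides what sits behind the interface, which is exactly the platform-versus-architecture tension the next point gets into.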
Now, interestingly, in the shared Google doc that the working group uses to collaborate on the Supercloud definition, Dr. Nelu Mihai, who is actually building a Supercloud, responded as follows to Bob's assertion: "We need to avoid creating many Supercloud platforms with their own architectures. If we do that, then we create other proprietary clouds on top of existing ones. We need to define an architecture of how Supercloud interfaces with all other clouds. What is the information model? What is the execution model, and how will users interact with Supercloud?" What does this seemingly nuanced point tell us and why does it matter? Well, history suggests that de facto standards emerge more quickly to resolve real-world practitioner problems and catch on faster than consensus-based and standards-based architectures. But in the long run, the latter may serve customers better. So we'll be exploring this topic in more detail at Supercloud 2, and of course we'd love to hear what you think: platform, architecture, or both? Now, one of the real technical gurus that we'll have in studio at Supercloud 2 is David Flynn. He's one of the people behind the movement that enabled enterprise flash adoption, that craze. And he did that with Fusion-io, and he is now working on a system to enable read/write data access to any user, in any application, in any data center, or on any cloud, anywhere. So think of this company as a Supercloud enabler. Allow me to share an excerpt from a conversation David Floyer and I had with David Flynn last year. He as well gave a lot of thought to the Supercloud definition and was really helpful with an opinionated point of view. He said something to us that we thought was relevant: "What is the operating system for a decentralized cloud? The main two functions of an operating system or an operating environment are, one, the process scheduler, and two, the file system. The strongest argument for Supercloud is made when you go down to the platform layer and talk about it as an operating environment on which you can run all forms of applications." So, a couple of implications here that we'll be exploring with David Flynn in studio. First, we're inferring from his comment that he's in the platform camp, where the platform owner is responsible for the architecture. There are obviously trade-offs and benefits there, but we'll have to clarify that with him. And second, he's basically saying you kill the concept the further you move up the stack; the further up the stack you go, the weaker the Supercloud argument becomes, because it's just becoming SaaS. Now, this is something we're going to explore to better understand his thinking, but also whether the existing notion of SaaS is changing and whether or not a new breed of Supercloud apps will emerge. Which brings us to this really interesting fellow that George Gilbert and I riffed with ahead of Supercloud 2. Tristan Handy is the founder and CEO of DBT Labs, and he has a highly opinionated and technical mind. Here's what he said: "One of the things that we still don't know how to API-ify is concepts that live inside of your data warehouse, inside of your data lake. These are core concepts that the business should be able to create applications around very easily. In fact, that's not the case, because it involves a lot of data engineering pipeline and other work to make these available.
So if you really want to make it easy to create these data experiences for users, you need to have an ability to describe these metrics and then to turn them into APIs to make them accessible to application developers who have literally no idea how they're calculated behind the scenes, and they don't need to." There are a lot of implications to this statement that we'll explore at Supercloud 2. Zhamak Dehghani's data mesh comes into play here, with her critique of hyper-specialized data pipeline experts with little or no domain knowledge. Also the need for simplified self-service infrastructure, which Kit Colbert is likely going to touch upon. Veronika Durgin of Saks will share her ideal state for data sharing, along with Harveer Singh of Western Union. They've got to deal with 200 locations around the world, data privacy issues, data sovereignty; how do you share data safely? Same with Nick Taylor of Ionis Pharmaceuticals. And, not to blow your mind, but Thomas Hazel and Bob Muglia posit that to make data apps a reality across the Supercloud you have to rethink everything. You can't just let in-memory databases and caching architectures take care of everything in a brute-force manner. Rather, you have to get down to really detailed levels, even things like how data is laid out on disk, i.e. flash, and think about rewriting applications for the Supercloud and the ML/AI era. All of this and more at Supercloud 2, which wouldn't be complete without some data. So we pinged our friends from ETR, Eric Bradley and Darren Bramberm, to see if they had any data on Supercloud that we could tap. And so we're going to be analyzing a number of the players as well at Supercloud 2. Now, many of you are familiar with this graphic, where we show some of the players involved in delivering or enabling Supercloud-like capabilities. On the Y axis is spending momentum, and on the horizontal axis is market presence, or pervasiveness, in the data. So net score versus what they call overlap, or N, in the data. And the table insert shows how the dots are plotted. Now, not to steal ETR's thunder, but the first point is you really can't have Supercloud without the hyperscale cloud platforms, which is shown on this graphic. But the exciting aspect of Supercloud is the opportunity to build value on top of that hyperscale infrastructure. Snowflake here continues to show strong spending velocity, as do Databricks, Hashi, and Rubrik. VMware Tanzu, which we all put under the magnifying glass after the Broadcom announcements, is also showing momentum. Unfortunately, due to a scheduling conflict we weren't able to get Red Hat on the program, but they're clearly a player here. And we've put Cohesity and Veeam on the chart as well, because backup is a likely use case across clouds and on-premises. And now one other callout that we drill down on at Supercloud 2 is Cloudflare, which actually uses the term supercloud, maybe in a different way. They look at Supercloud really as, you know, serverless on steroids. And so the data brains at ETR will have more to say on this topic at Supercloud 2, along with many others. Okay, so why should you attend Supercloud 2? What's in it for me, kind of thing? So first of all, if you're a practitioner and you want to understand what the possibilities are for doing cross-cloud services, for monetizing data, how your peers are doing data sharing, how some of your peers are actually building out a Supercloud, you're going to get real-world input from practitioners.
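Tristan's point about API-ifying warehouse metrics is easier to see with a toy example: a metric defined once, centrally, and exposed over an HTTP endpoint so application developers never see how it is calculated. The framework choice, the metric schema, and the SQL below are all assumptions for illustration; this is not DBT Labs' actual semantic layer or anyone else's product.

```python
# Toy "metrics as an API" sketch. The metric definition lives in one place;
# application developers only call the endpoint. Framework, schema, and SQL
# are illustrative assumptions, not any vendor's real implementation.
from fastapi import FastAPI

app = FastAPI()

# Central metric definitions: the only place the calculation is spelled out.
METRICS = {
    "net_revenue": {
        "sql": """
            SELECT date_trunc('month', order_date) AS month,
                   SUM(amount) - SUM(refunds)      AS value
            FROM warehouse.orders
            GROUP BY 1
        """,
        "description": "Gross order amount minus refunds, by month.",
    }
}


def run_query(sql: str) -> list:
    # Placeholder for a real warehouse client (Snowflake, BigQuery, etc.);
    # returns canned data so the sketch runs end to end.
    return [{"month": "2023-01-01", "value": 1250000.0}]


@app.get("/metrics/{name}")
def get_metric(name: str) -> dict:
    metric = METRICS[name]
    return {
        "metric": name,
        "description": metric["description"],
        "series": run_query(metric["sql"]),
    }
```

A consumer just issues GET /metrics/net_revenue and gets a consistent answer without ever touching the pipeline behind it, which is the "they don't need to know how it's calculated" part of his argument.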
If you're a technologist, you're trying to figure out various ways to solve problems around data, data sharing, cross-cloud service deployment there's going to be a number of deep technology experts that are going to share how they're doing it. We're also going to drill down with Walmart into a practical example of Supercloud with some other examples of how practitioners are dealing with cross-cloud complexity. Some of them, by the way, are kind of thrown up their hands and saying, Hey, we're going mono cloud. And we'll talk about the potential implications and dangers and risks of doing that. And also some of the benefits. You know, there's a question, right? Is Supercloud the same wine new bottle or is it truly something different that can drive substantive business value? So look, go to Supercloud.world it's January 17th at 9:00 AM Pacific. You can register for free and participate directly in the program. Okay, that's a wrap. I want to give a shout out to the Supercloud supporters. VMware has been a great partner as our anchor sponsor Chaos Search Proximo, and Alura as well. For contributing to the effort I want to thank Alex Myerson who's on production and manages the podcast. Ken Schiffman is his supporting cast as well. Kristen Martin and Cheryl Knight to help get the word out on social media and at our newsletters. And Rob Ho is our editor-in-chief over at Silicon Angle. Thank you all. Remember, these episodes are all available as podcast. Wherever you listen we really appreciate the support that you've given. We just saw some stats from from Buzz Sprout, we hit the top 25% we're almost at 400,000 downloads last year. So really appreciate your participation. All you got to do is search Breaking Analysis podcast and you'll find those I publish each week on wikibon.com and siliconangle.com. Or if you want to get ahold of me you can email me directly at David.Vellante@siliconangle.com or dm me DVellante or comment on our LinkedIn post. I want you to check out etr.ai. They've got the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching. We'll see you next week at Supercloud two or next time on breaking analysis. (light music)

Published Date : Jan 14 2023


Breaking Analysis: CIOs in a holding pattern but ready to strike at monetization


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is "Breaking Analysis" with Dave Vellante. >> Recent conversations with IT decision makers show a stark contrast between entering 2023 versus the mindset when we were leaving 2022. CIOs are generally funding new initiatives by pushing off or cutting lower priority items, while security efforts are still being funded. Those that enable business initiatives that generate revenue are taking priority over cleaning up legacy technical debt. The bottom line is, for the moment, at least, the mindset is not cut everything, rather, it's put a pause on cleaning up legacy hairballs and fund monetization. Hello, and welcome to this week's Wikibon Cube Insights powered by ETR. In this breaking analysis, we tap recent discussions from two primary sources, year-end ETR roundtables with IT decision makers, and CUBE conversations with data, cloud, and IT architecture practitioners. The sources of data for this breaking analysis come from the following areas. Eric Bradley's recent ETR year-end panel featured a financial services DevOps and SRE manager, a CSO in a large hospitality firm, a director of IT for a big tech company, the head of IT infrastructure for a financial firm, and a CTO for a global travel enterprise, and for our upcoming Supercloud2 conference on January 17th, which you can register for free, by the way, at supercloud.world, we've had CUBE conversations with data and cloud practitioners, specifically, heads of data in retail and financial services, a cloud architect at a biotech firm, the director of cloud and data at a large media firm, and the director of engineering at a financial services company. Now we've curated commentary from these sources and we share it with you today as anecdotal evidence supporting what we've been reporting on in the marketplace for these last couple of quarters. On this program, we've likened the economy to the slingshot effect when you're driving, when you're cruising along at full speed on the highway, and suddenly you see red brake lights up ahead, so, you tap your own brakes and then you speed up again, and traffic is moving along at full speed, so, you think nothing of it, and then, all of a sudden, the same thing happens. You slow down to a crawl and you start wondering, "What the heck is happening?" And you become a lot more cautious about the rate of acceleration when you start moving again. Well, that's the trend in IT spend right now. Back in June, we reported that despite the macro headwinds, CIOs were still expecting 6% to 7% spending growth for 2022. Now that was down from 8%, which we reported at the beginning of 2022. That was before Ukraine, and Fed tightening, but given those two factors, you know, that seemed pretty robust, but throughout the fall, we began reporting consistently declining expectations where CIOs are now saying Q4 will come in at around 3% growth relative to last year, and they're expecting, or should we say hoping, that it pops back up in 2023 to 4% to 5%. The recent ETR panelists, when they heard this, are saying, based on their businesses and discussions with their peers, they could see low single digit growth for 2023, so, 1%, 2%, 3%, so, this sort of slingshotting, or sometimes we call it a seesaw economy, has caught everyone off guard. Amazon is a good example of this, and there are others, but Amazon entered the pandemic with around 800,000 employees. It doubled that workforce during the pandemic.
Now, right before Thanksgiving in 2022, Amazon announced that it was laying off 10,000 employees, and, Jassy, the CEO of Amazon, just last week announced that number is now going to grow to 18,000. Now look, this is a rounding error at Amazon from a headcount standpoint and their headcount remains far above 2019 levels. Its stock price, however, does not and it's back down to 2019 levels. The point is that visibility is very poor right now and it's reflected in that uncertainty. We've seen a lot of layoffs, obviously, the stock market's choppy, et cetera. Now importantly, not everything is on hold, and this downturn is different from previous tech pullbacks in that the speed at which new initiatives can be rolled out is much greater thanks to the cloud, and if you can show a fast return, you're going to get funding. Organizations are pausing on the cleanup of technical debt, unless it's driving fast business value. They're holding off on modernization projects. Those business enablement initiatives are still getting funded. CIOs are finding the money by consolidating redundant vendors, and they're stealing from other pockets of budget, so, it's not surprising that cybersecurity remains the number one technology priority in 2023. We've been reporting that for quite some time now. It's specifically cloud, cloud native security container and API security. That's where all the action is, because there's still holes to plug from that forced march to digital that occurred during COVID. Cloud migration, kind of showing here on number two on this chart, still a high priority, while optimizing cloud spend is definitely a strategy that organizations are taking to cut costs. It's behind consolidating redundant vendors by a long shot. There's very little evidence that cloud repatriation, i.e., moving workloads back on prem is a major cost cutting trend. The data just doesn't show it. What is a trend is getting more real time with analytics, so, companies can do faster and more accurate customer targeting, and they're really prioritizing that, obviously, in this down economy. Real time, we sometimes lose it, what's real time? Real time, we sometimes define as before you lose the customer. Now in the hiring front, customers tell us they're still having a hard time finding qualified site reliability engineers, SREs, Kubernetes expertise, and deep analytics pros. These job markets remain very tight. Let's stay with security for just a moment. We said many times that, prior to COVID, zero trust was this undefined buzzword, and the joke, of course, is, if you ask three people, "What is zero trust?" You're going to get three different answers, but the truth is that virtually every security company that was resisting taking a position on zero trust in an attempt to avoid... They didn't want to get caught up in the buzzword vortex, but they're now really being forced to go there by CISOs, so, there are some good quotes here on cyber that we want to share that came out of the recent conversations that we cited up front. The first one, "Zero trust is the highest ROI, because it enables business transformation." In other words, if I can have good security, I can move fast, it's not a blocker anymore. Second quote here, "ZTA," zero trust architecture, "Is more than securing the perimeter. It encompasses strong authentication and multiple identity layers. It requires taking a software approach to security instead of a hardware focus." 
The next one, "I'd love to have a security data lake that I could apply to asset management, vulnerability management, incident management, incident response, and all aspects for my security team. I see huge promise in that space," and the last one, I see NLP, natural language processing, as the foundation for email security, so, instead of searching for IP addresses, you can now read emails at light speed and identify phishing threats, so, look at, this is a small snapshot of the mindset around security, but I'll add, when you talk to the likes of CrowdStrike, and Zscaler, and Okta, and Palo Alto Networks, and many other security firms, they're listening to these narratives around zero trust. I'm confident they're working hard on skating to this puck, if you will. A good example is this idea of a security data lake and using analytics to improve security. We're hearing a lot about that. We're hearing architectures, there's acquisitions in that regard, and so, that's becoming real, and there are many other examples, because data is at the heart of digital business. This is the next area that we want to talk about. It's obvious that data, as a topic, gets a lot of mind share amongst practitioners, but getting data right is still really hard. It's a challenge for most organizations to get ROI and expected return out of data. Most companies still put data at the periphery of their businesses. It's not at the core. Data lives within silos or different business units, different clouds, it's on-prem, and increasingly it's at the edge, and it seems like the problem is getting worse before it gets better, so, here are some instructive comments from our recent conversations. The first one, "We're publishing events onto Kafka, having those events be processed by Dataproc." Dataproc is a Google managed service to run Hadoop, and Spark, and Flank, and Presto, and a bunch of other open source tools. We're putting them into the appropriate storage models within Google, and then normalize the data into BigQuery, and only then can you take advantage of tools like ThoughtSpot, so, here's a company like ThoughtSpot, and they're all about simplifying data, democratizing data, but to get there, you have to go through some pretty complex processes, so, this is a good example. All right, another comment. "In order to use Google's AI tools, we have to put the data into BigQuery. They haven't integrated in the way AWS and Snowflake have with SageMaker. Moving the data is too expensive, time consuming, and risky," so, I'll just say this, sharing data is a killer super cloud use case, and firms like Snowflake are on top of it, but it's still not pretty across clouds, and Google's posture seems to be, "We're going to let our database product competitiveness drive the strategy first, and the ecosystem is going to take a backseat." Now, in a way, I get it, owning the database is critical, and Google doesn't want to capitulate on that front. Look, BigQuery is really good and competitive, but you can't help but roll your eyes when a CEO stands up, and look, I'm not calling out Thomas Kurian, every CEO does this, and talks about how important their customers are, and they'll do whatever is right by the customer, so, look, I'm telling you, I'm rolling my eyes on that. Now let me also comment, AWS has figured this out. They're killing it in database. 
If you take Redshift for example, it's still growing, as is Aurora, really fast growing services and other data stores, but AWS realizes it can make more money in the long-term partnering with the Snowflakes and Databricks of the world, and other ecosystem vendors versus sub-optimizing their relationships with partners and customers in order to sell more of their own homegrown tools. I get it. It's hard not to feature your own product. IBM chose OS/2 over Windows, and tried for years to popularize it. It failed. Lotus, go way back to Lotus 1-2-3, they refused to run on Windows when it first came out. They were running on DEC VAX. Many of you young people in the United States have never even heard of DEC VAX. IBM wanted to run everything only in its cloud, the same with Oracle, originally. VMware, as you might recall, tried to build its own cloud, but, eventually, when the market speaks and reveals what seems to be obvious to analysts, years before, the vendors come around, they face reality, and they stop wasting money, fighting a losing battle. "The trend is your friend," as the saying goes. All right, last pull quote on data, "The hardest part is transformations, moving traditional Informatica, Teradata, or Oracle infrastructure to something more modern and real time, and that's why people still run apps in COBOL. In IT, we rarely get rid of stuff, rather we add on another coat of paint until the wood rots out or the roof is going to cave in." All right, the last key finding we want to highlight is going to bring us back to the cloud repatriation myth. Followers of this program know it's a real sore spot with us. We've heard the stories about repatriation, we've read the thoughtful articles from VCs on the subject, we've been whispered to by vendors that you should investigate this trend, it's really happening, but the data simply doesn't support it. Here's the question that was posed to these practitioners. If you had unlimited budget and the economy miraculously flipped, what initiatives would you tackle first? Where would you really lean into? The first answer, "I'd rip out legacy on-prem infrastructure and move to the cloud even faster," so, the thing here is, look, maybe renting infrastructure is more expensive than owning, maybe, but if I can optimize my rental with better utilization, turn off compute, use things like serverless, get on a steeper, higher performance over time, and lower cost silicon curve with things like Graviton, tap best of breed tools in AI, and other areas that make my business more competitive. Move faster, fail faster, experiment more quickly, and cheaply, what's that worth? Even the most hard-nosed CFOs understand the business benefits far outweigh the possible added cost per gigabyte, and, again, I stress "possible." Okay, other interesting comments from practitioners. "I'd hire 50 more data engineers and accelerate our real-time data capabilities to better target customers." Real-time is becoming a thing. AI is being injected into data and apps to make faster decisions, perhaps, with less or even no human involvement. That's on the rise. Next quote, "I'd like to focus on resolving the concerns around cloud data compliance," so, again, despite the risks of data being spread out in different clouds, organizations realize cloud is a given, and they want to find ways to make it work better, not move away from it.
The same thing in the next one, "I would automate the data analytics pipeline and focus on a safer way to share data across the states without moving it," and, finally, "The way I'm addressing complexity is to standardize on a single cloud." MonoCloud is actually a thing. We're hearing this more and more. Yes, my company has multiple clouds, but in my group, we've standardized on a single cloud to simplify things, and this is a somewhat dangerous trend, because it's creating even more silos and it's an opportunity that needs to be addressed, and that's why we've been talking so much about supercloud as a cross-cloud, unifying architectural framework, or, perhaps, it's a platform. In fact, that's a question that we will be exploring later this month at Supercloud2 live from our Palo Alto Studios. Is supercloud an architecture or is it a platform? And in this program, we're featuring technologists, analysts, practitioners to explore the intersection between data and cloud and the future of cloud computing, so, you don't want to miss this opportunity. Go to supercloud.world. You can register for free and participate in the event directly. All right, thanks for listening. That's a wrap. I'd like to thank Alex Myerson, who's on production and manages our podcast, Ken Schiffman as well, Kristen Martin and Cheryl Knight, they helped get the word out on social media, and in our newsletters, and Rob Hof is our editor-in-chief over at siliconangle.com. He does some great editing. Thank you, all. Remember, all these episodes are available as podcasts wherever you listen. All you've got to do is search "breaking analysis podcasts." I publish each week on wikibon.com and siliconangle.com, or you can email me directly at david.vellante@siliconangle.com or DM me @DVellante, or comment on our LinkedIn posts. By all means, check out etr.ai. They've got the best survey data in the enterprise tech business. We'll be doing our annual predictions post in a few weeks, once the data comes out from the January survey. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, everybody, and we'll see you next time on "Breaking Analysis." (upbeat music)

Published Date : Jan 7 2023


Breaking Analysis: AI Goes Mainstream But ROI Remains Elusive


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is "Breaking Analysis" with Dave Vellante. >> A decade of big data investments combined with cloud scale, the rise of much more cost-effective processing power, and the introduction of advanced tooling has catapulted machine intelligence to the forefront of technology investments. No matter what job you have, your operation will be AI powered within five years and machines may actually even be doing your job. Artificial intelligence is being infused into applications, infrastructure, equipment, and virtually every aspect of our lives. AI is proving to be extremely helpful at things like controlling vehicles, speeding up medical diagnoses, processing language, advancing science, and generally raising the stakes on what it means to apply technology for business advantage. But business value realization has been a challenge for most organizations due to lack of skills, complexity of programming models, immature technology integration, sizable upfront investments, ethical concerns, and lack of business alignment. Mastering AI technology will not be a requirement for success in our view. However, figuring out how and where to apply AI to your business will be crucial. That means understanding the business case, picking the right technology partner, experimenting in bite-sized chunks, and quickly identifying winners to double down on from an investment standpoint. Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this breaking analysis, we update you on the state of AI and what it means for the competition. And to do so, we invite into our studios Andy Thurai of Constellation Research. Andy covers AI deeply. He knows the players, he knows the pitfalls of AI investment, and he's a collaborator. Andy, great to have you on the program. Thanks for coming into our CUBE studios. >> Thanks for having me on. >> You're very welcome. Okay, let's set the table with a premise and a series of assertions we want to test with Andy. I'm going to lay 'em out. And then Andy, I'd love for you to comment. So, first of all, according to McKinsey, AI adoption has more than doubled since 2017, but only 10% of organizations report seeing significant ROI. That's a BCG and MIT study. And part of that challenge of AI is it requires data, it requires good data, data proficiency, which is not trivial, as you know. Firms that can master both data and AI, we believe are going to have a competitive advantage this decade. Hyperscalers, as we'll show you, dominate AI and ML. We'll show you some data on that. And having said that, there's plenty of room for specialists. They need to partner with the cloud vendors for go to market productivity. And finally, organizations increasingly have to put data and AI at the center of their enterprises. And to do that, most are going to rely on vendor R&D to leverage AI and ML. In other words, Andy, they're going to buy it and apply it as opposed to build it. What are your thoughts on that setup and that premise? >> Yeah, I see that a lot happening in the field, right? So first of all, that only 10% realizing a return on investment, that's so true because we talked about this earlier, most companies are still in the innovation cycle. So they're trying to innovate and see what they can do to apply.
A lot of these times when you look at the solutions, what they come up with or the models they create, the experimentation they do, most times they don't even have a good business case to solve, right? So they just experiment and then they figure it out, "Oh my God, this model is working. Can we do something to solve it?" So it's like you found a hammer and then you're trying to find the nail kind of thing, right? That never works. >> 'Cause it's cool or whatever it is. >> It is, right? So that's why, I always advise, when they come to me and ask me things like, "Hey, what's the right way to do it? What is the secret sauce?" And, we talked about this. The first thing I tell them is, "Find out what is the business case that's having the most amount of problems, that can be solved using some of the AI use cases," right? Not all of them can be solved. Even after you experiment, do the whole nine yards, spend millions of dollars on that, right? And later on you make it efficient only by saving maybe $50,000 for the company or a $100,000 for the company, is it really even worth the experiment, right? So you've got to start with saying, you know, where's the base for this happening? Where's the need? What's the business use case? It doesn't have to be about cost efficiency and saving money in the existing processes. It could be a new thing. You want to bring in a new revenue stream, but figure out what the business use case is, how much money potentially I can make off of that. The same way that start-ups go after it. Right? >> Yeah. Pretty straightforward. All right, let's take a look at where ML and AI fit relative to the other hot sectors of the ETR dataset. This XY graph shows net score or spending velocity on the vertical axis and presence in the survey, they call it sector pervasiveness, for the October survey, the January survey's in the field. Then that squiggly line on ML/AI represents the progression. Since the January 2021 survey, you can see the downward trajectory. And we position ML and AI relative to the other big hot sectors, the big four including ML/AI, so containers, cloud, and RPA. These have consistently performed above that magic 40% red dotted line for most of the past two years. Anything above 40%, we think is highly elevated. And we've just included analytics and big data for context and relevant adjacentness, if you will. Now note that green arrow moving toward, you know, the 40% mark on ML/AI. I got a glimpse of the January survey, which is in the field. It's got more than a thousand responses already, and it's trending up for the current survey. So Andy, what do you make of this downward trajectory over the past seven quarters and the presumed uptick in the coming months? >> So one of the things you have to keep in mind is when the pandemic happened, it's about survival mode, right? So when somebody's in a survival mode, what happens, the luxury and the innovations get cut. That's what happens. And this is exactly what happened in the situation. So as you can see in the last seven quarters, which is almost dating back close to the pandemic, everybody was trying to keep their operations alive, especially digital operations. How do I keep the lights on? That's the most important thing for them. So while the numbers spent on AI, ML are less overall, I still think the AI/ML spend tied to sort of like employee experience or IT ops, AIOps, MLOps, as we talked about, some of those areas actually went up.
There are companies, we talked about it, Atlassian had a lot of platform issues, and the amount of money people are spending on that is exorbitant, simply because they are offering a solution that was not available any other way. So there are companies out there, you can take AIOps or incident management for that matter, right? A lot of companies have digital incidents they don't know how to properly manage. How do you find an incident and solve it immediately? That's all using AI/ML, and some of those areas are actually growing unbelievably, the companies in that area. >> So this is a really good point. If you can bring up that chart again, what Andy's saying is a lot of the companies in the ETR taxonomy that are doing things with AI might not necessarily show up in a granular fashion. And I think the other point I would make is, these are still highly elevated numbers. If you put on like storage and servers, they would read way, way down the list. And, look, in the pandemic, we had to deal with work from home, we had to re-architect the network, we had to worry about security. So those are really good points that you made there. Let's unpack this a little bit and look at the ML/AI sector and the ETR data and specifically at the players and get Andy to comment on this. This chart here shows the same XY dimensions, and it just notes some of the players that specifically have services and products that people spend money on, that CIOs and IT buyers can comment on. So the table insert shows how the companies are plotted, it's net score, and then the Ns in the survey. And Andy, the hyperscalers are dominant, as you can see. You see Databricks there showing strong as a specialist, and then you've got a pack of six or seven in there. And then Oracle and IBM, kind of the big whales of yesteryear, are in the mix. And to your point, companies like Salesforce that you mentioned to me offline aren't in that mix, but they do a lot in AI. But what are your takeaways from that data? >> If you could put the slide back on please. I want to make quick comments on a couple of those. So the first one is, it's surprising, the other hyperscalers, right? As you and I talked about this earlier, AWS is more about Lego blocks. We discussed that, right? >> Like what? Like a SageMaker as an example. >> We'll give you all the components, whatever you need. Whether it's an MLOps component, or whether it's CodeWhisperer that we talked about, or an overall platform, or data, whatever you want. They'll give you the blocks and then you'll build things on top of it, right? But Google took a different way. Matter of fact, if we did those numbers a few years ago, Google would've been number one because they did a lot of work with their acquisition of DeepMind and other things. They're way ahead of the pack when it comes to AI, for the longest time. Now, I think Microsoft's move of partnering with OpenAI and taking a huge competitor out is unbelievable. You saw that everybody is talking about ChatGPT, right? The OpenAI tool, ChatGPT rather. Remember, as Warren Buffett says, when my laundry lady comes and talks to me about the stock market, it's heated up. So that's how it's heated up. Everybody's using ChatGPT. What that means at the end of the day is they're creating, and it's still in beta, keep in mind. It's not fully... >> Can you play with it a little bit? >> I have a little bit. >> I have, but it's good and it's not good. You know what I mean?
>> Look, so at the end of the day, you take all the available text in the world today and mash it all together. And then you ask a question, it's going to basically search through that and figure it out and answer that back. Yes, it's good. But again, as we discussed, if there's no business use case of what problem you're going to solve, this is just building hype. But then eventually they'll figure out, for example, all your chats, online chats, could be aided by your AI chat bots, which is already there, which is not there at that level. This could help build that, right? Or the other thing we talked about, one of the areas I'm more concerned about, is that it is able to produce original text that's equal enough to the level that humans can produce. For example, with ChatGPT, the large language transformer can help you write stories as if Shakespeare wrote it. Pretty close to it. It'll learn from that. So when it comes down to it, talk about creating messages, articles, blogs, especially during political seasons, not necessarily just in the US, but anywhere for that matter. If people are able to produce at that speed and throw it at the consumers and confuse them, the elections can be won, the governments can be toppled. >> Because to your point about chatbots, chatbots have obviously reduced the number of bodies that you need to support chat. But they haven't solved the problem of serving consumers. Most of the chat bots are a conditioned response: which of the following best describes your problem? >> The current chatbot. >> Yeah. Hey, did we solve your problem? No, is the answer. So that has some real potential. But if you could bring up that slide again, Ken, I mean you've got the hyperscalers that are dominant. You talked about Google, and Microsoft is ubiquitous, they seem to be dominant in every ETR category. But then you have these other specialists. How do those guys compete? And maybe you could even cite some of the guys that, you know, how do they compete with the hyperscalers? What's the key there for like a C3 AI or some of the others that are on there? >> So I've spoken with at least two of the CEOs of the smaller companies that you have on the list. One of the things they're worried about is that if they continue to operate independently without being part of a hyperscaler, either the hyperscalers will develop something to compete against them full scale, or they'll become irrelevant. Because at the end of the day, look, cloud is dominant. Not many companies are going to do like AI modeling and training and deployment, the whole nine yards, independently by themselves. They're going to depend on one of the clouds, right? So if they're already going to be in the cloud, taking them out of there to come to you is going to be an extremely difficult issue to solve. So all these companies are going and saying, "You know what? We need to be in hyperscalers." For example, you could have looked at DataRobot recently, they made announcements with Google and AWS, and they are all over the place. So you need to go where the customers are. Right? >> All right, before we go on, I want to share some other data from ETR and why people adopt AI and get your feedback. So the data historically shows that feature breadth and technical capabilities were the main decision points for AI adoption. Which says to me that there's too much focus on technology. In your view, is that changing? Does it have to change? Will it change? >> Yes. Simple answer is yes.
So here's the thing. The data you're speaking from is from previous years. >> Yes. >> I can guarantee you, if you look at the latest data that's coming in now, those two will be secondary and tertiary points. The number one would be about ROI. And how do I achieve it? I've spent a ton of money on all of my experiments. This is the same theme I'm seeing across the board when talking to everybody who's spending money on AI. I've spent so much money on it. When can I get it live in production? How much, how quickly can I get it? Because you know, the board is breathing down their neck. You already spent this much money. Show me something that's valuable. So the ROI is going to become, take it from me, I'm predicting this for 2023, that's going to become number one. >> Yeah, and if people focus on it, they'll figure it out. Okay. Let's take a look at some of the top players, some of the names we just looked at, and double click on that and break down their spending profile. So the chart here shows the net score, how net score is calculated. So pay attention to the second set of bars, that's Databricks, who was pretty prominent on the previous chart. And we've annotated the colors. The lime green is, we're bringing the platform in new. The forest green is, we're going to spend 6% or more relative to last year. And the gray is flat spending. The pinkish is our spending's going to be down on AI and ML, 6% or worse. And the red is churn. So you don't want big red. You subtract the reds from the greens and you get net score, which is shown by those blue dots that you see there (we sketch that simple arithmetic at the end of this episode). So AWS has the highest net score and very little churn. I mean, single low single digit churn. But notably, you see Databricks and DataRobot are next in line, with Microsoft and Google also, they've got very low churn. Andy, what are your thoughts on this data? >> So a couple of things stand out to me. Most of them are in line with my conversations with customers. A couple of them stood out to me, like how bad IBM Watson is doing. >> Yeah, bring that back up if you would. Let's take a look at that. IBM Watson is the far right and the red, that bright red is churn, and again, you want low red here. Why do you think that is? >> Well, so look, IBM has been in the forefront of innovating things for many, many years now, right? And over the course of years, we talked about this, they moved from a product innovation centric company into more of a services company. And over the years, at one point, you know, they were making the majority of their money from services. Now things have changed. Arvind has taken over, he came from research. So he's doing a great job of trying to reinvent themselves as a company. But it's got a long way to catch up. IBM Watson, if you think about it, that played, what, Jeopardy and chess years ago, like 15 years ago? >> It was jaw dropping when you first saw it. And then they weren't able to commercialize that. >> Yeah. >> And you're making a good point. When Gerstner took over IBM at the time, John Akers wanted to split the company up. He wanted to have a database company, he wanted to have a storage company. Because that's where the industry trend was, Gerstner said no, he came from AMEX, right? He came from American Express. He said, "No, we're going to have a single throat to choke for the customer." They bought PWC for relatively short money. I think it was $15 billion, completely transformed and I would argue saved IBM.
But the trade off was, it sort of took them out of product leadership. And so from Gerstner to Palmisano to Rometty, it was really a services led company. And I think Arvind is really bringing it back to a product company with strong consulting. I mean, that's one of the pillars. And so I think they've got a strong story in data and AI. They just got to sort of bring it together better. Bring that chart up one more time. I want to make the other point, which is Oracle. Oracle sort of has the dominant lock-in for mission critical database and they're sort of applying AI there. But to your point, they're really not an AI company in the sense that they're taking unstructured data and doing sort of new things. It's really about how to make Oracle better, right? >> Well, you got to remember, Oracle is about database for structured data. So in yesterday's world, they were the dominant database. But you know, if you are to start storing like videos and texts and audio and other things, and then start doing things like vector search and all that, Oracle is not necessarily the database company of choice. And their strongest thing being apps and building AI into the apps, they are kind of surviving in that area. But again, I wouldn't name them as an AI company, right? But the other thing that surprised me in that list, what you showed me is yes, AWS is number one. >> Bring that back up if you would, Ken. >> AWS is number one as, you know, it should be. But what actually caught me by surprise is how DataRobot is holding, you know? I mean, look at that. Either net new addition and/or expansion, DataRobot seems to be doing equally well, even better than Microsoft and Google. That surprises me. >> DataRobot, again, this is a function of spending momentum. So remember from the previous chart that Microsoft and Google are much, much larger than DataRobot. DataRobot is more niche. But on spending velocity, it has always had strong spending velocity, despite some of the recent challenges, organizational challenges. And then you see these other specialists, H2O.ai, Anaconda, Dataiku, a little bit of red showing there for C3.ai. But these, again, to stress, are the sort of specialists other than obviously the hyperscalers. These are the specialists in AI. All right, so we hit the bigger names in the sector. Now let's take a look at the emerging technology companies. And one of the gems of the ETR dataset is the emerging technology survey. It's called ETS. They used to just do it like twice a year. It's now run four times a year. I just discovered it kind of mid-2022. And it's exclusively focused on private companies that are potential disruptors, they might be M&A candidates and if they've raised enough money, they could be acquirers of companies as well. So Databricks would be an example. They've made a number of investments in companies. Snyk would be another good example. Companies that are private, but they're buyers, they hope to go IPO at some point in time. So this chart here shows the emerging companies in the ML/AI sector of the ETR dataset. So the dimensions of this are similar, it's net sentiment on the Y axis and mind share on the X axis. Basically, the ETS study measures awareness on the x axis and intent to do something with, evaluate or implement or not, on that vertical axis. So it's like net score on the vertical where negatives are subtracted from the positives. And again, mind share is vendor awareness. That's the horizontal axis.
Now that inserted table shows net sentiment and the Ns in the survey, which informs the position of the dots. And you'll notice we're plotting TensorFlow as well. We know that's not a company, but it's there for reference as open source tooling is an option for customers. And ETR sometimes likes to show that as a reference point. Now we've also drawn a line for Databricks to show how relatively dominant they've become in the past 10 ETS surveys in sort of mind share, going back to late 2018. And you can see a dozen or so other emerging tech vendors. So Andy, I want you to share your thoughts on these players, who are the ones to watch, name some names. We'll bring that data back up as you comment. >> So Databricks, as you said, remember we talked about how Oracle is not necessarily the database of choice, you know? So Databricks is kind of trying to solve some of the issues for AI/ML workloads, right? And the problem is also there is no one company that could solve all of the problems. For example, if you look at the names in here, some of them are database names, some of them are platform names, some of them are like MLOps companies like DataRobot (indistinct) and others. And some of them are like feature store companies like, you know, Tecton and stuff. >> So it's a mix of those sub sectors? >> It's a mix of those companies. >> We'll talk to ETR about that. They'd be interested in your input on how to make this more granular in these sub-sectors. You got Hugging Face in here, >> Which is NLP, yeah. >> Okay. So your take, are these companies going to get acquired? Are they going to go IPO? Are they going to merge? >> Well, most of them are going to get acquired. My prediction would be most of them will get acquired because look, at the end of the day, hyperscalers need these capabilities, right? So they're going to either create their own, AWS is very good at doing that. They have done a lot of those things. But the other ones, particularly Azure, they're going to look at it and say, "You know what, it's going to take time for me to build this. Why don't I just go and buy you?" Right? Or even the smaller players like Oracle or IBM Cloud, they still exist. They might even take a look at them, right? So at the end of the day, a lot of these companies are going to get acquired or merged with others. >> Yeah. All right, let's wrap with some final thoughts. I'm going to make some comments, Andy, and then ask you to dig in here. Look, despite the challenge of leveraging AI, you know, Ken, if you could bring up the next chart. We're not repeating, we're not predicting the AI winter of the 1990s. Machine intelligence, it's a superpower that's going to permeate every aspect of the technology industry. AI and data strategies have to be connected. Leveraging first party data is going to increase AI competitiveness and shorten time to value. Andy, I'd love your thoughts on that. I know you've got some thoughts on governance and AI ethics. You know, we talked about ChatGPT, deepfakes, help us unpack all these trends. >> So there's so much information packed up there, right? The AI and data strategy, that's very, very, very important. If you don't have proper data, people don't realize that your AI is the models that you build on, and it's predominantly based on the data that you have. AI cannot predict something that's going to happen without knowing what it is. It needs to be trained, it needs to understand what it is you're talking about.
So 99% of the time you've got to have good data to train on. So this is where, as I mentioned to you, the problem is a lot of these companies can't afford to collect the real world data because it takes too long, it's too expensive. So a lot of these companies are trying to do the synthetic data way. It has its own set of issues because you can't use all... >> What's that, synthetic data? Explain that. >> Synthetic data is basically not real-world data, it's created or simulated data based on real data. It looks, feels, smells, tastes like real data, but it's not exactly real data, right? This is particularly useful in the financial and healthcare industries. So you don't have to, at the end of the day, if you have real data about your and my medical history, even if you redact it, you can still reverse it. It's fairly easy, right? >> Yeah, yeah. >> So by creating synthetic data, there is no correlation between the real data and the synthetic data. >> So that's part of AI ethics and privacy and, okay. >> So the synthetic data, the issue with that is when you're trying to commingle it with real data, you can't create models based just on synthetic data because synthetic data, as I said, is artificial data. So basically you're creating artificial models, so you've got to blend it in properly, and that blend is the problem. And you've got to know how much real data and how much synthetic data you can use. You've got to use judgment between efficiency, cost, and the time duration stuff. So that's one-- >> And risk >> And the risk involved with that. And the secondary issue, which we talked about, is that when you're creating, okay, you take a business use case, okay, you think about investing in things, you build the whole thing out and you're trying to put it out into the market. Most companies that I talk to don't have proper governance in place. They don't have ethics standards in place. They don't worry about the biases in data, they just go on trying to solve a business case. >> It's wild west. >> 'Cause that's where they start. It's a wild west! And then at the end of the day, when they are close to some legal litigation action or something else happens, that's when the oh shit moment happens, right? And then they come in and say, "You know what, how do I fix this?" The governance, security and all of those things, ethics, bias, data bias, de-biasing, none of them can be an afterthought. It's got to start from the get-go. So you've got to start at the beginning saying that, "You know what, I'm going to do all of those AI programs, but before we get into this, we've got to set some framework for doing all these things properly." Right? And then the-- >> Yeah. So let's go back to the key points. I want to bring up the cloud again. Because you've got to get cloud right. Getting that right matters in AI, to the points that you were making earlier. You can't just be out on an island, and the hyperscalers, they're going to obviously continue to do well. They get more and more data as data's going into the cloud, and they have the native tools. To your point, in the case of AWS, Microsoft's obviously ubiquitous. Google's got great capabilities here. They've got integrated ecosystems, partners that are going to continue to strengthen through the decade. What are your thoughts here? >> So a couple of things. One is the last mile ML or last mile AI that nobody's talking about. So that needs to be attended to.
There are a lot of players in the market coming up, and when I talk about last mile, I'm talking about after you're done with the experimentation of the model, how fast and quickly and efficiently can you get it to production? So that's production being-- >> Compressing that time is going to put dollars in your pocket. >> Exactly. Right. >> So once, >> If you got it right. >> If you get it right, of course. So there are a couple of issues with that. Once you figure out that model is working, that's perfect. People don't realize, the moment the decision is made, it's like a new car. After you purchase it, the value decreases on a minute-by-minute basis. Same thing with the models. Once the model is created, you need to be in production right away because it starts losing its value on a seconds and minutes basis. So issue number one, how fast can I get it over there? So your deployment, whether you are inferencing efficiently at the edge locations, your optimization, your security, all of this is at issue. But you know what is more important than that in the last mile? You keep the model up, you continue to work on it, and again, going back to the car analogy, at one point you've got to figure out your car is costing more to operate than it's worth. So you've got to get a new car, right? And that's the same thing with the models as well. If your model has reached that stage, it is actually a potential risk for your operation. To give you an idea, if Uber has a model, the first time you get a car going from point A to B it costs you $60. If the model decayed, the next time it might give me a $40 rate. I would take it, definitely, but it's a loss for the company. The business risk associated with operating on a bad model, you should realize it immediately, pull the model out, retrain it, redeploy it. That is key. >> And that's got to be huge in security. Model recency, and security to the extent that you can get real time, is big. I mean you see Palo Alto, CrowdStrike, a lot of other security companies are injecting AI. Again, they won't show up in the ETR ML/AI taxonomy per se as a pure play. But ServiceNow is another company that you have mentioned to me offline. AI is just getting embedded everywhere. >> Yep. >> And then I'm glad you brought up kind of real-time inferencing, 'cause a lot of the modeling, if we can go back to the last point that we're going to make, a lot of the AI today is modeling done in the cloud. The last point we wanted to make here, I'd love to get your thoughts on this, is real-time AI inferencing, for instance at the edge, is going to become increasingly important for us. It's going to usher in new economics, new types of silicon, particularly Arm-based. We've covered that a lot on "Breaking Analysis", new tooling, new companies, and that could disrupt the sort of cloud model if new economics emerge. 'Cause cloud obviously very centralized, they're trying to decentralize it. But over the course of this decade we could see some real disruption there. Andy, give us your final thoughts on that. >> Yes and no. I mean at the end of the day, cloud is kind of centralized now, but a lot of these companies, including AWS, are kind of trying to decentralize that by putting out their own sub-centers and edge locations. >> Local Zones, Outposts. >> Yeah, exactly. Particularly the Outpost concept. And if it can even become like a micro center and stuff, it won't go to the localized level of, I go to a single IoT level. But again, the cloud extends itself to that level.
So if there is an opportunity or need for it, the hyperscalers will figure out a way to fit that model. So I wouldn't worry too much about that, about deployment and where to have it and what to do with that. But you know, figure out the right business use case, get the right data, get the ethics and governance in place, and make sure they get it to production, and make sure you pull the model out when it's not operating well. >> Excellent advice. Andy, I got to thank you for coming into the studio today, helping us with this "Breaking Analysis" segment. Outstanding collaboration and insights and input in today's episode. Hope we can do more. >> Thank you. Thanks for having me. I appreciate it. >> You're very welcome. All right. I want to thank Alex Myerson, who's on production and manages the podcast. Ken Schiffman as well. Kristen Martin and Cheryl Knight helped get the word out on social media and our newsletters. And Rob Hof is our editor-in-chief over at Silicon Angle. He does some great editing for us. Thank you all. Remember all these episodes are available as podcasts. Wherever you listen, all you got to do is search "Breaking Analysis" podcast. I publish each week on wikibon.com and siliconangle.com, or you can email me at david.vellante@siliconangle.com to get in touch, or DM me @dvellante, or comment on our LinkedIn posts. Please check out ETR.AI for the best survey data in the enterprise tech business, and Constellation Research, where Andy publishes some awesome information on AI and data. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching everybody and we'll see you next time on "Breaking Analysis". (gentle closing tune plays)
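One quick note on the net score methodology Dave walks through earlier in this episode, the "subtract the reds from the greens" arithmetic. As a rough sketch with made-up percentages, not actual ETR survey figures, it boils down to this:

def net_score(adopting_new, spending_more, flat, spending_less, churning):
    """Net score: (% adopting new + % increasing spend 6% or more)
    minus (% decreasing spend 6% or more + % churning). Flat spend is ignored."""
    assert abs(adopting_new + spending_more + flat + spending_less + churning - 100) < 1e-6
    return (adopting_new + spending_more) - (spending_less + churning)

# Hypothetical survey splits for a single vendor (illustrative only, not real ETR data):
print(net_score(adopting_new=12, spending_more=45, flat=35, spending_less=5, churning=3))
# -> 49, i.e. a highly elevated net score, above the 40% "magic" line discussed in the episode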

Published Date : Dec 29 2022

Breaking Analysis: Grading our 2022 Enterprise Technology Predictions


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Making technology predictions in 2022 was tricky business, especially if you were projecting the performance of markets or identifying IPO prospects and making binary forecasts on data, AI, and the macro spending climate and other related topics in enterprise tech. 2022, of course, was characterized by a seesaw economy where central banks were restructuring their balance sheets. The war in Ukraine fueled inflation, supply chains were a mess. And the unintended consequences of the forced march to digital and the acceleration are still being sorted out. Hello and welcome to this week's Wikibon Cube Insights powered by ETR. In this breaking analysis, we continue our annual tradition of transparently grading last year's enterprise tech predictions. And you may or may not agree with our self grading system, but look, we're gonna give you the data and you can draw your own conclusions, and tell you what, tell us what you think. >> All right, let's get right to it. So our first prediction was tech spending increases by 8% in 2022. And as we exited 2021, CIOs were optimistic about their digital transformation plans. You know, they rushed to make changes to their business and were eager to sharpen their focus and continue to iterate on their digital business models and plug the holes based on the learnings that they had. And so we predicted that 8% rise in enterprise tech spending, which looked pretty good until Ukraine hit and the Fed decided that, you know, it had to rush and make up for lost time. We kind of nailed the momentum in the energy sector, but we can't give ourselves too much credit for that layup. And as of October, Gartner had IT spending growing at just over 5%. I think it was 5.1%. So we're gonna take a C plus on this one and move on. >> Our next prediction was basically kind of a slow ground ball to second base, if I have to be honest, but we felt it was important to highlight that security would remain front and center as the number one priority for organizations in 2022. As is our tradition, you know, we try to up the degree of difficulty by specifically identifying companies that are gonna benefit from these trends. So we highlighted some possible IPO candidates, which of course didn't pan out. Snyk was on our radar. The company had just had to do another raise and they recently took a valuation hit and it was a down round. They raised $196 million. So good chunk of cash, but not the IPO that we had predicted. Aqua Security's focus on containers and cloud native was a trendy call, and we thought maybe an MSSP or multiple managed security service providers like Arctic Wolf would IPO, but no way that was happening in the crummy market.
And then there's Illumio holding its own. You know, we never liked that $7 billion price tag that Okta paid for Auth0, but we loved the TAM expansion strategy to target developers beyond Okta's enterprise strength. But we've got to take some points off for Okta's failure thus far to really nail the integration and the go-to-market model with Auth0 and bring that into the core Okta. >> So the focus on endpoint security, that was a winner in 2022, as CrowdStrike led that charge with others holding their own, not the least of which was Palo Alto Networks as it continued to expand beyond its core network security and firewall business through acquisition. So overall we're gonna give ourselves an A minus for this relatively easy call, but again, we had some specifics associated with it to make it a little tougher. And of course we're watching very closely this coming year, in 2023, the vendor consolidation trend. According to a recent Palo Alto Networks survey of 1,300 SecOps pros, on average organizations have more than 30 security tools to manage. So consolidating vendors, and consolidating redundant vendors in particular, is a logical way to optimize cost. The ETR data shows that's clearly a trend that's on the upswing. >> Now moving on, a big theme of 2020 and 2021 of course was remote work and hybrid work and new ways to work and return to work. So we predicted in 2022 that hybrid work models would become the dominant protocol, which clearly is the case. We predicted that about 33% of the workforce would come back to the office in 2022. In September, the ETR data showed that figure was at 29%, but organizations expected that 32% would be in the office pretty much full-time by year end. That hasn't quite happened, but we were pretty close with the projection, so we're gonna take an A minus on this one. Now, supply chain disruption was another big theme that we felt would carry through 2022. And sure, that sounds like another easy one, but as is our tradition, we try to put some binary metrics around our predictions to put some meat on the bone, so to speak, and allow us, and you, to say, okay, did it come true or not? >> So we had some data that we presented last year on supply chain issues impacting hardware spend. We said at the time, and you can see this on the left-hand side of this chart, that PC and laptop demand would remain above pre-COVID levels, which would reverse a decade of year-on-year declines, which I think started in around 2011, 2012. Now, while demand is down this year pretty substantially relative to 2021, IDC has worldwide unit shipments for PCs at just over 300 million for '22. If you go back to 2019, you're looking at around, let's say, 260 million units shipped globally, roughly. So pretty good call there, definitely much higher than pre-COVID levels. But you might be asking, why the B? Well, we projected that 30% of customers would replace security appliances with cloud-based services and that more than a third would replace their internal data center server and storage hardware with cloud services, like 30% and 40% respectively. >> And we don't have explicit survey data on exactly these metrics, but anecdotally we see this happening in earnest.
And we do have some data that we're showing here on cloud adoption from ETR's October survey, where the midpoint of workloads running in the cloud is around 34% and forecast, as you can see, to grow steadily over the next three years. Well look, we understand this is not a one-to-one correlation with our prediction, but it's a pretty good bet that we were right. We've got to take some points off, we think, for the lack of unequivocal proof, because again, we always strive to make our predictions in ways that can be measured as accurate or not. Is it binary? Did it happen, did it not? Kind of like an OKR. We strive to provide data as proof, and in this case it's a bit fuzzy, we have to admit, although we're pretty comfortable that the prediction was accurate. And look, when you make a hard forecast, sometimes you've got to pay the price. All right, next, we said in 2022 that the big four cloud players would generate $167 billion in IaaS and PaaS revenue, combining for 38% market growth. Our current forecasts are shown here with a comparison to our January 2022 figures. So where we are today: currently we expect $162 billion in total revenue and a 33% growth rate. Still very healthy, but not on our mark. We think AWS is gonna miss our prediction by about a billion dollars, which is not bad for an $80 billion company, but they're not gonna hit that expectation of getting really close to a $100 billion run rate. We thought they'd exit the year closer to $25 billion a quarter, and we don't think they're gonna get there. >> Look, we pretty much nailed Azure, and even though our prediction was correct about Google Cloud Platform surpassing Alibaba, we way overestimated the performance of both of those companies. So we're gonna give ourselves a C plus here. You might think it's a little bit harsh, and we could argue for a B minus to the professor, but the misses on GCP and Alibaba we think warrant a self-penalty on this one. All right, let's move on to our prediction about supercloud. We said it becomes a thing in 2022, and we think by many accounts it has. Despite the naysayers, we're seeing clear evidence that the concept of a layer of value-add that sits above and across clouds is taking shape. And on this slide we showed just some of the pickup in the industry. One of the most interesting is Cloudflare, the biggest supercloud antagonist. >> Charles Fitzgerald even predicted that no vendor would ever use the term in their marketing, and that if that happened it would be proof that supercloud was a thing, and he said it would never happen. Well, Cloudflare has, and they launched their version of supercloud at their Developer Week. Chris Miller of The Register put out a supercloud block diagram, something else that Charles Fitzgerald was pushing us for, and rightly so, it was a good call on his part. Chris Miller actually came up with one that's pretty good. David Linthicum also has produced a block diagram, kind of similar; David uses the term metacloud and the term supercloud kind of interchangeably to describe that trend, so we're aligned on that front. Brian Gracely has covered the concept on the popular Cloudcast podcast. Berkeley launched the Sky Computing initiative. >> You read through that white paper, and many of the concepts highlighted in the Supercloud 3.0 community-developed definition align with that.
Walmart launched a platform with many of the salient supercloud attributes. So did Goldman Sachs, so did Capital One, so did Nasdaq. So sorry, you can hate the term, but very clearly the evidence is gathering for the supercloud storm. We're gonna take an A plus on this one. Sorry, haters. All right, let's talk about data mesh. In our 2021 predictions post, we said that in the 2020s, 75% of large organizations are gonna re-architect their big data platforms. So kind of a decade-long prediction. We don't always like to do that, but sometimes it's warranted. And because it was a longer-term prediction, coming into '22 when we were evaluating our 2021 predictions, we took a grade of incomplete, because it was a better-part-of-the-decade prediction. >> So earlier this year we said our number seven prediction was that data mesh gains momentum in '22, but is largely confined to narrow data problems with limited scope, as you can see here with some of the key bullets. There's a lot of discussion in the data community about data mesh, and while there are an increasing number of examples, JPMorgan Chase, Intuit, HSBC, HelloFresh, and others that are completely rearchitecting parts of their data platform, completely rearchitecting entire data platforms is non-trivial. There are organizational challenges, data ownership debates, technical considerations, and in particular, two of the four fundamental data mesh principles, the need for self-service infrastructure and federated computational governance, are challenging. Look, democratizing data and facilitating data sharing creates conflicts with regulatory requirements around data privacy. As such, many organizations are being really selective with their data mesh implementations, and hence our prediction of narrowing the scope of data mesh initiatives. >> I think that was right on. JPMC is a good example of this, where you've got a single group within a division narrowly implementing the data mesh architecture. They're using AWS, they're using data lakes, they're using AWS Glue, creating a catalog, and a variety of other techniques to meet their objectives. They're kind of automating data quality, and it was a pretty well thought out and interesting approach. And I think it's gonna be made easier by some of the announcements that Amazon made at the recent re:Invent, particularly trying to eliminate ETL, better connections between Aurora and Redshift, and better data sharing and data clean rooms. So a lot of that is gonna help. Of course, Snowflake has been on this for a while now. Many other companies are facing limitations, as we said here in this slide, with their Hadoop data platforms. They need to do some new thinking around that to scale. HelloFresh is a really good example of this. Look, the bottom line is that organizations want to get more value from data, and having centralized, highly specialized teams that own the data problem has been a barrier and a blocker to success. The data mesh starts with organizational considerations, as described in great detail by Ash Nair of Warner Brothers. So take a listen to this clip. >> Yeah, so when people think of Warner Brothers, you always think of the movie studio, but we're more than that, right? I mean, you think of HBO, you think of TNT, you think of CNN. We have 30-plus brands in our portfolio, and each has their own needs.
So the idea of a data mesh really helps us, because what we can do is federate access across the company so that CNN can work at their own pace. When there's election season, they can ingest their own data, and they don't have to bump up against, as an example, HBO if Game of Thrones is going on. >> So it's often the case that data mesh is in the eyes of the implementer. And while a company's implementation may not strictly adhere to Zhamak Dehghani's vision of data mesh, that's okay; the goal is to use data more effectively. And despite Gartner's attempts to deposition data mesh in favor of the somewhat confusing, or frankly far more confusing, data fabric concept that they stole from NetApp, data mesh is taking hold in organizations globally today. So we're gonna take a B on this one. The prediction is shaping up the way we envisioned, but as we previously reported, it's gonna take some time, the better part of a decade in our view. New standards have to emerge to make this vision become reality, and they'll come in the form of both open and de facto approaches. Okay, our eighth prediction last year focused on the face-off between Snowflake and Databricks. >> And we realize this is a popular topic, and maybe one that's getting a little overplayed, but these are two companies that initially looked like they were shaping up as partners, and by the way, they are still partnering in the field. Go back a couple of years, and the idea of using AWS infrastructure, Databricks machine intelligence, and applying that on top of Snowflake as a data warehouse is still very viable. But both of these companies have much larger ambitions. They've got big total available markets to chase and large valuations that they have to justify. So what's happening, as we've previously reported, is that each of these companies is moving toward the other firm's core domain, and they're building out an ecosystem that'll be critical for their future. As part of that effort, we said each is gonna become an aggressive investor and maybe start doing some M&A, and they have, in various companies. >> And on this chart that we produced last year, we studied some of the companies that were targets, and we've added some recent investments of both Snowflake and Databricks. As you can see, they've both, for example, invested in Alation. Snowflake's put money into Lacework, the security firm, and ThoughtSpot, which is trying to democratize data with AI. Collibra is a governance platform. And you can see Databricks' investments in data transformation with dbt Labs, Matillion doing simplified business intelligence, Hunters, which is their security investment, and so forth. So other than our thought that we'd see Databricks IPO last year, this prediction has been pretty spot on. We'll give ourselves an A on that one. Now, observability has been a hot topic, and we've been covering it for a while with our friends at ETR, particularly Eric Bradley. Our number nine prediction last year was basically that if you're not cloud native in observability, you are gonna be in big trouble. >> So everything's gotta go cloud native, and that's clearly been the case. Splunk, the big player in the space, has been transitioning to the cloud, and it hasn't always been pretty, as we reported. Datadog has real momentum, and the ELK stack has its open source model.
You've got new entrants that we've cited before, like Observe, Honeycomb, ChaosSearch, and others that we've reported on; they're all born in the cloud. So we're gonna take another A on this one. Admittedly, it's a reasonably easy call, but you gotta have a few of those in the mix. Okay, our last prediction, number 10, was around events, something theCUBE knows a little bit about. We said that a new category of events would emerge as hybrid, and that for the most part has happened. That's gonna be the mainstay, is what we said, and that pure play virtual events are gonna give way to hybrid. >> And the narrative is that virtual-only events are good for quick hits but lousy replacements for in-person events. That said, organizations of all shapes and sizes learned how to create better virtual content and support remote audiences during the pandemic. So when we said pure play is gonna give way to hybrid, we implied, or really specified, that the physical event, that VIP experience, is gonna define the overall experience, that those VIP events would create a little FOMO, fear of missing out, and that a virtual component would overlay it, serving an audience 10x the size of the physical one. We saw two really good examples of that. Red Hat Summit in Boston, a small event of a couple thousand people, served tens of thousands online. Second was Google Cloud Next, a VIP event in New York City. >> Everything else was virtual. Even examples of our prediction of metaverse-like immersion have popped up, and other companies are doing roadshows, as we predicted; a lot of companies are doing it. You're seeing that as a major trend, where organizations are going with their sales teams out into the regions and doing a little belly-to-belly action as opposed to the big giant event. That's definitely a trend that we're seeing. So in reviewing this prediction, the grade we gave ourselves is maybe a bit unfair; you could argue for a higher grade, but organizations still haven't figured it out. They have hybrid experiences, but they generally do a really poor job of leveraging the afterglow of an event. It still tends to be one and done, let's move on to the next event or the next city. >> Let the sales team pick up the pieces, if they were paying attention. So because of that, we're only taking a B plus on this one. Okay, so that's the review of last year's predictions. Overall, if you average out our grades on the 10 predictions, that comes out to a B plus. I dunno why we can't seem to get that elusive A, but we're gonna keep trying. Our friends at ETR and we are starting to look at the data for 2023 from the surveys and all the work that we've done on theCUBE, and our analysis, and we're gonna put together our predictions. We've had literally hundreds of inbounds from PR pros pitching us. We've got this huge thick folder that we've started to review with our yellow highlighter. Our plan is to review it this month, take a look at all the data, get some ideas from the inbounds, and then the ETR January survey is in the field. >> It's probably got a little over a thousand responses right now, and they'll get up to 1,400 or so. Once we've digested all that, we're gonna go back and publish our predictions for 2023 sometime in January. So stay tuned for that.
All right, we're gonna leave it there for today. I wanna thank Alex Myerson, who's on production and manages the podcast, and Ken Schiffman as well, out of our Boston studio. A really heartfelt thank you to Kristen Martin and Cheryl Knight and their team; they help get the word out on social and in our newsletters. Rob Hof is our editor in chief over at SiliconANGLE, who does some great editing for us. Thank you all. Remember, all these episodes are available as podcasts. Wherever you listen, all you do is search Breaking Analysis podcast. We're really getting some great traction there, and I appreciate you guys subscribing. I publish each week on wikibon.com and siliconangle.com, or you can email me directly at david.vellante@siliconangle.com, DM me @DVellante, or comment on my LinkedIn posts. And please check out etr.ai for the very best survey data in the enterprise tech business; some awesome stuff in there. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, and we'll see you next time on Breaking Analysis.

Published Date : Dec 18 2022


Breaking Analysis: How Palo Alto Networks Became the Gold Standard of Cybersecurity


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> As an independent pure play company, Palo Alto Networks has earned its status as the leader in security. You can measure this in a variety of ways: revenue, market cap, execution, ethos, and most importantly, conversations with customers generally and CISOs specifically, who consistently affirm this position. The company is on track to double its revenues in fiscal year '23 relative to fiscal year 2020, despite macro headwinds, which are likely to carry through next year. Palo Alto owes its position to a clarity of vision and strong execution on a TAM expansion strategy through acquisitions and integration into its cloud and SaaS offerings. Hello and welcome to this week's Wikibon Cube Insights powered by ETR. In this Breaking Analysis, ahead of Palo Alto Ignite, the company's user conference, we bring you the next chapter on top of last week's cybersecurity update. We're going to dig into the ETR data on Palo Alto Networks as we promised, provide a glimpse of what we're going to look for at Ignite, and posit what Palo Alto needs to do to stay on top of the hill. Now, the challenge for cybersecurity professionals is dead simple to understand; solving it, not so much. This is a taxonomic eye test, if you will, from Optiv. It's one of our favorite artifacts to make the point that the cybersecurity landscape is a mosaic of stovepipes. Security professionals have to work with dozens of tools, many legacy, combined with shiny new toys, to try and keep up with the relentless pace of innovation catalyzed by incredibly capable, well-funded, and motivated adversaries. Cybersecurity is an anomalous market in that the leaders have low single digit market shares. Think about that. Cisco at one point held 60% market share in the networking business, and it's still deep into the 40s. Oracle captures around 30% of database market revenue. EMC in storage at its peak had more than 30% of that market. Even Dell's PC market share is in the mid 20s, or even over that from a revenue standpoint. So cybersecurity, from a market share standpoint, is perhaps even more fragmented than the software industry. Okay, you get the point. So despite its position as the number one player, Palo Alto might have maybe 3% or 4% of the total market, depending on what you use as your denominator, just a tiny slice. So how is it that we can sit here and declare Palo Alto the undisputed leader? Well, we probably wouldn't go that far; they have quite a bit of competition. But this CISO, from a recent ETR round table discussion with our friend Eric Bradley, summed up Palo Alto's allure, we thought, pretty well. The question was, why Palo Alto Networks? Here's the answer. Because of its completeness as a platform, its ability to integrate with its own products or the ones they acquire; they integrate and then rebrand them as their own. We looked at other vendors, we just didn't think they were as mature, and we had already implemented some of the Palo Alto tools like the firewalls, and we thought, why not go holistically with the vendor, a single throat to choke, if you will, if stuff goes wrong. And I think that was probably the primary driver, along with familiarity with the tools and the resources that they provided. Now, here's another stat from ETR's Eric Bradley. He gave us a glimpse of the January survey that's in the field now.
The percent of IT buyers stating that they plan to consolidate redundant vendors went from 34% in the October survey to 44% now. So we feel this bodes well for consolidators like Palo Alto Networks, and the same is true for Microsoft's kind of good-enough approach. It should also be true for CrowdStrike, although last quarter we saw softness reported in their SMB market, whereas interestingly, MongoDB actually saw consistent strength from its SMB and self-serve business. So that's something that we're watching very closely. Now, Palo Alto Networks has held up better than most of its peers in the stock market, so let's take a look at that real quick. This chart gives you a sense of how well. It's a one-year comparison of Palo Alto with the BUG ETF, that's the cyber basket that we like to use, and with CrowdStrike, Zscaler, and Okta. Now remember, Palo Alto didn't run up as much as CrowdStrike, Zscaler, and Okta during the pandemic, but you can see it's now down, quote unquote, only 9% for the year, whereas the cyber basket ETF is off 27%, roughly in line with the NASDAQ. We're not showing it here, but CrowdStrike is down 44%, Zscaler down 61%, and Okta off a whopping 72% in the past 12 months. Now, as we've indicated, Palo Alto is making a strong case for consolidating point tools, and we think it will have a much harder time getting customers to switch off of big platforms like Cisco, who's another leader in network security. But based on the fragmentation in the market, there's plenty of room to grow, in our view. We asked Breaking Analysis contributor Chip Symington for his take on the technicals of the stock, and he said that despite Palo Alto's leadership position, it doesn't seem to make much difference these days; it's all about interest rates. And even though this name has performed better than its peers, it looks like the stock wants to keep testing its 52-week lows. But he thinks Palo Alto got oversold during the last big selloff, and the fact that the company's free cash flow is so strong probably keeps it at the $150 level or above, maybe bouncing around there for a while. If it breaks through that to the downside, its next test is at that low of around the $140 level. So thanks for that, Chip. Now, having gotten that out of the way, as we said on the previous chart, Palo Alto has strong opinions; its founder and CTO, Nir Zuk, is extremely clear on that point of view. So let's take a look at how Palo Alto got to where it is today and how we think you should think about its future. The company was founded around 18 years ago as a network security company focused on what they called next-gen firewalls. Now, what Palo Alto did was different. They didn't try to stuff a bunch of functionality inside of a hardware box. Rather, they layered network security functions on top of their firewalls and delivered value as a service through software, running at the time in its own cloud. Pretty obvious today, but forward thinking for the time, and now they've moved to a more true cloud native platform with much more activity in the public cloud. In February 2020, right before the pandemic, we reported on the divergence in market values between Palo Alto and Fortinet, and we cited some challenges that Palo Alto was having transitioning to a cloud native model. At the time, we said we were confident that Palo Alto would make it through the knothole, and you can see from the previous chart that it has. So the company's architectural approach was to do the heavy lifting in the cloud.
And this eliminates the need for customers to deploy sensors, proxies, or sandboxes on prem. Sandboxes, for instance, are vulnerable to overwhelming attacks. Think about it: if your sandbox is on prem, you're not going to be updating it every day. No way. You're probably not going to update it even every week or every month. And if the capacity of your sandbox is, let's say, 20,000 files an hour, a hacker's just going to turn up the volume and it'll overwhelm you. They'll send a hundred thousand email attachments into your sandbox, they'll choke you out, and then they'll have the run of the house while you're trying to recover. Now, the cloud doesn't completely prevent that, but what it does do is definitely increase the hacker's cost. So they're probably going to hit some easier targets, and that's kind of the objective for security firms: increase the denominator on the attacker's ROI. All right, the next thing that Palo Alto did is start acquiring aggressively. I think we counted 17 or 18 acquisitions to expand the TAM beyond network security into endpoint, CASB, PaaS security, IaaS security, container security, serverless security, incident response, SD-WAN, CI/CD pipeline security, attack surface management, and supply chain security, just recently with the acquisition of Cider Security. And Palo Alto, by all accounts, takes the time to integrate into its cloud and SaaS platform called Prisma, unlike many acquisitive companies in the past; EMC was a really good example, where you ended up with kind of a Franken-portfolio. Now, all this leads us to believe that Palo Alto wants to be the consolidator and is in a good position to do so. But beyond that, as multi-cloud becomes more prevalent and more of a strategy, customers tell us they want a consistent experience across clouds, and it's going to be the same, by the way, with IoT, sort of the next wave here. Customers don't want another stovepipe. So we think Palo Alto is in a good position to build what we call the security supercloud, that layer above the clouds that brings a common experience for devs and operational teams. So of course, the obvious question is this: can Palo Alto Networks continue on this path of acquire and integrate and still maintain best-of-breed status? Can it? Will it? Does it even have to? As Holger Mueller of Constellation Research and I talk about all the time, integrated suites seem to always beat best of breed in the long run. We'll come back to that. Now, this next graphic that we're going to show you underscores this question about portfolio. Here's a picture, and I don't expect you to digest it all, but it's a screen grab of Palo Alto's product and solutions portfolio: network security, cloud security, SASE, CNAPP, endpoint, Unit 42, which is their threat intelligence platform, and every imaginable security service and solution for customers. Well, maybe not every; I'm sure there's more to come, like supply chain with the recent Cider acquisition, and maybe more IoT beyond Zingbox, an earlier acquisition. But we're sure there will be more in the future, both organic and inorganic. Okay, let's bring in more of the ETR survey data. For those of you who don't know ETR, they are the number one enterprise data platform, surveying thousands of end customers every quarter, with additional drill-down surveys and customer round tables, just an awesome SaaS-enabled platform.
And here's a view that shows net score, or spending momentum, on the vertical axis and pervasiveness, or presence within the ETR data set, on the horizontal axis. You see that red dotted line at 40%; anything at or over that indicates a highly elevated net score, and as you can see, Palo Alto is right on that line, just under. And I'll give you another glimpse: it looks like Palo Alto, despite the macro, may even edge up a bit in the next survey, based on the glimpse that Eric gave us. Now, those colored bars in the bottom right corner show the breakdown of Palo Alto's net score and underscore the methodology that ETR uses. The lime green is new customer adoptions; that's 7%. The forest green at 38% represents the percent of customers that are spending 6% or more on Palo Alto solutions. The gray, at 48%, is flat spending, plus or minus 5%. The pinkish, at 5%, is spending that's down on Palo Alto Networks products by 6% or worse. And the bright red, at only 2%, is churn or defections. Very low single digit numbers for Palo Alto; that's a real positive. What you do is subtract the reds from the greens and you get a net score of 38%, which is very good for a company of Palo Alto's size. And we'll note this is based on just under 400 responses in the ETR survey that are Palo Alto customers, out of around 1,300 in the total survey, so it's a really good representation of Palo Alto. And you can see the other leading companies like CrowdStrike, Okta, Zscaler, Fortinet, and Cisco; they loom large with similar aspirations. Well, maybe not so much Okta; they don't necessarily want to rule the world, they want to rule identity. And of course, the ever ubiquitous Microsoft is in the upper right. Now, drilling deeper into the ETR data, let's look at how Palo Alto has progressed over the last three surveys in terms of market presence. This view of the data shows pervasiveness going back to October 2021, that's the gray bars; the blue is July 2022, and the yellow is the latest survey from October 2022. Remember, the January survey is currently in the field. Now, the leftmost set of data shows size of company, the middle set shows a select number of industries, and the rightmost shows geographic region. Notice anything? Yes, Palo Alto is up across the board relative to both this past summer and last fall. So that's pretty impressive. Palo Alto Networks CEO Nikesh Arora stressed on the last earnings call that the company is seeing somewhat elongated deal approvals and sometimes customers splitting up the size of deals. He stressed that certain industries like energy, government, and financial services continue to spend, but we would expect even a pullback there as companies get more conservative. The point is that Nikesh talked about how they're hiring more sales pros to work the pipeline, because they understand that they have to work harder to pull deals forward, since they've got to get more approvals and increase the volume that's coming through the pipeline to account for the possibility that certain companies are going to split up large deals into smaller bite-size chunks. So they're really going hard after that go-to-market expansion to account for that.
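To make that net score arithmetic concrete, here is a minimal sketch of the calculation as it is described above. This is not ETR's actual code; the function name and bucket labels are our own shorthand, and the only real numbers are the Palo Alto percentages quoted in the narrative.

```python
# Minimal sketch of the net score arithmetic described above (not ETR's code).
# Buckets: new adoption, spending up 6%+, flat, spending down 6%+, and churn.

def net_score(adoption, increase, flat, decrease, churn):
    """Net score = (% adopting + % increasing) - (% decreasing + % churning)."""
    total = adoption + increase + flat + decrease + churn
    assert abs(total - 100) < 1e-6, "bucket percentages should sum to 100"
    return (adoption + increase) - (decrease + churn)

# Palo Alto Networks buckets cited above: 7% adoption, 38% increasing,
# 48% flat, 5% decreasing, 2% churn.
print(net_score(7, 38, 48, 5, 2))  # -> 38, matching the 38% net score quoted
```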
All right, so we're going to wrap by sharing what we expect and what we're going to probe for at Palo Alto Ignite next week. Lisa Martin and I will be hosting theCUBE, and here's what we'll be looking for. First, it's a four-day event at the MGM, with the meat of the program on days two and three. Day two is the big keynote, and that's when we'll start our broadcasting; we're going for two days. Now, we've never done Palo Alto Ignite before, but our understanding is it's a pretty technically oriented crowd that's going to be eager to hear what CTO and founder Nir Zuk has to say, as well as CEO Nikesh Arora, and, in addition, longtime friend of theCUBE and current president BJ Jenkins is going to be speaking. Wendy Whitmore, who runs Unit 42, will be there along with several other high profile Palo Alto execs, and Thomas Kurian from Google is a featured speaker. Lee Claridge, who is Palo Alto's chief product officer, we think is going to be giving the audience heavy doses of Prisma Cloud and Cortex enhancements. Now, Cortex, you might remember, came from an acquisition and does threat detection and attack surface management, and we think we're going to hear a lot about security automation. So we'll be listening for how Cortex has been integrated and what kind of uptake it's getting. We've done some modeling from the ETR data on Cortex; it looks like it's got a lot of upside, and through the Palo Alto go-to-market machine it could really pick up momentum. That's something that we'll be probing for. Now, one of the other things that we'll be watching is pricing. We want to talk to customers about their spend optimization, their spending patterns, and their vendor consolidation strategies. Look, Palo Alto is a premium offering; it charges for value, and it's expensive. So we also want to understand what kind of switching costs customers are willing to absorb, how onerous they are, and what the business case looks like, how customers are thinking about that business case. We also want to understand, and really probe on, how Palo Alto will maintain best of breed as it continues to acquire and integrate to expand its TAM and its appeal as that one-stop shop. Can it do that, as we talked about before, and will it do that? There's also an interesting tension going on, sort of changing subjects here, in security. There's a guy named Edward Hellekey who's been in theCUBE before. He hasn't been in theCUBE in a while, but he's a security pro who has educated us on the nuances of protecting data privacy, public policy, how it varies by region, and how complicated it is relative to security. Because in security, technically you have to show a chain of custody that proves unequivocally, for example, that data has been deleted or scrubbed, or that metadata doesn't include any residual private data that violates the local laws. And the tension is this: you need good data, and lots of it, to have good security, really the more the better. But government policy is often at odds with, and a major blocker to, sharing data, and it's getting more so. So we want to understand this tension and how companies like Palo Alto are dealing with it. Are customers testing public policy in courts? We think not quite yet. Are governments making exceptions in policies like GDPR that favor security over data privacy? What are the trade-offs there? And finally, one theme of this Breaking Analysis is what does Palo Alto have to do to stay on top? And we would sum it up with three words: ecosystem, ecosystem, ecosystem.
And we said this at CrowdStrike Falcon in September: the one concern we had was the pace of ecosystem development for CrowdStrike. Is collaboration possible with competitors? Is Palo Alto being adopted aggressively by global system integrators? What's the uptake there? What about developers? Look, the hallmark of a cloud company, and Palo Alto is a cloud security company, is a thriving ecosystem that has entries into and exits from its platform. So we'll be looking at what that ecosystem looks like, how vibrant and inclusive it is, where the public clouds fit, and whether Palo Alto Networks can really become the security supercloud. Okay, that's a wrap. Stop by next week if you're in Vegas and say hello to theCUBE team; we have an unbelievable lineup on the program. If you're not there, check out our coverage on theCUBE.net. I want to thank Eric Bradley for sharing a glimpse, on short notice, of the upcoming survey from ETR, and his thoughts, and as always, thanks to Chip Symington for his sharp comments. I want to thank Alex Myerson, who's on production and manages the podcast, Ken Schiffman as well in our Boston studio, Kristen Martin and Cheryl Knight, who help get the word out on social and of course in our newsletters, and Rob Hof, our editor in chief over at SiliconANGLE, who does some awesome editing. Thank you to all. Remember, all these episodes are available as podcasts. Wherever you listen, all you've got to do is search Breaking Analysis podcast. I publish each week on wikibon.com and siliconangle.com, where you can email me at david.vellante@siliconangle.com, DM me @DVellante, or comment on our LinkedIn posts. And please do check out etr.ai; they've got the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching. We'll see you next week at Ignite or next time on Breaking Analysis. (upbeat music)

Published Date : Dec 11 2022


Breaking Analysis: Cyber Firms Revert to the Mean


 

(upbeat music) >> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> While by no means a safe haven, the cybersecurity sector has outpaced the broader tech market by a meaningful margin, that is, up until very recently. Cybersecurity remains the number one technology priority for the C-suite, but as we've previously reported, the CISO's budget has constraints just like other technology investments. Recent trends show that economic headwinds have elongated sales cycles, pushed deals into future quarters, and, just like other tech initiatives, are pacing cybersecurity investments and breaking them into smaller chunks. Hello and welcome to this week's Wikibon Cube Insights powered by ETR. In this Breaking Analysis, we explain how cybersecurity trends are reverting to the mean and tracking more closely with other technology investments. We'll make a couple of valuation comparisons to show the magnitude of the challenge and which cyber firms are feeling the heat and which aren't; there are some exceptions. We'll then show the latest survey data from ETR to quantify the contraction in spending momentum, and close with a glimpse of the landscape of emerging cybersecurity companies, the private companies that could be ripe for acquisition or consolidation, or disruptive to the broader market. First, let's take a look at the recent patterns for cyber stocks relative to the broader tech market as a benchmark. Here's a year-to-date comparison of the BUG ETF, which comprises a basket of cybersecurity names, and we compare that with the tech-heavy NASDAQ composite. Notice that on April 13th of this year, the cyber ETF was actually in positive territory while the NASDAQ was down nearly 14%. By August 16th, the green turned red for cyber stocks, but they still meaningfully outpaced the broader tech market by more than 950 basis points. As of December 2nd, that delta had contracted; as you can see, the cyber ETF is now down nearly 25% year to date, while the NASDAQ is down 27% and change. Now take a look at just how far a few of the high profile cybersecurity names have fallen. Here are six security firms that we've been tracking closely since before the pandemic. We've been tracking dozens, but let's just take a look at this subset of the data. We show for comparison the S&P 500 and the NASDAQ, again, just for reference; they're both up relative to right before the pandemic. During the pandemic, the S&P shot up more than 40% relative to its pre-pandemic level, around February 2020 is what we're using for the pre-pandemic level, and the NASDAQ peaked at around 65% higher than that February level. They're now at 85% and 71%, respectively, of their pandemic highs. Compare that to these six companies. Splunk, which was and still is working through a transition, is well below its pre-pandemic market value, at 44% of its pre-pandemic high as of last Friday. Palo Alto Networks is the most interesting here, in that it had been facing challenges prior to the pandemic related to a pivot to the cloud, which we reported on at the time. But as we said at that time, we believed the company would sort out its cloud transition, its go-to-market challenges, and its sales compensation issues, which it did, as you can see.
And its valuation jumped from $24 billion prior to COVID to $56 billion, and it's holding 93% of its peak value. Its revenue run rate is now over $6 billion, with a healthy growth rate of 24% expected for the next quarter. Similarly, Fortinet has done relatively well, holding 71% of its peak COVID value, with a healthy 34% revenue guide for the coming quarter. Now, Okta has been the biggest disappointment, a darling of the pandemic. Okta's communication snafu around what was actually a pretty benign hack, combined with difficulty absorbing its $7 billion Auth0 acquisition, knocked the company off track. Its valuation has dropped by $35 billion since its peak during the pandemic, and that's after a nice beat-and-bounce-back quarter just announced by Okta. Now, in our view, Okta remains a viable long-term leader in identity. However, its recent fiscal '24 revenue guide was exceedingly conservative, at around 16% growth. So either the company is sandbagging, or it has such poor visibility that it wants to be super cautious, or maybe it's actually seeing a dramatic slowdown in its business momentum. After all, this is a company that not long ago was putting up 50%-plus revenue growth rates, so it's one that bears close watching. CrowdStrike is another big name that we've been talking about on Breaking Analysis for quite some time. It, like Okta, has led the industry in a key ETR performance indicator that measures customer spending momentum. Just last week, CrowdStrike announced revenue increased more than 50%, but new ARR was soft and the company guided conservatively. Not surprisingly, the stock got absolutely crushed as CrowdStrike blamed tepid demand from smaller and midsize firms. Many analysts believe that competition from Microsoft was one factor, along with cautious spending amongst those midsize and smaller customers. Notably, large customers remain active, so we'll see if this is a longer-term trend or an anomaly. Zscaler is another company in the space that we've reported as having great customer spending momentum in the ETR data, but even though the company beat expectations for its recent quarter, like other companies its outlook was conservative. So other than Palo Alto, and to a lesser extent Fortinet, these companies and others that we're not showing here are feeling the economic pinch, and it shows in the compression of value. CrowdStrike, for example, had a $70 billion valuation at one point during the pandemic, Zscaler topped $50 billion, and Okta hit $45 billion. Now, having said that, Palo Alto Networks, Fortinet, CrowdStrike, and Zscaler are all still trading well above the pre-pandemic levels that we tracked back in February of 2020. All right, let's now go back to ETR's January survey and take a look at how much things have changed since the beginning of the year. Remember, this is obviously pre-Ukraine and pre all the concerns about the economic headwinds. Here's an XY graph that shows net score, or spending momentum, on the y-axis, and market presence on the x-axis. The red dotted line at 40% on the vertical indicates a highly elevated net score; anything above that we think is super elevated. Now, we filtered the data here to show only those companies with more than 50 responses in the ETR survey. Still really crowded. Note that there were around 20 companies above that red 40% mark, which is a very high number. It's a crowded market, but there are lots of companies with positive momentum.
Now let's jump ahead to the most recent October survey and take a look at what's happening. It's the same graphic, plotting spending momentum and market presence, and look at the number of companies above that red line and how it's been squashed. It's really compressing. It's still a crowded market, there's still plenty of green, but the number of companies above that key 40% mark has gone from around 20 firms down to about five or six. And it speaks to that compression in IT spending, and of course the elongated sales cycles pushing deals out and taking them in smaller chunks. I can't tell you how many conversations I had with customers last week at re:Invent underscoring this exact same trend. The buyers are getting pressure from their CFOs to slow things down, do more with less, and prioritize projects to those that are absolutely critical to driving revenue or cutting costs. And that's rippling through all sectors, including cyber. Now, let's do a bit more playing around with the ETR data and take a look at those companies with more than a hundred citations in the survey this quarter, so N greater than or equal to a hundred. Followers of Breaking Analysis know that each quarter we take a look at what we call four-star security firms, that is, those that hit the top 10 for both spending momentum, net score, and the N, the mentions in the survey, the presence or pervasiveness in the survey, and that's what we show here. The leftmost chart is sorted by spending momentum, or net score, and the right-hand chart by shared N, or the number of mentions in the survey, that pervasiveness metric. The solid red line denotes the cutoff point at the top 10, and you'll note we've actually cut it off at 11 to account for Auth0, which is now part of Okta and is going through a go-to-market transition with the company; they're kind of restructuring sales so they can take advantage of that. So starting on the left with spending momentum, again, net score: Microsoft leads all vendors, typical Microsoft, very prominent, although it hadn't always done so; for a while, CrowdStrike and Okta were taking the top spot, now it's Microsoft. CrowdStrike is still always near the top, but note that CyberArk and Cloudflare have cracked the top five, and Okta, which as I just said was consistently at the top, has dropped well off its previous highs. You'll notice that Palo Alto Networks, with a 38% net score, just below that magic 40% number, is healthy, especially as you look over to the right-hand chart. Take a look at Palo Alto with an N of 395. It is the largest of the independent pure play security firms and has a very healthy net score, although one caution is that its net score has dropped considerably since the beginning of the year, which is the case for most of the top 10 names. The only exception is Fortinet; they're the only ones that saw an increase since January in spending momentum as ETR measures it. Now this brings us to the four-star security firms, that is, those that hit the top 10 in both net score on the left-hand side and market presence on the right-hand side. So it's Microsoft, Palo Alto, CrowdStrike, and Okta, which is still there even not accounting for Auth0, just Okta on its own; if you add in Auth0, it's even stronger. Then add in Fortinet and Zscaler. So: Microsoft, Palo Alto, CrowdStrike, Okta, Fortinet, and Zscaler.
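As a rough illustration of that four-star screen, here is a minimal sketch of the double top-10 filter just described: keep only vendors that rank in the top 10 on both spending momentum and survey presence. The data structure, field names, and sample figures are placeholders for illustration, not actual ETR data or ETR's methodology code.

```python
# Hypothetical sketch of the four-star screen: keep only vendors that rank in the
# top N on BOTH net score (spending momentum) and shared N (survey citations).
# Field names and sample records are illustrative placeholders, not ETR data.

from typing import Dict, List


def four_star(vendors: List[Dict], top_n: int = 10) -> List[str]:
    top_momentum = {v["name"] for v in
                    sorted(vendors, key=lambda v: v["net_score"], reverse=True)[:top_n]}
    top_presence = {v["name"] for v in
                    sorted(vendors, key=lambda v: v["shared_n"], reverse=True)[:top_n]}
    return sorted(top_momentum & top_presence)


sample = [
    {"name": "VendorA", "net_score": 0.52, "shared_n": 700},
    {"name": "VendorB", "net_score": 0.38, "shared_n": 395},
    {"name": "VendorC", "net_score": 0.45, "shared_n": 60},
]
print(four_star(sample, top_n=2))  # -> ['VendorA'], the only vendor in both top-2 lists
```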
And as we've mentioned, since the January survey only Fortinet has shown an increase in net score, which again speaks to the compression in spending. Now, one of the big themes we hear constantly in cybersecurity is that the market is overcrowded. Everybody talks about that, me included. The implication is there's a lot of room for consolidation, and that consolidation can come in the form of M&A, or it can come in the form of people consolidating onto a single platform, retiring some other vendors, and getting rid of duplicate vendors. We're hearing that as a big theme as well. Now, as we saw in the previous chart, this is a very crowded market, and we've seen lots of consolidation in 2022 in the form of M&A, literally hundreds of M&A deals, with some of the largest companies going private. SailPoint, KnowBe4, Barracuda, Mandiant, Fedora, these are multi-billion dollar acquisitions, or at least a billion dollars and up, and many of them multi-billion, and there were hundreds more acquisitions in the cyber space. Now, lest you think the pond is overfished, here's a chart from ETR of emerging tech companies in the cybersecurity industry. This data comes from ETR's Emerging Technology Survey, ETS, which is this diamond in the rough that I found a couple of quarters ago, and it's ripe with companies that are candidates for M&A. Many of these companies would've liked to have gotten to the public markets during the pandemic, but they couldn't get there; they weren't ready. So the graph is similar to the previous one, but different. It shows net sentiment on the vertical axis, a measurement of intent to adopt, against mindshare on the x-axis, which measures the awareness of the vendor in the community. This is specifically a survey that ETR fields only to track emerging tech companies that are private. Now, some of the standouts in mindshare are OneTrust, BeyondTrust, Tanium in endpoint, Netskope, which we've talked about in previous Breaking Analysis episodes, and 1Password, which has been acquisitive on its own in identity. There's the managed security service provider Arctic Wolf Networks, a company we've also covered; we've had their CEO on, and we've talked about MSSPs as a real trend, particularly in small and medium sized businesses, and we'll come back to that. Snyk is kind of a high flyer in both app security and containers. And you can just see the number of companies in the space is huge, and it just keeps growing. Now, just to make it a bit easier on the eyes, we filtered the data and isolated those companies with more than a hundred responses in the survey, and that's what we show here. Some of the names that we just mentioned are a bit easier to see, but these are the ones that really stand out in the ETR ETS survey of private companies: OneTrust, BeyondTrust, Tanium, Netskope, which is in cloud, 1Password, Arctic Wolf, Snyk, BitSight, SecurityScorecard, HackerOne, Code42, and Exabeam in SIEM. All of these hit the ETS survey with more than a hundred responses from the IT practitioners. Okay, so these firms, maybe they do some M&A on their own. We've seen that with Snyk, as I said, and 1Password has been acquisitive, as have others.
Now, these private companies with the larger footprints will likely be candidates both for buying other companies and, eventually, for going public when the markets settle down a bit. So again, no shortage of players to effect consolidation, both buyers and sellers. Okay, let's finish with some key questions that we're watching. CrowdStrike in particular, on its earnings call, cited softness from smaller buyers. Is that because these smaller buyers have stopped adopting? If so, are they more at risk, or are they tactically moving toward the easy button, aka Microsoft's good-enough approach? What does it mean for the market if smaller company cohorts continue to soften? How about MSSPs? Will companies continue to outsource, or will they pause on that as well to try to free up some budget? Adam Selipsky at re:Invent last week said, "If you want to save money, the cloud's the best place to do it." Is the cloud the best place to save money in cyber? It would seem that way from the standpoint of controlling budgets, with lots of optionality; you can dial services up and down. Or does the cloud add another layer of complexity that has to be understood and managed by devs, for example? Now, consolidation should favor the likes of Palo Alto and CrowdStrike, because they're platform players, and some of the larger players as well, like Cisco, IBM, and of course Microsoft. Will that happen? And how will economic uncertainty impact the risk equation? A particular concern is the increase of attacks on vulnerable sectors of the population, like the elderly. How will companies and governments protect them from scams? And finally, how many cybersecurity companies can actually remain independent in the slingshot economy? In so many ways the market is still strong; it's just that expectations got ahead of themselves, and now, as earnings forecasts come down to earth, it's going to come down to who can execute, generate cash, and keep enough runway to get through the knothole. And the one certainty is that nobody really knows how tight that knothole really is. All right, let's call it a wrap. Next week we dive deeper into Palo Alto Networks and take a look at how and why that company has held up so well, and what to expect at Ignite, Palo Alto's big user conference coming up later this month in Las Vegas. We'll be there with theCUBE. Okay, many thanks to Alex Myerson, who is on production and manages the podcast, and to Ken Schiffman as well, the newest addition to our Boston studio. Great to have you, Ken. Kristin Martin and Cheryl Knight help get the word out on social media and in our newsletters, and Rob Hof is our editor-in-chief over at SiliconANGLE. He does some great editing for us. Thank you all. Remember, these episodes are all available as podcasts; wherever you listen, just search "Breaking Analysis podcast." I publish each week on wikibon.com and siliconangle.com, or you can email me directly at david.vellante@siliconangle.com, DM me @DVellante, or comment on our LinkedIn posts. Please do check out etr.ai; they have the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching, and we'll see you next time on Breaking Analysis. (upbeat music)

Published Date : Dec 5 2022


Dev Ittycheria, MongoDB | AWS re:Invent 2022


 

>> Hello and welcome back to theCUBE's live coverage. Day three of theCUBE's coverage, two sets, wall-to-wall coverage, and a third set upstairs in the Executive Briefing Center. I'm John Furrier, host of theCUBE, with Dave Vellante, and we have two other hosts here. Lots of action, Dave. Dev Ittycheria is here, the CEO of MongoDB, who wrote an exclusive post on SiliconANGLE prior to the event. Thanks for doing that. Great to see you. >> Likewise. Nice to see you. >> Good to see you, Dev. So it's great to catch up. Prior to the event you did that exclusive story on the ecosystem, and your perspective resonated with a lot of people; the traffic on that post and the comments have been off the charts. I think we're seeing an ecosystem surge, not a changeover, but a new ISV and platform dynamic. So I really appreciate your perspective as a platform ISV for AWS. What's it like? What's this event like? What are your learnings? What's your takeaway from your customers here this year? What's the most important story going on? >> First of all, I think being here is important for us because we have so many customers and partners here. In fact, if you look at the customers that Amazon themselves announced, about two thirds of those customers are MongoDB customers, so we have a huge overlap in customers here. So just connecting with customers and partners has been important. Obviously a lot of them are thinking about their plans going into next year, so we're meeting with them to think about what their priorities are and how we can help. And we're also sharing a little bit of our product roadmap in terms of where we're going, and helping them think through how they can best use MongoDB as they think about their data strategy going into next year. So it's been a very productive event. We have a lot of people here, a lot of salespeople, a lot of product people, and there are tons of customers here, so we can get a lot accomplished in a few days. >> Dave and I always talk on theCUBE; well, Dave always goes to the TAM expansion question, expanding your total addressable market. The market is changing, and you guys are in a great position and growing. How do you look at the total addressable market for Mongo changing? Where's the growth going to come from? How do you see your role in the market, and how does that impact your current business model? >> Yeah, our whole goal is to really enable developers to think about MongoDB first when they're building modern applications. So what we've done is first build a first-class transactional platform, and now we've expanded the platform to do things like search and analytics, right? So we're really offering a broad set of capabilities. Our primary focus is the developer, helping developers build these amazing applications and giving them tools to do so in a very quick way. So if you think about customers like Intuit, customers like Canva, customers like Verizon and AT&T, who are using us to really transform their business, it's either to build new applications quickly or to do things at a level of performance and scale they've never done before. So we're really enabling them to do so much more in building these next-generation applications than they could build anywhere else. >> So I was listening to Bill McDermott this morning. >> Yeah. >> And you listen to Bill, you just want to buy from the guy, right? He's amazing.
But he was basically saying, look, companies like he was talking about ServiceNow that could help organizations digitally transform, et cetera, but make money or save money or in a good position. And I said, right, Mongo's definitely one of those companies. What are those conversations like here? I know you've been meeting with customers, it's a different environment right now. There's a lot of uncertainty. I, I was talking to one of your customers said, yeah, I'm up for renewal. I love Mongo. I'm gonna see if they can stage my payments a little bit. You know, things like that. Are those conversations? Yeah, you know, similar to what >>You having, we clearly customers are getting a little bit more prudent, but we haven't seen any kind of like slow down terms of deal cycles or, or elongated sales cycles. I mean, obviously different customers in different sectors are going through different issues. What we are seeing customers think about is like how can I, you know, either drive more efficiency in my business like and big part of that is modernization of my existing legacy tech stack. How can maybe consolidate to a fewer set of vendors? I think they like our broad platform story. You know, rather than using three or four different databases, they can use MongoDB to do everything. So that that resonates with customers and the fact that they can move fast, right? Developer productivity is a proxy for innovation. And so being able to move fast to either seize new opportunities or respond to new threats is really, you know, top of mind for still C level executive. >>So can your software, you're right, consolidation is the number one way in which people are save money. Can your software be deflationary? I mean, I mean that in a good way. So >>I was just meeting with a customer who was thinking about Mongo for their transactional platform, elastic for the search platform and like a graph database for a special use case. And, and we said you can do all that on MongoDB. And he is like, oh my goodness, I can consolidate everything. Have one elegant developer interface. I can keep all the data in one place. I can easily access that data. And that makes so much more sense than having to basically use a bunch of peace parts. And so that's, that's what we're seeing more and more interest from customers about. >>So one of the things I want to get your reaction to is, I was saying on the cube, now you can disagree with me if you want, but at, in the cloud native world at Cuban and Kubernetes was going through its hype cycle. The conversation went to it's getting boring. And that's good cause they want it to be boring. They don't want people to talk about the run time. They want it to be working. Working is boring. That's invisible. It's good, it's sticky, it's done. As you guys have such a great sticky business model, you got a great install base. Mongo works, people are happy, they like the product. So it's kind of working, I won't wanna say boring cuz that's, it's irrelevant. What's the exciting things that Mongo's bringing on top of the existing base of product that is gonna really get your clients and prospects enthused about the innovation from Mongo? What's what cuz it's, it's almost like electricity in a way. You guys are very utility in, in the way you do, but it's growing. But is there an exciting element coming that you see that they should pay attention to? 
What's, what's your >>Vision that, right, so if you look back over the last 10, 15 years, there's been big two big platform shifts, mobile and cloud. I think the next big platform shift is from what I call dumb apps to smart apps. So building more intelligence into applications. And what that means is automating human decision making and embedding that into applications. So we believe that to be a fundamentally a developer problem to solve, yes, you need data scientist to build the machine learning algorithms to train the models. Yeah. But ultimately you can't really deploy, deployed at scale unless you give developers the tools to build those smart applications that what we focused on. And a big part of that is what we call application driven analytics where people or can, can embed that intelligence into applications so that they can instead rather having humans involved, they can make decisions faster, drive to businesses more quickly, you know, shorten it's short and time to market, et cetera. >>And so your strategy to implement those smart apps is to keep targeting the developer Yes. And build on that >>Base. Correct. Exactly. So we wanna essentially democratize the ability for any customer to use our tools to build a smart applications where they don't have the resources of a Google or you know, a large tech company. And that's essentially resonating with our customer base. >>We, we were talking about this earlier after Swami's keynote, is most companies struggle to put data at the core of their business. And I don't mean centralizing it all in a single place as data's everywhere, but, but really organizing their company and democratizing data so people can make data decisions. So I think what you're saying, essentially Atlas is the platform that you're gonna inject intelligence into and allow developers to then build applications that are, you know, intelligent, smart with ai, machine intelligence, et cetera. And that's how the ones that don't have the resources of a Google or an Amazon become correct the, that kind of AI company if >>You, and that's, that's the whole purpose of a developer data platform is to enable them to have the tools, you know, to have very sophisticated analytics, to have the ability to do very sophisticated indexes, optimized for analytics, the ability to use data lakes for very efficient storage and retrieval of data to leverage, you know, edge devices to be able to capture and synchronize data. These are all critical elements to build these next generation applications. And you have to do that, but you don't want to stitch together a thousand primitives. You want to have a platform to do that. And that's where we really focus. >>You know, Dave, Dave and I, three, two days, Dave and I, Dave Ante and I have been talking a lot about developer productivity. And one observation that's now validated is that developers are setting the pace for innovation. Correct? And if you look at the how they, the language that they speak, it's not the same language as security departments, right? They speak almost like different languages, developer and security, and then you got data language. But the developers are making choices of self-service. They can accelerate, they're driving the behavior behavior into the organizations. And this is one of the things I wrote about on Friday last week was the organizational changes are changing cuz the developers set the pace. You can't force tooling down their throat. They're gonna go with what's easy, what's workable. 
If you believe that to be true, then all the security is going to be in the developer pipeline. All the innovation we've driven off that high-velocity developer side, we're seeing the success of security being embedded there with the developers. What are you going to bring up to that developer layer that's going to help with security, and maybe even with new things? >> Right. So, you know, it's almost a cliche now to say that software is eating the world, right? Because every company's value prop is either enabled, defined, or created through software. What that really means is that developers are eating all the work, right? You saw it in DevOps, where developers basically encroached into the ops world and made infrastructure a programmable interface. You see developers, to your point, encroaching into security, embedding more and more security features into their applications. We believe the same thing is going to happen with data scientists and business analysts, where developers are going to embed the functionality that was done by different domains in the analyst world into the apps themselves. So these applications are just naturally smarter. You don't need someone to look at a dashboard and say, aha, there's some insight here, now I need to go make a decision. The application will do that for you and actually make that decision for you, so you can move that much more quickly to run your business more efficiently or to drive more revenue. >> Well, the interesting thing about your business is that you've got a lot of transactional activity going on, and the way I would put what you just described is that the data stack and the application stack are coming together, right? And you're in a really good position, I think, to really affect that. Think about it: we've operationalized so many systems, but we really haven't operationalized our data systems. And particularly as you guys get more into analytics, it becomes an interesting roadmap for Mongo and your customers. How do you see that? >> Yeah, so I want to be clear, we're not trying to be a data warehouse, and we're not trying to go compete there. In fact, we have a nice partnership with Databricks and so forth. What we are really trying to do is enable developers to instrument and build applications that embed analytics. A good analogy I'd use is Google Maps. Think about how sophisticated Google Maps has become, and I use that example because everyone has used Google Maps. I'm old enough that I used to print out the directions, MapQuest exactly, put them on my lap, and drive while looking down. Now I have this device that tells me if there's traffic, if there's an accident, if there's something going on, and it will reroute me automatically. What that app is doing is embedding real-time data into its decision making and making the decision for you, so that you don't have to think about which road to take. You're going to see that happen across almost every application over the next X number of years, where these applications become so much smarter and make these decisions for you, so you can just move so much more quickly. >> Yeah. Talk about the company, the status of the company, your growth plans. Obviously there's a lot of news out there: the Salesforce co-CEO just resigned, layoffs at CNN, layoffs at DoorDash. Tech, unfortunately, is not immune, though thank God it's not too bad, and certainly cloud is less impacted, but it is affecting some of the buying behavior; we talked about that. What's going on with the company, headcount, what are your goals, how's the team doing, what are your priorities? >> Right. So we're going after a big, big opportunity. We recognize, obviously, that the market's a little choppy right now, but long term we're very bullish on the opportunity. We believe that we can be the modern developer data platform to build these next-generation applications. In terms of costs, we're obviously being a little more judicious about where we're investing, but we see big opportunities for us, and so our overall cost base will grow next year. But we also recognize that there are ways to drive more efficiency. We're at scale now; we're a $1.2 billion business. We're going to announce our Q3 results next week, so we'll talk a little bit more about what we're seeing in the business then. But we think we're a business that's growing fast; we grew over 50%, so we're a pretty fast-growing business. >> Yeah, Tuesday, December 6th, you guys announce. >> Exactly. >> Of course, it's a big one, we always watch it. So what I'm hearing is you're not stepping on the brakes; you're still accelerating growth, but not at all costs. >> Correct. The term we're using is profitable growth. We want to drive the business in a way that we think continues to seize the opportunity, but we also always exercise discipline. I'm old enough that I had to deal with 2000 and 2008, so I've seen the movie before; I'm not 28 and seeing these markets for the first time. And obviously some emerging leaders have not seen these kinds of markets before, so we're helping them think about how to continue to be disciplined. >> I like that reference to the 2000 dot-com bubble and the financial crisis of 2008. I mentioned this to you when we chatted, and I'd love to get your thoughts. Looking back, Amazon wasn't a force in 2008; they weren't really that big yet. They hadn't hit that cruising altitude of the cloud value proposition: agility, time to value, moving fast. Now they are. So this is the first time that they're a real part of the economic equation, and you're in the middle of it with Amazon. They could be a catalyst to recover faster if planned properly. What's your CEO take on that in general? Other CEOs might be watching and saying, hey, if I play this right, I could leverage the cloud. Adam is leaning into the cloud during a recession, okay, I get that, but specifically there might be a tactic. What's your view on that? >> I mean, what we're seeing the hyperscalers do is continue to compete at the raw infrastructure level, on storage, on compute, on network performance, on security, to provide the building blocks for companies like MongoDB to really build on. So we're leveraging that price-performance curve that they're pushing. They obviously talk about Graviton3, they're talking about their training chips and their inference chips and their security chips, which is great for us because we can leverage those capabilities and build upon them. And if you had asked me in 2008 whether we would be talking about chipsets in 2022, I'd probably say, oh, we're way beyond that. But what it really speaks to is that those things are still so profoundly important, and I think that's where you can see Amazon and Google and Microsoft competing to provide the best underlying infrastructure that companies like MongoDB can build upon, and we can help customers leverage that to really build the next generation. >> I'm not saying it's 2008 all over again, but we have data from 2008, and that was the first major tailwind for the cloud, when the CFO said we're going from CapEx to opex. We saw that. Now it's a lot different; it's a lot more mature. >> I think there's a fine-tuning trend going on, where people are right-sizing, fine-tuning, whatever you want to call it. But a craft is emerging, a tradecraft of cloud management, cloud optimization, managing the cost structures, tuning. It seems like we're in that era. >> I call it cost optimization. People are saying, I know I'm going to invest, but I want to be rational and more thoughtful about where I invest, why, and with whom I invest, versus everyone just getting a 30% increase in their opex budgets every year. I don't think that's going to happen, and that's where we feel there's going to be an opportunity for us. We've kind of hit escape velocity. We've got the developer mindshare, we have 37,000 customers of all shapes and sizes across the world, and that customer count is only growing. So we feel like we're the place where people are going to say, I want to standardize on MongoDB. >> Yeah. And Selipsky had a great quote in his keynote: he said if you want to save money, the place to do it is in the cloud. >> You tighten the belt; which belt are you tightening? The marketplace belt, the wire belt. We had a whole session on that, the tighten-your-belt thing. Dev Ittycheria, CEO of a billion-dollar-plus company, MongoDB, continuing to grow and innovate. Thanks for coming on theCUBE and thanks for participating in our stories. >> Thanks for having me. Great to be here. >> Okay, with Dave Vellante, live on the show floor. We'll be right back with our final interview of the day after this short break. Day three is coming to a close. Stay with us. We'll be right back.

Published Date : Dec 1 2022


Manu Parbhakar, AWS & Joel Jackson, Red Hat | AWS re:Invent 2022


 

>>Hello, brilliant humans and welcome back to Las Vegas, Nevada, where we are live from the AWS Reinvent Show floor here with the cube. My name is Savannah Peterson, joined with Dave Valante, and we have a very exciting conversation with you. Two, two companies you may have heard of. We've got AWS and Red Hat in the house. Manu and Joel, thank you so much for being here. Love this little fist bump. Started off, that's right. Before we even got rolling, Manu, you said that you wanted this to be the best segment of, of the cubes airing. We we're doing over a hundred segments, so you're gonna have to bring the heat. >>We're ready. We're did go. Are we ready? Yeah, go. We're ready. Let's bring it on. >>We're ready. All right. I'm, I'm ready. Dave's ready. Let's do it. How's the show going for you guys real quick before we dig in? >>Yeah, I think after Covid, it's really nice to see that we're back into the 2019 level and, you know, people just want to get out, meet people, have that human touch with each other, and I think a lot of trust gets built as a functional that, so it's super amazing to see our partners and customers here at Reedman. Yeah, >>And you've got a few in the house. That's true. Just a few maybe, maybe a couple >>Very few shows can say that, by the way. Yeah, it's maybe a handful. >>I think one of the things we were saying, it's almost like the entire Silicon Valley descended in the expo hall area, so >>Yeah, it's >>For a few different reasons. There's a few different silicon defined. Yeah, yeah, yeah. Don't have strong on for you. So far >>It's, it's, it is amazing. It's the 10th year, right? It's decade, I think I've been to five and it's, it grows every single year. It's the, you have to be here. It's as simple as that. And customers from every single industry are here too. You don't get, a lot of shows have every single industry and almost every single location around the globe. So it's, it's a must, must be >>Here. Well, and the personas evolved, right? I was at reinvent number two. That was my first, and it was all developers, not all, but a lot of developers. And today it's a business mix, really is >>Totally, is a business mix. And I just, I've talked about it a little bit down the show, but the diversity on the show floor, it's the first time I've had to wait in line for the ladies' room at a tech conference. Almost a two decade career. It is, yeah. And it was really refreshing. I'm so impressed. So clearly there's a commitment to community, but also a commitment to diversity. Yeah. And, and it's brilliant to see on the show floor. This is a partnership that is robust and has been around for a little while. Money. Why don't you tell us a little bit about the partnership here? >>Yes. So Red Hand and AWS are best friends, you know, forever together. >>Aw, no wonder we got the fist bumps and all the good vibes coming out. I know, it's great. I love that >>We have a decade of working together. I think the relationship in the first phase was around running rail bundled with E two. Sure. We have about 70,000 customers that are running rail, which are running mission critical workloads such as sap, Oracle databases, bespoke applications across the state of verticals. Now, as more and more enterprise customers are finally, you know, endorsing and adopting public cloud, I think that business is just gonna continue to grow. So a, a lot of progress there. 
The second titration has been around, you know, developers tearing Red Hat and aws, Hey, listen, we wanna, it's getting competitive. We wanna deliver new features faster, quicker, we want scale and we want resilience. So just entire push towards devs containers. So that's the second chapter with, you know, red Hat OpenShift on aws, which launched as a, a joint manage service in 2021 last year. And I think the third phase, which you're super excited about, is just bringing the ease of consumption, one click deployment, and then having our customers, you know, benefit from the joint committed spend programs together. So, you know, making sure that re and Ansible and JBoss, the entire portfolio of Red Hat products are available on AWS marketplace. So that's the 1, 2, 3, it of our relationship. It's a decade of working together and, you know, best friends are super committed to making sure our customers and partners continue successful. >>Yeah, that he said it, he said it perfectly. 2008, I know you don't like that, but we started with Rel on demand just in 2008 before E two even had a console. So the partnership has been there, like Manu says, for a long time, we got the partnership, we got the products up there now, and we just gotta finalize that, go to market and get that gas on the fire. >>Yeah. So Graviton Outpost, local zones, you lead it into all the new stuff. So that portends, I mean, 2008, we're talking two years after the launch of s3. >>That's right. >>Right. So, and now look, so is this a harbinger of things to come with these new innovations? >>Yeah, I, I would say, you know, the innovation is a key tenant of our partnership, our relationship. So if you look at from a product standpoint, red Hat or Rel was one of the first platforms that made a support for graviton, which is basically 40% better price performance than any other distribution. Then that translated into making sure that Rel is available on all of our regions globally. So this year we launched Switzerland, Spain, India, and Red Hat was available on launch there, support for Nitro support for Outpost Rosa support on Outpost as well. So I think that relationship, that innovation on the product side, that's pretty visible. I think that innovation again then translates into what we are doing on marketplace with one click deployments we spoke about. I think the third aspect of the know innovation is around making sure that we are making our partners and our customers successful. So one of the things that we've done so far is Joe leads a, you know, a black belt team that really goes into each customer opportunity, making sure how can we help you be successful. We launched and you know, we should be able to share that on a link. After this, we launched like a big playlist, which talks about every single use case on how do you get successful and running OpenShift on aws. So that innovation on behalf of our customers partners to make them successful, that's been a key tenant for us together as >>Well. That's right. And that team that Manu is talking about, we're gonna, gonna 10 x that team this year going into January. Our fiscal yield starts in January. Love that. So yeah, we're gonna have a lot of no hiring freeze over here. Nope. No ma'am. No. Yeah, that's right. Yeah. And you know what I love about working with aws and, and, and Manu just said it very, all of that's customer driven. Every single event that we, that he just talked about in that timeline, it's customer driven, right? 
Customers wanted rail on demand, customers want JBoss up in the cloud, Ansible this week, you know, everything's up there now. So it's just getting that go to market tight and we're gonna, we're gonna get that done. >>So what's the algorithm for customer driven in terms of taking the input? Because if every customers saying, Hey, I this a >>Really similar >>Question right up, right? I, that's what I want. And if you know, 95% of the customers say it, Jay, maybe that's a good idea. >>Yeah, that's right. Trends. But >>Yeah. You know, 30% you might be like, mm, you know, 20%, you know, how do you guys decide when to put gas on the fire? >>No, that, I think, as I mentioned, there are about 70,000 large customers that are running rail on Easy Two, many of these customers are informing our product strategy. So we have, you know, close to about couple of thousand power users. We have customer advisory booths, and these are the, you know, customers are informing us, Hey, let's get all of the Red Hat portfolio and marketplace support for graviton, support for Outpost. Why don't we, why are we not able to dip into the consumption committed spend programs for both Red Hat and aws? That's right. So it's these power users both at the developer level as well as the guys who are actually doing large commercial consumption. They are the ones who are informing the roadmap for both Red Hat and aws. >>But do, do you codify the the feedback? >>Yeah, I'm like, I wanna see the database, >>The, I think it was, I don't know, it was maybe Chasy, maybe it was Besos, that that data beats intuition. So do you take that information and somehow, I mean, it's global, 70,000 customers, right? And they have different weights, different spending patterns, different levels of maturity. Yeah. Do you, how do you codify that and then ultimately make the decision? Yeah, I >>If, I mean, well you, you've got the strategic advisory boards, which are made up of customers and partners and you know, you get, you get a good, you gotta get a good slice of your customer base to get, and you gotta take their feedback and you gotta do something with it, right? That's the, that's the way we do it and codify it at the product level, I'm sure open source. That's, that's basically how we work at the product level, right? The most elegant solution in open source wins. And that's, that's pretty much how we do that at the, >>I would just add, I think it's also just the implicit trust that the two companies had built with each other, working in the trenches, making our customers and partners successful over the last decade. And Alex, give an example. So that manifests itself in context of like, you know, Amazon and Red Hat just published the entire roadmap for OpenShift. What are the new features that are becoming over the next six to nine to 12 months? It's open source available on GitHub. Customers can see, and then they can basically come back and give feedback like, Hey, you know, we want hip compliance. We just launched. That was a big request that was coming from our >>Customers. That is not any process >>Also for Graviton or Nvidia instances. So I I I think it's a, >>Here's the thing, the reason I'm pounding on this is because you guys have a pretty high hit rate, and I think as a >>Customer, mildly successful company >>As, as a customer advocate, the better, you know, if, if you guys make bets that pay off, it's gonna pay off for customers. Right. And because there's a lot of failures in it. Yeah. I mean, let's face it. That's >>Right. 
And I think, I think you said the key word bets. You place a lot of small bets. Do you have the, the innovation engine to do that? AWS is the perfect place to place those small bets. And then you, you know, pour gas on the fire when, when they take off. >>Yeah, it's a good point. I mean, it's not expensive to experiment. Yeah. >>Especially in the managed service world. Right? >>And I know you love taking things to market and you're a go to market guy. Let's talk gtm, what's got your snow pumped about GTM for 2023? >>We, we are gonna, you know, 10 x the teams that's gonna be focused on these products, right? So we're gonna also come out with a hybrid committed spend program for our customers that meet them where they want to go. So they're coming outta the data center going into a cloud. We're gonna have a nice financial model for them to do that. And that's gonna take a lot of the friction out. >>Yeah. I mean, you've nailed it. I, I think the, the fact that now entire Red Hat portfolio is available on marketplace, you can do it on one click deployment. It's deeply integrated with Amazon services and the most important part that Joel was making now customers can double dip. They can drive benefit from the consumption committed spend programs, both from Red Hat and from aws, which is amazing. Which is a game changer That's right. For many of our large >>Customers. That's right. And that, so we're gonna, we're gonna really go to town on that next year. That's, and all the, all the resources that I have, which are the technology sellers and the sas, you know, the engineers we're growing this team the most out that team. So it's, >>When you say 10 x, how many are you at now? I'm >>Curious to see where you're headed. Tell you, okay. There's not right? Oh no, there's not one. It's triple digit. Yeah, yeah. >>Today. Oh, sweet. Awesome. >>So, and it's a very sizable team. They're actually making sure that each of our customers are successful and then really making sure that, you know, no customer left behind policy. >>And it's a great point that customers love when Amazonians and Red Hats show up, they love it and it's, they want to get more of it, and we're gonna, we're gonna give it to 'em. >>Must feel great to be loved like that. >>Yeah, that's right. Yeah. Yeah. I would say yes. >>Seems like it's safe to say that there's another decade of partnership between your two companies. >>Hope so. That's right. That's the plan. >>Yeah. And I would say also, you know, just the IBM coming into the mix here. Yeah. I, you know, red Hat has informed the way we have turned around our partnership with ibm, essentially we, we signed the strategic collaboration agreement with the company. All of IBM software now runs on Rosa. So that is now also providing a lot of tailwinds both to our rail customers and as well as Rosa customers. And I think it's a very net creative, very positive for our partnership. >>That's right. It's been very positive. Yep. Yeah. >>You see the >>Billboards positive. Yeah, right. Also that, that's great. Great point, Dave. Yep. We have a, we have a new challenge, a new tradition on the cube here at Reinvent where we're, well, it's actually kind of a glamor moment for you, depending on how you leverage it. We're looking for your 32nd hot take your Instagram reel, your sizzle thought leadership, biggest takeaway, most important theme from this year's show. I know you want, right, Joel? I mean, you TM boy, I feel like you can spit the time. >>Yeah. It is all about Rosa for us. 
It is all in on that, that's the native OpenShift offering on aws and that's, that's the soundbite we're going go to town with. Now, I don't wanna forget all the other products that are in there, but Rosa is a, is a very key push for us this year. >>Fantastic. All right. Manu. >>I think our customers, it's getting super competitive. Our customers want to innovate just a >>Little bit. >>The enterprise customers see the cloud native companies. I wanna do what these guys are doing. I wanna develop features at a fast clip. I wanna scale, I wanna be resilient. And I think that's really the spirit that's coming out. So to Joel's point, you know, move to worlds containers, serverless, DevOps, which was like, you know, aha, something that's happening on the side of an enterprise is not becoming mainstream. The business is demanding it. The, it is becoming the centerpiece in the business strategy. So that's been really like the aha. Big thing that's happening here. >>Yeah. And those architectures are coming together, aren't they? That's correct. Right. You know, VMs and containers, it used to be one architecture and then at the other end of the spectrum is serverless. People thought of those as different things and now it's a single architecture and, and it's kind of right approach for the right job. >>And, and a compliments say to Red Hat, they do an incredible job of hiding that complexity. Yeah. Yes. And making sure that, you know, for example, just like, make it easier for the developers to create value and then, and you know, >>Yeah, that's right. Those, they were previously siloed architectures and >>That's right. OpenShift wanna be place where you wanna run containers or virtual machines. We want that to be this Yeah. Single place. Not, not go bolt on another piece of architecture to just do one or the other. Yeah. >>And hey, the hybrid cloud vision is working for ibm. No question. You know, and it's achievable. Yeah. I mean, I just, I've said unlike, you know, some of the previous, you know, visions on fixing the world with ai, hybrid cloud is actually a real problem that you're attacking and it's showing the results. Agreed. Oh yeah. >>Great. Alright. Last question for you guys. Cause it might be kind of fun, 10 years from now, oh, we're at another, we're sitting here, we all look the same. Time has passed, but we are not aging, which is a part of the new technology that's come out in skincare. That's my, I'm just throwing that out there. Why not? What do you guys hope that you can say about the partnership and, and your continued commitment to community? >>Oh, that's a good question. You go first this time. Yeah. >>I think, you know, the, you know, for looking into the future, you need to look into the past. And Amazon has always been driven by working back from our customers. That's like our key tenant, principle number 1 0 1. >>Couple people have said that on this stage this week. Yeah. >>Yeah. And I think our partnership, I hope over the next decade continues to keep that tenant as a centerpiece. And then whatever comes out of that, I think we, we are gonna be, you know, working through that. >>Yeah. I, I would say this, I think you said that, well, the customer innovation is gonna lead us to wherever that is. And it's, it's, it's gonna be in the cloud for sure. I think we can say that in 10 years. 
But yeah, anything from, from AI to the quant quantum computing that IBM's really pushing behind that, you know, those are, those are gonna be things that hopefully we show up on a, on a partnership with Manu in 10 years, maybe sooner. >>Well, whatever happens next, we'll certainly be covering it here on the cube. That's right. Thank you both for being here. Joel Manu, fantastic interview. Thanks to see you guys. Yeah, good to see you brought the energy. I think you're definitely ranking high on the top interviews. We >>Love that for >>The day. >>Thank >>My pleasure >>Job, guys. Now that you're competitive at all, and thank you all for tuning in to our live coverage here from AWS Reinvent in Las Vegas, Nevada, with Dave Valante. I'm Savannah Peterson. You're watching The Cube, the leading source for high tech coverage.

Published Date : Nov 30 2022


Breaking Analysis: re:Invent 2022 marks the next chapter in data & cloud


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> The ascendancy of AWS under the leadership of Andy Jassy was marked by a tsunami of data and corresponding cloud services to leverage that data. Those services mainly came in the form of primitives, i.e., basic building blocks that were used by developers to create more sophisticated capabilities. AWS in the 2020s, led by CEO Adam Selipsky, will be marked by four high-level trends, in our opinion: one, a rush of data that will dwarf anything we've previously seen; two, a doubling or even tripling down on the basic elements of cloud, compute, storage, database, security, etc.; three, a greater emphasis on end-to-end integration of AWS services to simplify and accelerate customer adoption of cloud; and four, significantly deeper business integration of cloud beyond IT, as an underlying element of organizational operations. Hello and welcome to this week's wikibon theCUBE Insights, powered by ETR. In this Breaking Analysis we extract and analyze nuggets from John Furrier's annual sit-down with the CEO of AWS. We'll share data from ETR and other sources to set the context for the market and competition in cloud, and we'll give you our glimpse of what to expect at re:Invent in 2022. Now, before we get into the core of our analysis, Alibaba has announced earnings; they always announce after the big three, about a month later, and we've updated our Q3/November hyperscale computing forecast for the year, as seen here. We're not going to spend a lot of time on this, as most of you have seen the bulk of it already, but suffice to say Alibaba's cloud business is hitting the same macro trend that we're seeing across the board, but with a more substantial slowdown than we expected and more substantial than its peers. They're facing China headwinds, they've been restructuring their cloud business, and it's led to significantly slower growth, in the low double digits as opposed to where we had it at 15%. This puts our year-end estimate for 2022 revenue at $161 billion, still a healthy 34% growth, with AWS surpassing $80 billion in 2022 revenue. Now, on a related note, one of the big themes in cloud that we've been reporting on is how customers are optimizing their cloud spend. It's a technique they use when the economy looks a little shaky, and here's a graphic that we pulled from AWS's website which shows the various pricing plans at a high level. As you know, they're much more granular and more sophisticated than that, but for simplicity we'll keep it at this level. Basically there are four levels. The first one here is on demand, i.e., pay by the drink. Now we're going to jump down to what we've labeled as number two, spot instances; that's the right place at the right time, where I can use that extra capacity in the moment. The third is reserved instances, or RIs, where I pay up front to get a discount, and the fourth is optimized savings plans, where customers commit to a one- or three-year term for a better price. Now, you'll notice we labeled the choices in a different order than AWS presented them on its website, and that's because we believe the order we chose is the natural progression for customers. They start on demand, they maybe experiment with spot instances, they move to reserved instances when the cloud bill becomes too onerous, and if they're large enough they lock in for one or three years. Okay, the interesting thing is the order in which AWS presents them.
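As an aside, here is a minimal sketch of the kind of back-of-the-envelope math behind that progression, comparing the four purchase options. The on-demand rate and every discount percentage below are assumed placeholders for illustration only; real AWS discounts vary by instance family, region, term length, and payment option, so check current pricing before drawing conclusions.

```python
# Minimal sketch comparing AWS compute purchase options under ASSUMED numbers.
# Nothing here is an actual AWS price or discount.

ON_DEMAND_RATE = 0.10   # assumed $/hour for a hypothetical instance type
HOURS_PER_MONTH = 730   # roughly the hours in a month of steady usage

# Hypothetical discounts off the on-demand rate for each option
assumed_discounts = {
    "on_demand": 0.00,         # pay by the drink, no commitment
    "spot": 0.70,              # spare capacity, can be interrupted
    "reserved_1yr": 0.40,      # up-front/term commitment for one year
    "savings_plan_3yr": 0.55,  # committed spend over three years
}

def monthly_cost(option: str) -> float:
    """Monthly cost at steady usage under the assumed discount for an option."""
    return ON_DEMAND_RATE * (1 - assumed_discounts[option]) * HOURS_PER_MONTH

for option in assumed_discounts:
    print(f"{option:17s} ~${monthly_cost(option):7.2f}/month")
```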
We believe that on demand accounts for the majority of AWS customer spending. Now, if you think about it, those on-demand customers are also at-risk customers. Sure, there are some switching costs, like egress and the learning curve, but many customers have multiple clouds and they've got experience, so they're already up the learning curve, and if you're not married to AWS with a longer-term commitment, there's less friction to switch. Now, AWS here presents the most attractive plan from a financial perspective second, right after on demand, and it's also the plan that involves the greatest commitment from a lock-in standpoint. In fairness to AWS, it's also true that there is a trend toward subscription-based pricing, and we have some data on that. This chart is from an ETR drill-down survey; the N is 300. Pay attention to the bars on the right. The left side is sort of busy, but the pink is subscription, and you can see the trend upward; the light blue is consumption-based, or on-demand-based, pricing, and you can see there's a steady trend toward subscription. Now, we'll dig into this in a later episode of Breaking Analysis, but we'll share some tidbits. With the data that ETR provides, you can select which segment, IaaS and PaaS, or you can go up the stack, etc. When you choose IaaS and PaaS, 44% of customers either prefer or are required to use on-demand pricing, whereas around 40% of customers say they either prefer or are required to use subscription pricing; again, that's for infrastructure. Now, the further you move up the stack, the more prominent subscription pricing becomes, often with 60% or more of the software-based offerings preferring or requiring subscription. And interestingly, cybersecurity tracks along with software at around 60% preferring subscription, likely because, as with software, you're not shutting down your cyber protection on demand. All right, let's get into the expectations for re:Invent, and we're going to start with an observation on data. In his 2018 book Seeing Digital, author David Moschella made the point that whereas most companies apply data on the periphery of their business, kind of as an add-on function, successful data companies like Google, Amazon, and Facebook have placed data at the core of their operations. They've operationalized data and they apply machine intelligence to that foundational element. Why is this? The fact is, it's not easy to do what the internet giants have done; it takes very sophisticated engineering and cultural discipline. And this brings us to re:Invent 2022. In the future of cloud, machine learning and AI will increasingly be infused into applications. We believe the data stack and the application stack are coming together as organizations build data apps and data products. Data expertise is moving from the domain of highly specialized individuals to everyday business people, and we are just at the cusp of this trend. This will, in our view, be a massive theme not only of re:Invent 22 but of cloud in the 2020s. We believe the vision of data mesh, Zhamak Dehghani's principles, will be realized in this decade. Now, what we'd like to do is share with you a glimpse of the thinking of Adam Selipsky from his sit-down with John Furrier. Each year John has a one-on-one conversation with the CEO of AWS; he's been doing this for years, and the outcome is a better understanding of the directional thinking of the leader of the number one cloud platform. So we're now going to share some direct quotes, and I'm going to run through them with some
Here we go. This is from Selipsky, quote: "IT in general, and data, are moving from departments into becoming intrinsic parts of how businesses function." Okay, we're talking here about deeper business integration. Let's go on to the next one, quote: "In time we'll stop talking about people who have the word analyst" — we inserted "data," he meant data analyst — "in their title; rather, we'll have hundreds of millions of people who analyze data as part of their day-to-day job, most of whom will not have the word analyst anywhere in their title. We're talking about graphic designers and pizza shop owners and product managers, and data scientists as well." He threw that last one in; I'm going to come back to that — very interesting. So he's talking here about democratizing and operationalizing data. Next quote: "Customers need to be able to take an end-to-end, integrated view of their entire data journey, from ingestion to storage to harmonizing the data, to being able to query it, doing business intelligence and human-based analysis, and being able to collaborate and share data. And we've been putting together" — we being Amazon — "a broad suite of tools, from database to analytics to business intelligence, to help customers with that." And that last statement is true: Amazon has a lot of tools, and they're beginning to become more and more integrated, but under Jassy there was not a lot of emphasis on that end-to-end integrated view. We believe it's clear from these statements that Selipsky's customer interactions are leading him to underscore that the time has come for this capability. Okay, continuing, quote: "If you have data in one place, you shouldn't have to move it every time you want to analyze that data." Couldn't agree more. "It would be much better if you could leave that data in place, avoid all the ETL" — which has become a nasty three-letter word — "and more and more we're building capabilities where you can query that data in place," end quote. Okay, we see a lot of this in the marketplace: Oracle with MySQL HeatWave, the entire trend toward converged databases, Snowflake and others extending their platforms into transactions and analytics, and so forth; a lot of the partners are doing things in that vein as well. Next quote: "The other phenomenon is infusing machine learning into all those capabilities." Yes, the comments from the Moschella graphic come into play here: infusing AI and machine intelligence everywhere. Next one, quote: "It's not a data cloud, it's not a separate cloud, it's a series of broad but integrated capabilities to help you manage the end-to-end lifecycle of your data." There you go: we, AWS, are the cloud. We're going to come back to that in a moment as well. The next set of comments around data is very interesting, quote: "Data governance is a huge issue. Really what customers need is to find the right balance in their organization between access to data and control. If you provide too much access, then you're nervous that your data is going to end up in places it shouldn't, viewed by people who shouldn't be viewing it, and you feel like you lack security around that data. And what happens then is people overreact and lock it down so that almost nobody can see it." It's the handcuffs problem — data as an asset and data as a liability; we've talked about that for years. Okay, very well put by Selipsky, but this is a gap, in our view, within AWS today, and we're hoping that they close it at re:Invent.
It's not easy to share data in a safe way within AWS today outside of your organization, so we're going to look for that at re:Invent 2022. Now, all this leads to the following statement by Selipsky, quote: "Data clean rooms are a really interesting area, and I think there are a lot of different industries in which clean rooms are applicable. I think clean rooms are an interesting way of enabling multiple parties to share and collaborate on the data while completely respecting each party's rights and their privacy mandates." Okay, again, this is a gap currently within AWS in our view, and we know Snowflake is well down this path, and Databricks with Delta Sharing is also on this curve, so AWS has to address this and demonstrate end-to-end data integration and the ability to safely share data, in our view. Now let's bring in some ETR spending data to put some context around these comments, with reference points in the form of AWS itself and its competitors and partners. Here's a chart from ETR that shows net score, or spending momentum, on the y-axis and overlap, or pervasiveness in the survey, on the x-axis — spending momentum by pervasiveness, or share, within the data set. The table that's inserted there, with the reds and the greens, informs us as to how the dots are positioned: it's net score, and the shared Ns determine how the plots land. Now, we've filtered the data on the three big data segments — analytics, database and machine learning/AI — and we've only selected one company with fewer than 100 Ns in the survey, and that's Databricks; you'll see why in a moment. The red dotted line indicates highly elevated customer spend at 40%. As usual, Snowflake outperforms all players on the y-axis with a net score of 63%, off the charts. All three big U.S. cloud players are above that line, with Microsoft and AWS dominating the x-axis — very impressive that they have such spending momentum at their size. You see a number of other emerging data players like Grafana and Datadog; MongoDB is there in the mix; and then more established data players like Splunk and Tableau. You've also got Cisco, which is adjacent to its core networking business but is definitely in the analytics business, and then the really established players in data like Informatica, IBM and Oracle, all with strong presence but, you'll notice, in the red from a momentum standpoint. Now, what you're going to see in a moment is that we put red highlights around Databricks, Snowflake and AWS. Why? Let's bring that back up — Alex, if you would — and we'll explain. There's no way AWS is going to hit the brakes on innovating at the base service level, what we called primitives earlier. Selipsky told Furrier as much in their sit-down: AWS will serve the technical user and data science community, the traditional domain of Databricks, and at the same time address the end-to-end integration, data sharing and business-line requirements that Snowflake is positioned to serve. Now, people often ask Snowflake and Databricks, "How will you compete with the likes of AWS?" and we know the answer: focus on data exclusively — and they have their multi-cloud plays. Perhaps the more interesting question is how AWS will compete with specialists like Snowflake and Databricks, and the answer is depicted here in this chart.
AWS is going to serve both the technical and developer communities and the data science audience, and through end-to-end integrations and future services that simplify the data journey, it's going to serve the business lines as well. But the nuance is in all the other dots — the hundreds, or hundreds of thousands, that are not shown here — and that's the AWS ecosystem. AWS has earned the status of the number one cloud platform that everyone wants to partner with; as they say, it has over a hundred thousand partners, and that ecosystem, combined with the capabilities we're discussing — even if perhaps behind in areas like data sharing and integrated governance — can wildly succeed by offering those capabilities and leveraging its ecosystem. Now, for their part, the Snowflakes of the world have to stay focused on the mission, build the best products possible, and develop their own ecosystems to compete and attract the mindshare of both developers and business users. That's why it's so interesting to hear Selipsky basically say it's not a separate cloud, it's a set of integrated services; Snowflake, in our view, is building a supercloud on top of AWS, Azure and Google. When great products meet great sales and marketing, good things can happen, so it will be really fun to watch what AWS announces in this area at re:Invent. All right, one other topic Selipsky talked about was the correlation between serverless and container adoption. I don't know if this gets into their hybrid play, or maybe it starts to get into their multi-cloud story — we'll see — but we have some data on this. Before we get into that, let's go back to 2017 and listen to what Andy Jassy said on theCUBE about serverless. Play the clip. >> In the very earliest days of AWS, Jeff used to say a lot, "If I were starting Amazon today, I'd have built it on top of AWS." We didn't have all the capability and all the functionality at that very moment, but he knew what was coming, and he saw what people were still able to accomplish even with where the services were at that point. I think the same thing is true here with Lambda, which is, I think if Amazon were starting today, it's a given they would build it on the cloud, and I think with a lot of the applications that comprise Amazon's consumer business, we would build those on our serverless capabilities. Now, we still have plenty of capabilities and features and functionality we need to add to Lambda and our various serverless services, so that may not be true from the get-go right now, but if you look at the hundreds of thousands of customers who are building on top of Lambda, and lots of real applications — FINRA has built a good chunk of their market watch application on top of Lambda, and Thomson Reuters has built one of their key analytics apps — people are building real, serious things on top of Lambda, and the pace of iteration you'll see there will increase as well, and I really believe that to be true over the next year or two. >> So years ago Jassy gave a roadmap that serverless was going to be a key developer platform going forward, and Selipsky referenced the correlation between serverless and containers in the Furrier sit-down, so we wanted to test that within the ETR data set. Here's a screen grab of the view across 1,300 respondents from the October ETR survey. What we've done here is isolate on the cloud computing segment; you can see it right there, the cloud computing segment.
Now, we've taken the functions from Google, AWS Lambda and Microsoft Azure Functions — all the serverless offerings — and we've got net score on the vertical axis; 40%, by the way, is highly elevated, remember that. On the horizontal axis we have presence in the data set, the overlap relative to each other. All of these players are above that 40% mark. Now, what we're going to do — this is just for serverless — is turn on containers to see the correlation and see what happens. So watch what happens when we click on containers: boom, everything moves to the right. You can see all three move to the right; Google drops a little bit, and the filtered N drops as well, so you don't have as many respondents aggressively leaning into both, but all three move to the right. Watch again: containers off, then containers on; containers off, containers on. So you can see a really major correlation between containers and serverless. To get a better understanding of what that means, I called my friend and former CUBE co-host Stu Miniman. What he said was that people generally used to think of VMs, containers and serverless as distinctly different architectures, but the lines are beginning to blur. Serverless makes things simpler for developers who don't want to worry about underlying infrastructure, and as Selipsky and the data from ETR indicate, serverless and containers are coming together. But as Stu and I discussed, there's a spectrum: on the left you have native cloud VMs, in the middle you've got AWS Fargate, and the rightmost anchor is AWS Lambda. Traditionally in the cloud, if you wanted to use containers, developers had to build a container image, select and deploy the EC2 instances they wanted to use, allocate a certain amount of memory, fence off the apps in a virtual machine, run the EC2 instances against the apps, and pay for all of those EC2 resources. With AWS Fargate you can run containerized apps with less infrastructure management, but you still have some control over the environment: you build the container images, allocate your memory and compute resources, then run the app and pay for the resources only when they're used. So Fargate lets you control the runtime environment while simplifying the infrastructure management — you don't have to worry about isolating the app or things like choosing server types and patching; AWS does all that for you. Then there's Lambda. With Lambda you don't have to worry about any of the underlying server infrastructure; you're just running code as functions, so developers spend their time on the application and the functions they're calling. The point is, there's a movement — and we saw it in the data — toward simplifying the development environment and allowing the cloud vendor, AWS in this case, to do more of the underlying management. Some folks will still want to turn knobs and dials, but increasingly we're going to see more higher-level service adoption. A bare-bones sketch of the Lambda end of that spectrum is shown below.
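As a minimal illustration of the "just running code as functions" model, here's a sketch of a Python Lambda-style handler. The event shape and the function's purpose are hypothetical; the point is that there's no server, image or capacity allocation anywhere in the developer's code — just a function invoked with an event.

```python
import json

# A hypothetical Lambda handler: no servers, images or capacity to manage.
# The platform invokes this function with an event payload and a runtime context.
def lambda_handler(event, context):
    # Assume the event carries a list of order amounts to total (illustrative).
    amounts = event.get("amounts", [])
    total = sum(amounts)

    return {
        "statusCode": 200,
        "body": json.dumps({"order_count": len(amounts), "total": total}),
    }

# Local smoke test; in the cloud, the platform supplies event and context.
if __name__ == "__main__":
    print(lambda_handler({"amounts": [19.99, 5.00, 12.50]}, None))
```

Contrast that with the Fargate and EC2 ends of the spectrum, where the same logic would also drag along container images, memory allocations and instance choices.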
Now, re:Invent is always a firehose of content, so let's do a rapid rundown of what to expect. We've talked about operationalizing data and the organization, and we've talked about cloud optimization; there will be a lot of talk on the show floor about best practices and customers sharing data. Selipsky is leading AWS into the next phase of growth, and that means moving beyond IT transformation into deeper business integration and organizational transformation — not just digital transformation, organizational transformation. So he's leading a multi-vector strategy: serving the traditional technical crowd who want fine-grained access to core services, so we'll see continued innovation in compute, storage, AI, etc., plus simplification through integration and horizontal apps further up the stack; Amazon Connect is an example that's often cited. Now, as we've reported many times, Databricks is moving from its stronghold realm of data science into business intelligence and analytics, while Snowflake is coming from its data analytics stronghold and moving into the world of data science. AWS is going down a path of Snowflake-meets-Databricks with an underlying cloud IaaS and PaaS layer, which puts these three companies on a very interesting trajectory, and you can expect AWS to go right after the data sharing opportunity; in doing so, it will have to address data governance — they go hand in hand. Okay, price/performance: that's a topic that will never go away, and it's something we haven't mentioned today — silicon. It's an area we've covered extensively on Breaking Analysis, from Nitro to Graviton to the AWS acquisition of Annapurna, its secret weapon, to new specialized capabilities like Inferentia and Trainium. We'd expect something more at re:Invent, maybe new Graviton instances. David Floyer, our colleague, said he's expecting at some point a complete system on a chip, an SoC, from AWS, and maybe an Arm-based server, eventually including high-speed CXL connections to devices and memories, all to address next-generation, data-intensive applications with low power requirements and lower cost overall. Of course, every year Swami gives his usual update on machine learning and AI, building on Amazon's years of SageMaker innovation — perhaps a focus on conversational AI, better support for vision, and maybe better integration across Amazon's portfolio of large language models, neural networks and generative AI, really infusing AI everywhere. Security, of course, is always high on the list at re:Invent, and Amazon even has re:Inforce, a conference dedicated to it. Here we'd like to see more on supply chain security and how AWS can help there, as well as tooling to make the CIO's life easier. The key so far is that AWS is much more partner-friendly in the security space than, say, Microsoft has traditionally been, so firms like Okta, CrowdStrike and Palo Alto have plenty of room to play in the AWS ecosystem. We'd expect, of course, to hear something about ESG — it's an important topic — and hopefully not only how AWS is helping the environment, which is important, but also how it helps customers save money and drive inclusion and diversity, again very important topics. And finally, coming back to it, re:Invent is an ecosystem event. It's the Super Bowl of tech events, and the ecosystem will be out in full force; every tech company on the planet will have a presence, and theCUBE will be featuring many of the partners from the show floor as well as AWS execs, and of course our own independent analysis. So you'll definitely want to tune in to thecube.net and check out our re:Invent coverage. We start Monday evening and then go wall-to-wall through Thursday — hopefully my voice will come back. We have three sets at the show and our entire team will be there.
So please reach out or stop by and say hello. All right, we're going to leave it there for today. Many thanks to Stu Miniman and David Floyer for their input to today's episode, and of course to John Furrier for extracting the signal from the noise in his sit-down with Adam Selipsky. Thanks to Alex Myerson, who is on production and manages the podcast, and to Ken Schiffman as well. Kristen Martin and Cheryl Knight helped get the word out on social and, of course, in our newsletters, and Rob Hof is our editor-in-chief over at SiliconANGLE and does some great editing — thanks to all of you. Remember, all these episodes are available as podcasts wherever you listen; pop in the headphones, go for a walk, and just search Breaking Analysis podcast. I publish each week on wikibon.com and siliconangle.com, or you can email me at david.vellante@siliconangle.com, DM me @dvellante, or comment on our LinkedIn posts. And do check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching; we'll see you at re:Invent, or we'll see you next time on Breaking Analysis. [Music]

Published Date : Nov 26 2022

Breaking Analysis: Snowflake caught in the storm clouds


 

>> From the CUBE Studios in Palo Alto and Boston, bringing you data driven insights from the Cube and ETR. This is Breaking Analysis with Dave Vellante. >> A better than expected earnings report in late August got people excited about Snowflake again, but the negative sentiment in the market has weighed heavily on virtually all growth tech stocks, and Snowflake is no exception. As we've stressed many times, the company's management is on a long term mission to dramatically simplify the way organizations use data. Snowflake is tapping into a multi hundred billion dollar total available market and continues to grow at a rapid pace. In our view, Snowflake is embarking on its third major wave of innovation, data apps, while its first and second waves are still bearing significant fruit. Now for short term traders focused on the next 90 or 180 days, that probably doesn't matter. But those taking a longer view are asking, "Should we still be optimistic about the future of this high flyer or is it just another over hyped tech play?" Hello and welcome to this week's Wikibon Cube Insights powered by ETR. Snowflake's quarter just ended. And in this breaking analysis we take a look at the most recent survey data from ETR to see what clues and nuggets we can extract to predict the near term future and the long term outlook for Snowflake, which is going to announce its earnings at the end of this month. Okay, so you know the story. If you've been an investor in Snowflake this year, it's been painful. We said at IPO, "If you really want to own this stock on day one, just hold your nose and buy it." But like most IPOs, we said there would likely be a better entry point in the future, and not surprisingly that's been the case. Snowflake IPOed at a price of 120, which you couldn't touch on day one unless you got into a friends and family deal. And if you did, you're still up 5% or so. So congratulations. But at one point last year you were up well over 200%. That's been the nature of this volatile stock, and I certainly can't help you with the timing of the market. But longer term, Snowflake is targeting 10 billion in revenue for fiscal year 2028. A big number. Is it achievable? Is it big enough? Tell you what, let's come back to that. Now shorter term, our expert trader and breaking analysis contributor Chip Simonton said he got out of the stock a while ago after having taken a shot at what turned out to be a bear market rally. He pointed out that the stock had been bouncing around the 150 level for the last few months and broke that to the downside last Friday. So he'd expect 150 is where the stock is going to find resistance on the way back up, but there's no sign of support right now. He said maybe at 120, which was the July low and of course the IPO price that we just talked about. Now, perhaps earnings will be a catalyst when Snowflake announces on November 30th, but until the mentality toward growth tech changes, nothing's likely to change dramatically, according to Simonton. So now that we have that out of the way, let's take a look at the spending data for Snowflake in the ETR survey. Here's a chart that shows the time series breakdown of Snowflake's net score going back to the October, 2021 survey. Now at that time, Snowflake's net score stood at a robust 77%. And remember, net score is a measure of spending velocity. It's a proprietary metric, and ETR derives it from a quarterly survey of IT buyers, asking the respondents, "Are you adopting the platform new? Are you spending 6% or more? Is your spending flat? Is your spending down 6% or worse? Or are you leaving the platform, decommissioning it?" You subtract the percent of customers that are spending less or churning from those that are spending more or adopting, and you get a net score, expressed as a percentage of customers responding. The arithmetic is simple enough to sketch in a few lines of code, as shown below.
In this chart we show Snowflake's Ns out of the total survey, which ranges between 1,200 and 1,400 each quarter. And in the very last row we show the number of Snowflake respondents that are coming into the survey from the Fortune 500 and the Global 2000. Those are two very important Snowflake constituencies. Now what this data tells us is that Snowflake exited 2021 with very strong momentum and a net score of 82%, which is off the charts, and it was actually accelerating from the previous survey. Now by April that sentiment had flipped and Snowflake came down to earth with a 68% net score. Still highly elevated relative to its peers, but meaningfully down. Why was that? Because we saw a drop in new adds and an increase in flat spend. Then into the July and most recent October surveys, you saw a significant drop in the percentage of customers that were spending more. Now, notably, the percentage of customers who are contemplating adding the platform is actually staying pretty strong, but it is off a bit this past survey. And combined with a slight uptick in planned churn, net score is now down to 60%. That uptick, from 0% to 1% and then 3%, is still small, but that net score at 60% is still 20 percentage points higher than our highly elevated benchmark of 40%, as you'll recall from listening to earlier breaking analysis. That 40% range is what we consider a milestone. Anything above that is actually quite strong. But again, Snowflake is down. And coming back to churn, while 3% churn is very low, in previous quarters we've seen Snowflake at 0% or 1% decommissions. Now the last thing to note in this chart is the meaningful uptick in survey respondents citing that they're using the Snowflake platform. That's up to 212 in the survey. So look, it's hard to imagine that Snowflake doesn't feel the softening in the market like everyone else. Snowflake is guiding for around 60% growth in product revenue against a tough compare from a year ago, with a 2% operating margin. So like every company, the reaction of the street is going to come down to how accurate or conservative the guide is from their CFO. Now, earlier this year, Snowflake acquired a company called Streamlit for around $800 million. Streamlit is an open source Python library and it makes it easier to build data apps with machine learning, obviously a huge trend. And like Snowflake generally, its focus is on simplifying the complex, in this case making data science easier to integrate into data apps that business people can use. So we were excited this summer, in the July ETR survey, to see that ETR added a nice data cut on Streamlit, which we're showing here in comparison to Snowflake's core business on the left hand side. That's the data warehousing piece; the Streamlit piece is on the right hand side. And we show again net score over time from the previous survey for Snowflake's core database and data warehouse offering, again on the left, as compared to Streamlit on the right. Snowflake's core product had 194 responses in the October, 22 survey; Streamlit had an N of 73, which is up from 52 in the July survey. To ground what Streamlit actually is, a minimal data app sketch follows.
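Since Streamlit is an open source Python library, here's a minimal sketch of the kind of data app it enables — a few lines that turn a DataFrame into an interactive chart. The figures are placeholders for illustration, not survey data.

```python
# A toy Streamlit data app (run with: streamlit run app.py).
# The numbers below are placeholders, not actual ETR results.
import pandas as pd
import streamlit as st

st.title("Spending momentum tracker")

df = pd.DataFrame({
    "quarter": ["Q1", "Q2", "Q3", "Q4"],
    "net_score": [72, 68, 64, 60],   # illustrative values only
})

st.line_chart(df.set_index("quarter"))
st.caption("Net score by quarter (illustrative data)")
```

The appeal for a business person is that this is the entire application: no front-end framework, no server wiring, just Python around the data.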
So that's a significant uptick in people responding that they're doing business in and adopting Streamlit. That was pretty impressive to us. And it's hard to see, but the net scores stayed pretty constant for Streamlit at 51%. It was 52%, I think, in the previous quarter, well over that magic 40% mark. But when you blend it with Snowflake, it does sort of bring things down a little bit. Now there are two key points here. One is that the acquisition seems to have gained exposure right out of the gate, as evidenced by the large number of responses. And two, the spending momentum. Again, while it's lower than Snowflake overall, and when you blend it with Snowflake it does pull it down, it's very healthy and steady. Now let's do a little peer comparison with some of our favorite names in this space. This chart shows net score, or spending velocity, on the Y-axis, and overlap or presence — pervasiveness, if you will — in the data set on the X-axis. That red dotted line again is that 40% highly elevated net score that we like to talk about. And that table inserted informs us as to how the companies are plotted: where the dots sit, the net score, the Ns. And we're comparing a number of database players, although just a caution: Oracle includes all of Oracle, including its apps. But we just put it in there for reference because it is the leader in database. Right off the bat, Snowflake jumps out with a net score of 64%. The 60% from the earlier chart, again, included Streamlit. So you can see its core database and data warehouse business is actually higher than the total company average that we showed you before, because Streamlit is blended in. So when you separate it out, Streamlit is right on top of Databricks. Isn't that ironic? Only Snowflake and Databricks in this selection of names are above the 40% level. You see Mongo and Couchbase — you know, they're solid — and Teradata Cloud actually showing pretty well compared to some of the earlier survey results. Now let's isolate on the database data platform sector and see how that shapes up. And for this analysis, same XY dimensions, we've added the big giants, AWS and Microsoft and Google. And notice that those three plus Snowflake are just at or above the 40% line. Snowflake continues to lead by a significant margin in spending momentum and it keeps creeping to the right. That's that N that we talked about earlier. Now here's an interesting tidbit. Snowflake is often asked, and I've asked them myself many times, "How are you faring relative to AWS, Microsoft and Google, these big whales with Redshift and Synapse and Big Query?" And Snowflake has been telling folks that 80% of its business comes from AWS. And when Microsoft heard that, they said, "Whoa, wait a minute, Snowflake, let's partner up." 'Cause Microsoft is smart, and they understand that the market is enormous. And if they could do better with Snowflake, one, they may steal some business from AWS. And two, even if Snowflake is winning against some of the Microsoft database products, if it wins on Azure, Microsoft is going to sell more compute and more storage, more AI tools, more other stuff to these customers. Now AWS is really aggressive from a partnering standpoint with Snowflake. They're openly negotiating — not openly, but they're negotiating — better prices. They're realizing that when it comes to data, the cheaper that you make the offering, the more people are going to consume. At scale, economies and operating leverage are really powerful things that kick in at volume.
Now Microsoft, they're coming along, they obviously get it, but Google is seemingly resistant to that type of go-to-market partnership. Rather than lean into Snowflake as a great partner, Google's field force seems to be fighting it. Google itself at Cloud Next heavily messaged what it calls the open data cloud, which is a direct rip off of Snowflake. So what can we say about Google? They continue to be kind of behind the curve when it comes to go to market. Now just a brief aside on the competitive posture. I've seen Slootman — Frank Slootman, CEO of Snowflake — in action with his prior companies and how he depositioned the competition. At Data Domain, he eviscerated a company called Avamar with what he called their expensive and slow post-process architecture. I think he actually called it garbage, if I recall, at one conference I heard him speak at. And he sort of destroyed BMC when he was at ServiceNow, positioning them as the equivalent of the Department of Motor Vehicles. And so it's interesting to hear how Snowflake openly talks about the data platforms of AWS, Microsoft, Google, and Databricks. I'll give you the short bumper sticker version. Redshift is just an on-prem database that AWS morphed to the cloud, which by the way is kind of true. They actually did a brilliant job of it, but it's basically a fact. Microsoft's offering is a collection of legacy databases, which also kind of morphed to run in the cloud. And even Big Query, which is considered cloud native by many if not most, is being positioned by Snowflake as originally an on-prem database to support Google's ad business, maybe. And Databricks is for those people smart enough to get into Berkeley who love complexity. Now, Snowflake doesn't mention Berkeley as far as I know; that's my addition. But you get the point. And the interesting thing about Databricks and Snowflake is that a while ago on the Cube I said there was a new workload type emerging around data, where you have the AWS cloud, Snowflake obviously for the cloud database, and Databricks for the data science and ML; you bring those things together and there's this new workload emerging that's going to be very powerful in the future. And it's interesting to see now the aspirations of all three of these platforms colliding. That's quite a dynamic, especially when you see both Snowflake and Databricks putting venture money in and getting their hooks into the loyalties of the same companies, like DBT Labs and Calibra. Anyway, Snowflake's posture is that we are the pioneer in cloud native data warehouse, data sharing and now data apps, and our platform is designed for business people that want simplicity. The other guys, yes, they're formidable, but we, Snowflake, have an architectural lead, and of course we run in multiple clouds. So it's pretty strong positioning, or depositioning, you have to admit. Now I'm not sure I agree with the Big Query knock completely; I think that's a bit of a stretch. But Snowflake, as we see in the ETR survey data, is winning. So in thinking about the longer term future, let's talk about what's different with Snowflake, where it's headed and what the opportunities are for the company. Snowflake put itself on the map by focusing on simplifying data analytics. What's interesting about that is the company's founders are, as you probably know, from Oracle.
And rather than focusing on transactional data, which is Oracle's sweet spot, the stuff they worked on when they were at Oracle, the founders said, "We're going to go somewhere else. We're going to attack the data warehousing problem and the data analytics problem." And they completely re-imagined the database and how it could be applied to solve those challenges, and reimagined what was possible if you had virtually unlimited compute and storage capacity. And of course Snowflake became famous for separating the compute from storage and being able to completely shut down compute so you didn't have to pay for it when you're not using it, for the ability to have multiple clusters hit the same data without making endless copies, and for a consumption/cloud pricing model. And then of course everyone on the planet realized, "Wow, that's a pretty good idea." Every venture capitalist in Silicon Valley has been funding companies to copy that move. And that today has pretty much become mainstream and table stakes. But I would argue that Snowflake not only had the lead; when you look at how others are approaching this problem, it's not necessarily as clean and as elegant. Some of the early startups, I think, get it, and maybe had an advantage of starting later, which can be a disadvantage too. But AWS is a good example of what I'm saying here. Its version of separating compute from storage was an afterthought, and it's good — given what they had, it was actually quite clever, and customers like it — but it's more of a, "Okay, we're going to tier to storage to lower cost, we're going to sort of dial down the compute — not completely, we're not going to shut it off — we're going to minimize the compute required." It's really not true separation like, for instance, Snowflake has. But having said that, we're talking about competitors with lots of resources and cohort offerings. And so I don't want to make this necessarily all about the product, but all things being equal, architecture matters, okay? So that's the cloud S-curve, the first one we're showing. Snowflake's still on that S-curve, and in and of itself it's got legs, but it's not what's going to power the company to 10 billion. The next S-curve we denote is the multi-cloud in the middle. And now, while 80% of Snowflake's revenue is AWS, Microsoft is ramping up and Google, well, we'll see. But the interesting part of that curve is data sharing, and this idea of data clean rooms. I mean, it really should be called the data sharing curve, but I have my reasons for calling it multi-cloud. And this is all about network effects and data gravity, and you're seeing this play out today, especially in industries like financial services and healthcare and government that are highly regulated verticals where folks are super paranoid about compliance. They're not going to share data if they're going to get sued for it, if they're going to be on the front page of the Wall Street Journal for some kind of privacy breach. And what Snowflake has done is said, "Put all the data in our cloud." Now, of course, that triggers a lot of people because it's a walled garden, okay? It is. That's the trade off. It's not the Wild West, it's not Windows, it's Mac, it's more controlled. But the idea is that as different parts of the organization, or even partners, begin to share data that they need, it's got to be governed, it's got to be secure, it's got to be compliant, it's got to be trusted.
So Snowflake introduced the idea of — they call these things — stable edges. I think that's the term that they use, and they track a metric around stable edges. A stable edge, or think of it as a persistent edge, is an ongoing relationship between two parties that lasts for some period of time, more than a month. It's not just a one-shot deal, a one-and-done type of "Oh, we shared it for a day, done; I sent you an FTP, it's done." No, it's got to have trajectory over time — four weeks or six weeks or some period of time that's meaningful. And that metric is growing. Now, there's a different metric that they track as well: I think around 20% of Snowflake customers are actively sharing data today, and then they track the number of those edge relationships that exist. So that's something that's unique. Because again, most data sharing is all about making copies of data. That's great for storage companies, it's bad for auditors, and it's bad for compliance officers. And that trend is just starting out, that middle S-curve; it's going to hit the base of that steep part of the S-curve and it's going to have legs through this decade, we think. And then finally, the third wave that we show here is what we call super cloud. That's why I called it multi-cloud before, so it could invoke super cloud. The idea is that you've built a PaaS layer that is purpose-built for a specific objective, and in this case it's building data apps that are cloud native, shareable and governed. And that is a long-term trend that's going to take some time to develop. I mean, application development platforms can take five to 10 years to mature and gain significant adoption, but this one's unique. This is a critical play for Snowflake. If it's going to compete with the big cloud players, it has to have an app development framework like Snowpark. It has to accommodate new data types like transactional data. That's why it announced this thing called Unistore last June at Snowflake Summit. And the pattern that's forming here is Snowflake is building layer upon layer with its architecture at the core. It's not — currently, anyway — going out and saying, "All right, we're going to buy a company that's got another billion dollars in revenue and that's how we're going to get to 10 billion." So it's not buying its way into new markets through revenue. It's actually buying smaller companies that can complement Snowflake and that it can turn into revenue for growth that fits into the data cloud. Now, as to the 10 billion by fiscal year 28, is that achievable? That's the question. Yeah, I think so. With the momentum, resources, go-to-market, product and management prowess that Snowflake has? Yes, it's definitely achievable. And one could argue $10 billion is too conservative. Indeed, Snowflake CFO Mike Scarpelli will fully admit his forecast is built on existing offerings. He's not including revenue, as I understand it, from all the new stuff that's in the pipeline, because he doesn't know what it's going to look like. He doesn't know what the adoption is going to look like. He doesn't have data on that adoption, not just yet anyway. And now of course things can change quite dramatically. It's possible that its forecast for existing businesses doesn't materialize, or competition picks them off, or a company like Databricks actually is able, in the longer term, to replicate the functionality of Snowflake with open source technologies, which would be a very competitive source of innovation.
But in our view, there's plenty of room for growth, the market is enormous, and the real key is, can and will Snowflake deliver on the promises of simplifying data? Of course, we've heard this before, from data warehouses, data marts and data lakes, master data management, ETL, data movers and data copiers, and Hadoop — a raft of technologies that have not lived up to expectations. And we've also, by the way, seen some tremendous successes in the software business with the likes of ServiceNow and Salesforce. So will Snowflake be the next great software name and hit that 10 billion magic mark? I think so. Let's reconnect in 2028 and see. Okay, we'll leave it there today. I want to thank Chip Simonton for his input to today's episode. Thanks to Alex Myerson, who's on production and manages the podcast, and Ken Schiffman as well. Kristin Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our Editor in Chief over at SiliconANGLE. He does some great editing for us. Check it out for all the news. Remember, all these episodes are available as podcasts. Wherever you listen, just search Breaking Analysis podcast. I publish each week on wikibon.com and siliconangle.com. Or you can email me to get in touch at david.vellante@siliconangle.com. DM me @dvellante or comment on our LinkedIn posts. And please do check out etr.ai; they've got the best survey data in the enterprise tech business. This is Dave Vellante for the CUBE Insights, powered by ETR. Thanks for watching, thanks for listening, and we'll see you next time on breaking analysis. (upbeat music)

Published Date : Nov 10 2022

Breaking Analysis: Cloudflare’s Supercloud…What Multi Cloud Could Have Been


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Over the past decade Cloudflare has built a global network that has the potential to become the fourth U.S.-based hyperscale-class cloud, in our view. The company is building a durable revenue model with hooks into many important markets, ranging from the more mature DDoS protection space to growth sectors such as zero trust, a serverless platform for application development, and an increasing number of services such as database, object storage and other network services. In essence, Cloudflare could be thought of as a giant, distributed supercomputer that can connect multiple clouds and act as a highly efficient scheduling engine at scale. Its disruptive DNA is increasingly attracting novel startups and established global firms alike, looking for reliable, secure, high-performance, low-latency and more cost-effective alternatives to AWS and legacy infrastructure solutions. Hello and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis we initiate our deeper coverage of Cloudflare. We'll briefly explain our take on the company and its unique business model, we'll then share some peer comparisons with both a financial snapshot and some fresh ETR survey data, and finally we'll share some examples of how we think Cloudflare could be a disruptive force with a supercloud-like offering that, in many respects, is what multi-cloud should have been. Cloudflare has been on our peripheral radar. Ben Thompson and many others have written about its disruptive business model, and recently a Breaking Analysis follower, who will remain anonymous, emailed with some excellent insights on Cloudflare that prompted us to initiate more detailed coverage. Let's first take a look at how Cloudflare sees the world in terms of its view of a modern stack. This is a graphic from Cloudflare that shows a simple three-layer stack comprising storage and compute at the lower level, an application layer, and the network. Their key message is basically that the big four hyperscalers have replaced the on-prem leaders, apps have been SaaS-ified, and that mess of network and security you see in the upper left can now be handled entirely by Cloudflare, with the stack rented via opex versus requiring heavy capex investment. Okay, it's a somewhat simplified view — the companies on the left are not standing still, and we're going to come back to that — but Cloudflare has done something quite amazing. It's been a while since we've invoked Russ Hanneman of Silicon Valley fame on Breaking Analysis, but remember when he was in one of his first meetings, if not the first, with Richard Hendricks, the whiz kid on the show Silicon Valley, and Hanneman said something like, "If you had a blank check and you could build anything in the world, what would it be?" And Richard's answer was basically a new internet, and that led to Pied Piper, this peer-to-peer network powered by decentralized devices and iPhones and an amazing compression algorithm that enabled high-speed data movement and low-latency access across the network. Well, in a way, that's what Cloudflare has built. Its founding premise reimagined how the internet should be built, with a consistent set of server infrastructure where each server has lots of cores, lots of DRAM, lots of cache, fast SSDs and plenty of network connectivity and bandwidth.
And while this picture makes it look like a bunch of dots and points of presence on a map — which of course it is — there's a software layer that enables Cloudflare to efficiently allocate resources across this global network. The company claims that its network utilization is in the 70% range, and it has used its build-out to enter the technology space from the bottom up, offering, for example, free tiers of services to users, with multiple entry points on different services, then selling more services to a customer over time, which of course drives up its average contract value and its lifetime value. At the same time, the company continues to innovate and add new services at a very rapid, cloud-like pace. You can think of Cloudflare's initial market entry as a lightweight Cisco-as-a-service — the company's CFO actually uses that term, which really must tick off Cisco, who of course has a massive portfolio and a dominant market position. Now, because it owns the network, Cloudflare's marginal cost of adding new services is very small and goes toward zero, so it's able to get software-like economics at scale despite all this infrastructure it's building out; it doesn't have to constantly face an increasing infrastructure tax. Snowflake, for example, doesn't own its own network infrastructure; as it grows it relies on AWS, Azure or GCP, and while that gives the company obvious advantages — it doesn't have to build out its own network — it also requires it to constantly pay the tax and negotiate with hyperscalers for better rental rates. Now, as previously mentioned, Cloudflare claims that its utilization is very high, probably higher than the hyperscalers, who can spin up servers that they can charge for against underutilized customer capacity. Cloudflare also has excellent network traffic data that it can use to its advantage with its analytics. The company has been rapidly innovating beyond its original core business, adding, as I said before, serverless and zero trust offerings. It has announced a database it calls D1 — that's pretty creative — and an object store called R2, which is S3 minus one, both in the alphabet and the numeric, i.e., minus the egress cost. No egress cost is their big claim to fame, and they've made a lot of marketing noise about it; they've even joked about a D2 database to follow, which of course would make R2-D2. They've launched a developer platform. Cloudflare can be thought of, first of all, as a modern CDN; it has a simpler security model, which is how it competes, for example, with Zscaler; it also brings VPN, SD-WAN and DDoS protection services that are part of the network; and it's less expensive than AWS — that's their go-to-market, messaging and value proposition — and they're positioning themselves as a neutral network that can connect across multiple clouds. Now, to be clear, unlike AWS in particular, Cloudflare is not well suited to lift and shift your traditional apps — SAP HANA, for instance; you're not going to run that on Cloudflare's platform. Rather, the company started by making websites more secure and faster, and it flew under the radar, much in the same way that Clayton Christensen described the disruption in the steel industry, where new entrants picked off the low-margin rebar business and then moved up the stack. We've used that analogy in the semiconductor business with Arm, and even China; Cloudflare is running a similar playbook in the cloud and in the network.
So in the early part of the last decade, as AWS's ascendancy was becoming more clear, many of us started thinking about how and where firms could compete and add value as AWS became so dominant. You could take an industry focus; you could do things like data sharing, which Snowflake eventually popularized; you could build on top of clouds — again, Snowflake is doing that, as are others; you could build private clouds and, of course, connect to hybrid clouds. But not many had the wherewithal, or the chutzpah, to build out a global network that could serve as a connecting platform for cloud services. Cloudflare has traction in the market, and as it adds new services like zero trust and the object store or database, its TAM continues to grow. Here's a quick snapshot of Cloudflare's financials relative to Zscaler, which is both a competitor and a customer, Fastly, which is a smaller CDN, and Akamai, a more mature CDN slash edge platform. Cloudflare and Fastly both reported earnings this past week. Cloudflare surpassed a billion-dollar revenue run rate, but it gave tepid guidance and the stock got absolutely crushed today, which is Friday. But the company's business model is sound: it's growing close to 50% annually, it has SaaS-like gross margins in the mid-to-high 70s, it's got a very strong balance sheet and a 13x revenue run-rate multiple. In fact, its financial snapshot is quite close to that of Zscaler, which is kind of interesting, since Zscaler of course doesn't own its own network — it's a pure-play software company. Fastly is much smaller and growing more slowly than Cloudflare, hence its lower multiple, while Akamai, as you can see, is a more mature company, but it's got a nice business. Now, on its earnings call this week Cloudflare announced that its head of sales was stepping down, and the company has brought in a new leader to take the firm to five billion dollars in sales; I think its current sales leader actually felt like, "Hey, my work is done here — bring on somebody else to take it to the next level." The company is promising to be free-cash-flow positive by the end of the year and is working hard toward its long-term financial model, with gross margin targets in the mid-70s and a target of 20% non-GAAP operating margins. So, solid — not completely off the charts, but very good. To our knowledge it has not committed to a long-term growth rate, but at that sort of operating profit level you would like to see growth consistently at least in the 20% range, so it could at least be a rule-of-40 company, or perhaps even higher if it's going to continue to command a premium valuation. Okay, let's take a look at the ETR data. ETR is very positive on Cloudflare and has recently published a report on the company. Like many companies, Cloudflare is seeing an across-the-board slowdown in spending velocity; we've reported on this quite extensively, using the ETR data to quantify the degree of that slowdown. In the data set we see that many customers are shifting their spend to flat — plus or minus, let's say, single digits, two or three percent, or even zero — and in the market we're seeing a shift from paid to free tiers; remember, Cloudflare offers a lot of free services, and you're seeing customers maybe turn off the paid tier for a while and go with the freebie. But we're also seeing some larger customers in the data, in the Fortune 1000 specifically, who are
actually spending more, which was confirmed on Cloudflare's earnings call — they did say everything across the board was softer, but they also indicated that some of their larger customers are growing faster than their smaller customers, and their churn is very, very low. Here's a two-dimensional graphic; we like to share this view a lot. It's got net score, or spending momentum, on the vertical axis and overlap, or pervasiveness in the survey, on the horizontal axis, and this cut isolates three segments in ETR's taxonomy that Cloudflare plays in: cloud, security and networking. Now, the table inserted in the upper left shows the raw data which informs the position of each company's dot, with net score and the Ns listed in the rightmost columns; the red dotted line indicates a highly elevated net score; and finally we've posted the breakdown — those colors in the bottom right — of Cloudflare's net score. The lime green is new adoptions, the forest green is "we're spending 6% or more," the gray is flat, plus or minus 5% — and you can see that's the majority of customers, that gray area — the pink is "we're spending less," in other words down 6% or worse, and the bright red is churn, which at 1% is minimal, a very good indicator for Cloudflare. To get ETR's proprietary net score — and they've done this for many, many quarters, so we have that time-series data — you subtract the reds from the greens, and that's net score. Cloudflare is at 39%, just under that magic red line. Note that Cloudflare and Zscaler are right on top of each other. Cisco has a dominant position on the x-axis that Cloudflare and others are eyeing; AWS is also dominant, but note that its net score is well above the red dotted line, which is incredible. Palo Alto Networks is also very impressive: it's got a strong presence on the horizontal axis and a net score that's comparable to Cloudflare and Zscaler, two much smaller companies. Akamai is actually well positioned for a reasonably mature company, and you can see Fastly, AT&T, Juniper and F5 have far less spending momentum on their platforms than Cloudflare does, but at least they are in positive net score territory. So what's going to be really interesting is whether Cloudflare can continue to hold this momentum, or even accelerate it, as we've seen with some other clouds, as it scales its network and keeps adding more and more services. Cloudflare has a couple of potential strategic vectors that we want to talk about, and it will be interesting to see how they play out. One path is to compete more directly as a cloud player, offering secure access edge services like firewall-as-a-service and zero trust services like data loss prevention and email security from its Area 1 acquisition, along with other zero trust offerings, as well as network services like routing and network connectivity — this is the sweet spot of the company — load balancing and many others, and then adding in things like object store and database services and more edge services in the future. It might even offer telecom-like services, such as network switching for offices. So that's one route, and Cloudflare is clearly on that path: more services, more cohorts, innovating and growing the company, bringing in more revenue, increasing ACVs and long-term value, and keeping retention high. Now, the other vector is what we're just going to refer to as super cloud: Cloudflare as an enabler of cross-cloud infrastructure.
So what's going to be really interesting to see is whether Cloudflare can continue to hold this momentum, or even accelerate it as we've seen with some other clouds, as it scales its network and keeps adding more and more services. Cloudflare has a couple of potential strategic vectors that we want to talk about, and it'll be interesting to see how that plays out. One path is to compete more directly as a cloud player, offering secure access edge services like firewall as a service, and zero trust services like data loss prevention, email security from its Area 1 acquisition and other zero trust offerings, as well as network services like routing and network connectivity, which is the sweet spot of the company, load balancing and many others, and then add in things like object store and database services and more edge services; in the future it might be telecom-like services, such as network switching for offices. So that's one route, and Cloudflare is clearly on that path: more services, more cohorts, innovating and growing the company, bringing in more revenue, increasing ACVs, increasing long-term value and keeping retention high. Now, the other vector is what we're just going to refer to as supercloud, as an enabler of cross-cloud infrastructure. This is new value relative to the former vector that we were just talking about. Now, the title of this episode is "What multi-cloud should have been," meaning Cloudflare could be the control plane providing a consistent experience across clouds, one that is fast and secure at global scale. To give you insight on this, let's take a look at some of the comments made by Matthew Prince, the CEO and co-founder of Cloudflare. Cloudflare put its R2 object store into public beta this past May, and I believe it's storing around a petabyte of data today; I think that's what they said on their call. Here's what Prince said about that, quote: "We are talking to very large companies about moving more and more of their stored objects to where we can store that with R2, and one of the benefits is not only can we help them save money on the egress fees, but it allows them to then use those objects across any of the different cloud platforms they're using. So by being that neutral third party, we can let people adopt a little bit of Amazon, a little bit of Microsoft, a little bit of Google, a little bit of SaaS vendors, and share that data across all those different places." So what's interesting about this in the supercloud context is it suggests that customers could take the best of each cloud to power their digital businesses: I might like AWS, say Redshift, for my analytic database, or I love Google's machine learning and Microsoft's collaboration, and I'd like a consistent way to connect those resources. But of course he's strongly hinting, and has made many public statements, that AWS's egress fees are a blocker to that vision. Now, at a recent investor event, Matthew Prince added some color to this concept when he talked about one metric of success being how much R2 capacity was consumed and how much they sold, but perhaps a more interesting benchmark is highlighted by the following statement he made. He said a completely different measure of success for R2 is "Andy Jassy says, 'I'm sick and tired of these guys,'" meaning Cloudflare, "'taking our objects away; we're dropping our egress fees to zero.' I would be so excited, because we've then unlocked the ability to be the network that interconnects the cloud together." Now, of course, it would be Adam Selipsky who would be saying that, or maybe Andy Jassy, still watching over AWS, and I think it's highly unlikely that's going to happen anytime soon, but in theory it gets us closer to the supercloud value proposition. And to further drive that point home, and we're paraphrasing his comments a little bit here, he said something to the effect of, quote, customers need one consistent control plane across clouds, and we are the neutral network that can be consistent no matter which cloud you're using. Interesting, right? Prince sees a world that's similar to, if not nearly identical to, the concepts that theCUBE community has been putting forth around supercloud. Now, this vision is a ways off; let's be real. Prince even suggested that his initial vision of an application running across multiple clouds, that's like supercloud nirvana, isn't what customers are doing today. That's really hard to do, and perhaps it's never going to happen. But there's little doubt that Cloudflare could be, and is, positioning itself as that cross-cloud control plane. It has the network economics and the business model levers to pull. It's got an edge up on the competition at the edge, pun intended; Cloudflare is the definition of edge.
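Returning briefly to the R2 point above: R2 exposes an S3-compatible API, so in principle an application running in any cloud can read and write the same objects. A minimal sketch of that idea follows; the account ID, bucket, keys and credentials are placeholders, and the endpoint format reflects Cloudflare's documented S3-style convention rather than anything verified here:

```python
import boto3

# Placeholders; a real deployment would pull these from a secrets manager.
ACCOUNT_ID = "your-cloudflare-account-id"
r2 = boto3.client(
    "s3",
    endpoint_url=f"https://{ACCOUNT_ID}.r2.cloudflarestorage.com",
    aws_access_key_id="R2_ACCESS_KEY_ID",
    aws_secret_access_key="R2_SECRET_ACCESS_KEY",
)

# The same bucket is reachable from AWS, Azure or GCP workloads, which is the neutral-store idea.
r2.put_object(Bucket="shared-objects", Key="reports/q3.json", Body=b'{"status": "ok"}')
obj = r2.get_object(Bucket="shared-objects", Key="reports/q3.json")
print(obj["Body"].read())
```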
And its distributed, decentralized platform is much better suited for edge workloads than the giant data centers that are set up to try and handle that today. The hyperscalers are building out their edge networks, things like Outposts going out to the edge, Local Zones, et cetera. Now, Cloudflare is increasingly competitive to the hyperscalers and those traditional stacks that it depositioned on an earlier slide that we showed, but the likes of AWS and Dell and HPE and Cisco and those others are not sitting on their hands. They have huge customer install bases and they are definitely a moving target; they're investing, and they're building out their own superclouds with really robust stacks as well. Let's face it, it's going to take a decade or more for enterprises to adopt a developer platform or a new database cloud. Plus, Cloudflare's capabilities, when compared to incumbent stacks and the hyperscalers, are much less robust in these areas. And even in storage, despite all the great conversation and buzz that R2 generated, you take a specialist like Wasabi: they're more mature, they're more functional, and they're way cheaper even than Cloudflare. So it's not a fait accompli that Cloudflare is going to win in those markets, but we love the disruption. And if Cloudflare wants to be the fourth US-based hyperscaler, or join the big four as the fifth if we put Alibaba in the mix, it's got a lot of work to do in the ecosystem. By its own admission it has as much to learn, and that, by the way, is part of the value it sees in its Area 1 acquisition, the email security company that it bought. But even in that case, much of the emphasis has been on reseller channels. Compare that to the AWS ecosystem, which is not only a channel play but is as much an innovation flywheel filling gaps, where companies like Snowflake thrive side by side with AWS's data stores. As well, all the on-prem stacks are building hybrid connections to AWS and other clouds as a means of providing consistent experiences across clouds. Indeed, many of them see what they call cross-cloud services, or what we call supercloud, hypercloud, or whatever mega cloud you want to call it (we use supercloud); they are really eyeing that opportunity. So very few companies, frankly, are not going after that space. But we're going to close with this: Cloudflare is one of those companies that's in a position to wake up each morning and ask, "Who can we disrupt today?" And very few companies are in a position to disrupt the hyperscalers to the degree that Cloudflare is. And that, my friends, is going to be fascinating to watch unfold. All right, let's call it a wrap. I want to thank Alex Meyerson, who's on production and manages the podcast, as well as Ken Schiffman, who's our newest addition to the Boston studio. Kristen Martin and Cheryl Knight help us get the word out on social media and in our newsletters, and Rob Hof is our editor-in-chief over at SiliconANGLE. Thank you to all. Remember, all these episodes are available as podcasts wherever you listen; all you've got to do is search "Breaking Analysis podcast." I publish each week on wikibon.com and siliconangle.com. You can email me at david.vellante@siliconangle.com, DM me @dvellante, or comment on my LinkedIn posts. And please do check out etr.ai; they've got the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thank you very much for watching, and we'll see you next time on Breaking Analysis.

Published Date : Nov 5 2022


Evolving InfluxDB into the Smart Data Platform


 

>>This past May, The Cube in collaboration with Influx data shared with you the latest innovations in Time series databases. We talked at length about why a purpose built time series database for many use cases, was a superior alternative to general purpose databases trying to do the same thing. Now, you may, you may remember the time series data is any data that's stamped in time, and if it's stamped, it can be analyzed historically. And when we introduced the concept to the community, we talked about how in theory, those time slices could be taken, you know, every hour, every minute, every second, you know, down to the millisecond and how the world was moving toward realtime or near realtime data analysis to support physical infrastructure like sensors and other devices and IOT equipment. A time series databases have had to evolve to efficiently support realtime data in emerging use cases in iot T and other use cases. >>And to do that, new architectural innovations have to be brought to bear. As is often the case, open source software is the linchpin to those innovations. Hello and welcome to Evolving Influx DB into the smart Data platform, made possible by influx data and produced by the Cube. My name is Dave Valante and I'll be your host today. Now in this program we're going to dig pretty deep into what's happening with Time series data generally, and specifically how Influx DB is evolving to support new workloads and demands and data, and specifically around data analytics use cases in real time. Now, first we're gonna hear from Brian Gilmore, who is the director of IOT and emerging technologies at Influx Data. And we're gonna talk about the continued evolution of Influx DB and the new capabilities enabled by open source generally and specific tools. And in this program you're gonna hear a lot about things like Rust, implementation of Apache Arrow, the use of par k and tooling such as data fusion, which powering a new engine for Influx db. >>Now, these innovations, they evolve the idea of time series analysis by dramatically increasing the granularity of time series data by compressing the historical time slices, if you will, from, for example, minutes down to milliseconds. And at the same time, enabling real time analytics with an architecture that can process data much faster and much more efficiently. Now, after Brian, we're gonna hear from Anna East Dos Georgio, who is a developer advocate at In Flux Data. And we're gonna get into the why of these open source capabilities and how they contribute to the evolution of the Influx DB platform. And then we're gonna close the program with Tim Yokum, he's the director of engineering at Influx Data, and he's gonna explain how the Influx DB community actually evolved the data engine in mid-flight and which decisions went into the innovations that are coming to the market. Thank you for being here. We hope you enjoy the program. Let's get started. Okay, we're kicking things off with Brian Gilmore. He's the director of i t and emerging Technology at Influx State of Bryan. Welcome to the program. Thanks for coming on. >>Thanks Dave. Great to be here. I appreciate the time. >>Hey, explain why Influx db, you know, needs a new engine. Was there something wrong with the current engine? What's going on there? >>No, no, not at all. I mean, I think it's, for us, it's been about staying ahead of the market. 
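As a quick aside on the time-slice idea from the intro above: downsampling stamped data into coarser windows is a one-liner in most tooling. The generic pandas sketch below is illustrative only, with made-up sensor readings, and is not InfluxDB-specific code:

```python
import pandas as pd

# Hypothetical sensor readings stamped at one-second intervals.
idx = pd.date_range("2022-11-01 00:00:00", periods=120, freq="S")
readings = pd.DataFrame(
    {"temperature_c": [21.0 + (i % 5) * 0.1 for i in range(120)]}, index=idx
)

# Collapse the per-second slices into one-minute averages, the kind of rollup described above.
per_minute = readings.resample("1min").mean()
print(per_minute)
```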
I think, you know, if we think about what our customers are coming to us sort of with now, you know, related to requests like sql, you know, query support, things like that, we have to figure out a way to, to execute those for them in a way that will scale long term. And then we also, we wanna make sure we're innovating, we're sort of staying ahead of the market as well and sort of anticipating those future needs. So, you know, this is really a, a transparent change for our customers. I mean, I think we'll be adding new capabilities over time that sort of leverage this new engine, but you know, initially the customers who are using us are gonna see just great improvements in performance, you know, especially those that are working at the top end of the, of the workload scale, you know, the massive data volumes and things like that. >>Yeah, and we're gonna get into that today and the architecture and the like, but what was the catalyst for the enhancements? I mean, when and how did this all come about? >>Well, I mean, like three years ago we were primarily on premises, right? I mean, I think we had our open source, we had an enterprise product, you know, and, and sort of shifting that technology, especially the open source code base to a service basis where we were hosting it through, you know, multiple cloud providers. That was, that was, that was a long journey I guess, you know, phase one was, you know, we wanted to host enterprise for our customers, so we sort of created a service that we just managed and ran our enterprise product for them. You know, phase two of this cloud effort was to, to optimize for like multi-tenant, multi-cloud, be able to, to host it in a truly like sass manner where we could use, you know, some type of customer activity or consumption as the, the pricing vector, you know, And, and that was sort of the birth of the, of the real first influx DB cloud, you know, which has been really successful. >>We've seen, I think like 60,000 people sign up and we've got tons and tons of, of both enterprises as well as like new companies, developers, and of course a lot of home hobbyists and enthusiasts who are using out on a, on a daily basis, you know, and having that sort of big pool of, of very diverse and very customers to chat with as they're using the product, as they're giving us feedback, et cetera, has has, you know, pointed us in a really good direction in terms of making sure we're continuously improving that and then also making these big leaps as we're doing with this, with this new engine. >>Right. So you've called it a transparent change for customers, so I'm presuming it's non-disruptive, but I really wanna understand how much of a pivot this is and what, what does it take to make that shift from, you know, time series, you know, specialist to real time analytics and being able to support both? >>Yeah, I mean, it's much more of an evolution, I think, than like a shift or a pivot. You know, time series data is always gonna be fundamental and sort of the basis of the solutions that we offer our customers, and then also the ones that they're building on the sort of raw APIs of our platform themselves. You know, the time series market is one that we've worked diligently to lead. 
I mean, I think when it comes to like metrics, especially like sensor data and app and infrastructure metrics, if we're being honest though, I think our, our user base is well aware that the way we were architected was much more towards those sort of like backwards looking historical type analytics, which are key for troubleshooting and making sure you don't, you know, run into the same problem twice. But, you know, we had to ask ourselves like, what can we do to like better handle those queries from a performance and a, and a, you know, a time to response on the queries, and can we get that to the point where the results sets are coming back so quickly from the time of query that we can like limit that window down to minutes and then seconds. >>And now with this new engine, we're really starting to talk about a query window that could be like returning results in, in, you know, milliseconds of time since it hit the, the, the ingest queue. And that's, that's really getting to the point where as your data is available, you can use it and you can query it, you can visualize it, and you can do all those sort of magical things with it, you know? And I think getting all of that to a place where we're saying like, yes to the customer on, you know, all of the, the real time queries, the, the multiple language query support, but, you know, it was hard, but we're now at a spot where we can start introducing that to, you know, a a limited number of customers, strategic customers and strategic availability zones to start. But you know, everybody over time. >>So you're basically going from what happened to in, you can still do that obviously, but to what's happening now in the moment? >>Yeah, yeah. I mean if you think about time, it's always sort of past, right? I mean, like in the moment right now, whether you're talking about like a millisecond ago or a minute ago, you know, that's, that's pretty much right now, I think for most people, especially in these use cases where you have other sort of components of latency induced by the, by the underlying data collection, the architecture, the infrastructure, the, you know, the, the devices and you know, the sort of highly distributed nature of all of this. So yeah, I mean, getting, getting a customer or a user to be able to use the data as soon as it is available is what we're after here. >>I always thought, you know, real, I always thought of real time as before you lose the customer, but now in this context, maybe it's before the machine blows up. >>Yeah, it's, it's, I mean it is operationally or operational real time is different, you know, and that's one of the things that really triggered us to know that we were, we were heading in the right direction, is just how many sort of operational customers we have. You know, everything from like aerospace and defense. We've got companies monitoring satellites, we've got tons of industrial users, users using us as a processes storing on the plant floor, you know, and, and if we can satisfy their sort of demands for like real time historical perspective, that's awesome. I think what we're gonna do here is we're gonna start to like edge into the real time that they're used to in terms of, you know, the millisecond response times that they expect of their control systems, certainly not their, their historians and databases. >>I, is this available, these innovations to influx DB cloud customers only who can access this capability? >>Yeah. I mean commercially and today, yes. 
You know, I think we want to emphasize that's a, for now our goal is to get our latest and greatest and our best to everybody over time. Of course. You know, one of the things we had to do here was like we double down on sort of our, our commitment to open source and availability. So like anybody today can take a look at the, the libraries in on our GitHub and, you know, can ex inspect it and even can try to, you know, implement or execute some of it themselves in their own infrastructure. You know, we are, we're committed to bringing our sort of latest and greatest to our cloud customers first for a couple of reasons. Number one, you know, there are big workloads and they have high expectations of us. I think number two, it also gives us the opportunity to monitor a little bit more closely how it's working, how they're using it, like how the system itself is performing. >>And so just, you know, being careful, maybe a little cautious in terms of, of, of how big we go with this right away, just sort of both limits, you know, the risk of, of, you know, any issues that can come with new software rollouts. We haven't seen anything so far, but also it does give us the opportunity to have like meaningful conversations with a small group of users who are using the products, but once we get through that and they give us two thumbs up on it, it'll be like, open the gates and let everybody in. It's gonna be exciting time for the whole ecosystem. >>Yeah, that makes a lot of sense. And you can do some experimentation and, you know, using the cloud resources. Let's dig into some of the architectural and technical innovations that are gonna help deliver on this vision. What, what should we know there? >>Well, I mean, I think foundationally we built the, the new core on Rust. You know, this is a new very sort of popular systems language, you know, it's extremely efficient, but it's also built for speed and memory safety, which goes back to that us being able to like deliver it in a way that is, you know, something we can inspect very closely, but then also rely on the fact that it's going to behave well. And if it does find error conditions, I mean we, we've loved working with Go and, you know, a lot of our libraries will continue to, to be sort of implemented in Go, but you know, when it came to this particular new engine, you know, that power performance and stability rust was critical. On top of that, like, we've also integrated Apache Arrow and Apache Parque for persistence. I think for anybody who's really familiar with the nuts and bolts of our backend and our TSI and our, our time series merged Trees, this is a big break from that, you know, arrow on the sort of in MI side and then Par K in the on disk side. >>It, it allows us to, to present, you know, a unified set of APIs for those really fast real time inquiries that we talked about, as well as for very large, you know, historical sort of bulk data archives in that PARQUE format, which is also cool because there's an entire ecosystem sort of popping up around Parque in terms of the machine learning community, you know, and getting that all to work, we had to glue it together with aero flight. That's sort of what we're using as our, our RPC component. You know, it handles the orchestration and the, the transportation of the Coer data. 
Now we're moving to like a true Coer database model for this, this version of the engine, you know, and it removes a lot of overhead for us in terms of having to manage all that serialization, the deserialization, and, you know, to that again, like blurring that line between real time and historical data. It's, you know, it's, it's highly optimized for both streaming micro batch and then batches, but true streaming as well. >>Yeah. Again, I mean, it's funny you mentioned Rust. It is, it's been around for a long time, but it's popularity is, is you know, really starting to hit that steep part of the S-curve. And, and we're gonna dig into to more of that, but give us any, is there anything else that we should know about Bryan? Give us the last word? >>Well, I mean, I think first I'd like everybody sort of watching just to like take a look at what we're offering in terms of early access in beta programs. I mean, if, if, if you wanna participate or if you wanna work sort of in terms of early access with the, with the new engine, please reach out to the team. I'm sure you know, there's a lot of communications going out and you know, it'll be highly featured on our, our website, you know, but reach out to the team, believe it or not, like we have a lot more going on than just the new engine. And so there are also other programs, things we're, we're offering to customers in terms of the user interface, data collection and things like that. And, you know, if you're a customer of ours and you have a sales team, a commercial team that you work with, you can reach out to them and see what you can get access to because we can flip a lot of stuff on, especially in cloud through feature flags. >>But if there's something new that you wanna try out, we'd just love to hear from you. And then, you know, our goal would be that as we give you access to all of these new cool features that, you know, you would give us continuous feedback on these products and services, not only like what you need today, but then what you'll need tomorrow to, to sort of build the next versions of your business. Because you know, the whole database, the ecosystem as it expands out into to, you know, this vertically oriented stack of cloud services and enterprise databases and edge databases, you know, it's gonna be what we all make it together, not just, you know, those of us who were employed by Influx db. And then finally I would just say please, like watch in ICE in Tim's sessions, like these are two of our best and brightest, They're totally brilliant, completely pragmatic, and they are most of all customer obsessed, which is amazing. And there's no better takes, like honestly on the, the sort of technical details of this, then there's, especially when it comes to like the value that these investments will, will bring to our customers and our communities. So encourage you to, to, you know, pay more attention to them than you did to me, for sure. >>Brian Gilmore, great stuff. Really appreciate your time. Thank you. >>Yeah, thanks Dave. It was awesome. Look forward to it. >>Yeah, me too. Looking forward to see how the, the community actually applies these new innovations and goes, goes beyond just the historical into the real time really hot area. As Brian said in a moment, I'll be right back with Anna East dos Georgio to dig into the critical aspects of key open source components of the Influx DB engine, including Rust, Arrow, Parque, data fusion. Keep it right there. You don't wanna miss this >>Time series Data is everywhere. 
The number of sensors, systems and applications generating time series data increases every day. All these data sources producing so much data can cause analysis paralysis. Influx DB is an entire platform designed with everything you need to quickly build applications that generate value from time series data influx. DB Cloud is a serverless solution, which means you don't need to buy or manage your own servers. There's no need to worry about provisioning because you only pay for what you use. Influx DB Cloud is fully managed so you get the newest features and enhancements as they're added to the platform's code base. It also means you can spend time building solutions and delivering value to your users instead of wasting time and effort managing something else. Influx TVB Cloud offers a range of security features to protect your data, multiple layers of redundancy ensure you don't lose any data access controls ensure that only the people who should see your data can see it. >>And encryption protects your data at rest and in transit between any of our regions or cloud providers. InfluxDB uses a single API across the entire platform suite so you can build on open source, deploy to the cloud and then then easily query data in the cloud at the edge or on prem using the same scripts. And InfluxDB is schemaless automatically adjusting to changes in the shape of your data without requiring changes in your application. Logic. InfluxDB Cloud is production ready from day one. All it needs is your data and your imagination. Get started today@influxdata.com slash cloud. >>Okay, we're back. I'm Dave Valante with a Cube and you're watching evolving Influx DB into the smart data platform made possible by influx data. Anna ETOs Georgio is here, she's a developer advocate for influx data and we're gonna dig into the rationale and value contribution behind several open source technologies that Influx DB is leveraging to increase the granularity of time series analysis analysis and bring the world of data into real-time analytics and is welcome to the program. Thanks for coming on. >>Hi, thank you so much. It's a pleasure to be here. >>Oh, you're very welcome. Okay, so IX is being touted as this next gen open source core for Influx db. And my understanding is that it leverages in memory of course for speed. It's a kilo store, so it gives you a compression efficiency, it's gonna give you faster query speeds, you store files and object storage, so you got very cost effective approach. Are these the salient points on the platform? I know there are probably dozens of other features, but what are the high level value points that people should understand? >>Sure, that's a great question. So some of the main requirements that IOx is trying to achieve and some of the most impressive ones to me, the first one is that it aims to have no limits on cardinality and also allow you to write any kind of event data that you want, whether that's live tag or a field. It also wants to deliver the best in class performance on analytics queries. In addition to our already well served metrics queries, we also wanna have operator control over memory usage. So you should be able to define how much memory is used for buffering caching and query processing. Some other really important parts is the ability to have bulk data export and import super useful. 
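As a concrete aside on the single write API mentioned in the spot above, the sketch below pushes one point of line protocol to an InfluxDB 2.x write endpoint over plain HTTP. The URL, org, bucket and token are placeholders, and in practice the official client libraries wrap this call:

```python
import time
import requests

# Placeholders; substitute your own deployment details.
URL = "https://example-influxdb-host:8086/api/v2/write"
PARAMS = {"org": "my-org", "bucket": "sensors", "precision": "ns"}
HEADERS = {"Authorization": "Token MY_API_TOKEN"}

# One measurement, one tag, one field, nanosecond timestamp: standard line protocol.
line = f"room_temp,location=kitchen temperature_c=21.4 {time.time_ns()}"

resp = requests.post(URL, params=PARAMS, headers=HEADERS, data=line)
resp.raise_for_status()  # 204 No Content on success
```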
Also, broader ecosystem compatibility: where possible we aim to use and embrace emerging standards in the data analytics ecosystem and have compatibility with things like SQL, Python, and maybe even Pandas in the future. >>Okay, so a lot there. Now, we talked to Brian about how you're using Rust, which is not a new programming language, and of course we had some drama around Rust during the pandemic with the Mozilla layoffs, but the formation of the Rust Foundation really addressed any of those concerns. You've got big guns like Amazon and Google and Microsoft throwing their collective weights behind it, and the adoption is really starting to get steep on the S-curve. So lots of platforms, lots of adoption with Rust, but why Rust as an alternative to, say, C++ for example? >>Sure, that's a great question. So Rust was chosen because of its exceptional performance and reliability. While Rust is syntactically similar to C++ and has similar performance, it also compiles to native code like C++. But unlike C++, it also has much better memory safety. Memory safety is protection against bugs or security vulnerabilities that lead to excessive memory usage or memory leaks, and Rust achieves this memory safety due to its innovative type system. Additionally, it doesn't allow for dangling pointers, and dangling pointers are the main classes of errors that lead to exploitable security vulnerabilities in languages like C++. So Rust helps meet that requirement of having no limits on cardinality, for example, because we're also using the Rust implementation of Apache Arrow and this control over memory. And Rust's packaging system, called crates.io, offers everything that you need out of the box to have features like async and await to fix race conditions, protection against buffer overflows, and thread-safe async caching structures as well. So essentially it has all the fine-grained control you need to take advantage of memory and all your resources as well as possible, so that you can handle those really, really high cardinality use cases. >>Yeah, and the more I learn about the new engine and the platform, IOx, et cetera, you see things like, you know, in the old days, and even today, you do a lot of garbage collection in these systems, and there's an inverse impact relative to performance. So it looks like the community is really modernizing the platform. But I wanna talk about Apache Arrow for a moment. It's designed to address the constraints that are associated with analyzing large data sets. We know that, but please explain what Arrow is and what it brings to InfluxDB. >>Sure, yeah. So Arrow is a framework for defining in-memory columnar data, and so much of the efficiency and performance of IOx comes from taking advantage of columnar data structures. And I will, if you don't mind, take a moment to illustrate why columnar data structures are so valuable. Let's pretend that we are gathering field data about the temperature in our room and also maybe the temperature of our stove. And in our table we have those two temperature values, as well as maybe a measurement value, a timestamp value, and maybe some other tag values that describe what room and what house, et cetera, we're getting this data from.
And so you can picture this table where we have two rows with the two temperature values for both our room and the stove. Well, usually our room temperature is regulated, so those values don't change very often. >>So when you have column-oriented storage, essentially you take each column and group it together. And if that's the case and you're just taking temperature values from the room, and a lot of those temperature values are the same, then you might be able to imagine how equal values, when they neighbor each other in the storage format, provide a really perfect opportunity for cheap compression. And this cheap compression enables high cardinality use cases. It also enables faster scan rates. So if you wanna find the min and max value of the temperature in the room across a thousand different points, you only have to get those thousand points in order to answer that question, and you have them immediately available to you. But let's contrast this with a row-oriented storage solution instead, so that we can better understand the benefits of column-oriented storage. >>So if you had row-oriented storage, you'd first have to look at every field, like the temperature in the room and the temperature of the stove. You'd have to go across every tag value that maybe describes where the room is located or what model the stove is, and every timestamp. You'd then have to pluck out that one temperature value that you want at that one timestamp, and do that for every single row. So you're scanning across a ton more data, and that's why row-oriented doesn't provide the same efficiency as columnar, and Apache Arrow is an in-memory columnar data framework. So that's where a lot of the advantages come from. >>Okay. So you basically described like a traditional database, a row approach, but I've seen a lot of traditional databases say, okay, now we can handle columnar format, versus what you're talking about, which is really kind of native. Is it not as effective? Is the format not as effective because it's largely a bolt-on? Can you elucidate on that front? >>Yeah, it's not as effective, because you have more expensive compression and because you can't scan across the values as quickly. And those are pretty much the main reasons why row-oriented storage isn't as efficient as column-oriented storage. Yeah. >>Got it. So let's talk about Arrow DataFusion. What is DataFusion? I know it's written in Rust, but what does it bring to the table here? >>Sure. So it's an extensible query execution framework, and it uses Arrow as its in-memory format. The way that it helps in InfluxDB IOx is that, okay, it's great if you can write an unlimited amount of cardinality into InfluxDB, but if you don't have a query engine that can successfully query that data, then I don't know how much value it is for you. So DataFusion helps enable the query processing and transformation of that data. It also has a Pandas API, so that you can take advantage of Pandas data frames as well, and all of the machine learning tools associated with Pandas. >>Okay. You're also leveraging Parquet in the platform, because we heard a lot about Parquet in the middle of the last decade as a storage format to improve on Hadoop column stores. What are you doing with Parquet and why is it important? >>Sure. So Parquet is the column-oriented durable file format.
So it's important because it'll enable bulk import and bulk export, and it has compatibility with Python and Pandas, so it supports a broader ecosystem. Parquet files also take very little disk space, and they're faster to scan because, again, they're column oriented. In particular, I think Parquet files are something like 16 times cheaper than CSV files, just as a point of reference. And so that's essentially a lot of the benefits of Parquet. >>Got it. Very popular. So, Anais, what exactly is Influx Data focusing on as a committer to these projects? What is your focus? What's the value that you're bringing to the community? >>Sure. So InfluxData has contributed a lot of different things to the Apache ecosystem. For example, they contributed an implementation of Apache Arrow in Go, and that will support querying with Flux. Also, there have been quite a few contributions to DataFusion for things like memory optimization and support for additional SQL features, like support for timestamp arithmetic, support for EXISTS clauses, and support for memory control. So yeah, Influx has contributed a lot to the Apache ecosystem and continues to do so. And I think the idea here is that if you can improve these upstream projects, then the long-term strategy is that the more you contribute and build those up, the more you will perpetuate that cycle of improvement, and the more we will invest in our own project as well. So it's that kind of symbiotic relationship and appreciation of the open source community. >>Yeah. Got it. You've got that virtuous cycle going, what people call the flywheel. Give us your last thoughts and kind of summarize what the big takeaways are from your perspective. >>So I think the big takeaway is that InfluxData is doing a lot of really exciting things with InfluxDB IOx, and I really encourage, if you are interested in learning more about the technologies that Influx is leveraging to produce IOx, the challenges associated with it and all of the hard questions, and you just wanna learn more, then I would encourage you to go to the monthly tech talks and community office hours; they are on every second Wednesday of the month at 8:30 AM Pacific time. There's also a community forum and a community Slack channel; look for the InfluxDB underscore IOx channel specifically to learn more about how to join those office hours and those monthly tech talks, as well as to ask any questions you have about IOx, what to expect, and what you'd like to learn more about. As a developer advocate, I wanna answer your questions. So if there's a particular technology or stack that you wanna dive deeper into and want more explanation about how InfluxDB leverages it to build IOx, I will be really excited to produce content on that topic for you. >>Yeah, that's awesome. You guys have a really rich community, collaborate with your peers, solve problems, and you guys are super responsive, so really appreciate that. All right, thank you so much, Anais, for explaining all this open source stuff to the audience and why it's important to the future of data. >>Thank you. I really appreciate it. >>All right, you're very welcome. Okay, stay right there, and in a moment I'll be back with Tim Yoakum, he's the director of engineering for Influx Data, and we're gonna talk about how you update a SaaS engine while the plane is flying at 30,000 feet. You don't wanna miss this.
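To make the columnar and Parquet discussion above a little more concrete, here is a small pyarrow sketch: a table with a mostly-repeating room-temperature column, a min/max scan that only touches that one column, and the same table written to Parquet and CSV so the on-disk sizes can be compared. The data is a toy placeholder, and the exact size ratio will vary by dataset:

```python
import os
import pyarrow as pa
import pyarrow.csv as pacsv
import pyarrow.parquet as pq

# Toy data: the room value repeats (regulated temperature), which columnar encodings compress well.
n = 100_000
table = pa.table({
    "ts": pa.array(range(n), type=pa.int64()),
    "room_temp_c": pa.array([21.0] * n),
    "stove_temp_c": pa.array([150.0 + (i % 7) for i in range(n)]),
})

# A min/max over one column only reads that column's values, not every row.
room = table.column("room_temp_c").to_pylist()
print(min(room), max(room))

pq.write_table(table, "sample.parquet")
pacsv.write_csv(table, "sample.csv")
print(os.path.getsize("sample.parquet"), os.path.getsize("sample.csv"))
```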
>>I'm really glad that we went with InfluxDB Cloud for our hosting because it has saved us a ton of time. It's helped us move faster, it's saved us money. And also InfluxDB has good support. My name's Alex Nada. I am CTO at Noble nine. Noble Nine is a platform to measure and manage service level objectives, which is a great way of measuring the reliability of your systems. You can essentially think of an slo, the product we're providing to our customers as a bunch of time series. So we need a way to store that data and the corresponding time series that are related to those. The main reason that we settled on InfluxDB as we were shopping around is that InfluxDB has a very flexible query language and as a general purpose time series database, it basically had the set of features we were looking for. >>As our platform has grown, we found InfluxDB Cloud to be a really scalable solution. We can quickly iterate on new features and functionality because Influx Cloud is entirely managed, it probably saved us at least a full additional person on our team. We also have the option of running InfluxDB Enterprise, which gives us the ability to even host off the cloud or in a private cloud if that's preferred by a customer. Influx data has been really flexible in adapting to the hosting requirements that we have. They listened to the challenges we were facing and they helped us solve it. As we've continued to grow, I'm really happy we have influx data by our side. >>Okay, we're back with Tim Yokum, who is the director of engineering at Influx Data. Tim, welcome. Good to see you. >>Good to see you. Thanks for having me. >>You're really welcome. Listen, we've been covering open source software in the cube for more than a decade, and we've kind of watched the innovation from the big data ecosystem. The cloud has been being built out on open source, mobile, social platforms, key databases, and of course influx DB and influx data has been a big consumer and contributor of open source software. So my question to you is, where have you seen the biggest bang for the buck from open source software? >>So yeah, you know, influx really, we thrive at the intersection of commercial services and open, so open source software. So OSS keeps us on the cutting edge. We benefit from OSS in delivering our own service from our core storage engine technologies to web services temping engines. Our, our team stays lean and focused because we build on proven tools. We really build on the shoulders of giants and like you've mentioned, even better, we contribute a lot back to the projects that we use as well as our own product influx db. >>You know, but I gotta ask you, Tim, because one of the challenge that that we've seen in particular, you saw this in the heyday of Hadoop, the, the innovations come so fast and furious and as a software company you gotta place bets, you gotta, you know, commit people and sometimes those bets can be risky and not pay off well, how have you managed this challenge? >>Oh, it moves fast. Yeah, that, that's a benefit though because it, the community moves so quickly that today's hot technology can be tomorrow's dinosaur. And what we, what we tend to do is, is we fail fast and fail often. We try a lot of things. You know, you look at Kubernetes for example, that ecosystem is driven by thousands of intelligent developers, engineers, builders, they're adding value every day. So we have to really keep up with that. 
And as the stack changes, we, we try different technologies, we try different methods, and at the end of the day, we come up with a better platform as a result of just the constant change in the environment. It is a challenge for us, but it's, it's something that we just do every day. >>So we have a survey partner down in New York City called Enterprise Technology Research etr, and they do these quarterly surveys of about 1500 CIOs, IT practitioners, and they really have a good pulse on what's happening with spending. And the data shows that containers generally, but specifically Kubernetes is one of the areas that has kind of, it's been off the charts and seen the most significant adoption and velocity particularly, you know, along with cloud. But, but really Kubernetes is just, you know, still up until the right consistently even with, you know, the macro headwinds and all, all of the stuff that we're sick of talking about. But, so what are you doing with Kubernetes in the platform? >>Yeah, it, it's really central to our ability to run the product. When we first started out, we were just on AWS and, and the way we were running was, was a little bit like containers junior. Now we're running Kubernetes everywhere at aws, Azure, Google Cloud. It allows us to have a consistent experience across three different cloud providers and we can manage that in code so our developers can focus on delivering services, not trying to learn the intricacies of Amazon, Azure, and Google and figure out how to deliver services on those three clouds with all of their differences. >>Just to follow up on that, is it, no. So I presume it's sounds like there's a PAs layer there to allow you guys to have a consistent experience across clouds and out to the edge, you know, wherever is that, is that correct? >>Yeah, so we've basically built more or less platform engineering, This is the new hot phrase, you know, it, it's, Kubernetes has made a lot of things easy for us because we've built a platform that our developers can lean on and they only have to learn one way of deploying their application, managing their application. And so that, that just gets all of the underlying infrastructure out of the way and, and lets them focus on delivering influx cloud. >>Yeah, and I know I'm taking a little bit of a tangent, but is that, that, I'll call it a PAs layer if I can use that term. Is that, are there specific attributes to Influx db or is it kind of just generally off the shelf paths? You know, are there, is, is there any purpose built capability there that, that is, is value add or is it pretty much generic? >>So we really build, we, we look at things through, with a build versus buy through a, a build versus by lens. Some things we want to leverage cloud provider services, for instance, Postgres databases for metadata, perhaps we'll get that off of our plate, let someone else run that. We're going to deploy a platform that our engineers can, can deliver on that has consistency that is, is all generated from code that we can as a, as an SRE group, as an ops team, that we can manage with very few people really, and we can stamp out clusters across multiple regions and in no time. >>So how, so sometimes you build, sometimes you buy it. How do you make those decisions and and what does that mean for the, for the platform and for customers? >>Yeah, so what we're doing is, it's like everybody else will do, we're we're looking for trade offs that make sense. You know, we really want to protect our customers data. 
So we look for services that support our own software with the most uptime, reliability, and durability we can get. Some things are just going to be easier to have a cloud provider take care of on our behalf. We make that transparent for our own team. And of course for customers you don't even see that, but we don't want to try to reinvent the wheel, like I had mentioned with SQL data stores for metadata, perhaps let's build on top of what of these three large cloud providers have already perfected. And we can then focus on our platform engineering and we can have our developers then focus on the influx data, software, influx, cloud software. >>So take it to the customer level, what does it mean for them? What's the value that they're gonna get out of all these innovations that we've been been talking about today and what can they expect in the future? >>So first of all, people who use the OSS product are really gonna be at home on our cloud platform. You can run it on your desktop machine, on a single server, what have you, but then you want to scale up. We have some 270 terabytes of data across, over 4 billion series keys that people have stored. So there's a proven ability to scale now in terms of the open source, open source software and how we've developed the platform. You're getting highly available high cardinality time series platform. We manage it and, and really as, as I mentioned earlier, we can keep up with the state of the art. We keep reinventing, we keep deploying things in real time. We deploy to our platform every day repeatedly all the time. And it's that continuous deployment that allows us to continue testing things in flight, rolling things out that change new features, better ways of doing deployments, safer ways of doing deployments. >>All of that happens behind the scenes. And like we had mentioned earlier, Kubernetes, I mean that, that allows us to get that done. We couldn't do it without having that platform as a, as a base layer for us to then put our software on. So we, we iterate quickly. When you're on the, the Influx cloud platform, you really are able to, to take advantage of new features immediately. We roll things out every day and as those things go into production, you have, you have the ability to, to use them. And so in the end we want you to focus on getting actual insights from your data instead of running infrastructure, you know, let, let us do that for you. So, >>And that makes sense, but so is the, is the, are the innovations that we're talking about in the evolution of Influx db, do, do you see that as sort of a natural evolution for existing customers? I, is it, I'm sure the answer is both, but is it opening up new territory for customers? Can you add some color to that? >>Yeah, it really is it, it's a little bit of both. Any engineer will say, well, it depends. So cloud native technologies are, are really the hot thing. Iot, industrial iot especially, people want to just shove tons of data out there and be able to do queries immediately and they don't wanna manage infrastructure. What we've started to see are people that use the cloud service as their, their data store backbone and then they use edge computing with R OSS product to ingest data from say, multiple production lines and downsample that data, send the rest of that data off influx cloud where the heavy processing takes place. 
So really us being in all the different clouds and iterating on that and being in all sorts of different regions allows for people to really get out of the, the business of man trying to manage that big data, have us take care of that. And of course as we change the platform end users benefit from that immediately. And, >>And so obviously taking away a lot of the heavy lifting for the infrastructure, would you say the same thing about security, especially as you go out to IOT and the Edge? How should we be thinking about the value that you bring from a security perspective? >>Yeah, we take, we take security super seriously. It, it's built into our dna. We do a lot of work to ensure that our platform is secure, that the data we store is, is kept private. It's of course always a concern. You see in the news all the time, companies being compromised, you know, that's something that you can have an entire team working on, which we do to make sure that the data that you have, whether it's in transit, whether it's at rest, is always kept secure, is only viewable by you. You know, you look at things like software, bill of materials, if you're running this yourself, you have to go vet all sorts of different pieces of software. And we do that, you know, as we use new tools. That's something that, that's just part of our jobs to make sure that the platform that we're running it has, has fully vetted software and, and with open source especially, that's a lot of work. And so it's, it's definitely new territory. Supply chain attacks are, are definitely happening at a higher clip than they used to, but that is, that is really just part of a day in the, the life for folks like us that are, are building platforms. >>Yeah, and that's key. I mean especially when you start getting into the, the, you know, we talk about IOT and the operations technologies, the engineers running the, that infrastructure, you know, historically, as you know, Tim, they, they would air gap everything. That's how they kept it safe. But that's not feasible anymore. Everything's >>That >>Connected now, right? And so you've gotta have a partner that is again, take away that heavy lifting to r and d so you can focus on some of the other activities. Right. Give us the, the last word and the, the key takeaways from your perspective. >>Well, you know, from my perspective I see it as, as a a two lane approach with, with influx, with Anytime series data, you know, you've got a lot of stuff that you're gonna run on-prem, what you had mentioned, air gaping. Sure there's plenty of need for that, but at the end of the day, people that don't want to run big data centers, people that want torus their data to, to a company that's, that's got a full platform set up for them that they can build on, send that data over to the cloud, the cloud is not going away. I think more hybrid approach is, is where the future lives and that's what we're prepared for. >>Tim, really appreciate you coming to the program. Great stuff. Good to see you. >>Thanks very much. Appreciate it. >>Okay, in a moment I'll be back to wrap up. Today's session, you're watching The Cube. >>Are you looking for some help getting started with InfluxDB Telegraph or Flux Check >>Out Influx DB University >>Where you can find our entire catalog of free training that will help you make the most of your time series data >>Get >>Started for free@influxdbu.com. >>We'll see you in class. 
>>Okay, so we heard today from three experts on time series and data, how the Influx DB platform is evolving to support new ways of analyzing large data sets very efficiently and effectively in real time. And we learned that key open source components like Apache Arrow and the Rust Programming environment Data fusion par K are being leveraged to support realtime data analytics at scale. We also learned about the contributions in importance of open source software and how the Influx DB community is evolving the platform with minimal disruption to support new workloads, new use cases, and the future of realtime data analytics. Now remember these sessions, they're all available on demand. You can go to the cube.net to find those. Don't forget to check out silicon angle.com for all the news related to things enterprise and emerging tech. And you should also check out influx data.com. There you can learn about the company's products. You'll find developer resources like free courses. You could join the developer community and work with your peers to learn and solve problems. And there are plenty of other resources around use cases and customer stories on the website. This is Dave Valante. Thank you for watching Evolving Influx DB into the smart data platform, made possible by influx data and brought to you by the Cube, your leader in enterprise and emerging tech coverage.

Published Date : Nov 2 2022


Breaking Analysis: Even the Cloud Is Not Immune to the Seesaw Economy


 

>>From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >>Have you ever been driving on the highway and traffic suddenly slows way down, and then after a little while it picks up again and you're cruising along and you're thinking, Okay, hey, that was weird. But it's clear sailing now. Off we go, only to find out in a bit that the traffic is building up ahead again, forcing you to pump the brakes as the traffic pattern ebbs and flows. Well, welcome to the Seesaw Economy. The Fed-induced fire that prompted an unprecedented rally in tech is being purposefully extinguished now by that same Fed. And virtually every sector of the tech industry is having to reset its expectations, including the cloud segment. Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis we'll review the implications of the earnings announcements from the big three cloud players, Amazon, Microsoft, and Google, who announced this week. >>And we'll update you on our quarterly IaaS forecast and share the latest from ETR with a focus on cloud computing. Now, before we get into the new data, we wanna review something we shared with you on October 14th, just a couple weeks back. This is sort of a we-told-you-it-was-coming slide. It's an XY graph that shows ETR's proprietary net score methodology on the vertical axis, that's a measure of spending momentum, spending velocity, and an overlap or presence in the dataset on the X axis, that's really a measure of pervasiveness in the survey. The table insert there shows Wikibon's Q2 estimates of IaaS revenue for the big four hyperscalers with their year-on-year growth rates. Now we told you at the time, this is data from the July 2022 ETR survey, and ETR hadn't released its October survey results at that time. >>This was just a couple weeks ago. And while we couldn't share the specific data from the October survey, we were able to get a glimpse, and we depicted the slowdown that we saw in the October data with those dotted arrows kind of down and to the right. We said at the time that we were seeing an across-the-board slowdown, even for the big three cloud vendors. Now, fast forward to this past week and we saw earnings releases from Alphabet, Microsoft, and just last night Amazon. Now you may be thinking, okay, big deal, the ETR survey data didn't really tell us anything we didn't already know. But judging from the negative reaction in the stock market to these earnings announcements, the degree of softness surprised a lot of investors. Now, at the time we didn't update our forecast; it doesn't make sense for us to do that when we're that close to earnings season. >>And now that all of the big four, with the exception of Alibaba, have announced, we've updated. And so here's that data. This chart lays out our view of the IaaS and PaaS worldwide revenue. Basically it's cloud infrastructure with an attempt to exclude any SaaS revenue so we can make an apples-to-apples comparison across all the clouds. Now, the reason that "actual" is in quotes is because Microsoft and Google don't report IaaS revenue, but they do give us clues and kind of directional commentary, which we then triangulate with other data that we have from the channel and ETR surveys and just our own intelligence.
Now, the second column there after the vendor name shows our previous estimates for Q3, and then next to that we show our actuals. Same with the growth rates. And then we round out the chart with that lighter blue color highlighting the full-year estimates for revenue and growth. >>So the key takeaways are that we shaved about $4 billion in revenue and roughly 300 basis points of growth off of our full-year estimates. AWS had a strong July but exited Q3 in the mid-20% growth rate year over year, so we're using that guidance, you know, for our Q4 estimates. Azure came in below our earlier estimates, but Google actually exceeded our expectations. Now, the compression in the numbers is in our view a function of the macro demand climate. We've made every attempt to adjust for constant currency, so FX should not be a factor in this data, but you know that the currency effects are weighing on those companies' income statements. And so look, this is the fundamental dynamic of a cloud model, where you can dial down consumption when you need to and dial it up when you need to. >>Now you may be thinking that many big cloud customers have a committed level of spending in order to get better discounts. And that's true. But what's happening, we think, is they'll reallocate that spend toward, let's say for example, lower cost storage tiers, or they may take advantage of better price-performance processors like Graviton, for example. That is a clear trend that we're seeing, and smaller companies that were perhaps paying by the drink, just on demand, they're moving to reserved instance models to lower their monthly bill. So instead of taking the easy way out and just spending more, companies are reallocating their reserved capacity toward lower cost services, so they're spending time and effort optimizing to get more for less, or get more for the same is really how we should phrase it. Whereas during the pandemic, many companies were, you know, perhaps not as focused on doing that because business was booming and they had a response. >>So they just, you know, spent more, dialed it up. So in general, as they say, customers are doing more with the same. Now let's look at the growth dynamic and spend some time on that. I think this is important. This data shows worldwide quarterly revenue growth rates back to Q1 2019 for the big four. So a couple of interesting things. The data tells us during the pandemic, you saw both AWS and Azure buck the law of large numbers and actually accelerate growth. AWS especially saw progressively increasing growth rates throughout 2021 for each quarter. Now that trend, as you can see, has reversed in 2022 for AWS. Now we saw Azure come down a bit, but it's still in the low forties in terms of percentage growth, while Google actually saw an uptick in growth this last quarter for GCP, by our estimates, as GCP is becoming an increasingly large portion of Google's overall cloud business. >>Now, unfortunately Google Cloud continues to lose north of $850 million per quarter, whereas AWS and Azure are profitable cloud businesses, even though Alibaba is suffering its woes from China. And we'll see how they come in when they report in mid-November. The overall hyperscale market grew at 32% in Q3 in terms of worldwide revenue. So the slowdown isn't due to repatriation or competition from on-prem vendors in our view, it's a macro-related trend.
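To make the arithmetic behind those estimates concrete, here is a small Python sketch of the year-over-year growth and full-year roll-up calculations. The revenue figures are placeholders, not Wikibon's actual numbers.

```python
# Illustrative only: year-over-year growth and full-year roll-up from quarterly
# revenue estimates. The figures below are placeholders, not actual estimates.
def yoy_growth(current_q: float, same_q_last_year: float) -> float:
    """Year-over-year growth rate, expressed as a percentage."""
    return (current_q / same_q_last_year - 1.0) * 100.0

q3_2021 = 16.0   # hypothetical Q3 2021 revenue, $B
q3_2022 = 20.5   # hypothetical Q3 2022 revenue, $B
print(f"YoY growth: {yoy_growth(q3_2022, q3_2021):.1f}%")   # 28.1%

# A full-year estimate is just the sum of the four quarterly estimates.
quarters = [18.4, 19.7, 20.5, 21.8]
print(f"Full-year estimate: ${sum(quarters):.1f}B")          # $80.4B
```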
And cloud will continue to significantly outperform other sectors despite its massive size. You know, on the repatriation point, it just still doesn't show up in the data. The a16z article from Sarah Wang and Martin Casado claiming that repatriation was inevitable as a means to lower cost of goods sold for SaaS companies, you know, while that was thought provoking, it hasn't shown up in the numbers. And if you read the financial statements of both AWS and its partners like Snowflake, and you dig into the quarterly reports, you'll see little notes and comments with their ongoing negotiations to lower cloud costs for customers. >>AWS and no doubt execs at Azure and GCP understand that the lifetime value of a customer is worth much more than near-term gross margin. And you can expect the cloud vendors to strike a balance between profitability, near-term profitability anyway, and customer retention. Now, even though Google Cloud Platform saw accelerated growth, we need to put that in context for you. So GCP, by our estimate, has now crossed over the $3 billion per quarter mark, actually did so last quarter, but its growth rate accelerated to 42% this quarter. And so that's a good sign in our view. But let's do a quick little comparison with when AWS and Azure crossed the $3 billion mark and compare their growth rates at the time. So if you go back to Q2 2016, as we're showing in this chart, that's around the time that AWS hit $3 billion per quarter, and at the same time was growing at 58%. >>Azure, by our estimates, crossed that mark in Q4 2018 and at that time was growing at 67%. Again, compare that to Google's 42%. So one would expect Google's growth rate would be higher than its competitors' at this point in the maturity of its cloud, which it's, you know, really not when you compare it to Azure. I mean, they're kind of comparable now, today, but if you go back, you know, to that $3 billion mark, and more so looking at history, you'd like to see its growth rate at this point of a maturity model at least over 50%, which we don't believe it is. And one other point on this topic, you know, my business friend Matt Baker from Dell often says it's not a zero-sum game, meaning plenty of opportunity exists to build value on top of hyperscalers. >>And I would totally agree. It's not a dollar-for-dollar swap if you can continue to innovate. But history will show that the first company in makes the most money, number two can do really well, and number three tends to break even. Now maybe cloud is different because you have Microsoft's software estate and the power behind that, and that's driving its IaaS business, and Google ads are funding technology buildouts for Google and GCP. So you know, we'll see how that plays out. But right now, by this one measurement, Google is four years behind Microsoft and six years behind AWS. Now, to the point that cloud will continue to outpace other markets, let's break this down a bit in spending terms and see why this claim holds water. This is data from ETR's latest October survey that shows the granularity of its net score or spending velocity metric.
You subtract the reds from the greens and you get what's called net score, which is that blue dot that you can see on each of the bars. So what you see in the table insert is that all three have net scores above 40%, which is a highly elevated measure. Microsoft's net score is above 60%, AWS is well into the fifties, and GCP is in the mid forties. So all good. Now, what's happening with all three is more customers are keeping their spending flat. So a higher percentage of customers are saying our spending is now flat than it was in previous quarters, and that's what's accounting for the compression. >>But the churn of all three, even GCP, which we reported last quarter from last quarter's survey was 5x the other two, is actually very low, in the single digits. So that might have been an anomaly. So that's a very good sign in our view. You know, again, customers aren't repatriating in droves. It's just not a trend that we would bet on. Maybe it makes for FUD or, you know, a good marketing headline, but it's just not a big deal. And you can't help but be impressed with both Microsoft and AWS's performance in the survey. And as we mentioned before, these companies aren't going to give up customers to try and preserve a little bit of gross margin. They'll do what it takes to keep people on their platforms, cuz they'll make up for it over time with added services and improved offerings. >>Now, once these companies acquire a customer, they'll be very aggressive about keeping them. So customers take note: you have negotiating leverage, so use it. Okay, let's look at another cut at the cloud market from the ETR data set. Here's the two-dimensional view, again, it's back, it's one of our favorites. Net score or spending momentum plotted against presence in the data set, that's the X axis; net score is on the vertical axis. This is a view of ETR's cloud computing sector. You can see we put that magic 40% dotted red line in there, and the table insert shows how the data are plotted, with net score against presence, i.e., N in the survey. Notably, only the big three are above the 40% line of the names that we're showing here. There are others. >>I mean, if you put Snowflake on there, it'd be higher than any of these names, but we'll dig into that name in a later Breaking Analysis episode. Now, this is just another way of quantifying the dominance of AWS and Azure, not only relative to Google, but the other cloud platforms out there. So we've taken the opportunity here to plot IBM and Oracle, which both own a public cloud. Their performance is largely a reflection of them migrating their install bases to their respective public clouds and/or hybrid clouds. And you know, that's fine, they're in the game. That's a point that we've made, you know, a number of times. They were able to make it through the cloud era, maybe not whole, but they at least have one. But they simply don't have the business momentum of AWS and Azure, which is actually quite impressive, because AWS and Azure are now as large or larger than IBM and Oracle. >>And to show this type of continued growth that Azure and AWS show at their size is quite remarkable. And customers are starting to recognize the viability of on-prem, you know, hybrid clouds like HPE GreenLake and Dell's APEX. You know, you may say, well, that's not cloud, but if the customer thinks it is and is reporting in the survey that it is, we're gonna continue to report this view.
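The net score methodology described above reduces to simple arithmetic on the survey buckets. Here is a hedged sketch of that calculation; the bucket percentages are invented for illustration and are not actual ETR survey results.

```python
# Illustrative sketch of ETR-style net score arithmetic.
# The bucket percentages below are invented, not actual ETR survey data.
def net_score(adopting: float, spending_more: float, flat: float,
              spending_less: float, churning: float) -> float:
    """Net score = (new adoptions + spending more) - (spending less + churn)."""
    total = adopting + spending_more + flat + spending_less + churning
    assert abs(total - 100.0) < 1e-6, "bucket percentages should sum to 100"
    return (adopting + spending_more) - (spending_less + churning)

score = net_score(adopting=12.0, spending_more=45.0, flat=35.0,
                  spending_less=6.0, churning=2.0)
print(f"Net score: {score:.0f}%")  # 49%, above the 40% 'highly elevated' line
```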
You know, I don't know what's happening with HPE. They had a big downtick this quarter, and I don't read too much into that because their N is still pretty small at 53. So big fluctuations are not uncommon with those types of smaller Ns, but it's over 50. So, you know, we did notice a negative within the giant public and private sector, which is often a bellwether. Giant public and private is big public companies and large private companies, like a Mars, for example. >>So it, you know, it looks like for HPE it could be an outlier. We saw within the Fortune 1000, HPE's cloud looked actually really good and it had good spending momentum in that sector. When you dig into the industry data within the ETR dataset, obviously we're not showing that here, but we'll continue to monitor that. Okay, so where does this leave us? Well look, this is really a tactical story of currency and macro headwinds, as you can see. You know, we've laid out some of the points on this slide. The action in the stock market today, which is Friday after some of the soft earnings reports, is really robust. You know, we'll see how it ends up in the day. So maybe this is a sign that the worst is over, but we don't think so. The visibility from tech companies is murky right now, as most are guiding down, which indicates that their conservative outlook last quarter was still too optimistic. >>But as it relates to cloud, that platform is not going anywhere anytime soon. Sure, there are potential disruptors on the horizon, especially at the edge, but we're still a long ways off from the possibility that a new economic model emerges from the edge to disrupt the cloud, and the opportunities in the cloud remain strong. I mean, what other path is there? Really, private cloud? It was kind of a bandaid until the on-prem guys could get their as-a-service models rolled out, which is just now happening. The hybrid thing is real, but it's, you know, defensive for the incumbents until they can get their supercloud investments going. Supercloud implying capturing value above the hyperscaler CapEx, you know, call it what you want: what multi-cloud should have been, the metacloud, the Uber cloud, whatever you like. But there are opportunities to play offense, and that's clearly happening in the cloud ecosystem with the likes of Snowflake, Mongo, HashiCorp. >>Hammerspace is a startup in this area. Aviatrix, CrowdStrike, Zscaler, Okta, many, many more. And even the projects we see coming out of enterprise players like Dell, like with Project Alpine, and what Pure Storage is doing, along with a number of other of the backup vendors. So Q4 should be really interesting, but the real story is the investments that companies are making now to leverage the cloud for digital transformations will be paying off down the road. This is not 1999. We, you know, may have had some good ideas back then, and admittedly a lot of bad ones too, but you didn't have the infrastructure to service customers at a low enough cost like you do today. The cloud is that infrastructure, and so far it's been transformative, but it's likely the best is yet to come. Okay, let's call this a wrap. >>Many thanks to Alex Myerson, who does production and manages the podcast. Also, Ken Schiffman is our newest addition to the Boston studio. Kristen Martin and Cheryl Knight helped get the word out on social media and in our newsletters. And Rob Hof is our editor-in-chief over at SiliconANGLE.com, who does some wonderful editing for us. Thank you.
Remember, all these episodes are available as podcasts. Wherever you listen, just search Breaking Analysis podcast. I publish each week on wikibon.com and siliconangle.com. And you can email me at david.vellante@siliconangle.com or DM me @dvellante or comment on my LinkedIn posts. And please do check out etr.ai. They've got the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, and we'll see you next time on Breaking Analysis.

Published Date : Oct 29 2022


Stephen Chin, JFrog | KubeCon + CloudNativeCon NA 2022


 

>>Good afternoon, brilliant humans, and welcome back to theCUBE. We're live in Detroit, Michigan at KubeCon, and I'm joined by John Furrier. John, three exciting days, buzzing. How you doing? >>That's great. I mean, we're coming down to the third day. We're keeping the energy going, but this segment's gonna be awesome. The CD Foundation's doing amazing work. Developers are gonna be running businesses and workflows are changing. Productivity's the top conversation, and you're gonna start to see a coalescing of the communities who are continuous delivery, and it's gonna be awesome. >>And our next guest is an outstanding person to talk about this. We are joined by Stephen Chin, the chair of the CD Foundation. Stephen, thanks so much for being here. >>No, no, my pleasure. I mean, this has been an amazing week here at KubeCon with all of the announcements, all of the people who came out here to Detroit, and, you know, fantastic. Like just walking around, you bump into all the right people here. Plus we held a CD Summit zero-day event, and had a lot of really exciting announcements this week. >>Gotta love the shirt. I gotta say, it's one of my favorites. Love the logos. Love the branding. That project got traction. What's the news in the CD Foundation? I tried to sneak in the back. I got in a little late to your co-located event. It was packed. Everyone's engaged. It looked really cool. Give us the update. >>What's the news? Yeah, I know. So we had a really, really powerful event. All the key practitioners, the open source leads and folks were there. And one of the things which I think we've done a really good job of in the past six months with the CD Foundation is getting back to the roots and focusing on technical innovation, right? This is what drives foundations: having strong projects, having people who are building innovation, and also bringing in new innovation. So one of the projects which we added to the CD Foundation this week is called Pyrsia. So it's a decentralized package repository for getting open source libraries. And it solves a lot of the problems which you get when you have centralized infrastructure. You don't have the right security certificates, you don't have the right verification libraries. And these are all things which large companies provision and build out inside of their infrastructure, but the open source communities don't have the benefit of the same sort of really, really strong architecture for a lot of the systems we depend upon. It's >>A good point, yeah. >>Yeah. I mean, if you think about the systems that developers depend upon, we depend upon, you know, npm, RubyGems, Maven Central, and these systems have been around for a while. Like, they serve the community well, right? They're well supported by the companies and it's really a great contribution that they give us. But every time there's an outage or there's a security issue... guess how many security issues our research team found on npm? Just ballpark. >>74. >>So there're >>It's gotta be thousands. I mean, it's gotta be a lot of tons >>Of Yeah, >>They're currently up to 60,000 >>Whoa. >>Vulnerable, malicious packages on npm, and >>Oh my gosh. So that's a jarring number even. I knew it was gonna be huge, but holy moly. >>Yeah. So that's a software supply chain issue actually right there. So that's, that's open source. Everything's out there. What's, how do, how does, how do you guys fix that?
>>Yeah, so Pyrsia kind of shifts the whole model. So when you think about a system that can be sustained, it has to be something which is not just one company. It has to be a set of companies, be vendor neutral, and be decentralized. So that's why we donated it to the Continuous Delivery Foundation, so that can be that governance body which makes sure it's not a single company. It has to use modern technologies. So you just need something which is immutable, so it can't be changed, so you can rely on it. It has to have a strong transaction ledger so you can see all of the history of it. You can build up your software bills of materials off of it. And it has to have a strong peer-to-peer architecture, so it can be sustained long term. >>Stephen, you mentioned something I want to just get back to. You mentioned outages and disruption. You didn't say just the outages, but this whole disruption angle is interesting if something happens. Talk about the impact on the developer. They're stalled; inefficiencies create basically disruption. >>No, I mean, if you think about most DevOps teams in big companies, they support hundreds or thousands of teams, and in an hour of outage, all those developers, they can't program, they can't work. And that's a huge loss of productivity for the company. Now, if you take that up a level, when npm goes down for an hour, how many millions of man hours are wasted by not being able to get your builds working, by not being able to get your code to compile? Like it's, it's >>Like, yeah, I mean, it's almost hard to fathom. I mean, everyone's, It's stopped. Exactly. It's literally like having the plug pulled >>Exactly, on whatever you're working on. That's the fundamental problem we're trying to solve, is it needs to be on a, like a well supported, well architected peer-to-peer network with some strong backing from big companies. So the companies working on Pyrsia include JFrog, which is who I work for, Docker, Oracle. We have DeployHub, Huawei, a whole bunch of other folks who are also helping out. And when you look at all of those folks, they all have different interests, but it's designed in a way where no single party has control over the network. So really it's a system where you're not relying upon one company or one logo. You're relying upon a well-architected open source implementation that everyone can rely >>On. That's shared software, but it's kind of a fault-tolerant feature too. It's like, okay, if something happens here, you have a distributed piece of it, decentralized; you're not gonna go down. You can remediate. All right, so where's this go next? I mean, cuz we've been talking about the role of the developer. This needs to be a modern, I won't say modern upgrade, but like a modern workflow or value chain. What's your vision? How do you see that? Cuz you're the center of the CD Foundation coming together. People are gonna be coalescing, multiple groups. Yeah. >>What's the, No, I think this is a good point. So there's a lot of different continuous delivery, continuous integration technologies. We're actually, from a Linux Foundation standpoint, we're coalescing all the continuous delivery events into one big conference >>Next. You just made an announcement about this earlier this week. Tell us about CDEvents. What's going on, what's in the cooker?
>>Yeah, and I think one of the big announcements we had was the 0.1 release of CDEvents. And CDEvents allows you to take all these systems and connect them in a scalable, event-oriented architecture. The first integration is between Tekton and Keptn. So now you can get CDEvents flowing cleanly between your continuous delivery and your observability. And this extends through your entire DevOps pipeline. We all need a standards-based framework Yep. for how we get all the disparate continuous integration, continuous delivery, observability systems to work together that's also high performance, scales with our needs, and kind of gives you a future architecture to build on top of. So a lot of the companies I was talking with at the CD Summit Yeah. were very excited about not only using this with the projects we announced, but using this internally as an architecture to build their own DevOps pipelines on. >>I bet that feels good to hear. >>Yeah, absolutely. Yeah. >>Yeah. You mentioned Tekton, they just graduated. I saw... how many projects have graduated? >>So we have two graduated projects right now. We have Jenkins, which is the first graduated project. Now Tekton is also graduated. And I think this shows that for Tekton it was time: a very mature project, great support, getting a lot of users, and having them join the set of graduated projects. And the Continuous Delivery Foundation is a really strong portfolio. And we have a bunch of other projects which are also on their way towards graduation. >>Feels like a moment of social proof, I bet, >>For you all. Yeah, yeah. No, it's really good. Yeah. >>How long has the CD Foundation been around? >>The CD Foundation has been around for, I won't wanna say the exact number of years, a few years now. >>Okay. >>But I think that it was formed because what we wanted is a foundation which was purpose-built. So CNCF is a great foundation. It has a very large umbrella of projects, and it takes kind of that big umbrella approach where a lot of different efforts are joining it, a lot of things are happening and you can get good traction, but it produces its own bottlenecks in process. Having a foundation which is just about continuous delivery caters to more of a DevOps, professional DevOps audience. I think this gives a good platform for best practices. We're working on a new CDF best practices Yeah. guide. We're working on use cases with all the member companies. And it gives that thought leadership platform for continuous delivery, which you need to be an expert in that area. >>And the best practices too, and to identify the issues. Because at the end of the day, the big thing that's coming out of this is velocity and more developers coming on board. I mean, this is the big thing: more people doing more. Yeah. Well yeah, I mean, you take this open source continuous delivery thunder away, you have more developers coming in, they'll be more productive, and then people are gonna lean in either on the DevOps side or on the straight app dev side. And this is gonna be a huge issue. And the other thing that comes out that I wanna get your thoughts on is the supply chain issue you talked about. This hot topic of verification and certification of code is such a big issue. Can you share your thoughts on that? Because, yeah, this has become, I won't say a business model for some companies, but it's also becoming critical for security that code's verified. >>Yeah. Okay.
>>So I think one of the things which we're specifically doing with the Pyrsia project, which is unique, is rather than distributing, for example, libraries that you developed on your laptop and compiled there, or maybe they were built on, you know, a runner somewhere like Travis CI or GitHub Actions, all the libraries being distributed on Pyrsia are built by the authorized nodes in the network, and then they're verified across all of the authorized nodes. So the basic guarantee we're giving you is when you download something from the Pyrsia network, you'll get exactly the same binary as if you built it yourself from source. >>So there's a lot of trust >>And transparency. Yeah, exactly. And if you remember back to like kind of the seminal project which kicked off this whole supply chain security whirlwind, it was SolarWinds. Yeah. Yeah. And the exact problem they hit was the build ran, it produced a result, they modified the code of the resulting binary, and then they signed it. So if you built with the same source and then you went through that same process a second time, you would've gotten a different result, which was a malicious build, right. Yeah. And it's very hard to take a binary file Yep. and determine if there's malicious code in it. Cuz it's not like source code. You can't inspect it, you can't do a code audit. It's totally different. So I think we're solving a key part of this with Pyrsia, where you're freeing open source projects from the possibility of having their binaries, their packages, their end releases tampered with. And also upstream from this, you do want to have verification of PRs, people doing code reviews, making sure that they're looking at the source code. And I think there's a lot of good efforts going on in the Open Source Security Foundation. So I'm also on the governing board of OpenSSF >>To Do you sleep? You have three jobs, you've said on camera? No, I can't even imagine. Yeah. Didn't >>You just spin that out from this open source security? Is that the new one they >>Spun out? Yeah. So the Open Source Security Foundation is one of the new Linux Foundation projects. They have been around for a couple years, but they did a big reboot last year around this time. And I think what they've really done a good job of now is bringing all the industry players to the table, having dialogue with government agencies, figuring out, like, what do we need to do to support open source projects? Is it more investment in memory-safe languages? Do we need to have more investment in code audits or, like, security reviews of open source projects? Lots of things. And all of those things require money investments. And that's what all the companies, including JFrog, are doing to advance open source supply chain security. I >>Mean, it's really kind of interesting to watch some different demographics of the developers and the vendors and the customers. On one hand, if you're a hardware company, you talk zero trust; your software, your top trust, so your trusted code, and you got zero trust. It's interesting, depending on where you're coming from, they're all trying to achieve the same thing. It means zero trust makes sense. But then also, I got code, I want trust. Trust and verified. So security is in everything now. So code. So how do you see that traversing over? Is it just semantics or what's your view on that?
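The "same binary as if you built it yourself" guarantee described above comes down to comparing content digests of independently produced artifacts. Here is a minimal Python sketch of that check; it is not Pyrsia's actual verification protocol, just the underlying reproducible-build idea, and the file paths are hypothetical.

```python
# Illustrative sketch: check that a downloaded artifact matches a local rebuild
# by comparing SHA-256 digests. This is NOT Pyrsia's actual protocol, only the
# basic reproducible-build idea; the file paths below are made up.
import hashlib
from pathlib import Path

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

downloaded = "downloads/libexample-1.2.3.tar.gz"   # fetched over the network
rebuilt = "build/libexample-1.2.3.tar.gz"          # built locally from source

if Path(downloaded).exists() and Path(rebuilt).exists():
    if sha256_of(downloaded) == sha256_of(rebuilt):
        print("verified: artifact matches the local reproducible build")
    else:
        print("MISMATCH: possible tampering in the distribution chain")
```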
>>The right way of looking at security is from the standpoint of the hacker, because they're always looking for >>Well said, very well said, New >>Loopholes, new exploits. And they're very, very smart people. And I think when you look, some >>Of the smartest >>Yeah, yeah, yeah. I work with, well, former hackers now, security researchers, >>They converted, they're >>Recruited. But when you look at them, there's like two main classes of, like, types of exploits. So some attacker groups, what they're looking for is they're looking for zero days, CVEs, like existing vulnerabilities that they can exploit to break into systems. But there's an increasing number of attackers who are now on the opposite end of the spectrum, and what they're doing is they're creating their own exploits. So they're, for example, putting malicious code into open source projects. Little >>Trojan horse status. Yeah. >>They're getting their little Trojan horses in. Yeah. Or they're finding supply chain attacks by maybe uploading a malicious library to npm or to PyPI. And by creating these attacks, especially ones that start at the top of the supply chain, you have such a large reach. >>I was just gonna say, it almost gives me chills as we're talking about it, the systemic, So this is this >>Gnarly nation-state attackers, like people who want serious >>Damages. Engineered hacks. Just said they're highly funded. Highly skilled. Exactly. Highly agile, highly focused. >>Yes. >>Teams, team. Not in the teams. >>Yeah. And so one example of this, which actually netted quite a lot of money for the hacker who exposed it was, you guys probably heard about this, but it was an attack where they uploaded a malicious library to npm with the same exact namespace as a corporate library, and clever, >>Creepy. >>It's called a dependency confusion attack. And what happens is, if you don't have the right sort of security package management guidelines inside your company, and it's just looking for the latest version, merging multiple repositories as, like, a single view, a lot of companies were accidentally picking up the latest version, which was out on npm, uploaded by Alex Birsan, who was the one who did the attack. And he simultaneously reported bug bounties on, like, a dozen different companies and netted $130K. Wow. So these sorts of attacks, they're real Yep. They're exploitable. And the hackers >>Complex >>who are finding these sorts of attacks now in our supply chain are the ones who really are the most dangerous. That's the biggest threat to us. >>Yeah. And we have HackerOne out there. You got a bunch of other services; the white hat hackers get the bounties. That's really important. All right. What's next? What's your vision of this show as we end KubeCon? What's the most important story coming out of KubeCon in your opinion? And what are you guys doing next? >>Well, I actually think this is probably not what most folks would say is the most exciting story at KubeCon, but I find this personally the best, is >>I can't wait for this now. >>So, on Sunday, the CNCF ran the first kids' day. >>Oh. >>And so they had a free kids workshop for, you know, underprivileged kids from the >>About, That's >>Detroit area. It was taught by some of the folks from the CNCF community. So Arro, Eric hen, my older daughter Cassandra's also an instructor.
So she also was teaching a Raspberry Pi workshop. >>Amazing. And she's >>Here and Yeah, Yeah. She's also here at the show. And when you think about it, you know, there's always, there's, you know, hundreds of announcements this week, a lot of exciting technologies, some of which we've talked about. Yeah. But what really matters is the community. >>This is a community-first event >>And the people. And like, if we're giving back to the community and helping Detroit's kids to get better at technology, to get educated, I think that it's worthwhile for all of us to be here. >>What a beautiful way to close it. That is such, I'm so glad you brought that up and brought that to our attention. I wasn't aware of that. Did you know that was >>Happening, John? No, I know about that. Yeah. No, that was, And that's next generation too. And what we need, we need to get down into the elementary schools. We gotta get to the kids. They're all doing robotics club anyway in high school. Computer science is now a >>Sport, in my opinion. Well, I think that if you're in a privileged community, though, I don't think that every school's doing robotics. And >>That's why, Well, Cal Poly and the universities are stepping up, and I think CNCF leadership is amazing here. And we need more of it. I mean, I'm bullish on this. I love it. And I think that's a really great story. No, >>I am. Absolutely. And it just goes to show how committed CNCF is to community, putting community first, and Detroit. There has been such a celebration of Detroit this whole week. Stephen, thank you so much for joining us on the show. Best wishes with the CD Foundation. John, thanks for the banter as always. And thank you for tuning in to us here live on theCUBE in Detroit, Michigan. I'm Savannah Peterson, and we are having the best day. I hope you are too.
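As a follow-on to the dependency confusion attack described in the interview above, here is a small hedged sketch of one defensive check: for each internal package name, ask the public npm registry whether a package with that exact name already exists there. The internal package names are made up, and real mitigations also involve scoped packages and locked-down registry configuration.

```python
# Illustrative sketch: flag internal package names that also exist on the
# public npm registry, a precondition for dependency confusion attacks.
# The internal package names below are hypothetical.
import requests

INTERNAL_PACKAGES = ["acme-billing-core", "acme-ui-widgets"]

def exists_on_public_npm(name: str) -> bool:
    # The public registry returns 200 with metadata when a name is taken,
    # and 404 when it is free.
    resp = requests.get(f"https://registry.npmjs.org/{name}", timeout=10)
    return resp.status_code == 200

for pkg in INTERNAL_PACKAGES:
    if exists_on_public_npm(pkg):
        print(f"WARNING: '{pkg}' exists on the public registry (confusion risk)")
    else:
        print(f"OK: '{pkg}' is not on the public registry")
```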

Published Date : Oct 28 2022


Day 2 Keynote Analysis & Wrap | KubeCon + CloudNativeCon NA 2022


 

>>Set restaurants. And who says TEUs had got a little ass more skin in the game for us, in charge of his destiny? You guys are excited. Robert Worship is Chief Alumni. >>My name is Dave Vellante, and I'm a longtime industry analyst. So when you're as old as I am, you've seen a lot of transitions. Everybody talks about industry cycles and waves. I've seen many, many waves. Met a lot of industry executives, and I'm a little bit of an industry historian. When you interview many thousands of people, probably five or 6,000 people as I have over the last half of a decade, you get to interact with a lot of people's knowledge, and you begin to develop patterns. And so that's sort of what I bring: an ability to catalyze the conversation and, you know, share that knowledge with others in the community. Our philosophy is everybody's an expert at something. Everybody's passionate about something and has real deep knowledge about that something. Well, we wanna focus in on that area and extract that knowledge and share it with our communities. This is Dave Vellante. Thanks for watching theCUBE. >>Hello everyone, and welcome back to theCUBE, where we are streaming live this week from KubeCon. I am Savannah Peterson, and I am joined by an absolutely stellar lineup of CUBE brilliance this afternoon. To my left, a familiar face, Lisa Martin. Lisa, how you feeling? End of day two. >>Excellent. It was so much fun today. The buzz started yesterday, the momentum, the swell, and we only heard even more greatness today. >>Yeah, yeah, absolutely. You know, I sometimes think we've hit an energy cliff, but it feels like the energy is just >>Continuous. Well, I think we're gonna slide right into tomorrow. >>Yeah, me too. I love it. And we've got two fantastic analysts with us today, Sarbjeet and Keith. Thank you both for joining us. We feel so lucky today. >>Great being back on. >>Thanks for having us. Yeah, Yeah. It's nice to have you back on the show. We had you yesterday, but I miss hosting with you. It's been a while. >>It has been a while. We haven't done anything since, Since pre >>Pandemic, right? Yeah, I think you're >>Right. Four times there >>Be four times back in the day. >>We, I always enjoy hosting with Lisa, cuz she's so well prepared. I don't have to do any research when I come >>Home. >>Lisa will bring up some, Oh, sorry. Jeep, I see that in 2008 you won this award for Yeah. Being just excellent, and I'm like, Oh >>Yeah. All right, Keith. So, >>So did you do his analysis? >>Yeah, it's all done. Yeah. Great. The only part, he's not sitting next to me too. We can't see it, so it's gonna be like a magic crystal ball. Right. So a lot of people here. You got some stats in terms of the attendees compared >>To last year? Yeah, Priyanka told us we were double last year, up to 8,000. We also got the scoop earlier that 2023 is gonna be in Chicago, which is very exciting. >>Oh, that is, is nice. Yeah, >>We got to break that here. >>Excellent. Keith, talk to us about what some of the things are that you've seen the last couple of days. The momentum. What's the vibe? I saw your tweet about the top three things you were being asked. Kubernetes was not one of them. >>Kubernetes was not one of 'em. This conference is starting to, it still feels very different than a vendor conference. The keynote is kind of, you know, all over the place, talking about projects, but the hallway track has been, you know, I've, this is maybe my fifth or sixth KubeCon in person. And the hallway track is different. It's less about projects and more about how, how do we adjust to the enterprise? How do we Yes. actually do enterprise things? And it has been amazing watching this community grow. I'm gonna say grow up and mature. Yes. You know, they're not wearing ties yet, but they are definitely understanding kind of the friction of implementing new technology in an enterprise. >>Yeah. So Sarbjeet, what's been your take? We were with you yesterday. What's been the take today, the takeaways? >>Not much has changed since yesterday, but a few things I think I missed talking about yesterday were that, first of all, let's just talk about Amazon. Amazon earnings came out, it spooked the market, and I think it's relevant in this context as well, because they're the number one cloud provider. Yeah. And almost all of these technologies on the back of us here, they are related to cloud, right? So it will have some impact on these. Like, we have to analyze that. Like, will it make the open source go faster or slower in lieu of the fact that the cloud growth is slowing? Right? So that's one thing, put that aside. I've been thinking about the future of Kubernetes. What is the future of Kubernetes? And in that context, I was thinking, like, you know, I think in tangents, like, what else is around this thing? So I think CNCF has been riding the success of Kubernetes. That was their number one flagship project, if you will. And it was mature enough to stand on its own. It was Google's Borg, dubbed Kubernetes; it's a genericized version of that. Right? So folks who do tech deep down, they know that, Right. So I think it's easier to stand with a solid, you know, project. But when the newer projects come in, then your mettle will get tested at CNCF. Right. >>And CNCF, I mean, they've got over 140 projects Yeah. Right now. So there's definitely much beyond >>Kubernetes. Yeah. So they, I have numbers there: 18 graduated, right, 37 in incubation, and then 81 in sandbox stage. They have three stages, right? So they have a lot to chew on, and the more they take on, the less, you know, quality that goes into it. Who's putting the money behind it? Which vendors are sponsoring, like, CNCF, like, how they're getting funded up. I think it >>Something I pay attention to as well. Yeah. Yeah. Lisa, I know you've got >>Some insight. Those are the things I was thinking about today. >>I gotta ask you, what's your take on what Keith said? Are you also seeing the maturation of the enterprise here at KubeCon? >>Yes, I am actually. When you say enterprise versus, what's the other side? Startups, right? Yeah. So startups start using open source a lot earlier, or a lot more than enterprises. The enterprise, what they need, the number one thing is, for their production workloads, they want a vendor supporting them. I said that yesterday as well, right? So it depends on the size of the enterprise. If you're a big shop, definitely if you're one of the Fortune 500s and you're a tech-savvy shop, then you can absorb the open source directly, coming from the open source sort of universe, right, coming to you. But if you are the second tier of enterprise, you want to go to a provider, which is a managed service provider, or it can be a cloud service provider in this case. Yep. Most of the cloud service providers have multiple versions of Kubernetes, for example. >>I'm not talking about Kubernetes only, but like, that is one example, right? So at Amazon you can get five different flavors of Kubernetes, right? Fully managed, half managed, all kinds of stuff. So people don't have bandwidth to manage that stuff locally. You have to patch it, you have to roll in the new, you know, updates and all that stuff. Like, it's a lot of work for many. So CNCF actually is formed for that reason. Like, the charter is to bring quality to open source. Like, in other companies they have the release process and the stringent guidelines and QA and all that stuff. So, is something ready for production? That's the question when it comes to any software, right? So they do that kind of work, and they have these buckets defined at a high level, but it needs more >>Work. Yeah. So one of the things that, you know, kind of stood out to me, I have a good friend in the community, Alex Ellis, who does OpenFaaS. It's a serverless platform, great platform. Two years ago, or in 2019, there was a Serverless Day. And in Serverless Day you had Knative, you had OpenFaaS, you had OpenWhisk, which is supported by IBM, completely not CNCF platforms. Knative came into the CNCF fold when Google donated the project a few months ago or a couple of years ago; now all of a sudden there's a Knative Day. Yes. Not a Serverless Day, it's a Knative Day. And I asked the CNCF event folks, like, what happened to Serverless Day? I missed having OpenFaaS at Serverless Day. And you know, they came out and said, you know what, Knative got big enough. >>They came in, and I think Red Hat and Google wanted to sponsor a Knative Day, so Serverless Day went away. So I think what I'm interested in over the next couple of years is, is there gonna be pushback from the community against the CNCF? Is the CNCF now too big? Is it now the gatekeeper, where do I have to be one of those 147 projects, right, in order to get my project noticed? OpenFaaS, great project. I don't think Alex has any desire to have his project hosted by the CNCF, but it probably deserves, you know, similar recognition alongside that. So I'm pushing to happen to say, okay, if this is open community, this is open source, if CNCF is the place to have the cloud native conversation, what about the projects that are not CNCF? Like, how do we have that conversation when we don't have the power of a Google, right, or a Linux Foundation, et cetera. So Sarbjeet, what, >>What are your thoughts on that? Is CNCF too big? >>I don't think it's too big. I think it's too small to handle what we are doing in open source, right? So it can become a bottleneck. Okay. I think too big in a way that, yeah, it has power from that point of view. It has that clout, if you will. The people listen to it. If it's a CNCF project, oh, this must be good. It's like in incubators: if you are a Y Combinator, you know, company, it must be good. You know, I mean, may not be >>True, but, >>Oh, I think there's a bold assumption there, though. I mean, I think everyone's just trying to do the best they can. And when we're evaluating projects of very different origin and background, it's incredibly hard. The CNCF is a staff of 30 people. They've got 180,000 people that are contributing to these projects and a thousand maintainers that they're trying to uphold. I think the challenge is actually really great. And to me, I actually look at events as an illustration of, you know, what's the culture and the health of an organization. If I were to evaluate CNCF based on that, I'd say we're very healthy right now. I would say that we're in a good spot. There's a lot of momentum. >>Yeah. I think CNCF is very healthy. I'm appreciative of it being here. I love KubeCon. It's becoming the de facto conference to have this conversation, has >>A totally >>Different vibe to other, It's a totally different vibe. Yeah. There needs to be a conduit, and truth be told, enterprise buyers, to Sarbjeet's point, this is something that we do absolutely agree on, on enterprise buyers: we want someone to pick winners and losers. We do, we don't want a box of Lego dumped in the middle of our table. We want somebody to have sorted that out. So while there may be five or six different service mesh solutions, at least with the CNCF, I can go there and say, Oh, I'll pick between the three or four that are most popular. And it's a place to curate. But I think with that curation comes the other side of it, of how do we, you know, without the big corporate sponsor, how do I get my project pushed up? Right? Elevated. Elevated, Yep. And put onto the show floor. You know, another way that projects get noticed is that startups will adopt them, push them. They may not even be, my CNCF project may not, my product may not even be based on the CNCF project. But The New Stack has a booth, Ford has a booth, nothing to do with an individual product, but promoting open source. What happens when you're not sponsored? >>I gotta ask you guys, what do you disagree on? >>Oh, so what do we disagree on? So I'm of the mindset, I can say this: I believe hybrid infrastructure is the future of IT, bar none. If I built my infrastructure, if I built my application in the cloud 10 years ago and I'm still building net new applications, I have stuff that I built 10 years ago that looks a lot like on-prem. What do I do with it? I can't modernize it cuz I don't have the developers to do it. I need to stick that somewhere. And where I'm going to stick that is probably a hybrid infrastructure. So colo, I'm not gonna go back to the data center, but I'm gonna pick up something that looks very much like the data center, and I'm saying embrace that; it's the future. And if you're Boeing and you have, and Boeing is a member of CNCF, that's a whole nother topic, if you have AS/400s, HP-UX, et cetera, stick that stuff in colo, build new stuff, but, and continue to support OpenStack, et cetera, et cetera, because that's the future. Hybrid is the future. >>And Sarbjeet, agree, disagree? >>I, okay, hybrid. Nobody can deny that hybrid is the reality, not the future. It's a reality right now; it's a necessity right now, you can't do without it. Right. And okay, hybrid is a very relative term. You can be like 10% here, 90% there, it's still hybrid, right? So the data center is shrinking and it will keep shrinking. Right? And >>So by how much is the data center shrinking? >>This is where >>Quick one, quick question for you guys: how much is it growing by? Yeah, but there's no data supporting that. David Lym just came out with a report I think last year that showed that the data center is holding steady, holding steady, not growing, but not shrinking. >>Who sponsored that study? Wait, hold on. So that's a question, right? So more than 1 million data centers have been closed. I have, I can dig that number up through somebody; like, some organizations have published that, maybe they're cloud, you know, people only. So when you get these kinds of statements they can be very skewed statements, right? But if you have seen the scene out there, which you have, I know, but I have also seen a lot of data centers, walked the floor of, you know, a hundred thousand servers in a data center. I cannot imagine us consuming the infrastructure the way we were going into the future of colo. Okay, with one caveat actually. I am not a big fan of, like, broad strokes, like making a blanket statement: oh no, data center's dead. Or if you are, >>That's how you get those catchy headlines now. Yeah, I know. >>I'm all about to >>Put a stake in the ground. >>Actually, I think that you get more intelligence from the nuance, right? The small little details, if you will. If you're Goldman Sachs or Bank of America, you have so many data centers, and you will still have data centers, because performance matters to you, right? Your latency matters for applications. But if you are even a Fortune 500 company on the lower end, or in a healthcare vertical, right, then your situation is different. If you are a high-growth startup, your situation is different, right? You will be a hundred percent cloud. So cloud gives you velocity, the pace of change, the pace of experimentation. You are actually buying innovation through cloud. It's a proxy for innovation. And that's how I see it. But if you're stuck with older applications, I totally understand. >>Yeah. So the >>We need that on-prem. Yeah, >>Well, I think, to bring it full circle, what we agree on is that cloud is the place where innovation happens. Okay? At some point innovation becomes legacy debt, and thus you have hybrid. You are not going to keep your old applications up to date forever. The math just doesn't add up. And where I differ in opinion is that not everyone needs innovation to keep moving. They need innovation for a period of time, and then they need steady state. So Sarbjeet, we >>Argue about this. I have a, I >>Love this debate though. I say efficiency and stability also play an important role. I see exactly what you're talking about. No, it's >>Great. I have a counter to that. Let me tell you >>Why. Let's >>Hear it. Because if you look at storage only, right? Just storage. Just take storage, compute, network for a minute. There are three cost components in infrastructure, right? So storage, early on there was one tier of storage, you paid the same price; now there are like five storage tiers, right? What I'm trying to say is the market sets the price, the market will tell you where this whole thing will go. But I know their margins are high in cloud, 20-plus percent, and margin will shrink as we go forward. That means the cloud will become cheaper relative to on-prem. In some cases it's already cheaper. But even if it's a stable workload, even in that case, we will have a lower tier of service. I mean, you can't argue with me that the cloud versus your data center are on the same tier of services. Like, cloud is a better, you know, product than your data center. Hands down. >>I love it. We are gonna relish in the debates between the two of you. Mic drops. The energy is great. I love it. Perspective.
It's not like any of us can quite see through the crystal ball that we have very informed opinions, which is super exciting. Yeah. Lisa, any last thoughts today? >>Just love, I love the debate as well. That, and that's, that's part of what being in this community is all about. So sharing about, sharing opinions, expressing opinions. That's how it grows. That's how, that's how we innovate. Yeah. Obviously we need the cloud, but that's how we innovate. That's how we grow. Yeah. And we've seen that demonstrated the last couple days and I and your, your takes here on the Cuban on Twitter. Brilliant. >>Thank you. I absolutely love it. I'm gonna close this out with a really important analysis on the swag of the show. Yes. And if you know, yesterday we were looking at what is the weirdest swag or most unique swag We had that bucket hat that took the grand prize. Today we're gonna focus on something that's actually quite cool. A lot of the vendors here have really dedicated their swag to being local to Detroit. Very specific in their sourcing. Sonotype here has COOs. They're beautiful. You can't quite feel this flannel, but it's very legit hand sound here in Michigan. I can't say that I've been to too many conferences, if any, where there was this kind of commitment to localizing and sourcing swag from around the corner. We also see this with the Intel booth. They've got screen printers out here doing custom hoodies on spot. >>Oh fun. They're even like appropriately sized. They had local artists do these designs and if you're like me and you care about what's on your wrist, you're familiar with Shinola. This is one of my favorite swags that's available. There is a contest. Oh going on. Hello here. Yeah, so if you are Atan, make sure that you go and check this out. The we, I talked about this on the show. We've had the founder on the show or the CEO and yeah, I mean Shine is just full of class as since we are in Detroit as well. One of the fun themes is cars. >>Yes. >>And Storm Forge, who are also on the show, is actually giving away an Aston Martin, which is very exciting. Not exactly manufactured in Detroit. However, still very cool on the car front and >>The double oh seven version named the best I >>Know in the sixties. It's love it. It's very cool. Two quick last things. We talk about it a lot on the show. Every company now wants to be a software company. Yep. On that vein, and keeping up with my hat theme, the Home Depot is here because they want everybody to know that they in fact are a technology company, which is very cool. They have over 500,000 employees. You can imagine there's a lot of technology that has to go into keeping Napa. Absolutely. Yep. Wild to think about. And then last, but not at least very quick, rapid fire, best t-shirt contest. If you've ever ran to one of these events, there are a ton of T-shirts out there. I rate them on two things. Wittiest line and softness. If you combine the two, you'll really be our grand champion for the year. I'm just gonna hold these up and set them down for your laughs. Not afraid to commit, which is pretty great. This is another one designed by locals here. Detroit Code City. Oh, love it. This one made me chuckle the most. Kiss my cash. >>Oh, that's >>Good. These are also really nice and soft, which is fantastic. Also high on the softness category is this Op Sarah one. I also like their bird logo. These guys, there's just, you know, just real nice touch. So unfortunately, if you have the fumble, you're not here with us, live in Detroit. 
At least you're gonna get a taste of the swag, a taste of the stories, and some smiles here from those of us on theCUBE. Thank you both so much for being here with us. Lisa, thanks for another fabulous day. Got it, girl. My name's Savannah Peterson. Thank you for joining us from Detroit. We're theCUBE and we can't wait to see you tomorrow.

Published Date : Oct 28 2022


Evolving InfluxDB into the Smart Data Platform Full Episode


 

>>This past May, The Cube in collaboration with Influx data shared with you the latest innovations in Time series databases. We talked at length about why a purpose built time series database for many use cases, was a superior alternative to general purpose databases trying to do the same thing. Now, you may, you may remember the time series data is any data that's stamped in time, and if it's stamped, it can be analyzed historically. And when we introduced the concept to the community, we talked about how in theory, those time slices could be taken, you know, every hour, every minute, every second, you know, down to the millisecond and how the world was moving toward realtime or near realtime data analysis to support physical infrastructure like sensors and other devices and IOT equipment. A time series databases have had to evolve to efficiently support realtime data in emerging use cases in iot T and other use cases. >>And to do that, new architectural innovations have to be brought to bear. As is often the case, open source software is the linchpin to those innovations. Hello and welcome to Evolving Influx DB into the smart Data platform, made possible by influx data and produced by the Cube. My name is Dave Valante and I'll be your host today. Now in this program we're going to dig pretty deep into what's happening with Time series data generally, and specifically how Influx DB is evolving to support new workloads and demands and data, and specifically around data analytics use cases in real time. Now, first we're gonna hear from Brian Gilmore, who is the director of IOT and emerging technologies at Influx Data. And we're gonna talk about the continued evolution of Influx DB and the new capabilities enabled by open source generally and specific tools. And in this program you're gonna hear a lot about things like Rust, implementation of Apache Arrow, the use of par k and tooling such as data fusion, which powering a new engine for Influx db. >>Now, these innovations, they evolve the idea of time series analysis by dramatically increasing the granularity of time series data by compressing the historical time slices, if you will, from, for example, minutes down to milliseconds. And at the same time, enabling real time analytics with an architecture that can process data much faster and much more efficiently. Now, after Brian, we're gonna hear from Anna East Dos Georgio, who is a developer advocate at In Flux Data. And we're gonna get into the why of these open source capabilities and how they contribute to the evolution of the Influx DB platform. And then we're gonna close the program with Tim Yokum, he's the director of engineering at Influx Data, and he's gonna explain how the Influx DB community actually evolved the data engine in mid-flight and which decisions went into the innovations that are coming to the market. Thank you for being here. We hope you enjoy the program. Let's get started. Okay, we're kicking things off with Brian Gilmore. He's the director of i t and emerging Technology at Influx State of Bryan. Welcome to the program. Thanks for coming on. >>Thanks Dave. Great to be here. I appreciate the time. >>Hey, explain why Influx db, you know, needs a new engine. Was there something wrong with the current engine? What's going on there? >>No, no, not at all. I mean, I think it's, for us, it's been about staying ahead of the market. 
I think, you know, if we think about what our customers are coming to us sort of with now, you know, related to requests like sql, you know, query support, things like that, we have to figure out a way to, to execute those for them in a way that will scale long term. And then we also, we wanna make sure we're innovating, we're sort of staying ahead of the market as well and sort of anticipating those future needs. So, you know, this is really a, a transparent change for our customers. I mean, I think we'll be adding new capabilities over time that sort of leverage this new engine, but you know, initially the customers who are using us are gonna see just great improvements in performance, you know, especially those that are working at the top end of the, of the workload scale, you know, the massive data volumes and things like that. >>Yeah, and we're gonna get into that today and the architecture and the like, but what was the catalyst for the enhancements? I mean, when and how did this all come about? >>Well, I mean, like three years ago we were primarily on premises, right? I mean, I think we had our open source, we had an enterprise product, you know, and, and sort of shifting that technology, especially the open source code base to a service basis where we were hosting it through, you know, multiple cloud providers. That was, that was, that was a long journey I guess, you know, phase one was, you know, we wanted to host enterprise for our customers, so we sort of created a service that we just managed and ran our enterprise product for them. You know, phase two of this cloud effort was to, to optimize for like multi-tenant, multi-cloud, be able to, to host it in a truly like sass manner where we could use, you know, some type of customer activity or consumption as the, the pricing vector, you know, And, and that was sort of the birth of the, of the real first influx DB cloud, you know, which has been really successful. >>We've seen, I think like 60,000 people sign up and we've got tons and tons of, of both enterprises as well as like new companies, developers, and of course a lot of home hobbyists and enthusiasts who are using out on a, on a daily basis, you know, and having that sort of big pool of, of very diverse and very customers to chat with as they're using the product, as they're giving us feedback, et cetera, has has, you know, pointed us in a really good direction in terms of making sure we're continuously improving that and then also making these big leaps as we're doing with this, with this new engine. >>Right. So you've called it a transparent change for customers, so I'm presuming it's non-disruptive, but I really wanna understand how much of a pivot this is and what, what does it take to make that shift from, you know, time series, you know, specialist to real time analytics and being able to support both? >>Yeah, I mean, it's much more of an evolution, I think, than like a shift or a pivot. You know, time series data is always gonna be fundamental and sort of the basis of the solutions that we offer our customers, and then also the ones that they're building on the sort of raw APIs of our platform themselves. You know, the time series market is one that we've worked diligently to lead. 
I mean, I think when it comes to like metrics, especially like sensor data and app and infrastructure metrics, if we're being honest though, I think our, our user base is well aware that the way we were architected was much more towards those sort of like backwards looking historical type analytics, which are key for troubleshooting and making sure you don't, you know, run into the same problem twice. But, you know, we had to ask ourselves like, what can we do to like better handle those queries from a performance and a, and a, you know, a time to response on the queries, and can we get that to the point where the results sets are coming back so quickly from the time of query that we can like limit that window down to minutes and then seconds. >>And now with this new engine, we're really starting to talk about a query window that could be like returning results in, in, you know, milliseconds of time since it hit the, the, the ingest queue. And that's, that's really getting to the point where as your data is available, you can use it and you can query it, you can visualize it, and you can do all those sort of magical things with it, you know? And I think getting all of that to a place where we're saying like, yes to the customer on, you know, all of the, the real time queries, the, the multiple language query support, but, you know, it was hard, but we're now at a spot where we can start introducing that to, you know, a a limited number of customers, strategic customers and strategic availability zones to start. But you know, everybody over time. >>So you're basically going from what happened to in, you can still do that obviously, but to what's happening now in the moment? >>Yeah, yeah. I mean if you think about time, it's always sort of past, right? I mean, like in the moment right now, whether you're talking about like a millisecond ago or a minute ago, you know, that's, that's pretty much right now, I think for most people, especially in these use cases where you have other sort of components of latency induced by the, by the underlying data collection, the architecture, the infrastructure, the, you know, the, the devices and you know, the sort of highly distributed nature of all of this. So yeah, I mean, getting, getting a customer or a user to be able to use the data as soon as it is available is what we're after here. >>I always thought, you know, real, I always thought of real time as before you lose the customer, but now in this context, maybe it's before the machine blows up. >>Yeah, it's, it's, I mean it is operationally or operational real time is different, you know, and that's one of the things that really triggered us to know that we were, we were heading in the right direction, is just how many sort of operational customers we have. You know, everything from like aerospace and defense. We've got companies monitoring satellites, we've got tons of industrial users, users using us as a processes storing on the plant floor, you know, and, and if we can satisfy their sort of demands for like real time historical perspective, that's awesome. I think what we're gonna do here is we're gonna start to like edge into the real time that they're used to in terms of, you know, the millisecond response times that they expect of their control systems, certainly not their, their historians and databases. >>I, is this available, these innovations to influx DB cloud customers only who can access this capability? >>Yeah. I mean commercially and today, yes. 
You know, I think we want to emphasize that, for now, our goal is to get our latest and greatest and our best to everybody over time, of course. You know, one of the things we had to do here was double down on our commitment to open source and availability. So anybody today can take a look at the libraries on our GitHub and, you know, can inspect them and even try to implement or execute some of it themselves in their own infrastructure. You know, we're committed to bringing our latest and greatest to our cloud customers first for a couple of reasons. Number one, you know, there are big workloads and they have high expectations of us. I think number two, it also gives us the opportunity to monitor a little bit more closely how it's working, how they're using it, how the system itself is performing. >>And so just, you know, being careful, maybe a little cautious in terms of how big we go with this right away, both limits the risk of any issues that can come with new software rollouts — we haven't seen anything so far — but it also gives us the opportunity to have meaningful conversations with a small group of users who are using the products. But once we get through that and they give us two thumbs up on it, it'll be like, open the gates and let everybody in. It's gonna be an exciting time for the whole ecosystem. >>Yeah, that makes a lot of sense. And you can do some experimentation, you know, using the cloud resources. Let's dig into some of the architectural and technical innovations that are gonna help deliver on this vision. What should we know there? >>Well, I mean, I think foundationally we built the new core on Rust. You know, this is a very popular systems language, you know, it's extremely efficient, but it's also built for speed and memory safety, which goes back to us being able to deliver it in a way that is, you know, something we can inspect very closely, but then also rely on the fact that it's going to behave well and surface error conditions if it finds them. I mean, we've loved working with Go, and a lot of our libraries will continue to be implemented in Go, but when it came to this particular new engine, for that power, performance, and stability, Rust was critical. On top of that, we've also integrated Apache Arrow and Apache Parquet for persistence. I think for anybody who's really familiar with the nuts and bolts of our backend and our TSI and our time-structured merge trees, this is a big break from that — you know, Arrow on the in-memory side and then Parquet on the on-disk side. 
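As an aside before Brian continues: the "Arrow in memory, Parquet on disk" split he describes can be made concrete with a rough Rust sketch using the open source arrow and parquet crates. This is not InfluxDB engine code — the schema, column names, values, and file name below are all invented for illustration.

```rust
// A minimal sketch (not InfluxDB source): build a columnar Arrow batch in
// memory, then persist it to a Parquet file on disk.
use std::{fs::File, sync::Arc};

use arrow::array::{Float64Array, StringArray, TimestampNanosecondArray};
use arrow::datatypes::{DataType, Field, Schema, TimeUnit};
use arrow::record_batch::RecordBatch;
use parquet::{arrow::ArrowWriter, basic::Compression, file::properties::WriterProperties};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Hypothetical schema: a time column, a tag column, and a field column.
    let schema = Arc::new(Schema::new(vec![
        Field::new("time", DataType::Timestamp(TimeUnit::Nanosecond, None), false),
        Field::new("room", DataType::Utf8, false),
        Field::new("temp_c", DataType::Float64, false),
    ]));

    // Each column lives in its own contiguous buffer: the Arrow "in-memory side".
    let batch = RecordBatch::try_new(
        schema.clone(),
        vec![
            Arc::new(TimestampNanosecondArray::from(vec![
                1_000_000_000i64,
                2_000_000_000,
                3_000_000_000,
            ])),
            Arc::new(StringArray::from(vec!["kitchen", "kitchen", "kitchen"])),
            Arc::new(Float64Array::from(vec![21.0, 21.0, 21.1])),
        ],
    )?;

    // Parquet is the durable, column-oriented format: the "on-disk side".
    let props = WriterProperties::builder()
        .set_compression(Compression::SNAPPY)
        .build();
    let mut writer = ArrowWriter::try_new(File::create("temps.parquet")?, schema, Some(props))?;
    writer.write(&batch)?;
    writer.close()?;
    Ok(())
}
```

In a real engine the batches would come from the ingest path rather than hard-coded vectors, but the division of labor Brian describes is the same: Arrow arrays for fast in-memory scans, Parquet files for compact durable storage.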
Now we're moving to like a true Coer database model for this, this version of the engine, you know, and it removes a lot of overhead for us in terms of having to manage all that serialization, the deserialization, and, you know, to that again, like blurring that line between real time and historical data. It's, you know, it's, it's highly optimized for both streaming micro batch and then batches, but true streaming as well. >>Yeah. Again, I mean, it's funny you mentioned Rust. It is, it's been around for a long time, but it's popularity is, is you know, really starting to hit that steep part of the S-curve. And, and we're gonna dig into to more of that, but give us any, is there anything else that we should know about Bryan? Give us the last word? >>Well, I mean, I think first I'd like everybody sort of watching just to like take a look at what we're offering in terms of early access in beta programs. I mean, if, if, if you wanna participate or if you wanna work sort of in terms of early access with the, with the new engine, please reach out to the team. I'm sure you know, there's a lot of communications going out and you know, it'll be highly featured on our, our website, you know, but reach out to the team, believe it or not, like we have a lot more going on than just the new engine. And so there are also other programs, things we're, we're offering to customers in terms of the user interface, data collection and things like that. And, you know, if you're a customer of ours and you have a sales team, a commercial team that you work with, you can reach out to them and see what you can get access to because we can flip a lot of stuff on, especially in cloud through feature flags. >>But if there's something new that you wanna try out, we'd just love to hear from you. And then, you know, our goal would be that as we give you access to all of these new cool features that, you know, you would give us continuous feedback on these products and services, not only like what you need today, but then what you'll need tomorrow to, to sort of build the next versions of your business. Because you know, the whole database, the ecosystem as it expands out into to, you know, this vertically oriented stack of cloud services and enterprise databases and edge databases, you know, it's gonna be what we all make it together, not just, you know, those of us who were employed by Influx db. And then finally I would just say please, like watch in ICE in Tim's sessions, like these are two of our best and brightest, They're totally brilliant, completely pragmatic, and they are most of all customer obsessed, which is amazing. And there's no better takes, like honestly on the, the sort of technical details of this, then there's, especially when it comes to like the value that these investments will, will bring to our customers and our communities. So encourage you to, to, you know, pay more attention to them than you did to me, for sure. >>Brian Gilmore, great stuff. Really appreciate your time. Thank you. >>Yeah, thanks Dave. It was awesome. Look forward to it. >>Yeah, me too. Looking forward to see how the, the community actually applies these new innovations and goes, goes beyond just the historical into the real time really hot area. As Brian said in a moment, I'll be right back with Anna East dos Georgio to dig into the critical aspects of key open source components of the Influx DB engine, including Rust, Arrow, Parque, data fusion. Keep it right there. You don't wanna miss this >>Time series Data is everywhere. 
The number of sensors, systems, and applications generating time series data increases every day. All these data sources producing so much data can cause analysis paralysis. InfluxDB is an entire platform designed with everything you need to quickly build applications that generate value from time series data. InfluxDB Cloud is a serverless solution, which means you don't need to buy or manage your own servers. There's no need to worry about provisioning because you only pay for what you use. InfluxDB Cloud is fully managed, so you get the newest features and enhancements as they're added to the platform's code base. It also means you can spend time building solutions and delivering value to your users instead of wasting time and effort managing something else. InfluxDB Cloud offers a range of security features to protect your data. Multiple layers of redundancy ensure you don't lose any data. Access controls ensure that only the people who should see your data can see it. >>And encryption protects your data at rest and in transit between any of our regions or cloud providers. InfluxDB uses a single API across the entire platform suite, so you can build on open source, deploy to the cloud, and then easily query data in the cloud, at the edge, or on prem using the same scripts. And InfluxDB is schemaless, automatically adjusting to changes in the shape of your data without requiring changes in your application logic. InfluxDB Cloud is production ready from day one. All it needs is your data and your imagination. Get started today at influxdata.com/cloud. >>Okay, we're back. I'm Dave Vellante with theCUBE, and you're watching Evolving InfluxDB into the Smart Data Platform, made possible by Influx Data. Anais Dotis-Georgiou is here. She's a developer advocate for Influx Data, and we're gonna dig into the rationale and value contribution behind several open source technologies that InfluxDB is leveraging to increase the granularity of time series analysis and bring the world of data into real-time analytics. Anais, welcome to the program. Thanks for coming on. >>Hi, thank you so much. It's a pleasure to be here. >>Oh, you're very welcome. Okay, so IOx is being touted as this next-gen open source core for InfluxDB. And my understanding is that it leverages in-memory, of course, for speed. It's a columnar store, so it gives you compression efficiency, it's gonna give you faster query speeds, you store files in object storage, so you've got a very cost-effective approach. Are these the salient points on the platform? I know there are probably dozens of other features, but what are the high-level value points that people should understand? >>Sure, that's a great question. So some of the main requirements that IOx is trying to achieve, and some of the most impressive ones to me: the first one is that it aims to have no limits on cardinality and also allow you to write any kind of event data that you want, whether that's a tag or a field. It also wants to deliver best-in-class performance on analytics queries, in addition to our already well-served metrics queries. We also wanna have operator control over memory usage, so you should be able to define how much memory is used for buffering, caching, and query processing. Another really important part is the ability to have bulk data export and import, which is super useful. 
Also broader ecosystem compatibility: where possible we aim to use and embrace emerging standards in the data analytics ecosystem and have compatibility with things like SQL, Python, and maybe even Pandas in the future. >>Okay, so a lot there. Now, we talked to Brian about how you're using Rust, which is not a new programming language, and of course we had some drama around Rust during the pandemic with the Mozilla layoffs, but the formation of the Rust Foundation really addressed any of those concerns. You've got big guns like Amazon and Google and Microsoft throwing their collective weight behind it. The adoption is really starting to get steep on the S-curve. So lots of platforms, lots of adoption with Rust, but why Rust as an alternative to, say, C++ for example? >>Sure, that's a great question. So Rust was chosen because of its exceptional performance and reliability. While Rust is syntactically similar to C++ and has similar performance — it also compiles to native code like C++ — unlike C++ it has much better memory safety. Memory safety is protection against bugs or security vulnerabilities that lead to excessive memory usage or memory leaks, and Rust achieves this memory safety through its innovative type system. Additionally, it doesn't allow for dangling pointers, and dangling pointers are the main classes of errors that lead to exploitable security vulnerabilities in languages like C++. So Rust helps meet that requirement of having no limits on cardinality, for example, because we're also using the Rust implementation of Apache Arrow and this control over memory. And also Rust's packaging system, called crates.io, offers everything that you need out of the box to have features like async and await to fix race conditions, protection against buffer overflows, and thread-safe async caching structures as well. So essentially it has all the fine-grained control you need to take advantage of memory and all your resources as well as possible, so that you can handle those really, really high-cardinality use cases. >>Yeah, and the more I learn about the new engine and the platform, IOx, et cetera, you see things like, you know, in the old days — and even today — you do a lot of garbage collection in these systems, and there's an inverse impact relative to performance. So it looks like the community is really modernizing the platform. But I wanna talk about Apache Arrow for a moment. It's designed to address the constraints that are associated with analyzing large data sets. We know that, but please explain what Arrow is and what it brings to InfluxDB. >>Sure, yeah. So Arrow is a framework for defining in-memory columnar data, and so much of the efficiency and performance of IOx comes from taking advantage of columnar data structures. And I will, if you don't mind, take a moment to illustrate why columnar data structures are so valuable. Let's pretend that we are gathering field data about the temperature in our room and also maybe the temperature of our stove. And in our table we have those two temperature values, as well as maybe a measurement value, a timestamp value, and maybe some other tag values that describe what room and what house, et cetera, we're getting this data from. 
And so you can picture this table where we have two rows with the two temperature values for both our room and the stove. Well, usually our room temperature is regulated, so those values don't change very often. >>So when you have column-oriented storage, essentially you take each column and group it together. And so if that's the case, and you're just taking temperature values from the room, and a lot of those temperature values are the same, then you might be able to imagine how equal values end up neighboring each other in the storage format, and this provides a really perfect opportunity for cheap compression. And then this cheap compression enables high-cardinality use cases. It also enables faster scan rates. So if you wanna find, say, the min and max value of the temperature in the room across a thousand different points, you only have to get those thousand points in order to answer that question, and you have those immediately available to you. But let's contrast this with a row-oriented storage solution instead, so that we can understand better the benefits of column-oriented storage. >>So if you had row-oriented storage, you'd first have to look at every field, like the temperature in the room and the temperature of the stove. You'd have to go across every tag value that maybe describes where the room is located or what model the stove is, and every timestamp. You'd then have to pluck out that one temperature value that you want at that one timestamp and do that for every single row. So you're scanning across a ton more data, and that's why row-oriented doesn't provide the same efficiency as columnar. And Apache Arrow is an in-memory columnar data framework. So that's where a lot of the advantages come >>From. Okay. So you basically described like a traditional database, a row approach, but I've seen a lot of traditional databases say, okay, now we can handle a columnar format, versus what you're talking about, which is really kind of native. Is the format not as effective because it's largely a bolt-on? Can you elucidate on that front? >>Yeah, it's not as effective, because you have more expensive compression and because you can't scan across the values as quickly. And so those are pretty much the main reasons why row-oriented storage isn't as efficient as column-oriented storage. Yeah. >>Got it. So let's talk about Arrow DataFusion. What is DataFusion? I know it's written in Rust, but what does it bring to the table here? >>Sure. So it's an extensible query execution framework, and it uses Arrow as its in-memory format. The way that it helps InfluxDB IOx is that, okay, it's great if you can write an unlimited amount of cardinality into InfluxDB, but if you don't have a query engine that can successfully query that data, then I don't know how much value it is for you. So DataFusion helps enable the query processing and transformation of that data. It also has a Pandas API so that you can take advantage of Pandas data frames as well, and all of the machine learning tools associated with Pandas. >>Okay. You're also leveraging Parquet in the platform, because we heard a lot about Parquet in the middle of the last decade as a storage format to improve on Hadoop column stores. What are you doing with Parquet and why is it important? >>Sure. So Parquet is the column-oriented durable file format. 
So it's important because it'll enable bulk import and bulk export, and it has compatibility with Python and Pandas, so it supports a broader ecosystem. Parquet files also take very little disk space, and they're faster to scan because, again, they're column oriented. In particular, I think Parquet files are something like 16 times cheaper than CSV files, just as a point of reference. And so that's essentially a lot of the benefits of Parquet. >>Got it. Very popular. So, Anais, what exactly is Influx Data focusing on as a committer to these projects? What is your focus? What's the value that you're bringing to the community? >>Sure. So InfluxDB has contributed a lot of different things to the Apache ecosystem. For example, they contribute an implementation of Apache Arrow in Go, and that will support querying with Flux. Also, there have been quite a few contributions to DataFusion for things like memory optimization and support for additional SQL features, like support for timestamp arithmetic, support for EXISTS clauses, and support for memory control. So yeah, Influx has contributed a lot to the Apache ecosystem and continues to do so. And I think kind of the idea here is that if you can improve these upstream projects — the long-term strategy here is that the more you contribute and build those up, the more you will perpetuate that cycle of improvement, and the more we will invest in our own project as well. So it's just that kind of symbiotic relationship and appreciation of the open source community. >>Yeah. Got it. You've got that virtuous cycle going, what people call the flywheel. Give us your last thoughts and kind of summarize, you know, what the big takeaways are from your perspective. >>So I think the big takeaway is that Influx Data is doing a lot of really exciting things with InfluxDB IOx, and I really encourage, if you are interested in learning more about the technologies that Influx is leveraging to produce IOx, the challenges associated with it, and all of the hard questions, and you just wanna learn more, then I would encourage you to go to the monthly tech talks and community office hours; they are on every second Wednesday of the month at 8:30 AM Pacific time. There's also a community forum and a community Slack channel — look for the InfluxDB_IOx channel specifically — to learn more about how to join those office hours and those monthly tech talks, as well as to ask any questions you have about IOx, what to expect, and what you'd like to learn more about. As a developer advocate, I wanna answer your questions. So if there's a particular technology or stack that you wanna dive deeper into and want more explanation about how InfluxDB leverages it to build IOx, I will be really excited to produce content on that topic for you. >>Yeah, that's awesome. You guys have a really rich community, you collaborate with your peers, solve problems, and you guys are super responsive, so really appreciate that. All right, thank you so much, Anais, for explaining all this open source stuff to the audience and why it's important to the future of data. >>Thank you. I really appreciate it. >>All right, you're very welcome. Okay, stay right there, and in a moment I'll be back with Tim Yoakum. He's the director of engineering for Influx Data, and we're gonna talk about how you update a SaaS engine while the plane is flying at 30,000 feet. You don't wanna miss this. 
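To make the DataFusion and Parquet discussion above a bit more concrete, here is a small, hedged Rust sketch — not code from IOx itself — that uses the open source datafusion crate to register a Parquet file and run the kind of min/max aggregation Anais uses as her example. The file path and column names are invented for illustration, and exact APIs vary between DataFusion releases.

```rust
// Minimal sketch: SQL over a Parquet file via DataFusion (Arrow in memory).
// Assumes a hypothetical file "temps.parquet" with columns room and temp_c.
use datafusion::error::Result;
use datafusion::prelude::{ParquetReadOptions, SessionContext};

#[tokio::main]
async fn main() -> Result<()> {
    let ctx = SessionContext::new();

    // Register the Parquet file as a queryable table.
    ctx.register_parquet("temps", "temps.parquet", ParquetReadOptions::default())
        .await?;

    // DataFusion plans and executes this against columnar Arrow batches,
    // so the min/max aggregation only has to touch the temp_c column.
    let df = ctx
        .sql("SELECT room, MIN(temp_c) AS lo, MAX(temp_c) AS hi FROM temps GROUP BY room")
        .await?;

    df.show().await?;
    Ok(())
}
```

The point of the sketch is that the query engine, not the application, handles planning and columnar execution, which is why pairing DataFusion with Parquet and Arrow covers both the "durable file" and "fast query" halves of the story.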
>>I'm really glad that we went with InfluxDB Cloud for our hosting, because it has saved us a ton of time. It's helped us move faster, it's saved us money, and also InfluxDB has good support. My name's Alex Nauda. I am CTO at Nobl9. Nobl9 is a platform to measure and manage service level objectives, which is a great way of measuring the reliability of your systems. You can essentially think of an SLO, the product we're providing to our customers, as a bunch of time series. So we need a way to store that data and the corresponding time series that are related to those. The main reason that we settled on InfluxDB as we were shopping around is that InfluxDB has a very flexible query language, and as a general-purpose time series database, it basically had the set of features we were looking for. >>As our platform has grown, we found InfluxDB Cloud to be a really scalable solution. We can quickly iterate on new features and functionality. Because Influx Cloud is entirely managed, it probably saved us at least a full additional person on our team. We also have the option of running InfluxDB Enterprise, which gives us the ability to even host off the cloud or in a private cloud if that's preferred by a customer. Influx Data has been really flexible in adapting to the hosting requirements that we have. They listened to the challenges we were facing and they helped us solve them. As we've continued to grow, I'm really happy we have Influx Data by our side. >>Okay, we're back with Tim Yoakum, who is the director of engineering at Influx Data. Tim, welcome. Good to see you. >>Good to see you. Thanks for having me. >>You're really welcome. Listen, we've been covering open source software on theCUBE for more than a decade, and we've kind of watched the innovation from the big data ecosystem. The cloud has been built out on open source — mobile, social platforms, key databases — and of course InfluxDB and Influx Data has been a big consumer and contributor of open source software. So my question to you is, where have you seen the biggest bang for the buck from open source software? >>So yeah, you know, at Influx we really thrive at the intersection of commercial services and open source software. So OSS keeps us on the cutting edge. We benefit from OSS in delivering our own service, from our core storage engine technologies to web services and templating engines. Our team stays lean and focused because we build on proven tools. We really build on the shoulders of giants, and like you've mentioned, even better, we contribute a lot back to the projects that we use, as well as our own product, InfluxDB. >>You know, but I gotta ask you, Tim, because one of the challenges that we've seen in particular — you saw this in the heyday of Hadoop — is that the innovations come so fast and furious, and as a software company you gotta place bets, you gotta, you know, commit people, and sometimes those bets can be risky and not pay off. Well, how have you managed this challenge? >>Oh, it moves fast. Yeah, that's a benefit though, because the community moves so quickly that today's hot technology can be tomorrow's dinosaur. And what we tend to do is we fail fast and fail often. We try a lot of things. You know, you look at Kubernetes for example — that ecosystem is driven by thousands of intelligent developers, engineers, builders, and they're adding value every day. So we have to really keep up with that. 
And as the stack changes, we, we try different technologies, we try different methods, and at the end of the day, we come up with a better platform as a result of just the constant change in the environment. It is a challenge for us, but it's, it's something that we just do every day. >>So we have a survey partner down in New York City called Enterprise Technology Research etr, and they do these quarterly surveys of about 1500 CIOs, IT practitioners, and they really have a good pulse on what's happening with spending. And the data shows that containers generally, but specifically Kubernetes is one of the areas that has kind of, it's been off the charts and seen the most significant adoption and velocity particularly, you know, along with cloud. But, but really Kubernetes is just, you know, still up until the right consistently even with, you know, the macro headwinds and all, all of the stuff that we're sick of talking about. But, so what are you doing with Kubernetes in the platform? >>Yeah, it, it's really central to our ability to run the product. When we first started out, we were just on AWS and, and the way we were running was, was a little bit like containers junior. Now we're running Kubernetes everywhere at aws, Azure, Google Cloud. It allows us to have a consistent experience across three different cloud providers and we can manage that in code so our developers can focus on delivering services, not trying to learn the intricacies of Amazon, Azure, and Google and figure out how to deliver services on those three clouds with all of their differences. >>Just to follow up on that, is it, no. So I presume it's sounds like there's a PAs layer there to allow you guys to have a consistent experience across clouds and out to the edge, you know, wherever is that, is that correct? >>Yeah, so we've basically built more or less platform engineering, This is the new hot phrase, you know, it, it's, Kubernetes has made a lot of things easy for us because we've built a platform that our developers can lean on and they only have to learn one way of deploying their application, managing their application. And so that, that just gets all of the underlying infrastructure out of the way and, and lets them focus on delivering influx cloud. >>Yeah, and I know I'm taking a little bit of a tangent, but is that, that, I'll call it a PAs layer if I can use that term. Is that, are there specific attributes to Influx db or is it kind of just generally off the shelf paths? You know, are there, is, is there any purpose built capability there that, that is, is value add or is it pretty much generic? >>So we really build, we, we look at things through, with a build versus buy through a, a build versus by lens. Some things we want to leverage cloud provider services, for instance, Postgres databases for metadata, perhaps we'll get that off of our plate, let someone else run that. We're going to deploy a platform that our engineers can, can deliver on that has consistency that is, is all generated from code that we can as a, as an SRE group, as an ops team, that we can manage with very few people really, and we can stamp out clusters across multiple regions and in no time. >>So how, so sometimes you build, sometimes you buy it. How do you make those decisions and and what does that mean for the, for the platform and for customers? >>Yeah, so what we're doing is, it's like everybody else will do, we're we're looking for trade offs that make sense. You know, we really want to protect our customers data. 
So we look for services that support our own software with the most uptime, reliability, and durability we can get. Some things are just going to be easier to have a cloud provider take care of on our behalf. We make that transparent for our own team. And of course for customers you don't even see that, but we don't want to try to reinvent the wheel, like I had mentioned with SQL data stores for metadata, perhaps let's build on top of what of these three large cloud providers have already perfected. And we can then focus on our platform engineering and we can have our developers then focus on the influx data, software, influx, cloud software. >>So take it to the customer level, what does it mean for them? What's the value that they're gonna get out of all these innovations that we've been been talking about today and what can they expect in the future? >>So first of all, people who use the OSS product are really gonna be at home on our cloud platform. You can run it on your desktop machine, on a single server, what have you, but then you want to scale up. We have some 270 terabytes of data across, over 4 billion series keys that people have stored. So there's a proven ability to scale now in terms of the open source, open source software and how we've developed the platform. You're getting highly available high cardinality time series platform. We manage it and, and really as, as I mentioned earlier, we can keep up with the state of the art. We keep reinventing, we keep deploying things in real time. We deploy to our platform every day repeatedly all the time. And it's that continuous deployment that allows us to continue testing things in flight, rolling things out that change new features, better ways of doing deployments, safer ways of doing deployments. >>All of that happens behind the scenes. And like we had mentioned earlier, Kubernetes, I mean that, that allows us to get that done. We couldn't do it without having that platform as a, as a base layer for us to then put our software on. So we, we iterate quickly. When you're on the, the Influx cloud platform, you really are able to, to take advantage of new features immediately. We roll things out every day and as those things go into production, you have, you have the ability to, to use them. And so in the end we want you to focus on getting actual insights from your data instead of running infrastructure, you know, let, let us do that for you. So, >>And that makes sense, but so is the, is the, are the innovations that we're talking about in the evolution of Influx db, do, do you see that as sort of a natural evolution for existing customers? I, is it, I'm sure the answer is both, but is it opening up new territory for customers? Can you add some color to that? >>Yeah, it really is it, it's a little bit of both. Any engineer will say, well, it depends. So cloud native technologies are, are really the hot thing. Iot, industrial iot especially, people want to just shove tons of data out there and be able to do queries immediately and they don't wanna manage infrastructure. What we've started to see are people that use the cloud service as their, their data store backbone and then they use edge computing with R OSS product to ingest data from say, multiple production lines and downsample that data, send the rest of that data off influx cloud where the heavy processing takes place. 
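As an aside before Tim continues: the edge pattern he just described — ingest raw readings locally, downsample, and ship only the reduced data to the cloud — can be sketched generically. The Rust below is not InfluxDB or Telegraf code; it is just a hand-rolled illustration of collapsing raw points into one average per minute, with made-up sample values.

```rust
// Generic downsampling sketch (not InfluxDB code): collapse raw readings
// into one average per time bucket before shipping them off the edge device.
use std::collections::BTreeMap;

#[derive(Debug, Clone, Copy)]
struct Reading {
    timestamp_s: u64, // Unix seconds
    value: f64,
}

// Group readings into bucket_s-second buckets and average each bucket.
fn downsample_mean(raw: &[Reading], bucket_s: u64) -> Vec<Reading> {
    let mut buckets: BTreeMap<u64, (f64, u64)> = BTreeMap::new();
    for r in raw {
        let bucket = r.timestamp_s / bucket_s * bucket_s;
        let entry = buckets.entry(bucket).or_insert((0.0, 0));
        entry.0 += r.value;
        entry.1 += 1;
    }
    buckets
        .into_iter()
        .map(|(timestamp_s, (sum, n))| Reading {
            timestamp_s,
            value: sum / n as f64,
        })
        .collect()
}

fn main() {
    let raw = vec![
        Reading { timestamp_s: 0, value: 10.0 },
        Reading { timestamp_s: 30, value: 12.0 },
        Reading { timestamp_s: 75, value: 20.0 },
    ];
    // Two one-minute buckets: mean 11.0 for 0-59s, mean 20.0 for 60-119s.
    for r in downsample_mean(&raw, 60) {
        println!("{} -> {:.1}", r.timestamp_s, r.value);
    }
}
```

In practice this reduction step is usually handled by existing tooling — for example Telegraf aggregator plugins or a Flux task using aggregateWindow — rather than custom code; the sketch only shows the shape of the edge-side work Tim is describing.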
So really us being in all the different clouds and iterating on that and being in all sorts of different regions allows for people to really get out of the, the business of man trying to manage that big data, have us take care of that. And of course as we change the platform end users benefit from that immediately. And, >>And so obviously taking away a lot of the heavy lifting for the infrastructure, would you say the same thing about security, especially as you go out to IOT and the Edge? How should we be thinking about the value that you bring from a security perspective? >>Yeah, we take, we take security super seriously. It, it's built into our dna. We do a lot of work to ensure that our platform is secure, that the data we store is, is kept private. It's of course always a concern. You see in the news all the time, companies being compromised, you know, that's something that you can have an entire team working on, which we do to make sure that the data that you have, whether it's in transit, whether it's at rest, is always kept secure, is only viewable by you. You know, you look at things like software, bill of materials, if you're running this yourself, you have to go vet all sorts of different pieces of software. And we do that, you know, as we use new tools. That's something that, that's just part of our jobs to make sure that the platform that we're running it has, has fully vetted software and, and with open source especially, that's a lot of work. And so it's, it's definitely new territory. Supply chain attacks are, are definitely happening at a higher clip than they used to, but that is, that is really just part of a day in the, the life for folks like us that are, are building platforms. >>Yeah, and that's key. I mean especially when you start getting into the, the, you know, we talk about IOT and the operations technologies, the engineers running the, that infrastructure, you know, historically, as you know, Tim, they, they would air gap everything. That's how they kept it safe. But that's not feasible anymore. Everything's >>That >>Connected now, right? And so you've gotta have a partner that is again, take away that heavy lifting to r and d so you can focus on some of the other activities. Right. Give us the, the last word and the, the key takeaways from your perspective. >>Well, you know, from my perspective I see it as, as a a two lane approach with, with influx, with Anytime series data, you know, you've got a lot of stuff that you're gonna run on-prem, what you had mentioned, air gaping. Sure there's plenty of need for that, but at the end of the day, people that don't want to run big data centers, people that want torus their data to, to a company that's, that's got a full platform set up for them that they can build on, send that data over to the cloud, the cloud is not going away. I think more hybrid approach is, is where the future lives and that's what we're prepared for. >>Tim, really appreciate you coming to the program. Great stuff. Good to see you. >>Thanks very much. Appreciate it. >>Okay, in a moment I'll be back to wrap up. Today's session, you're watching The Cube. >>Are you looking for some help getting started with InfluxDB Telegraph or Flux Check >>Out Influx DB University >>Where you can find our entire catalog of free training that will help you make the most of your time series data >>Get >>Started for free@influxdbu.com. >>We'll see you in class. 
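As a companion to the "send that data over to the cloud" point above, here is a hedged Rust sketch of pushing a single line-protocol point to an InfluxDB 2.x write endpoint over HTTP. The host, org, bucket, token, and measurement values are placeholders, and a production client would batch points and handle retries.

```rust
// Hedged sketch: write one line-protocol point to an InfluxDB 2.x
// /api/v2/write endpoint. Host, org, bucket, and token are placeholders.
use reqwest::Client;

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let host = "https://example-cloud-host.example.com"; // placeholder
    let org = "my-org";
    let bucket = "edge-metrics";
    let token = std::env::var("INFLUX_TOKEN").unwrap_or_default();

    // Line protocol: measurement,tag=... field=value timestamp(ns)
    let body = "machine_temp,line=assembly-3 temp_c=71.4 1698470400000000000";

    let resp = Client::new()
        .post(format!("{host}/api/v2/write"))
        .query(&[("org", org), ("bucket", bucket), ("precision", "ns")])
        .header("Authorization", format!("Token {token}"))
        .header("Content-Type", "text/plain; charset=utf-8")
        .body(body)
        .send()
        .await?;

    println!("write status: {}", resp.status());
    Ok(())
}
```

This is the same write API whether the target is self-hosted open source or InfluxDB Cloud, which is one concrete way the "single API across the platform" claim earlier in the episode plays out for edge-to-cloud pipelines.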
>> Okay, so today we heard from three experts on time series data and how the InfluxDB platform is evolving to support new ways of analyzing large data sets efficiently and effectively in real time. We learned that key open source components, such as Apache Arrow, the Rust programming language, DataFusion, and Parquet, are being leveraged to support real-time data analytics at scale. We also learned about the contributions and importance of open source software, and how the InfluxDB community is evolving the platform with minimal disruption to support new workloads, new use cases, and the future of real-time data analytics. Remember, these sessions are all available on demand; you can go to thecube.net to find them. Don't forget to check out siliconangle.com for all the news related to enterprise and emerging tech, and you should also check out influxdata.com, where you can learn about the company's products, find developer resources like free courses, and join the developer community to work with your peers to learn and solve problems. There are plenty of other resources around use cases and customer stories on the website as well. This is Dave Vellante. Thank you for watching Evolving InfluxDB into the Smart Data Platform, made possible by Influx Data and brought to you by theCUBE, your leader in enterprise and emerging tech coverage.
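As a rough illustration of why that columnar foundation matters, here is a small, self-contained pyarrow sketch; the Parquet file and column names are invented for the example, and this is not the engine's own code. Once downsampled series land in Parquet, an analytical job can read just the columns it needs into Apache Arrow memory and aggregate them there, which is roughly the shape of what a DataFusion SQL query does over the same formats.

```python
# Illustrative only: hypothetical Parquet file and column names.
import pyarrow.parquet as pq
import pyarrow.compute as pc

# Read only two columns of a (hypothetical) downsampled metrics file;
# every other column stays on disk untouched.
table = pq.read_table("production_line_1m.parquet", columns=["host", "mean_temp"])

# Arrow compute kernels run directly over the columnar buffers.
print("overall mean:", pc.mean(table["mean_temp"]).as_py())

# Group-by aggregation without converting to rows.
per_host = table.group_by("host").aggregate([("mean_temp", "mean")])
print(per_host.to_pandas())
```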

Published Date: Oct 28, 2022
