Breaking Analysis: Survey Says! Takeaways from the latest CIO spending data
>> From theCUBE Studios in Palo Alto and Boston, bringing you data driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> The technology spending outlook is not pretty and very much unpredictable right now. The negative sentiment is of course being driven by the macroeconomic factors and earnings forecasts that have been coming down all year in an environment of rising interest rates. What's worse, many people think earnings estimates are still too high. But it's understandable why there's so much uncertainty. Technology is still booming, digital transformations are happening in earnest, leading companies have momentum and they have cash runways. Moreover, the CEOs of these leading companies are still really optimistic. But strong guidance in an environment of uncertainty is somewhat risky. Hello and welcome to this week's Wikibon CUBE Insights Powered by ETR. In this Breaking Analysis, we share takeaways from ETR's latest spending survey, which was released to their private clients on October 21st. Today, we're going to review the macro spending data, share where CIOs think their cloud spend is headed, look at the actions that organizations are taking to manage uncertainty, and then review some of the technology companies that have the most positive and negative outlooks in the ETR data set. Let's first look at the sample makeup from the latest ETR survey. ETR captured more than 1,300 respondents in this latest survey, its highest figure for the year, and the quality and seniority of respondents keeps going up each time we dig into the data. We've got large contributions, as you can see here, from C-level executives across a broad set of industries. Now the survey is still North America centric, with 20% of the respondents coming from overseas, and there is a bias toward larger organizations. Nonetheless, we're still talking well over 400 respondents coming from SMBs. ETR, for those of you who don't know, conducts a quarterly spending intentions survey, and they also do periodic drilldowns. So just by way of review, before we look at the broader technology spending intentions survey data, let's take a look at the expectations in the latest drilldown survey for IT spending. Followers of this program know that we reported on this a couple of weeks ago: spending expectations that peaked last December at 8.3% are now down to 5.5%, with a slight uptick expected for next year, as shown here. Now one CIO in the ETR community said these figures could be understated because of inflation. That's an interesting comment. Real GDP in the US is forecast to be around 1.5% in 2022, so these figures are significantly ahead of that. Nominal GDP is forecast to be significantly higher than what is shown in that slide; it was over 9% in June, for example. One would interpret that survey respondents are talking about real dollars, which reflects inflationary factors in IT spend. So you might say, well, if nominal GDP is in the high single digits, this means that IT spending is below GDP, which is usually not the case. But the flip side is that technology tends to be deflationary, because prices come down over time on a per unit basis, so this would be a normal and even positive trend. But it's mixed right now, with prices on hard to find hardware holding more firm. Software tends to be driven by lock-in and competition and switching costs. So you have those countervailing factors.
Services can be inflationary, especially now as wages rise, but certain sectors like laptops and semis and NAND are seeing less demand and maybe even some oversupply. So the way to look at this data is on a relative basis. In other words, IT buyers are reporting a 280 basis point drop in spending sentiment from the end of last year. Now, something that we haven't shared from the latest drilldown survey, which we will now, is how IT buyers are thinking about cloud adoption. This chart shows responses from 419 IT execs from that drilldown and depicts the percentage of workloads their organizations have in the cloud today and what the expectation is three years from now. You can see it's 27% today and nearly 50% in three years. Now the nuance is, if you look at the question that ETR asked, they asked about IaaS and PaaS, which to some could include on-prem. Let me come back to that. In particular, the financial services, IT, telco, and retail and services industries cited expectations for three years out that were well above the mean adoption levels. Regardless of how you interpret this data, there's most certainly plenty of public cloud in the numbers. And whether you believe cloud is an operating environment or a place out there in the cloud, there's plenty of room for workloads to move into a cloud model well beyond the middle of this decade. So as ho-hum as we've been toward recent as-a-service models announced from the likes of HPE with GreenLake and Dell with APEX, the timing of those offerings may actually be pretty good. Now let's expand on some of the data that we showed a couple weeks ago. This chart shows responses from 282 execs on actions their organizations are taking over the next three months. The deltas are quite dramatic from the early part of this chart on the left hand side. The brown line is hiring freezes, the black line is freezing IT projects, the green line is hiring increases, and that red line is layoffs. We put a box around the general area of the isolation economy timeframe, and you can see the wild swings on this chart. By mid last summer, people were kickstarting things, more hiring was going on, and the black line shows IT project freezes came way down. Now they're on the way back up, as are hiring freezes. So we're seeing these wild swings in organizational actions and strategies, which underscores the lack of predictability. As with supply chains around the world, this is likely due to the fact that pre-pandemic, organizations were optimized for efficiency rather than business resilience, meaning there's not a lot of fluff in the system, or if there was, it got flushed out during the pandemic. And so the need for productivity and automation is becoming increasingly important, especially as actions that rely solely on headcount changes are very difficult to manage. Now, let's dig into some of the vendor commentary and take a look at some of the names that have momentum and some others possibly facing headwinds. Here's a list of companies that stand out in the ETR survey. Snowflake once again leads the pack with a positive spending outlook. HashiCorp, CrowdStrike, Databricks, Freshworks and ServiceNow round out the top six.
Microsoft, they seem to always be in the mix, as do a number of other security and related companies including CyberArk, Zscaler, Cloudflare, Elastic, Datadog, Fortinet, Tenable, and to a certain extent Akamai; you can kind of put them in that group. As a CDN, they have to worry about security. Everybody worries about security, but especially the CDNs. The other software names highlighted here include Workday and Salesforce. On the negative side, you can see Dynatrace saw some negatives in the latest survey, especially around its analytics business. Security is generally holding up better than other sectors, but it's still seeing greater levels of pressure than it had previously, so lower spend and defections for Dynatrace relative to its observability peers. Now the other one that was somewhat surprising is IBM. IBM was sort of in that negative realm here, but IBM reported an outstanding quarter this past week with double digit revenue growth, strong momentum in software, consulting, mainframes and other infrastructure like storage. It's benefiting from the Kyndryl restructuring, and IBM is on track to deliver $10 billion in free cash flow this year. Red Hat is performing exceedingly well and growing in the very high teens. So look, IBM is in the midst of a major transformation, and it seems like a company that is really focused now, with hybrid cloud powered by Red Hat, consulting, and a decade-plus of AI investments finally paying off. The other big thing we'll add is that IBM was once an outstanding acquirer of companies, and it seems to be getting its act together on the M&A front. Yes, Red Hat was a big pill to swallow, but IBM has done a number of smaller acquisitions (I think seven this year), like, for example, Turbonomic, which is starting to pay off. Arvind Krishna has the company focused once again, and he and Jim J. Kavanaugh, IBM's CFO, seem to be very confident in the guidance that they're giving on their business. So that's a real positive, in our view, for the industry. Okay, the last thing we'd like to do is take 12 of the companies from the previous chart and plot them in context. These companies don't necessarily compete with each other (some do), but they are standouts in the ETR survey and in the market. What we're showing here is a view that we often like to show: net score, or spending velocity, on the vertical axis. That's a measure of the net percentage of customers that are spending more on a particular platform. So ETR asks, are you spending more or less? They subtract the percentage spending less from the percentage spending more. I'm simplifying, but that's what net score is. On the horizontal axis is a measure of overlap, which measures presence or pervasiveness in the dataset. So bigger is better. We've inserted a table that informs how the dots and the companies are positioned. These companies are all in the green in terms of net score, and the rightmost column in the table insert is indicative of their presence in the dataset, the N. So higher, again, is better for both columns. Two other notes: the red dotted line you see at 40% indicates that anything over it is highly elevated spending momentum for a given platform. And we purposefully took Microsoft out of the mix in this chart because it skews the data due to its large size. Everybody else would cluster on the left and Microsoft would be all alone on the right, so we take them out.
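To make that net score arithmetic concrete, here's a minimal sketch in Python. Note this is our simplification of the methodology as described above, not ETR's actual proprietary calculation, which uses finer-grained response buckets, and the sample figures are invented for illustration.

```python
from collections import Counter

def net_score(responses):
    """Simplified net score: percentage of customers spending more on a
    platform minus the percentage spending less. ETR's real methodology
    uses graded buckets (adoption, increase, flat, decrease, replacement);
    we collapse them to more/flat/less for illustration."""
    counts = Counter(responses)
    n = len(responses)
    return (counts["more"] - counts["less"]) / n * 100

# Hypothetical cut: 70 respondents spending more, 24 flat, 6 spending less.
sample = ["more"] * 70 + ["flat"] * 24 + ["less"] * 6
print(net_score(sample))   # 64.0, roughly Snowflake's level in this survey

# If 20 of the "more" respondents shift to flat spending (the pattern
# described below for Snowflake and ServiceNow), net score drops even
# with zero churn:
shifted = ["more"] * 50 + ["flat"] * 44 + ["less"] * 6
print(net_score(shifted))  # 44.0
```

The second case is the key dynamic to watch in the discussion that follows: a vendor can lose very few customers and still see its net score compress as spending flattens.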
Now as we noted earlier, Snowflake once again leads with a net score of 64%, well above the 40% line. Having said that, while adoption rates for Snowflake remain strong, the company's spending velocity in the survey has come down to Earth. Many more customers are shifting from where they were last year and the year before, in growth mode (i.e. spending more year to year with Snowflake), toward flat spending, meaning plus or minus 5%. That puts pressure on Snowflake's net score, just based on the math of how ETR calculates its proprietary net score, as the sketch above illustrates. So Snowflake is by no means completely insulated from the macro factors, and this was seen especially in the Fortune 500 cut of the survey for Snowflake. We didn't show that here; I'm just giving you anecdotal commentary from the survey, which is backed up by data, but it showed steeper declines in Fortune 500 momentum. Overall, though, Snowflake is very impressive. Now what's more, note the position of Streamlit relative to Databricks. Streamlit is an open source Python framework for developing data driven, data science oriented apps, and it's ironic that its net score and shared N are almost identical to those of Databricks, as the aspirations of Snowflake and Databricks are beginning to collide. However, the Databricks net score has held up very well over the past year and is in the 92nd percentile of its machine learning and AI peers. And while it's seeing some softness, like Snowflake, in the Fortune 500, Databricks has steadily moved to the right on the X axis over the last several surveys, even though it was unable to get to the public markets and do an IPO during the lockdown tech bubble. Let's come back to the chart. ServiceNow is impressive because it's well above the 40% mark and it has 437 shared N on this cut, the largest of any company that we chose to plot here. The only real negative on ServiceNow is that more large customers are keeping spending levels flat. That's putting a little bit of pressure on its net score, but that's just conservatism. It's kind of like Snowflake, same thing but at a larger scale. Defections for ServiceNow, as with Snowflake, remain very, very low; really low churn, below 2% for ServiceNow, in fact, within the dataset. Now it's interesting to also see Freshworks hit the list. You can see them as one of the few ITSM vendors that has momentum and can potentially take on ServiceNow. Workday, on this chart, is the other big app player that's above the 40% line, and we're only showing Workday HCM, FYI, in this graphic. Workday Financials, that offering, is below the 40% line, just for reference. Now let's talk about CrowdStrike. We attended Falcon last month, CrowdStrike's user conference, and were very impressed with the product vision, the company's execution, and its growing partnerships. You can see in this graphic that the ETR survey data confirms the company's stellar performance, with a net score at 50%, well above the 40% mark, and importantly, more than 300 mentions, second only to ServiceNow amongst the 12 companies that we've chosen to highlight here. Only Microsoft, which is not shown here, has a higher net score in the security space than CrowdStrike. And when it comes to presence, CrowdStrike has now caught up to Splunk in terms of pervasiveness in the survey. Now CyberArk and Zscaler are the other two security firms that are right at that 40% red dotted line.
CyberArk, among names with over a hundred citations in the security sector, is behind only Microsoft and CrowdStrike. Zscaler, for its part, is seeing strong momentum in the Fortune 500 in the survey, unlike what we said for Snowflake, and its pervasiveness on the X-axis has been steadily increasing. Again, not that Snowflake and CrowdStrike compete with each other, but they're two prominent names, and it's just interesting to compare peers and business models. Cloudflare, Elastic and Datadog are slightly below the 40% mark, but they made the top 12 that we chose to highlight here, and they continue to have positive sentiment in the survey. So, what are the big takeaways from this latest survey, this really quick snapshot that we've taken? As you know, over the next several weeks we're going to dig into it more and more. As we've previously reported, the tide is going out and it's taking virtually all the tech ships with it. But in many ways the current market is a story of heightened expectations coming down to Earth, miscalculations about the economic patterns and the swings, and imperfect visibility. Leading Barclays analyst Raimo Lenschow asked the question "to guide or not to guide" in a recent research note he wrote. His point being, should companies guide or should they be more cautious? Many companies, if not most, are actually giving guidance. Indeed, when companies like Oracle and IBM are emphatic about their near term outlook and their visibility, it gives one confidence. On the other hand, reasonable people are asking: will the red hot valuations that we saw over the last two years from the likes of Snowflake, CrowdStrike, MongoDB, Okta, Zscaler, and others return? Or are we in for a long, drawn out, sideways exercise before we see sustained momentum? And to that uncertainty, we add elections and public policy. It's very hard to predict right now. I'm sorry to be like a two-handed lawyer (on the one hand, on the other hand), but that's just the way it is. Let's just say, for our part, we think that once it's clear that interest rates are on their way back down and will stabilize under 4%, and we have clarity on the direction of inflation, wages, unemployment and geopolitics, the wild swings in sentiment will subside. But when that happens is anyone's guess. If I had to peg it, I'd say 18 months, which puts us at least into the spring of 2024. What's your prediction? You know, it's almost that time of year. Let's hear it. Please keep in touch and let us know what you think. Okay, that's it for now. Many thanks to Alex Myerson, who is on production and manages the podcast for us. Ken Schiffman as well, our newest addition to the Boston Studio. Kristin Martin and Cheryl Knight help get the word out on social media and in our newsletters, and Rob Hoff is our EIC, editor-in-chief, over at SiliconANGLE. He does some wonderful editing for us. Thank you all. Remember, all these episodes are available as podcasts. Wherever you listen, just search "Breaking Analysis podcast." I publish each week on wikibon.com and siliconangle.com. You can email me at david.vellante@siliconangle.com or DM me @dvellante, or feel free to comment on our LinkedIn posts. And please do check out etr.ai. They've got the best survey data in the enterprise tech business. If you haven't checked that out, you should; it'll give you an advantage. This is Dave Vellante for theCUBE Insights Powered by ETR. Thanks for watching. Be well, and we'll see you next time on Breaking Analysis.
(soft upbeat music)
Breaking Analysis: What Black Hat '22 tells us about securing the Supercloud
>> From theCUBE Studios in Palo Alto and Boston, bringing you data driven insights from theCUBE and ETR. This is "Breaking Analysis with Dave Vellante". >> Black Hat '22 was held in Las Vegas last week, the same time as theCUBE Supercloud event. Unlike AWS re:Inforce, where words are carefully chosen to put a positive spin on security, Black Hat exposes all the warts of cyber and openly discusses its hard truths. It's a conference attended by technical experts who proudly share some of the vulnerabilities they've discovered, and, of course, by numerous vendors marketing their products and services. Hello, and welcome to this week's Wikibon CUBE Insights powered by ETR. In this "Breaking Analysis", we summarize what we learned from discussions with several people who attended Black Hat and our analysis from reviewing dozens of keynotes, articles, sessions, and data from a recent Black Hat Attendee Survey conducted by Black Hat and Informa, and we'll end with a discussion of what it all means for the challenges around securing the supercloud. Now, I personally did not attend, but as I said at the top, we reviewed a lot of content from the event, which is renowned for its hundreds of sessions, breakouts, and strong technical content that is, as they say, unvarnished. Chris Krebs, the former director of the US Cybersecurity and Infrastructure Security Agency, CISA, gave the keynote, and he spoke about the increasing complexity of tech stacks and the ripple effects that has on organizational risk. Risk was a big theme at the event. Where re:Inforce tends to emphasize the positive state of cybersecurity, it could be said that Black Hat, as the name implies, focuses on the other end of the spectrum, and risk, as a major theme, got a lot of attention at the show. Now, there was a lot of talk, as always, about the expanded threat surface (you hear that at any event focused on cybersecurity) and tons of emphasis on supply chain risk as a relatively new threat that's come to CISOs' minds. There was also plenty of discussion about hybrid work and how remote work has dramatically increased business risk. According to data from Intel 471's Mark Arena and the previously mentioned Black Hat Attendee Survey, compromised credentials posed the number one source of risk, followed by infrastructure vulnerabilities and supply chain risks; so those are a couple of surveys we're citing, and we'll come back to that in a moment. At an MIT cybersecurity conference earlier last decade, theCUBE had a hypothetical conversation with former Boston Globe war correspondent Charles Sennott about the future of war and the role of cyber. We had similar discussions with Dr. Robert Gates on theCUBE at a ServiceNow event in 2016. At Black Hat, these discussions went well beyond the theoretical, with actual data from the war in Ukraine. It's clear that modern wars are and will be supported by cyber, but the takeaways are that they will be highly situational, targeted, and unpredictable, because in combat scenarios anything can happen; people aren't necessarily at their keyboards. Now, the role of AI was certainly discussed, as it is at every conference, and particularly cyber conferences.
It was somewhat dissed as overhyped, not surprisingly, but while AI is not a panacea for cyber exposure, automation and machine intelligence can definitely augment what appear to be, and have been, stressed out security teams, by recommending actions and taking other helpful types of data and presenting it in a curated form that can streamline the job of the SecOps team. Now, most cyber defenses are still going to be based on tried and true monitoring and telemetry data, log analysis, curating known signatures, and analyzing consolidated data, but increasingly, AI will help with the unknowns, i.e. zero-day threats and threat actor behaviors after infiltration. Finally, while much lip service was given to collaboration and public-private partnerships, especially after Stuxnet was revealed early last decade, the real truth is that threat intelligence in the private sector is still evolving. In particular, the industry, mid decade, really tried to commercially exploit proprietary intelligence, doing things like private reporting and monetizing that, but attitudes toward collaboration are trending in a positive direction; that was one of the outcomes we heard at Black Hat. Public-private partnerships are being mandated by government, and there seems to be a willingness to work together to fight an increasingly capable adversary. These things are definitely on the rise. Now, without this type of collaboration, securing the supercloud is going to become much more challenging and confined to narrow solutions, and we're going to talk about that a little later in the segment. Okay, let's look at some of the attendee survey data from Black Hat. Just under 200 really serious security pros took the survey, so not enough to slice and dice by hair color, eye color, height, weight, and favorite movie genre, but enough to extract high level takeaways. These strongly agree or disagree survey responses can sometimes give vanilla outputs, but let's look for the ones where very few respondents strongly agree or disagree with a statement, or those that overwhelmingly strongly agree or somewhat agree. So it's clear from this that the respondents believe the following: one, your credentials are out there and available to criminals, and very few people disagreed with that; second, remote work is here to stay; and third, nobody was willing to jinx their firms and say that they strongly disagree that they'll have to respond to a major cybersecurity incident within the next 12 months. Now, as we've reported extensively, COVID has permanently changed the cybersecurity landscape and the CISO's priorities and playbook. Check out this data that queries respondents on the pandemic's impact on cybersecurity: new requirements to secure remote workers, more cloud, more threats from remote systems and remote users, and a shift away from perimeter defenses that are no longer as effective, e.g. firewall appliances. Note, however, the fifth response, highlighted in green. It shows a meaningful drop in the percentage of remote workers disregarding corporate security policy; still too many, but 10 percentage points down from the 2021 survey. Now, as we've said many times, bad user behavior will trump good security technology virtually every time. Consistent with the commentary from Mark Arena's Intel 471 threat report, phishing for credentials is the number one concern cited in the Black Hat Attendee Survey.
This is a people and process problem more than a technology issue. Yes, using multifactor authentication, changing passwords, using unique passwords, using password managers, et cetera: they're all great things, but if it's too hard for users to implement them, they won't do it, they'll remain exposed, and their organizations will remain exposed. Number two in the graphic: sophisticated attacks that could expose vulnerabilities in the security infrastructure, again consistent with the Intel 471 data. And three, supply chain risks, again consistent with Mark Arena's commentary. Ask most CISOs their number one problem and they'll tell you it's a lack of talent; that'll be at the top of their list. So it's no surprise that 63% of survey respondents believe they don't have the security staff necessary to defend against cyber threats. This speaks to the rise of managed security service providers that we've talked about previously on "Breaking Analysis". We've seen estimates that less than 50% of organizations in the US have a SOC, and we see those firms as ripe for MSSP support, as well as larger firms augmenting staff with managed service providers. Now, after re:Invent, we put forth a conceptual model that discussed how the cloud was becoming the first line of defense for CISOs, and DevOps was being asked to do more, things like securing the runtime, the containers, the platform, et cetera, and audit was that last line of defense. So a couple of things we picked up from Black Hat are consistent with this shift, and some are somewhat new. First, getting visibility across the expanded threat surface was a big theme at Black Hat. This makes it even harder to identify risk. It's one thing to know that there's a vulnerability somewhere; it's another thing to determine the severity of the risk: understanding how easy or difficult it is to exploit that vulnerability and how to prioritize action around it. Vulnerability management is increasingly complex for CISOs as the security landscape itself gets more complex. So what's happening is the SOC, if there even is one at the organization, is becoming federated. No longer can there be one ivory tower that's the magic god room of data and threat detection and analysis. Rather, the SOC is becoming distributed, following the data, and as we just mentioned, the SOC is being augmented by the cloud provider and the managed service providers, the MSSPs. So there's a lot of critical security data that is decentralized, and this will necessitate a new cyber data model where data can be synchronized and shared across a federation of SOCs, if you will, or mini SOCs, or SOC capabilities that live in, and/or are embedded in, an organization's ecosystem. Now, to this point about cloud being the first line of defense, let's turn to a story from ETR that came out of our colleague Eric Bradley's insight in a one-on-one he did with a senior IR person at a manufacturing firm. In a piece that ETR published called "Saved by Zscaler", check out this comment. Quote, "As the last layer, we are filtering all the outgoing internet traffic through Zscaler. And when an attacker is already on your network, and they're trying to communicate with the outside to exchange encryption keys, Zscaler is already blocking the traffic. It happened to us. It happened and we were saved by Zscaler." So that's pretty cool.
So not only is the cloud the first line of defense, as we depicted in that previous graphic, here's an example where it's also the last line of defense. Now, let's end on what this all means for securing the supercloud. At our Supercloud 22 event last week in our Palo Alto CUBE Studios, we had a session on this topic: securing the supercloud. Security, in our view, is going to be one of the most important and difficult challenges for the idea of supercloud to become real. We reviewed in last week's "Breaking Analysis" a detailed discussion with Snowflake co-founder and president of products, Benoit Dageville, on how his company approaches security in their data cloud, what we call a super data cloud. Snowflake doesn't use the term supercloud; they use the term data cloud. But what if you don't have the focus, the engineering depth, and the bankroll that Snowflake has? Does that mean superclouds will only be developed by those companies with deep pockets and enormous resources? Well, that's certainly possible, but on the securing the supercloud panel, we had three technical experts: Gee Rittenhouse of Skyhigh Security, Piyush Sharrma, who's the founder of Accurics, which sold to Tenable, and Tony Kueh, the former Head of Product at VMware. Now, John Furrier asked each of them, "What is missing? What's it going to take to secure the supercloud? What has to happen?" Here's what they said. Play the clip. >> This is the final question. We have one minute left. I wish we had more time. This is a great panel. We'll bring you guys back for sure after the event. What one thing needs to happen to unify, or get through the other side of, this fragmentation and the challenges for supercloud? Because remember, the enterprise equation is solve complexity with more complexity. Well, that's not what the market wants. They want simplicity. They want SaaS. They want ease of use. They want infrastructure as code. What has to happen? What do you think, each of you? >> So I can start. Extending the previous conversation, I think we need a consortium. We need a framework that defines that if you really want to operate on supercloud, these are the 10 things that you must follow. It doesn't matter whether you take AWS, Azure, or GCP, or you have all of them (and you will have on-prem also), which means that it has to follow a pattern, and that pattern is what is required for supercloud, in my opinion. Otherwise, security teams are going everywhere; they have to fix everything, find everything, and so on and so forth. It's not going to be possible. So they need a framework. They need a consortium, and this consortium needs to be led, I think, by the cloud providers, because they're the ones who have these foundational infrastructure elements, and the security vendors should contribute by providing more of the detections and the findings. So that, in my opinion, should be the model. >> Great, well, thank you, Gee. >> Yeah, I would think it's more along the lines of a business model. We've seen in cloud that scale matters, and once you're big, you get bigger. We haven't seen that coalesce around either a vendor, a business model, or whatnot to bring all of this together and connect it all yet. So that value proposition in the industry, I think, is missing, but there's elements of it already available. >> I think there needs to be a mindset. If you look, again, history repeats itself. The internet came together around a set of IETF RFC standards.
Everybody embraced and extended it, right? But still, there was at least a baseline, and I think at that time, the largest and most innovative vendors understood that they couldn't do it by themselves, right? And so I think what we need is a mindset where these big guys, like Google, let's take as an example, are not going to win it all, but they can have a substantial share. So how do they collaborate with the ecosystem around a set of standards so that they can bring their differentiation and then embrace everybody together? >> Okay, so Gee's point about a missing business model is broadly true, but perhaps Snowflake serves as a business model where they've just gone out and done it, setting, or trying to set, a de facto standard by which data can be shared and monetized. They're certainly setting that standard and mandating it within the Snowflake ecosystem with its proprietary framework. Perhaps that is one answer. Tony lays out a scenario where there's a collaboration mindset around a set of standards with an ecosystem. Intriguing is this idea of a consortium or a framework that Piyush was talking about, which speaks to the collaboration, or lack thereof, that we spoke of earlier, and his and Tony's proposal that the cloud providers should lead, with the security vendor ecosystem playing a supporting role, is pretty compelling. But can you see AWS and Azure and Google in a kumbaya moment getting together to make that happen? It seems unlikely, but maybe a better partnership between the US government and big tech could be a starting point. Okay, that's it for today. I want to thank the many people who attended Black Hat, reported on it, wrote about it, gave talks, did videos, and some that spoke to me having attended the event: Becky Bracken, who is the EIC at Dark Reading (they do a phenomenal job, as does the entire team at Dark Reading and the news desk there), Mark Arena, whom I mentioned, Garrett O'Hara, Nash Borges, Kelly Jackson Higgins, Roya Gordon, Robert Lipovsky, Chris Krebs, and many others. Thanks for the great commentary and the content that you put out there. And thanks to Alex Myerson, who's on production and manages the podcasts for us. Ken Schiffman is also in our Marlborough studio, outside of Boston. Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletters, and Rob Hoff is our Editor-in-Chief at SiliconANGLE, does some great editing, and helps with the titles of "Breaking Analysis" quite often. Remember, these episodes are all available as podcasts; wherever you listen, just search for "Breaking Analysis Podcast". I publish each on wikibon.com and siliconangle.com, and you can email me at david.vellante@siliconangle.com, DM me @dvellante, or comment on my LinkedIn posts, and please do check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, and we'll see you next time on "Breaking Analysis". (upbeat music)
Breaking Analysis: Further defining Supercloud with tech leaders VMware, Snowflake, Databricks & others
>> From theCUBE Studios in Palo Alto and Boston, bringing you data driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> At our inaugural Supercloud 22 event, we further refined the concept of a supercloud, iterating on the definition, the salient attributes, and some examples of what is and what is not a supercloud. Welcome to this week's Wikibon CUBE Insights powered by ETR. Snowflake has always been what we feel is one of the strongest examples of a supercloud, and in this Breaking Analysis, from our studios in Palo Alto, we unpack our interview with Benoit Dageville, co-founder and president of products at Snowflake, and we test our supercloud definition on the company's data cloud platform. We're really looking forward to your feedback. First, let's examine how we define supercloud. Very importantly, one of the goals of Supercloud 22 was to get the community's input on the definition and iterate on previous work. Supercloud is an emerging computing architecture that comprises a set of services which are abstracted from the underlying primitives of hyperscale clouds. We're talking about services such as compute, storage, networking, security, and other native tooling like machine learning and developer tools, to create a global system that spans more than one cloud. Supercloud, as shown on this slide, has five essential properties, X number of deployment models, and Y number of service models. We're looking for community input on X and Y, and on the first point as well, so please weigh in and contribute. Now, we've identified these essential elements of a supercloud; let's talk about them. First, the supercloud has to run its services on more than one cloud, leveraging the cloud native tools offered by each of the cloud providers. The builder of the supercloud platform is responsible for optimizing the underlying primitives of each cloud and optimizing for the specific needs, be it cost or performance or latency or governance, data sharing, security, etc. But those primitives must be abstracted such that a common experience is delivered across the clouds for both users and developers. The supercloud has a metadata intelligence layer that can maximize efficiency for the specific purpose of the supercloud, i.e. the purpose that the supercloud is intended for, and it does so in a federated model. And it includes what we call a superPaaS. This is a prerequisite: a purpose-built component that enables ecosystem partners to customize and monetize incremental services, while at the same time ensuring that a common experience exists across clouds. Now, in terms of deployment models, we'd really like to get more feedback on this piece, but here's where we are so far, based on the feedback we got at Supercloud 22.
We see three deployment models. The first is one where a control plane may run on one cloud but supports data plane interactions with more than one other cloud. The second model instantiates the supercloud services on each individual cloud and within regions, and can support interactions across more than one cloud, with a unified interface connecting those instantiations to create a common experience. The third model superimposes its services as a layer, or, in the case of Snowflake, they call it a mesh, on top of the cloud providers' regions, with a single global instantiation of those services which spans multiple cloud providers. This is our understanding, from the conversation with Benoit Dageville, of how Snowflake approaches its solutions. For now, we're going to park the service models; we need more time to flesh that out, and we'll propose something shortly for you to comment on. Now, we peppered Benoit Dageville at Supercloud 22 to test how the Snowflake data cloud aligns to our concepts and our definition. Let me also say that Snowflake doesn't use the term supercloud; they use the term data cloud, and they really want to respect, not denigrate, the importance of their hyperscale partners, nor do we. But we do think the hyperscalers, today anyway, are not building what we call superclouds, while the people who are building superclouds are building on top of hyperscale clouds. That is a prerequisite. So here are the questions that we tested with Snowflake. First question: how does Snowflake architect its data cloud, and what is its deployment model? Listen to Dageville talk about how Snowflake has architected a single system. Play the clip. "There are several ways to do this, you know, supercloud, as you name them. The way we picked is to create one single system, and that's very important. There are several ways, right? You can instantiate your solution in every region of a cloud, and potentially that region could be AWS, that region could be GCP, so you are indeed a multi-cloud solution. But Snowflake, we did it differently. We are really creating cloud regions which are superposed on top of the cloud provider's region infrastructure. So we are building our regions, but where it's very different is that each region of Snowflake is not one instantiation of our service. Our service is global by nature. We can move data from one region to the other. When you land in Snowflake, you land into one region, but you can grow from there, and you can exist in multiple clouds at the same time. And that's very important: it's not different instantiations of a system, it's one single instantiation which covers many cloud regions and many cloud providers." So Snowflake chose the most advanced of the three deployment models Dageville talked about, presumably so it could maintain maximum control and ensure that common experience, like the iPhone model.
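To summarize the taxonomy in one place, here is a hedged sketch in Python. The names are our own shorthand for the properties and the three deployment models described above, not an official schema, and since the transcript enumerates the essential properties somewhat loosely, we capture the four that are called out explicitly.

```python
from dataclasses import dataclass
from enum import Enum, auto

class DeploymentModel(Enum):
    """Our shorthand for the three supercloud deployment models above."""
    CONTROL_PLANE_ONE_CLOUD = auto()      # control plane on one cloud, data
                                          # plane interactions with others
    INSTANCE_PER_CLOUD_UNIFIED = auto()   # instantiated on each cloud/region,
                                          # tied together by a unified interface
    SINGLE_GLOBAL_INSTANTIATION = auto()  # one service layer/mesh superimposed
                                          # across providers (Snowflake's choice)

@dataclass
class SupercloudProperties:
    """The essential properties called out in the definition above."""
    runs_on_multiple_clouds: bool    # uses each provider's native primitives
    abstracts_primitives: bool       # common user/developer experience
    federated_metadata_layer: bool   # metadata intelligence, purpose-built
    superpaas_for_ecosystem: bool    # partners can customize and monetize
    deployment: DeploymentModel

snowflake_data_cloud = SupercloudProperties(
    True, True, True, True, DeploymentModel.SINGLE_GLOBAL_INSTANTIATION)
```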
Next we probed about the technical enablers of the data cloud. Listen to Dageville talk about Snowgrid. He uses the term mesh, and this can get confusing with Zhamak Dehghani's data mesh concept, but listen to Benoit's explanation. "Well, as I said, first we start by building Snowflake regions. We have today thirty regions that span the world, so it's a worldwide system with many regions, but all these regions are connected together. They are meshed together with our technology; we name it Snowgrid, and that makes it such that an Azure region can talk to an AWS region or GCP regions. And as a user of our cloud, you don't really see these regional differences. When you use Snowflake, your presence as an organization can be in several regions, several clouds if you want, both geographic and cloud provider." So I can share data irrespective of the cloud, and I'm in the Snowflake data cloud, is that correct? I can do that today? "Exactly, and that's very critical. What we wanted is to remove data silos, because when you instantiate a system in one single region and that system is locked in that region, you cannot communicate with other parts of the world; you are locking the data in one region. And we didn't want to do that. We wanted data to be distributed the way the customer wants it to be distributed across the world, and potentially sharing data at world scale." Now maybe there are many ways to skin that cat, meaning perhaps if a platform does instantiate in multiple places, there are ways to share data, but this is how Snowflake chose to approach the problem. Next question: how do you deal with latency in this big global system? This is really important to us, because while Snowflake has some really smart people working as engineers, we don't think they've solved the speed of light problem ("the best people are working on it," as we often joke). Listen to Benoit Dageville's comments on this topic. "So, yes and no. It's very expensive to do that, because generally if you want to join data which are in different regions and different clouds, it's going to be very expensive because you need to move data every time you join it. So the way we do it is that you replicate the subset of data that you want to access from other regions, so you can create this data mesh, but data is replicated to make it very cheap and very performant too." And does Snowgrid have the metadata intelligence? Can you describe that a little bit? "Yes. Snowgrid is both a way to exchange metadata: each region of Snowflake knows about all the other regions of Snowflake. Every time we create a new region, the metadata is distributed over our data cloud. Not only does a region know all the regions, but it knows every organization that exists in our clouds, where this organization is, where data can be replicated by this organization. And then of course it's also used as a way to exchange data. You can exchange data at scale, and I was just receiving an email from one of our customers who moved more than four petabytes of data cross-region, cross cloud providers, in a few days. It's a lot of data, so it takes some time to move, but they were able to do that completely online and switch over to the other region; failover is very important also." So "yes and no" probably means typically no. It sounds like Snowflake is selectively pulling small amounts of data and replicating it where necessary, but you also heard him talk about the metadata layer, which is one of the essential aspects of supercloud.
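Dageville's "yes and no" answer (replicate the subset of data you need into a region once, then join locally, rather than shipping data across regions on every query) can be sketched with a toy cost model. Everything here is invented for illustration: the function names, the flat per-gigabyte transfer price, and the workload numbers are our assumptions, not Snowflake's actual economics.

```python
# Toy cost model: why replicate a subset once instead of moving data
# across regions on every join. The $0.09/GB transfer price is an
# assumed flat rate for illustration only.
EGRESS_COST_PER_GB = 0.09

def naive_cross_region_joins(table_gb: float, joins_per_month: int) -> float:
    """Ship the remote table across regions every time it is joined."""
    return table_gb * EGRESS_COST_PER_GB * joins_per_month

def replicate_then_join(subset_gb: float, refreshes_per_month: int) -> float:
    """Replicate only the needed subset, refresh it periodically,
    and join locally at no transfer cost."""
    return subset_gb * EGRESS_COST_PER_GB * refreshes_per_month

# A 1 TB table joined 1,000 times a month vs. a 50 GB subset refreshed daily:
print(f"${naive_cross_region_joins(1000, 1000):,.0f}/month")  # $90,000/month
print(f"${replicate_then_join(50, 30):,.0f}/month")           # $135/month
```

The point of the sketch is the asymmetry: replication cost scales with refresh frequency, not query volume, which is why the approach stays cheap and performant for read-heavy sharing.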
Okay, next we dug into security. It's one of the most important issues, and we think one of the hardest parts related to deploying supercloud. We've talked about how the cloud has become the first line of defense for the CISO, but now with multi-cloud you have multiple first lines of defense, and that means multiple shared responsibility models, multiple tool sets from different cloud providers, and an expanded threat surface. So listen to Benoit's explanation here. Please play the clip. "This is a great question. Security has always been the most important aspect of Snowflake since day one. This is the question that every customer of ours has: how can you guarantee the security of my data? So we secure data really tightly in region. We have several layers of security. It starts by encrypting every data at rest, and that's very important; a lot of customers are not doing that. You hear these attacks, for example, on cloud, where someone left their buckets open, and you can access the data because it's not encrypted. So we are encrypting everything at rest, we are encrypting everything in transit, so a region is very secure. Now, from one region you never access data from another region in Snowflake; that's why also we replicate data. Now, the replication of that data across regions, or the metadata for that matter, is really highly secure. Snowgrid ensures that everything is encrypted; we have multiple encryption keys, and they're stored in hardware secure modules. So we built Snowgrid such that it's secure and it allows very secure movement of data." When we heard this explanation, we immediately went to the lowest common denominator question, meaning: how AWS, for instance, deals with data in motion or data at rest might be different from how another cloud provider deals with it. So how does Snowflake deal with differences, for example, in the AWS maturity model for various cloud capabilities? Let's say they've got a faster Nitro or Graviton: does Snowflake have to slow everything else down, like a caravan crossing the desert so every truck can keep up? Let's listen. "It's a great question. I mean, of course our software is abstracting all the cloud providers' infrastructure, so that when you run in one region, let's say AWS or Azure, it doesn't make any difference as far as the applications are concerned. And this abstraction is a lot of work. I mean, really a lot of work, because it needs to be secure, it needs to be performant, on every cloud, and it has to expose APIs which are uniform. Cloud providers, even though they have potentially the same concepts, let's say blob storage, their APIs are completely different. The way these systems are secured is completely different. The errors that you can get, and the retry mechanisms, are very different from one cloud to the other. Performance is also different. We discovered that when we were starting to port our software, and we had to completely rethink how to leverage blob storage in that cloud versus that cloud, just because of performance too. So we had, for example, to stripe data. All this work is work that you don't need to do as an application, because our vision really is that applications which are running in our data cloud can be abstracted from all these differences. And we provide all the services, all the workloads that these applications need, whether it's transactional access to data, analytical access to data, managing logs, managing metrics; all of this is abstracted too, such that they are not tied to one particular service of one cloud, and distributing these applications across many regions, many clouds, is very seamless."
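Dageville's point about exposing uniform APIs over divergent blob storage services, error semantics, and retry behavior can be illustrated with a minimal interface sketch. This is our own toy abstraction with an in-memory stand-in, assuming hypothetical per-cloud wrappers; it is not Snowflake's internal design.

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """Uniform interface over per-cloud object storage. Each provider
    differs in API shape, error types, retry semantics, and performance;
    a real implementation would hide all of that behind these two calls."""

    @abstractmethod
    def get(self, key: str) -> bytes: ...

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

class InMemoryBlobStore(BlobStore):
    """Stand-in for a per-cloud wrapper (e.g., one around boto3 for S3 or
    azure-storage-blob for Azure) that would map provider-specific errors
    to common ones, apply provider-appropriate retry/backoff, and, as
    Dageville describes, even stripe data to equalize performance."""
    def __init__(self):
        self._blobs: dict[str, bytes] = {}

    def get(self, key: str) -> bytes:
        return self._blobs[key]

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

def load_partition(store: BlobStore, key: str) -> bytes:
    # Application code is cloud-agnostic: the same call works whether the
    # store wraps AWS, Azure, or GCP object storage.
    return store.get(key)

store = InMemoryBlobStore()
store.put("db/table/part-0001", b"columnar bytes")
print(load_partition(store, "db/table/part-0001"))
```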
So from that answer, we know that Snowflake takes care of everything. We really don't understand the performance implications in that specific case, but we feel pretty certain that the promises Snowflake makes around governance and security within their data sharing construct will be kept. Now, another criterion that we've proposed for supercloud is a superPaaS layer to create a common developer experience and an enabler for ecosystem partners to monetize. Please play the clip. Let's listen. "We built it, a custom build, because, as you said, what exists in one cloud might not exist in another cloud provider. So we have to build all these components that modern applications need, and that goes to machine learning, as I said, transactional and analytical systems, the entire thing, such that they can run in isolation, basically." And the objective is that the developer experience will be identical across those clouds? "Yes, right. The developer doesn't need to worry about the cloud provider. And actually our system, we didn't talk about it, but the marketplace that we have, which allows actually to deliver..." We're getting there, yeah. Okay, now we're not going to go deep into ecosystem today; we've talked about Snowflake's strengths in this regard, but Snowflake pretty much ticked all the boxes on our supercloud attributes and definition. We asked Benoit Dageville to confirm that this is all shipping and available today, and he also gave us a glimpse of the future. Play the clip. "And we are still developing it. The transactional side, Unistore as we call it, was announced at our last Summit, so they are still working on it. But that's the vision, and that's important, because we talk about the infrastructure, and you mentioned a lot about storage and compute, but it's not only that. When you think about applications, they need to use a transactional database, they need to use an analytical system, they need to use machine learning, so you need to provide also all these services, which are consistent across all the cloud providers." So you can hear Dageville talking about expanding beyond taking advantage of the core infrastructure, storage and networking, et cetera, and bringing intelligence to the data through machine learning and AI. So of course there's more to come, and there better be, at this company's valuation, despite the recent sharp pullback in a tightening Fed environment. Okay, so I know it's cliche, but everyone's comparing Snowflake and Databricks. Databricks has been pretty vocal about its open source posture compared to Snowflake's, and it just so happens that we had Ali Ghodsi on at Supercloud 22 as well. He wasn't in studio; he had to do it remote because, I guess, he's presenting at an investor conference this week, so we had to bring him in remotely. Now, I didn't get to do this interview, John Furrier did, but I listened to it and captured this clip about
>> Yeah, let me start by saying we're just big fans of open source. We think open source is a force in software that's going to continue for decades, hundreds of years, and it's going to slowly replace all proprietary code in its way. We saw that it could do that with the most advanced technology: Windows, a proprietary operating system, very complicated, got replaced with Linux. So open source can pretty much do anything, and what we're seeing with the data lakehouse is that slowly the open source community is building a replacement for the proprietary data warehouse, data lake, machine learning, real-time stack, in open source, and we're excited to be part of it. For us, Delta Lake is a very important project that really helps you standardize how you lay out your data in the cloud, and with it comes a really important protocol called Delta Sharing, which enables you, in an open way, actually for the first time ever, to share large data sets between organizations. But it uses an open protocol, so the great thing about that is you don't need to be a Databricks customer; you just need to use this open source project, and you can securely share data sets between organizations across clouds. And it actually does so really efficiently — just one copy of the data, so you don't have to copy it if you're within the same cloud.
>> So the implication of Ali Ghodsi's comments is that Databricks, with Delta Sharing, as John implied, is playing a long game.
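To ground what Ghodsi is describing, this is roughly what consuming a Delta Share looks like with the open source Python client; the profile file and the share/schema/table names below are hypothetical:

```python
# pip install delta-sharing
import delta_sharing

# A recipient only needs this profile file, issued by the data provider;
# no Databricks account or warehouse is required on the consuming side.
profile = "config.share"

client = delta_sharing.SharingClient(profile)
print(client.list_all_tables())  # discover what has been shared with you

# Load a shared table straight into pandas.
# URL format: <profile-file>#<share>.<schema>.<table>
df = delta_sharing.load_as_pandas(f"{profile}#retail.sales.orders")
print(df.head())
```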
Now, I don't know enough about the Databricks architecture to comment in detail — I've got to do more research there — so I reached out to my two analyst friends, Tony Baer and Sanjeev Mohan, to see what they thought, because they cover these companies pretty closely. Here's what Tony Baer said, quote: "I've viewed the divergent lakehouse strategies of Databricks and Snowflake in the context of their roots. Prior to Delta Lake, Databricks' prime focus was the compute, not the storage layer, and more specifically they were a compute engine, not a database. Snowflake approached from the opposite end of the pool, as they originally fit the mold of the classic database company rather than a specific compute engine per se. The lakehouse pushes both companies outside of their original comfort zones: Databricks to storage, Snowflake to compute engine. So it makes perfect sense for Databricks to embrace the open source narrative at the storage layer, and for Snowflake to continue its walled garden approach. But in the long run their strategies are already overlapping. Databricks is not a 100% open source company; its practitioner experience has always been proprietary, and now so is its SQL query engine. Likewise, Snowflake has had to open up with the support of Iceberg as an open data lake format. The question really becomes how serious Snowflake will be in making Iceberg a first-class citizen in its environment — that is, not necessarily officially branding a lakehouse, but effectively being one — and likewise, whether Databricks can deliver the service levels associated with walled gardens through a more brute force approach that relies heavily on the query engine. At the end of the day, those are the key requirements that will matter to Databricks and Snowflake customers." End quote. That was some deep thought by Tony — thank you for that. Sanjeev Mohan added the following, quote: "Open source is a slippery slope. People buy mobile phones based on open source Android, but it's not fully open. Similarly, Databricks' Delta Lake was not originally fully open source, and even today its Photon execution engine is not. We are always going to live in a hybrid world. Snowflake and Databricks will support whatever model works best for them and their customers. The big question is, do customers care as deeply about which vendor has a higher degree of openness as we technology people do? I believe customers' evaluation criteria are far more nuanced than just deciphering each vendor's open source claims." End quote. Okay, so I had to ask Dageville about their so-called walled garden approach and what their strategy is with Apache Iceberg. Here's what he said.
>> Iceberg is very important. So just to give some context: Iceberg is an open table format which was first developed by Netflix, and Netflix put it into open source in the Apache community. We embraced that open source standard because it's widely used by many companies. Also, many companies have really invested a lot of effort in building big data Hadoop solutions or data lake solutions, and they wanted to use Snowflake, but they couldn't, because all their data was in open formats. So we are embracing Iceberg to help these companies move to the cloud. But why have we been reluctant about direct access to data? Direct access to data is a little bit of a problem for us, and the reason is, when you have direct access to data, you have direct access to storage, and now you have to understand, for example, the specificity of one cloud versus the other. As soon as you have direct access to data, you lose your cloud-agnostic layer; you don't access data with an API. When you have direct access to data, it's very hard to secure, because you need to grant direct access to tools which are not protected, and you see a lot of hacking of data because of that. So direct access to data was not serving our customers well, and that's why we have been reluctant to do it: it's not cloud agnostic, you have to code for it, you need a lot of intelligence on the consuming side, whereas APIs carry that for you. So we want open APIs. That's, I guess, the way we embrace openness: through open APIs, versus direct access to data.
>> Here's my take. Snowflake is hedging its bets, because enough people care about open source that it has to have some open data format options, and it's good optics. And you heard Benoit Dageville talk about the risks of directly accessing the data and the complexities it brings. Now, is that maybe a little FUD against Databricks? Maybe. But the same can be said for Ali's comments, maybe FUDing the proprietary-ness of Snowflake. As both analysts pointed out, open is a spectrum. Hey, I remember when Unix used to equal open systems.
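For readers who haven't touched Iceberg, here's a rough idea of what an open table format buys you — a sketch, not Snowflake's implementation, assuming a local Spark session with the Iceberg runtime package on the classpath; the catalog, namespace and table names are ours:

```python
from pyspark.sql import SparkSession

# Assumes the iceberg-spark-runtime jar is available to Spark.
spark = (SparkSession.builder
         .appName("iceberg-sketch")
         .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
         .config("spark.sql.catalog.demo.type", "hadoop")
         .config("spark.sql.catalog.demo.warehouse", "/tmp/warehouse")
         .getOrCreate())

# The table layout on storage follows the open Iceberg spec, so any engine
# that speaks Iceberg — Spark, Trino, Flink, or Snowflake — can read it.
spark.sql("CREATE TABLE IF NOT EXISTS demo.db.events (id BIGINT, ts TIMESTAMP) USING iceberg")
spark.sql("INSERT INTO demo.db.events VALUES (1, current_timestamp())")
spark.sql("SELECT * FROM demo.db.events").show()
```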
Okay, let's end with some ETR spending data, and why not compare Snowflake and Databricks spending profiles? This is an XY graph with net score, or spending momentum, on the Y axis, and pervasiveness, or overlap in the data set, on the X axis. This is data from the January survey, when Snowflake was holding above an 80% net score — off the charts. Databricks was also very strong, in the upper 60s. Now let's fast forward to this next chart and show you the July ETR survey data. You can see Snowflake has come back down to earth. Remember, anything above a 40% net score is highly elevated, so both companies are doing well, but Snowflake is well off its highs, and Databricks has come down somewhat as well. Databricks is inching to the right; Snowflake rocketed to the right post its IPO, and as we know, Databricks wasn't able to get to IPO during the COVID bubble. Ali Ghodsi is at the Morgan Stanley CEO conference this week. They've got plenty of cash to withstand a long-term recession, I'm told, and they've started to message that they're at a billion dollars in annualized revenue. I'm not sure exactly what that means. I've seen some numbers on their gross margins; I'm not sure what that means. I've seen some numbers on their net revenue retention; again, I'll reserve judgment until we see an S-1. But it's clear both of these companies have momentum, and they're out competing in the market, which, as always, will be the ultimate arbiter. Different philosophies, perhaps. Is it like Democrats and Republicans? Well, it could be, but they're both going after solving the data problem. Both companies are trying to help customers get more value out of their data, and both companies are highly valued, so they have to perform for their investors. To paraphrase Ralph Nader, the similarities may be greater than the differences. Okay, that's it for today. Thanks to the team from Palo Alto for this awesome Supercloud studio build. Alex Myerson and Ken Shiffman are on production in the Palo Alto studios today. Kristin Martin and Cheryl Knight get the word out to our community. Rob Hoff is our Editor in Chief over at SiliconANGLE. Thanks to all. Please check out etr.ai for all the survey data. Remember, these episodes are all available as podcasts wherever you listen; just search Breaking Analysis podcast. I publish each week on wikibon.com and siliconangle.com, and you can email me at david.vellante@siliconangle.com, or DM me @dvellante, or comment on my LinkedIn posts. And please, as I say, ETR has some of the best survey data in the business; we track it every quarter, and we're really excited to be partners with them. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, and we'll see you next time on Breaking Analysis. (music)
Breaking Analysis: AWS re:Inforce marks a summer checkpoint on cybersecurity
>> From theCUBE Studios in Palo Alto and Boston, bringing you data driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> After a two year hiatus, AWS re:Inforce is back on as an in-person event in Boston next week. Like the All-Star break in baseball, re:Inforce gives us an opportunity to evaluate the cyber security market overall, the state of cloud security and cross cloud security, and more specifically what AWS is up to in the sector. Welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis we'll share our view of what's changed since our last cyber update in May. We'll look at the macro environment, how it's impacting cyber security plays in the market, what the ETR data tells us, and what to expect at next week's AWS re:Inforce. We start this week with a checkpoint from Breaking Analysis contributor and stock trader Chip Simonton. We asked for his assessment of the market generally and cyber stocks specifically, so we'll summarize right here. We've kind of moved on from a narrative of the sky is falling to one where the glass is half empty, and before today's big selloff it was looking more and more like glass half full. The SNAP miss has dragged down many of the big names that comprise the major indices. Earnings season, as always, brings heightened interest, and this time we're seeing many cross currents. It starts as usual with the banks and the money centers. With the exception of JP Morgan, the numbers were pretty good, according to Simonton. Investment banks were not so great, with Morgan Stanley and Goldman missing estimates, but in general, pretty positive outlooks. But the market also shrugged off IBM's growth. And of course social media, because of SNAP, is getting hammered today. The question is no longer recession or not, but rather how deep the recession will be. And today's PMI data was the weakest since the start of the pandemic. Bond yields continue to weaken, and there's a growing consensus that Fed tightening may be over after September as commodity prices weaken. Now, gas prices of course are still high, but they've come down. Tesla, Nokia and AT&T all indicated that supply issues were getting better, which is also going to help with inflation. So it's no shock that the NASDAQ has done pretty well, as beaten down tech stocks started to look oversold, despite today's selloff. But AT&T and Verizon blamed their misses in part on people not paying their bills on time. SNAP's huge miss, even after guiding lower and then refusing to offer future guidance, took that stock down nearly 40% today, and other social media stocks are off in sympathy. Meta and Google were off over 7% at midday; I think at one point Meta hit 14% down. And Google, Meta and Twitter have all said they're freezing new hires. So we're starting to see, according to Simonton, for the first time in a long time, the lower income, younger generation really feeling the pinch of inflation, along of course with struggling families that have to choose food and shelter over discretionary spend. Now, back to the NASDAQ for a moment. As we've been reporting, back in mid-June the NASDAQ was off nearly 33% year to date and has since rallied. It's now down about 25% year to date as of midday today, but as I say, it had been much deeper back in early June. But it's broken that downward trend that we talked about, where the highs are lower and the lows are lower. That's started to change, for now anyway. We'll see if it holds.
But chip stocks, software stocks, and of course the cyber names have broken those downtrends and have been trading above their 50 day moving averages for the first time in around four months. And again, according to Simonton, we'll see if that holds. If it does, that's a positive sign. Now remember, on June 24th we recorded a Breaking Analysis and talked about Qualcomm trading at a 12x multiple with an implied 15% growth rate. On that day the stock was $124, and it surpassed $155 earlier this month. That was a really good call by Simonton. So looking at some of the cyber players here, SailPoint is of course the anomaly, with Thoma Bravo's $7 billion acquisition of the company holding that stock up. But the BUG ETF, a basket of cyber stocks, has definitely improved. When we last reported on cyber in May, CrowdStrike was off 23% year to date; it's now off 4%. Palo Alto has held steady. Okta is still underperforming its peers as it works through the fallout from the breach and the ingestion of its Auth0 acquisition. Meanwhile Zscaler and SentinelOne, those high flyers, are still well off year to date, with Ping Identity and CyberArk not getting hit as hard, as their valuations hadn't run up as much. But virtually all these tech stocks generally, and cyber issues specifically, have been breaking their downtrends. So it will now come down to earnings guidance in the coming months. But the SNAP reaction is quite stunning. I mean, the environment is slowing, we know that. Ad spending gets cut in that type of market, we know that too. So it shouldn't be a huge surprise to anyone, but as Chip Simonton says, this shows that sellers are still in control here. So it's going to take a little while to work through that, despite the positive signs that we're seeing. Okay. We also turned to our friend Eric Bradley from ETR, who follows these markets quite closely. He frequently interviews CISOs on his program and his round tables, so we asked for his take, and here's what ETR is saying. Again, as we've reported, while CIOs and IT buyers have tempered spending expectations since December and early January, when they called for 8% plus spending growth, they're still expecting a six to seven percent uptick in spend this year. So that's pretty good. Security remains the number one priority and is also the highest ranked sector in the ETR data set when you measure in terms of pervasiveness in the study. Within security, endpoint detection and extended detection and response, along with identity and privileged account management, are the sub-sectors with the most spending velocity. And when you exclude Microsoft, which is just dominant across the board in so many sectors, CrowdStrike has taken over the number one spot in terms of spending momentum in ETR surveys, with CyberArk and Tanium showing very strong as well. Okta has seen a big dropoff in net score, from 54% last survey to 45% in July, as customers maybe put a pause on new Okta adoptions. That clearly shows in the survey; we'll talk about that in a moment. Look, Okta is still elevated in terms of spending momentum, but it doesn't have the dominant leadership position it once held in spend velocity. Year on year, according to ETR, Tenable and Elastic are seeing the biggest jumps in spending momentum, with SailPoint, Tanium, Varonis, CrowdStrike and Zscaler seeing the biggest jump in new adoptions since the last survey.
Now on the downside, SonicWall, Symantec, Trellix (which is McAfee), Barracuda and TrendMicro are seeing the highest percentage of defections and replacements. Let's take a deeper look at what the ETR data tells us about the cybersecurity space. This is a popular view that we like to share, with net score, or spending momentum, on the Y axis and overlap, or pervasiveness in the data, on the X axis. It's a measure of presence in the data set; we used to call it market share. With the data, the dot positions, you see that little inserted table; that's how the dots are plotted. And it's important to note that this data is filtered for firms with at least 100 Ns in the survey. That's why some of the other ones we mentioned might have dropped off. The red dotted line at 40% indicates highly elevated spending momentum, and there are several firms above that mark, including of course Microsoft, which is literally off the charts in both dimensions in the upper right. It's quite incredible, actually. But for the rest of the pack, CrowdStrike has now taken back its number one net score position in the ETR survey, and CyberArk, Okta, Zscaler, CloudFlare and Auth0 (now Okta, through the acquisition) are all above the 40% mark. You can stare at the data at your leisure, but I'll just make three quick points. First, Palo Alto continues to impress, and it's steady as she goes. Two, it's still a very crowded market and a complicated space. And three, there's lots of spending in different pockets. This market has too many tools and will continue to consolidate. Now I'd like to drill into a couple of firms' net scores and pick out some of the pure plays that are leading the way. This series of charts shows the net score, or spending velocity, granularity for Okta, CrowdStrike, Zscaler and CyberArk — four of the top pure plays in the ETR survey that also have over a hundred responses. Now, the colors represent the following. Bright red is defections: we're leaving the platform. The pink is we're spending less, meaning 6% or worse. The gray is flat spend, plus or minus 5%. The forest green is spending more, i.e., 6% or more, and the lime green is we're adding the platform new. That red dotted line at the 40% net score mark is the same elevated level that we like to talk about, and all four are above that target. Now, that blue line you see there is net score; the yellow line is pervasiveness in the data. The data shown in each bar goes back 10 surveys, all the way back to January 2020. First I want to call out that all four are seeing downtrends in spending momentum, along with the whole market — that's the blue line. They're seeing that this quarter; again, the market is off overall. Everybody is seeing that downtrend for the most part, with very few exceptions. Okta is being hurt by fewer new additions, which is why we highlighted in red that dotted area, that square we put there in the upper right of the Okta bar. That lime green, new adds, is off as well, and the gray, flat spending, is noticeably up for Okta. So it feels like people are pausing a bit and taking a breather. And as we said earlier, perhaps with the breach earlier this year and the ingestion of the Auth0 acquisition, the company is seeing some friction in its business. Now, having said that, you can see Okta's yellow line, or presence in the data set, continues to grow, so it's a good proxy for market presence. So Okta remains a leader in identity.
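Given the bucket definitions above, a net score is simple arithmetic: the greens net against the reds, and flat spend washes out. A quick sketch with made-up percentages — ETR's actual methodology may differ in its details:

```python
def net_score(new_pct: float, more_pct: float, flat_pct: float,
              less_pct: float, defect_pct: float) -> float:
    """ETR-style net score: (adding new + spending more)
    minus (spending less + leaving); flat spend nets to zero."""
    assert abs(new_pct + more_pct + flat_pct + less_pct + defect_pct - 100) < 1e-6
    return (new_pct + more_pct) - (less_pct + defect_pct)

# Hypothetical response mix: 15% adding new, 40% spending 6%+ more,
# 35% flat, 7% spending less, 3% defecting.
print(net_score(15, 40, 35, 7, 3))  # -> 45, comfortably above the 40% line
```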
So again, I'll let you stare at the data at your leisure, but despite some concerns about declining momentum, notice there's very little red at these companies when it comes to the ETR survey data. Now, one more data slide, which brings us to our four star cyber firms. We started a tradition a few years ago where we sorted the ETR data by net score — that's the left hand side of this graphic — and we sorted by shared N, or presence in the data set — that's the right hand side. And again, we filtered by companies with at least 100 Ns, and oh, by the way, we've excluded Microsoft just to level the playing field. The red dotted line signifies the top 10. If a company cracks the top 10 in both spending momentum and presence, we give them four stars. So Palo Alto, CrowdStrike, Okta, Fortinet and Zscaler all made the cut this time. Now, as we pointed out in May, if you combine Auth0 with Okta, they jump to number two on the right hand chart in terms of presence, and they would lead the pure plays there, although it would bring down Okta's net score somewhat — as you can see, Auth0's net score is lower than Okta's, so combining them would drag that down a little bit, but it would give them bigger presence in the data set. Now, the other point we'll make is that Proofpoint and Splunk both dropped off the four star list this time, as they both saw marked declines in net score, or spending velocity. They both got four stars last quarter. Okay. We're going to close on what to expect at re:Inforce this coming week. Re:Inforce, if you don't know, is AWS's security event. They first held it in Boston back in 2019, and it's dedicated to cloud security. The past two years it has been virtual, and they announced at re:Invent that it would take place in Houston in June, which everybody said was crazy — who wants to go to Houston in June? It turns out nobody did, so they postponed the event, thankfully. And so now they're back in Boston, starting on Monday. Not that it's going to be much cooler in Boston. Anyway, Steven Schmidt had been the face of AWS security at all these previous events as the Chief Information Security Officer. Now he's dropped the I from his title and is the Chief Security Officer at Amazon. So he went with Jassy to the mothership. Presumably he dropped the I because he deals with physical security now too, like at the warehouses. Not that he didn't have to worry about physical security at the AWS data centers — I don't know. Anyway, he and CJ Moses, who is now the new CISO at AWS, will be keynoting, along with some others, including MongoDB's Chief Information Security Officer. So that should be interesting. Now, if you've been following AWS, you'll know they like to break things down into a couple of security categories: identity, detection and response, and data protection/privacy/GRC, which is governance, risk and compliance. And we would expect a lot more talk this year on container security. So you're going to hear product updates, and they like to talk about how they're adding value to services and try to help customers understand how to apply those services. Things like GuardDuty, which is their threat detection service that has machine learning in it. They'll talk about Security Hub, which centralizes views and alerts and automates security checks. They have a service called Detective, which does root cause analysis, and they have tools to mitigate denial of service attacks.
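Just to make one of those services concrete, here's a minimal sketch that pulls high severity GuardDuty findings with boto3 — it assumes AWS credentials are configured and a detector is already enabled; the region and severity threshold are our choices:

```python
# pip install boto3
import boto3

gd = boto3.client("guardduty", region_name="us-east-1")

for detector_id in gd.list_detectors()["DetectorIds"]:
    # Ask for findings at severity 7 (high) or above for this detector.
    finding_ids = gd.list_findings(
        DetectorId=detector_id,
        FindingCriteria={"Criterion": {"severity": {"Gte": 7}}},
        MaxResults=10,
    )["FindingIds"]
    if finding_ids:
        for f in gd.get_findings(DetectorId=detector_id,
                                 FindingIds=finding_ids)["Findings"]:
            print(f["Severity"], f["Type"], f["Title"])
```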
They'll also talk about security in Nitro, which isolates a lot of the hardware resources. This whole idea of confidential computing — which, AWS will point out, has kind of become a buzzword — they take really seriously. I think others do as well, like Arm; we've talked about that on previous Breaking Analysis episodes. And again, you're going to hear something on container security, because it's the hottest thing going right now, and because AWS really still serves developers, and really that's what they're trying to do: enable developers to design security in. But you're also going to hear a lot of best practice advice from AWS — i.e., they'll share the AWS dogfooding playbooks with you from their own security practices. AWS, like all good security practitioners, understands that the keys to a successful security strategy and implementation don't start with the technology; rather, they're about the methods and practices you apply to solve security threats, a top to bottom cultural approach to security awareness, designing security into systems — that's really where the developers come in — and training for continuous improvement. So you're going to get heavy doses of really strong best practices and guidance, and some good preaching. You're also going to hear and see a lot of partners. They'll be very visible at re:Inforce; AWS is all about ecosystem enablement, and it's going to host close to a hundred security partners at the event. This is key, because AWS doesn't do it all. Interestingly, they don't even show up in the ETR security taxonomy; they just sort of imply that security is built in, even though they have a lot of security tooling. So they have to apply the shared responsibility model not only with customers but with partners as well. They need an ecosystem to fill gaps and provide deeper problem solving with more mature and deeper security tooling. And you're going to hear a lot of positivity around how great cloud security is and how it can be done well, but the truth is this stuff is still incredibly complicated and challenging for CISOs and practitioners, who are understaffed when it comes to top talent. Now, finally, theCUBE will be at re:Inforce in force. John Furrier and I will be hosting two days of broadcast, so please do stop by if you're in Boston and say hello. We'll have a little chat, we'll share some data, and we'll share our overall impressions of the event, the market, what we're seeing, what we're learning, and what we're worried about in this dynamic space. Okay, that's it for today. Thanks for watching. Thanks to Alex Myerson, who is on production and manages the podcast. Kristin Martin and Cheryl Knight help get the word out on social and in our newsletters, and Rob Hoff is our Editor in Chief over at siliconangle.com — you did some great editing. Thank you all. Remember, all these episodes are available as podcasts; wherever you listen, all you do is search Breaking Analysis podcast. I publish each week on wikibon.com and siliconangle.com. You can get in touch with me by emailing david.vellante@siliconangle.com, or DM me @dvellante, or comment on my LinkedIn post. And please do check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, and we'll see you in Boston next week if you're there, or next time on Breaking Analysis. (soft music)
Breaking Analysis: Technology & Architectural Considerations for Data Mesh
>> From theCUBE Studios in Palo Alto and Boston, bringing you data driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. >> The introduction and socialization of data mesh has caused practitioners, business technology executives, and technologists to pause and ask some probing questions about the organization of their data teams, their data strategies, future investments, and their current architectural approaches. Some in the technology community have embraced the concept, others have twisted the definition, while still others remain oblivious to the momentum building around data mesh. Here we are in the early days of data mesh adoption. Organizations that have taken the plunge will tell you that aligning stakeholders is a non-trivial effort, but necessary to break through the limitations that monolithic data architectures and highly specialized teams have imposed over frustrated business and domain leaders. However, practical data mesh examples often lie in the eyes of the implementer, and may not strictly adhere to the principles of data mesh. Now, part of the problem is a lack of open technologies and standards that can accelerate adoption and reduce friction, and that's what we're going to talk about today: some of the key technology and architecture questions around data mesh. Hello, and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, we welcome back the founder of data mesh and director of Emerging Technologies at Thoughtworks, Zhamak Dehghani. Hello, Zhamak. Thanks for being here today. >> Hi Dave, thank you for having me back. It's always a delight to connect and have a conversation. Thank you. >> Great, looking forward to it. Okay, so before we get into the technology details, I just want to quickly share some data from our friends at ETR. Despite the importance of data initiatives since the pandemic, CIOs and IT organizations have had to juggle, of course, a few other priorities. This is why, in the survey data, cyber and cloud computing are rated as the two most important priorities. Analytics and machine learning and AI, which are kind of data topics, still make the top of the list, well ahead of many other categories. And look, a sound data architecture and strategy is fundamental to digital transformation, and much of the past two years, as we've often said, has been like a forced march into digital. So while organizations are moving forward, they really have to think hard about the data architecture decisions that they make, because it's going to impact them, Zhamak, for years to come, isn't it? >> Yes, absolutely. I mean, we are slowly moving from reason-based, logical, algorithmic computation to model-based computation and decision making, where we exploit the patterns and signals within the data. So data becomes a very important ingredient, not only of decision making, analytics and discovering trends, but also of the features and applications that we build for the future. So we can't really ignore it. And as we see, the existing challenge around getting value from data is no longer access to computation; it's actually access to trustworthy, reliable data at scale. >> Yeah, and you see these domains coming together with the cloud, and obviously it has to be secure and trusted, and that's why we're here today talking about data mesh. So let's get into it.
Zhamak, first, your new book is out — 'Data Mesh: Delivering Data-Driven Value at Scale', just recently published — so congratulations on getting that done. Awesome. Now, in a recent presentation, you pulled excerpts from the book, and we're going to talk through some of the technology and architectural considerations. Just quickly for the audience, the four principles of data mesh: domain driven ownership, data as product, self-serve data platform, and federated computational governance. So I want to start with the self-serve platform and some of the data that you shared recently. You say that data mesh serves autonomous, domain oriented teams, versus existing platforms, which serve a centralized team. Can you elaborate? >> Sure. I mean, the role of the platform is to lower the cognitive load for domain teams — for the people who are focusing on the business outcomes, the technologists who are building the applications — so they can work with data, whether they're building analytics, automated decision making, or intelligent models. They need to be able to get access to data and use it. So the role of the platform, just stepping back for a moment, is to empower and enable these teams. Data mesh by definition is a scale-out model. It's a decentralized model that wants to give autonomy to cross-functional teams, so at its core it requires a set of tools that work really well in that decentralized model. When we look at the existing platforms, they try to achieve a similar outcome, right? Lower the cognitive load, give the tools to data practitioners to manage data at scale. But today, the centralized data teams' job isn't really directly aligned with any one business unit or business outcome in terms of getting value from data. Their job is to manage the data and make it available for those cross-functional teams or business units to use. So the platforms they've been given are really centralized around, or tuned to work with, that centralized team structure. On the surface it seems, why not? Why can't I use my cloud storage or computation or data warehouse in a decentralized way? You should be able to, but some changes need to happen to those platforms. As an example, some cloud providers simply have hard limits on the number of storage accounts you can have, because they never envisaged you'd have hundreds of lakes. They envisaged one or two, maybe 10 lakes, right? They envisaged really centralizing data, not decentralizing data. So I think we see a shift in thinking about enabling autonomous, independent teams versus a centralized team. >> So just a follow up if I may — we could be here for a while. But this assumes that you've sorted out the organizational considerations, that you've defined what a data product is, and a sub-product. And people will say — of course we use the term monolithic as a pejorative, let's face it — the data warehouse crowd will say, "Well, that's what data marts did, so we've got that covered." But the premise of data mesh, if I understand it, is that whether it's a data mart or a data warehouse, or a data lake, or a Snowflake warehouse, it's a node on the mesh. Okay? So don't build your organization around the technology; let the technology serve the organization. Is that— >> That's a perfect way of putting it, exactly.
I mean, for a very long time, when we've looked at decomposition of complexity, we've looked at decomposition of complexity around technology, right? So we have the technology — and that's maybe a good segue to the next item on that list — and we say, oh, I need to decompose based on whether I want access to raw data, and put it on the lake, or whether I want access to modeled data, and put it on the warehouse. You know, I need a team in the middle to move the data around. And then we try to fit the organization into that model. Data mesh really inverses that, and as you said: look at the organizational structure first, then the scale boundaries around which your organization and operation can scale, and then, as a second layer, look at the technology and how you decompose it. >> Okay. So let's go to that next point and talk about how you serve and manage autonomous, interoperable data products, where code, data and policy are treated as one unit. Whereas your contention is that existing platforms of course have independent management and dashboards for catalogs or storage, et cetera. Maybe we double click on that a bit. >> Yeah. So if you think about that functional or technical decomposition of concerns — that's one way, a very valid way, of decomposing complexity — we then build independent solutions to address each concern. That's what we see in the technology landscape today. We see technologies that take care of management of data, bringing your data under some sort of control and modeling. You see technology that moves that data around and performs various transformations and computations on it. And then you see technology that tries to overlay some level of meaning: metadata, understandability, discovery, and policy, right? So that's where your data processing pipeline technologies, versus your data warehouse and lake storage technologies, and then governance, come to play. And over time, we decompose and recompose — deconstruct and reconstruct — back together. But right now, that's where we stand. I think for data mesh to really become a reality — as in, independent sources of data, where teams can responsibly share data in a way that can be understood right then and there, that can impose policies right when the data gets accessed at that source, and in a resilient manner, such that changes to the structure or schema of the data don't cause downstream downtimes — we've got to think about a new nucleus, a new unit, of data sharing. And we need to bring the transformation and governing of data, and the data itself, back together around these decentralized nodes on the mesh. So that's another deconstruction and reconstruction that needs to happen around the technology, to formulate ourselves around the domains — and again, around the data and the logic of the data itself, the meaning of the data itself. >> Great, got it. And we're going to talk more about the importance of data sharing and the implications. But the third point deals with how operational and analytical technologies are constructed. You've got an app dev stack, and you've got a data stack. You've made the point many times, actually, that we've contextualized our operational systems, but not our data systems; they remain separate. Maybe you could elaborate on this point. >> Yes. I think this, again, has a historical background.
For a really long time, applications have dealt with features and the logic of running the business, encapsulating the data and the state they need to run that feature or business function. And then, for anything analytically driven — which required access to data across these applications, and across the longer dimension of time, around different subjects within the organization — we made a decision about this analytical data that said: okay, let's leave those applications aside, let's leave those databases aside. We'll extract the data out, and we'll load it, or transform it, and put it under the analytical data stack. And then downstream from it, the analytical data users — the data analysts, the data scientists, and the growing portfolio of users — use that data stack. And that led to this real separation: a dual stack with point-to-point integration. So applications went down the path of transactional databases, or document stores, using APIs for communicating, and we've gone to lake storage or the data warehouse on the other side. And that, again, enforces the silo of data versus app, right? So if we are moving to a world where our ambitions are around making applications more intelligent — making them data driven — these two worlds need to come closer. As in, ML and analytics get embedded into those applications themselves, and data sharing, as a very essential ingredient of that, gets embedded and becomes closer to those applications. So if you are looking at this now cross-functional, app-and-data-based team — a business team — then the technology stacks can't be so segregated, right? There has to be a continuum of experience from app delivery, to sharing of the data, to using that data, to embedding models back into those applications. And that continuum of experience requires well integrated technologies. I'll give you an example — and in some sense, we are somewhat moving in that direction. If we are talking about data sharing or data modeling, applications use one set of APIs: HTTP-compliant, GraphQL or REST APIs. And on the other hand, you have proprietary SQL — connect to my database and run SQL. Those are two very different models of representing and accessing data. So we kind of have to harmonize, or integrate, those two worlds a bit more closely to achieve that domain oriented, cross-functional team. >> Yeah. We're going to talk about some of the gaps later, and actually you look at them as opportunities more than barriers — they are barriers, but they're opportunities for more innovation. Let's go on to the next point. It deals with the roles that the platform serves. Data mesh proposes that domain experts own the data, take responsibility for it end to end, and are served by the technology — we referenced that before. Whereas your contention is that today, data systems are really designed for specialists — I think you use the term hyper-specialists a lot, I love that term — and the generalists are kind of passive bystanders, waiting in line for the technical teams to serve them. >> Yes. I mean, again, the intention behind data mesh was creating a responsible data sharing model that scales out. And I challenge any organization that has scale ambitions around data, or usage of data, that relies on small pockets of very expensive specialist resources, right?
So we have no choice but to upskill and cross-skill the majority population of our technologists. We often call them generalists, right? That's shorthand for people who can really move from one technology to another. Sometimes we call them paint drip people, sometimes we call them T-shaped people. But regardless, we need the ability to really mobilize our generalists, and we had to do that at Thoughtworks. We serve a lot of our clients, and like many other organizations, we are also challenged with hiring specialists. So we have tested the model of having a few specialists really conveying and translating the knowledge to generalists and bringing them forward. And of course, the platform is a big enabler of that. What is the language of using the technology? What are the APIs that delight that generalist experience? This doesn't mean no-code or low-code; we can't throw away good engineering practices. And I think good software engineering practices remain — of course, adapted to the world of data, to build resilient, sustainable solutions. But specialty, especially around proprietary technology, is going to be a hard one to scale. >> Okay. I'm definitely going to come back and pick your brain on that one. And your point about scale-out: in the practical examples of companies that have implemented data mesh that I've talked to — there's only a handful that I've really gone deep with — in all cases their Hadoop clusters wouldn't scale, and they couldn't scale the business around them. So that's really a key point, a common pattern that we've seen. In those cases they went to the data lake model on AWS, and so that maybe has some violation of the principles, but we'll come back to that. So let me go on to the next one. Of course, data mesh leans heavily toward this concept of decentralization, to support domain ownership, over centralized approaches. And we certainly see the public cloud players and database companies as key actors here, with very large install bases, pushing a centralized approach. So I guess my question is, how realistic is this next point, where you have decentralized technologies ruling the roost? >> I think if you look at the history of places in our industry where decentralization has succeeded, they heavily relied on standardization of connectivity across different components of technology. And I think right now, you are right: the way we get value from data relies on collection. At the end of the day, collection of data — whether you have a deep learning model that you're training, or reports to generate — regardless, the model is bring your data to a place where we can collect it, so that we can use it. And that naturally leads to a set of technologies that try to operate as a full stack, integrated, proprietary, with no intention of opening data for sharing. Now, conversely, if you think about the internet itself, the web itself, microservices — even at the enterprise level, not the planetary level — they succeeded as decentralized technologies to a large degree because of their emphasis on openness and sharing, right? API sharing. In the API world, we don't say, "I will build a platform to manage your logical applications." Maybe to a degree, but we actually moved away from that.
We say, "I'll build a platform that opens around applications, to manage your APIs, manage your interfaces," right? Give you access to the APIs. So I think that definition of decentralized there really means composable, open pieces of technology that can play nicely with each other — rather than a full stack that wants all the control of your data, yet is only somewhat decentralized within the boundary of its own platform. That's simply not going to scale if data needs to come from different platforms, different locations, different geographies. We need to rethink it. >> Okay, thank you. And then the final point is that data mesh favors technologies that are domain agnostic versus those that are domain aware. And I wonder if you could help me square the circle, because it's nuanced, and I'm kind of a 100-level student of your work. But you have said, for example, that the data teams lack context of the domain. So help us understand what you mean here in this case. >> Sure, absolutely. So as you said, data mesh tries to give autonomy, decision making power and responsibility to the people who have the context of those domains, right? The people who are really familiar with the different business domains, and naturally the data that the domain needs, or the data that the domain shares. So if the intention of the platform is really to give that power to the people with the most relevant and timely context, the platform itself, as a shared component, naturally becomes domain agnostic, to a large degree. Of course, platform is a (chuckles) fairly overloaded word. If you think about it as a set of technology that abstracts complexity and allows building the next level of solutions on top, those domains may have their own set of platforms that are very much domain aware. But as a generalized, shareable set of technologies or tools that allows us to share data, that piece of technology needs to relinquish the knowledge of the context to the domain teams, and actually become domain agnostic. >> Got it. Okay, makes sense. All right, let's shift gears here and talk about some of the gaps and some of the standards that are needed. You and I have talked about this a little bit before, but this digs deeper. What types of standards are needed? Maybe you could walk us through this graphic, please. >> Sure. So what I'm trying to depict here is that if we imagine a world where data can be shared from many different locations, for a variety of analytical use cases, naturally the boundary of what we call a node on the mesh encapsulates a fair few pieces internally. It's not just the data itself that the node is controlling, updating and maintaining. It's, of course, the computation and the code that's responsible for that data, and then the policies that continue to govern that data as long as it exists. So if that's the boundary, then if we shift the focus from implementation details — we can leave those for later — what becomes really important are the seams, the APIs and interfaces, that this node exposes. And I think that's where the work needs to be done and where the standards are missing. And we want those seams and interfaces to be open, because that allows different organizations with different boundaries of trust to share data.
Not only to share data by moving it to another location, but to share data in a way that distributed workloads — distributed analytics, distributed machine learning models — can happen on the data where it is. So if you follow that line of thinking, around decentralization and connection of data versus collection of data, I think the very, very important piece of it, which needs really deep thinking — and I don't claim that I have done that — is how we share data responsibly and sustainably, right? In a way that is not brittle. If you think about it today, one of the very common ways we share data is: I'll give you a JDBC endpoint, or an endpoint to your database of choice, and now, as a user, you have access to the schema of the underlying data, and you can run various SQL queries on it. That's very simple and easy to get started with — that's why SQL is an evergreen standard, or semi-standard, pseudo-standard, that we all use. But it's also very brittle, because we are dependent on an underlying schema and formatting of the data that was designed to tell the computer how to store and manage the data. So I think the data sharing APIs of the future really need to think about removing these brittle dependencies — think about sharing not only the data, but what we call metadata, I suppose: the additional set of characteristics that is always shared along with the data, to make the data usage ethical, I suppose, and also friendly for the users. And the other element of that data sharing API is to allow computation to run where the data exists. If you think about SQL, again, as a simple, primitive example of computation: when we select, and when we filter, and when we join, the computation is happening on that data. So maybe there is a next level of articulating distributed computation on data — one that simply trains models, right? Your language primitives change in a way that allows sophisticated analytical workloads to run on the data more responsibly, with policies and access control enforced. So I think that output port that I mentioned is simply about next generation, responsible data sharing APIs, suitable for decentralized analytical workloads. >> So I'm not trying to bait you here, but I have a follow up as well. So schema, for all its good, creates constraints. No schema-on-write didn't work, because it was just a free-for-all and it created the data swamps. But now you have technology companies trying to solve that problem. Take Snowflake, for example, enabling data sharing, but within its proprietary environment. Certainly Databricks is doing something, trying to come at it from its angle, bringing some of the best of the data warehouse to the data science world. Is your contention that those remain sort of proprietary, de facto standards, and that what we need is more open standards? Maybe you could comment. >> Sure, I think there are two points. One is, as you mentioned, open standards — standards that actually make the underlying platform invisible. I mean, my litmus test for a technology provider that says "I'm data mesh (laughs) compliant" is: is your platform invisible? As in, can I replace it with another and yet get the similar data sharing experience that I need? So part of it is that: open standards, not really proprietary ones.
The other angle, for sharing data across different platforms so that we don't get stuck with one technology or another, is around APIs. It is around code that is protecting that internal schema. So, where we are on the curve of evolution of technology right now, we are exposing the internal structure of the data — a structure that was designed to optimize certain modes of access — directly to the end client and application APIs, right? So the APIs that use the data today are very much aware that this database was optimized for machine learning workloads, hence you will deal with a columnar storage of the files, versus this other API, which is optimized for a very different, report-type, relational access and is organized around rows. I think that should become irrelevant in the API sharing of the future, because as a user, I shouldn't care how this data is internally optimized, right? The language primitives that I'm using should be really agnostic to the machine optimization underneath. And if we did that, perhaps this war between warehouse or lake or the other will become actually irrelevant. So we're optimizing for the best human experience, as opposed to the best machine experience. We still have to do that, but we have to make it invisible — make it an implementation concern. So that's another angle of, if we daydream together, the best and most resilient experience in terms of data usage: these APIs would be agnostic to the internal storage structure.
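To picture what such a storage-agnostic seam might look like, here's a hypothetical sketch — our illustration of Dehghani's argument, not an existing standard — where the consumer states intent and purpose, and the internal layout (columnar, row, region, engine) stays invisible:

```python
from dataclasses import dataclass
from typing import Iterator, Protocol

@dataclass
class Order:
    """Shape defined by the data product's published contract,
    not by how files happen to be laid out on storage."""
    order_id: str
    amount: float

class OutputPort(Protocol):
    """A hypothetical data-sharing seam: the consumer states what it
    wants and why; the product decides engine and layout underneath."""
    def read(self, since: str, purpose: str) -> Iterator[Order]: ...

class InMemoryOrdersPort:
    """Toy implementation; a real port might push the filter down to
    whatever engine holds the data, enforcing policy on `purpose`."""
    def read(self, since: str, purpose: str) -> Iterator[Order]:
        assert purpose == "finance-reporting", "policy enforced at access time"
        yield Order("a-1001", 42.50)
        yield Order("a-1002", 9.99)

def monthly_total(port: OutputPort) -> float:
    # The declared purpose travels with the request, so governance happens
    # where the data lives, rather than after a bulk copy.
    return sum(o.amount for o in port.read(since="2022-07-01",
                                           purpose="finance-reporting"))

print(monthly_total(InMemoryOrdersPort()))
```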
One is, I feel the technology solutions we're using today are still not ready for the vision. We have to be in this transitional step; no matter what, we have to be pragmatic, of course, and practical, and use the existing vendors that exist, and I wholeheartedly embrace that, but it's just not my role to show that. I've gone through this kind of transformation once before in my life. When microservices happened, we were building microservices-like architectures with technology that wasn't ready for it: big web application servers that were designed to run giant monolithic applications, and now we were trying to run little microservices on them. And the tail was wagging the dog; the environmental complexity of running these services consumed so much of our effort that we couldn't really pay attention to the business logic, the business value. And that's where we are today. The complexity of integrating existing technologies is overwhelming, capturing a lot of our attention, money, and effort, as opposed to really focusing on the data products themselves. So that's just the role I have. It doesn't mean we have to rebuild the world; we've got to do with what we have in this transitional phase until the next generation of technologies comes around and reshapes our landscape of tools.
>> Well, impressive public discipline. Your point about microservices is interesting, because a lot of those early microservices weren't so micro, and for the naysayers, look, past is not prologue here: Thoughtworks was really early on the whole concept of microservices, so I'll be very excited to see how this plays out. But there were some other good comments. There was one from a gentleman who said the most interesting aspects of data mesh are organizational, and that's how my colleague Sanjeev Mohan frames data mesh versus data fabric. You know, I'm not sure; I think we've only scratched the surface today, and data mesh is more than that. And I still think data fabric is what NetApp defined as software-defined storage infrastructure that could serve on-prem and public cloud workloads, back in, whatever, 2016. But the point you make in the thread we're showing here is a warning, and you referenced this earlier: that segregating different modes of access will lead to fragmentation, and we don't want to repeat the mistakes of the past.
>> Yes, there are comments around that. Again, going back to the conversation we had at the macro level: we've got this tendency to decompose complexity based on technical solutions, and the conversation becomes, "Oh, I do batch and you do streaming, and we are different." We create these bifurcations in our decisions based on the technology, where I do events and you do tables, right? That segregation of modes of access causes accidental complexity that we keep dealing with, because every time you create a new branch in this tree, you create a new set of tools that then somehow need to be integrated point to point, and you create new specialization around them. So aim for the least number of branches, and think really about the continuum of experiences that we need to create, and about technologies that simplify that continuum of experience. For example, let me give you a past experience.
I was really excited by the papers and the work that came out around Apache Beam, and generally around flow-based programming and stream processing, because basically they were saying: whether you are doing batch or whether you're doing streaming, it's all one stream. Sometimes the window of time over which you're computing narrows, and sometimes it widens, but at the end of the day you're just doing stream processing. It's those sorts of notions, the ones that simplify and create a continuum of experience, that resonate with me personally, more than creating tribal fights of this type versus that mode of access. So that's why data mesh naturally selects this kind of multimodal access to support end users, the personas of end users.
>> Okay. So the last topic I want to hit: this whole discussion, the topic of data mesh, is highly nuanced, it's new, and people are going to shoehorn data mesh into their respective views of the world. We talked about lakehouses, and there are three buckets. And of course the gentleman from LinkedIn; and with Azure, Microsoft has a data mesh community. You're going to have to enlist a serious army of enforcers to adjudicate. I wrote some of this down; it's interesting. Monte Carlo has a data mesh calculator. Starburst is leaning in. ChaosSearch sees themselves as an enabler. Oracle and Snowflake both use the term data mesh. And then of course you've got the big practitioners: JPMC; we've talked to Intuit; Zalando; HelloFresh has been on; Netflix has an event-based, streaming sort of implementation. So my question is: how realistic is it that the clarity of your vision can be implemented and not polluted by really rich technology companies and others? (Zhamak laughs)
>> Is it even possible, right? Is it even possible? That's... yes, this is why I have to keep practicing, (laughs) because I think it's going to be hard. What I'm hopeful about is that framing data mesh as a socio-technical concern, a socio-technical solution and not just a technology solution, always brings us back to reality when vendors try to sell you snake oil that solves all of your problems (chuckles), all of your data mesh problems. That's just going to cause more problems down the track. So we'll see; time will tell, Dave, and I count on you as one of those folks (laughs) who will continue to share their platform, to go back to the roots, to the "why" in the first place. I dedicated a whole part of the book to "why," because, as you said, we get carried away with vendors and technology solutions trying to ride a wave, and in that story we forget the reason for which we're even making this change and spending all of these resources. So hopefully we can always come back to that.
>> Yeah, and I think we can. You've really given this some deep thought, and as we pointed out, it's based on practical knowledge and experience. And look, we've been trying to solve this data problem for a long, long time. You've not only articulated it well, but you've come up with solutions. So Zhamak, thank you so much. We're going to leave it there, and I'd love to have you back.
>> Thank you for the conversation. I really enjoyed it. And thank you for sharing your platform to talk about data mesh.
>> You bet. All right, and I want to thank my colleague Stephanie Chan, who helps research topics for us.
Alex Myerson is on production, and Kristen Martin, Cheryl Knight, and Rob Hoff are on editorial. Remember, all these episodes are available as podcasts wherever you listen; all you've got to do is search "Breaking Analysis Podcast." Check out ETR's website at etr.ai for all the data, and we publish a full report every week on wikibon.com and siliconangle.com. You can reach me by email at david.vellante@siliconangle.com or DM me @dvellante, and hit us up on our LinkedIn posts. This is Dave Vellante for theCUBE Insights, powered by ETR. Have a great week, stay safe, be well, and we'll see you next time. (bright music)
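To ground the Apache Beam point from the conversation above, batch and streaming as one model where only the window of time narrows or widens, here is a small sketch using the Beam Python SDK. The events, timestamps, and 60-second window size are invented for illustration.

```python
import apache_beam as beam
from apache_beam.transforms import window

# Timestamped events; in a real pipeline these would come from a file
# (batch) or a topic (streaming) -- the transforms below don't change.
events = [("checkout", 10), ("checkout", 25), ("checkout", 70)]

with beam.Pipeline() as p:
    (
        p
        | beam.Create(events)
        # Attach event time so windowing has something to window on.
        | beam.Map(lambda kv: window.TimestampedValue(kv[0], kv[1]))
        # The one batch-versus-stream "knob": how wide the window is.
        | beam.WindowInto(window.FixedWindows(60))
        | beam.combiners.Count.PerElement()
        # Emits ('checkout', 2) for window [0, 60) and
        # ('checkout', 1) for window [60, 120).
        | beam.Map(print)
    )
```

A bounded or an unbounded source can feed the same pipeline; only the windowing choice expresses how narrow or wide the computation's slice of time is.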
Breaking Analysis: Snowflake’s Wild Ride
>> From theCUBE Studios in Palo Alto and Boston, bringing you data driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante.
>> Snowflake: they loved the stock at 400 and hated it at 165. That's the nature of the business, I guess, especially in this crazy cycle over the last two years of lockdowns, free money, exploding demand, and now rising inflation and rates. But with the Fed providing some clarity on its actions, the time has come to really dig into the fundamentals of companies, and there's no tech company that's more fun to analyze than Snowflake. Hello and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis we look at the action in Snowflake stock since its IPO, why it's behaved the way it has, how some sharp traders are looking at the stock, and most importantly, what customer demand looks like.

The stock has really provided some great theater since its IPO. I know people who got in at 120 before the open, and I know lots of people who kind of held their noses and bought the stock on day one at over 300, a day when it closed at around 240. Snowflake hit 164 this week, its all-time low as a public company. As my college roommate Chip Symington, a longtime trader, told me: when great companies trade at all-time lows because of panic, it's worth taking a shot. He did. Now, of course, the stock could go lower. There's geopolitical risk, and the stock, with a 64 billion dollar market cap, is expensive for a company that's forecast to do around 2 billion in product revenue this year. And remember, I don't recommend stocks, and you shouldn't take my comments as advice; you've got to do your own research. But I have lots of data, I have opinions, and I'm willing to share them with you.

Stocks like Snowflake, CrowdStrike, Zscaler, and Okta are highly volatile. When markets are moving up, they move up faster than the mean; when markets are declining, they drop more severely, and that's clearly what's happened to Snowflake. So with a company like this, when you see panic selling you'll sometimes also see panic buying, like we've seen with this name: it went from 220 to 320 in a very short period earlier. Snowflake put in a short-term bottom this week, and many traders felt the issue was oversold, so they bought. Okay, but not everyone felt this way, and you can see that in the headlines: "Snowflake hits low but cloud stocks rise" (we're going to come back to that); is it a buy, don't buy the dip, buy the dip; what Snowflake investors can learn from Microsoft; and, from thestreet.com, Snow stock is sliding on the back of ill-conceived guidance. To that I would say that conservative guidance these days is anything but ill-conceived.

Now let's unpack all this a bit, and to do so I reached out to Ivana Delevska, who has been on this program before. She's with Spear Invest, a female-led ETF that goes deep into understanding supply chains. She came on Breaking Analysis and laid out her thesis to buy the dip on Snowflake a while ago, and she told me Spear currently still likes Snowflake and has doubled its position. Let me share her analysis. She called out two drivers for the downside: interest rates, which are rising of course, and Snowflake's guidance, which my own publication called weak in that previous chart I just showed you. So let's dig into that a bit. Snowflake guided for product revenue growth of 67% year on year, which was below buy-side expectations but, I believe, within sell-side consensus. Regardless, the guide was nuanced, driven by Snowflake's decision to pass along price efficiencies to customers from optimizing processor price performance, predominantly from AWS's Graviton2. This is going to hit Snowflake's revenue by a net of about a hundred million dollars this year: the gross hit is 165 million, but they expect to make up 65 million of that in increased demand, though the timing's not precise. Frank Slootman on the earnings call made this very clear. He said, quote, "This is not philanthropy. This stimulates demand." Classic Slootman. The point is, Spear and other bulls believe this will result in a gain for Snowflake over the medium term, and we would agree: price goes down, ROI gets better, you throw more projects at Snowflake, and customers buy more Snowflake. When that happens, it gives the company an advantage as it continues to build its moat. It's a longer-term bet on cloud and data, which are good bets. Now, some of this could also be competitive pressure. There have been studies out there from competitors attacking Snowflake's pricing and price performance with comparisons; Oracle's been pretty aggressive, as have others. But so far the company's customers continue to consume at a very fast rate.

Now, on this front, what can we learn from Microsoft that applies to Snowflake? That's the headline here from Benzinga. The article quoted a wealth manager named Josh Brown talking about what happened to Microsoft after the dot-com bubble burst, how it quadrupled earnings over the next decade while the stock went sideways, suggesting the same thing could happen to Snowflake. I'd like to make a couple of comments here. First, at the time Microsoft was a 23 billion dollar company, it had a monopoly, and it was already highly profitable. Steve Ballmer became the CEO of Microsoft right after the dot-com bubble burst, and he hugged onto Windows for dear life and lived off of Microsoft's PC software monopoly. Microsoft became an extremely profitable and remarkably uninteresting caretaker of a PC and on-prem software estate during Ballmer's tenure. So I just don't see the comparison as relevant. Snowflake is going to struggle for other reasons, but that one didn't really resonate with me.

What's interesting is this chart. It poses the question: do cloud and data markets behave differently? It shows AWS growth rates over time and superimposes the revenue in red. In Q1 2018, AWS generated 5.4 billion dollars in revenue and was growing at nearly a 50% rate. That rate, as you can see, decelerated quite significantly as AWS grew to a 50 billion dollar run rate; that's down below, where you see it bottom. Now, that makes sense, right? Law of large numbers: you can't keep growing that fast when you get that big. Well, oops, look what happened in 2021. AWS's growth rate bottomed in the high 20s and then rocketed back up to 40% this past quarter as AWS surpassed a 70 billion dollar run rate. So you have to ask: is cloud different? Is data different? Is cloud data different, or "data cloud," to put it in Snowflake parlance? Can cloud, because of its consumption model, the speed of innovation, and ecosystem depth and breadth, enable Snowflake to exhibit lots of variability in its growth rates, versus the progressive, somewhat linear decline you would historically expect as a company grows revenue? Part of the answer relates to market size.

Here's a chart we've shared before, with some additions. It's our version of Snowflake's total available market, their TAM, with Snowflake's version, that blue data cloud graphic, superimposed on the right. It shows the various layers of market opportunity that we think Snowflake and others have in front of them, emerging from the disruption of legacy data lakes and data warehouses to what Snowflake refers to as its data cloud. We think about the data mesh concept and decentralized data architectures, with domain ownership and data product and service builders, as consistent with Snowflake's data cloud vision, where Snowflake data stores are simply discoverable nodes on the mesh. You could have Databricks data lakes or S3 buckets on that mesh; it doesn't matter. They can be discovered, they can be shared, and of course they're governed in a federated model. Now, in Snowflake's model it's all inside the Snowflake data cloud; that's fine. Then you go to the out-years and it gets a little fuzzy. From edge locations and AI inference, the opportunity becomes massive, and decision making occurs in real time, where machines and machine data take over the world instead of clicks and keystrokes. Sounds out there, but it's real, and exactly how Snowflake plays there is unclear at this point. But one thing's for sure: there will be a lot of data, and it's going to find its way into Snowflake. Now, Snowflake is not a real-time engine; it's an analytical system moving into the realm of data science, and we've talked about the need for a semantic layer between those two worlds of analytics and data science. But expanding the scope further out, we think Snowflake has a big role to play in this future, and the future is massive. Okay, check: you've got the big TAM.

Now, as someone who looks at companies through a fundamentals prism, you've got to look at the markets and the TAM, which we just did, but you also want to understand customers, and it's not hard to find Snowflake customers: Capital One, Disney, Micron, Allianz, Sainsbury's, Sonos, and hundreds of other companies. I've talked to Snowflake customers who have also been customers of Oracle, Teradata, IBM Netezza, and Vertica, serious database practitioners, and what they tell me is consistent: Snowflake is different. They say it's simpler, it's more agile, it's less complicated to secure, and it's disruptive to their traditional ways of doing data management. Now, of course, there are naysayers. I've spoken to a number of analysts who feel Snowflake is deficient in areas like workload management and complex joins, and that it's too specialized in a world where we're seeing the convergence of analytics and transactional workloads. Our own David Floyer believes that what Oracle is doing with MySQL HeatWave is radically disruptive to many database architectures and blows away anything out there, and he believes Snowflake and the likes of AWS are going to have to respond. The other criticism is that Snowflake is not architected for real-time inference, where a lot of that edge activity is going to happen, and that's a multi-hundred-billion-dollar market. And look, Snowflake has a ton of competition. All the major cloud players have very capable and competitive database platforms, even though they all partner with Snowflake, except Oracle of course. And companies like Databricks and other VC-funded companies have raised billions of dollars to do this kind of elastic, consumption-based, separate-compute-from-storage stuff. So you always have to keep an open mind and be aware of potential blind spots for these companies.

But to the criticisms I would say: look, Snowflake got there first. And watch their ecosystem; it's a real key to continued success. Snowflake is not going to go it alone. It's going to use its ecosystem partners to expand its reach, accelerate the network effects, and fill those gaps, and it will acquire; its stock is valuable, so it should be doing that, just as it did with Streamlit, a zero-revenue company it bought for 800 million dollars in stock and cash just recently. Streamlit is an open source Python library that gets Snowflake deeper into that data science space, that Databricks space. And look at what Snowflake is doing with Snowpark, an API library for processing data and building data-intensive applications. We've talked about Snowflake essentially becoming the super cloud, building a sort of PaaS-like layer across clouds. Rather than trying to do it all themselves, it seems Snowflake is really staring at the API economy and building its ecosystem to plug those holes.

So let's come back to the customers. Here's a chart that shows Snowflake's customer spending momentum, or Net Score, on the vertical axis, and pervasiveness in the data set, or market share, on that bottom brown line. Snowflake has posted unprecedented Net Scores and held them up for many, many quarters, as you can see here going back a couple of years, all leading to its expanded market penetration, measured as pervasiveness, so-called market share within the ETR survey. It's not like IDC market share; it's pervasiveness in the data set. Now, I'll say this: I don't see how this is sustainable. I've been waiting for this to moderate, and I wouldn't be surprised to see Snowflake come back to earth a little bit. I think it will clearly still be highly elevated based on the data I've seen, but I could see this starting to moderate in one or more of the ETR surveys this year as they get big; it just has to happen. I would again expect them to keep a high spending velocity score, but I think we're going to see Snowflake porpoise a bit here, meaning it moderates and then comes back up. It's just really hard to sustain this pace of momentum, and to hire, train, retain, and scale, without absorbing some friction and some headwinds that slow you down. But back to the AWS growth example: it's entirely possible we could see a similar dynamic with Snowflake to the one you saw with AWS, and you kind of see it with Salesforce and ServiceNow, very successful, large, entrenched companies. It's very possible that Snowflake could pull back, moderate, and then accelerate growth again, even though people are concerned about the moderated guidance of 80 percent growth; that's the new definition of tepid, I guess. I like to look at some other metrics, and the one that really caught my attention was the remaining performance obligations this last quarter. RPO is a forward-looking indicator of future revenues, and Snowflake's is up to something like 2.6 billion, so I like to see that growing, and it's growing at a fast pace. You're going to see some ups and downs with Snowflake, I have no doubt, but I think things are still looking pretty solid for the company. Growth companies like Snowflake, Okta, and Zscaler, the other ones I mentioned earlier, have probably been repriced and refactored by investors.
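For readers who want the mechanics behind that Net Score line: as ETR describes its methodology publicly, the score nets the share of accounts adding or increasing spend against the share decreasing or replacing, with flat spenders counting toward neither side. A rough sketch of the arithmetic, using made-up percentages rather than actual ETR survey data:

```python
def net_score(adopting, increasing, flat, decreasing, replacing):
    """Approximate ETR-style Net Score from survey response shares (%).
    Positive responses (new adoption, spending more) minus negative
    ones (spending less, replacing); flat spend counts for neither."""
    total = adopting + increasing + flat + decreasing + replacing
    assert abs(total - 100) < 1e-9, "shares should sum to 100%"
    return (adopting + increasing) - (decreasing + replacing)

# Hypothetical mix, not Snowflake's actual survey results:
print(net_score(adopting=20, increasing=55, flat=20, decreasing=4, replacing=1))
# -> 70, the kind of elevated reading described above
```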
While there's always going to be market risk, and of course geopolitical risk, especially in these times, fundamentals matter. You've got a huge market, a well-capitalized company, a leadership position, great products, and strong customer adoption. You also have a great team. Team is something else we look for; we haven't touched on that, but I'll leave you with this thought. Everyone knows about Frank Slootman and Mike Scarpelli and what they've accomplished in their years of working together; that's why the stock was so overvalued at IPO, because investors had seen these guys do it before. Slootman has documented all of this in his book "Amp It Up," which gives great insight into the history of that pair and the teams they've built, the companies they've built, how he thinks about building companies and markets (total available market is super important), and the whole philosophy and culture he builds, and his management style. But you've got to wonder: how long is this guy going to keep going? What keeps him motivated? I asked him that one time. "Why? I mean, are you in this for the sport? What's the story here?" His answer: "Actually, that's not a bad way of characterizing it. I think I am in it, you know, for the sport. You know, the only way to become the best version of yourself is to be under the gun every single day, and that's certainly what we are. It sort of has its own rewards: building great products, building great companies, regardless of what the spoils may be. It has its own rewards, and it's hard for people like us to get off the field and hang it up. So here we are." So there you have it: he's in it for the sport. How great is that? He loves building companies, and in my opinion, that's how Frank Slootman thinks about success. It's not about money; money is the byproduct of success. As Earl Nightingale would say, success is the progressive realization of a worthy ideal. I love that quote. Building great companies, building products that change the world, changing people's lives with data and insights, creating jobs, creating life-altering wealth opportunities, not for himself but for thousands of employees and partners: I'd say that's a pretty worthy ideal, and I hope Frank Slootman sticks with it for a while.

Okay, that's it for today. Thanks to Stephanie Chan for the background research she does for Breaking Analysis; Alex Myerson is on production; Kristen Martin and Cheryl Knight are on social, with Rob Hoff on SiliconANGLE. And thanks to Ivana Delevska of Spear Invest and my friend Chip Symington for the angles from the money side of things. Remember, all these episodes are available as podcasts; just search Breaking Analysis Podcast. I publish weekly on wikibon.com and siliconangle.com, and don't forget to check out etr.plus for all the survey data. You can reach me @dvellante or at david.vellante@siliconangle.com. This is Dave Vellante for theCUBE Insights, powered by ETR. Be safe, stay well, and we'll see you next time. (bright music)
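Since Snowpark came up only in passing, here is a hedged sketch of the DataFrame-style pattern it enables: the pipeline is built lazily on the client and executed inside Snowflake, so only the small aggregated result comes back. The connection parameters and the "orders" table are placeholders; check Snowflake's current documentation before relying on the exact API.

```python
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

# Placeholder credentials -- fill in your own account details.
session = Session.builder.configs({
    "account": "<account>", "user": "<user>", "password": "<password>",
    "warehouse": "<warehouse>", "database": "<db>", "schema": "<schema>",
}).create()

# The filter and aggregation below run in Snowflake, not on the client.
revenue_by_region = (
    session.table("orders")            # hypothetical table
    .filter(col("amount") > 100)
    .group_by("region")
    .agg(sum_("amount").alias("revenue"))
)
revenue_by_region.show()
```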
Andy Jassy & James Hamilton Keynote Analysis | AWS re:Invent 2016
>> Live from Las Vegas, Nevada, it's theCUBE, covering AWS re:Invent 2016, brought to you by AWS and its ecosystem partners. Now, here are your hosts, John Furrier and Stu Miniman.
>> We are here, live in Las Vegas with theCUBE all week. I'm John Furrier with Stu Miniman. We are breaking down all the re:Invent coverage; theCUBE is going on for three days. Stu and I are going to break down, here in studio B, the analysis of Andy Jassy's keynote. This is really day one of the event; yesterday was kind of a preview day. James Hamilton, on Tuesday evening: they had a great band up there, and then he came on and delivered a really Epic performance. He's not a showman in the sense of a Steve Jobs, but he has Steve Jobs-like cred with the geeks in the community, and he laid out what I call the secret sauce of AWS's data centers. And then Andy Jassy today with his keynote, again so jam-packed. They start at 8:00 AM, which is not usual for events, with so much to unpack, and customers came on stage. First, Stu, I want to get your take on today's keynote with Andy Jassy. You were in the front row. What was going on inside the room? Tell us your perspective, give us the vibe. What was the energy level, and what was it like?
>> Yeah, John, as you said, starting at 8:00 AM: it's like, yep, we must be talking to the tech audience, because developers usually like to start a little bit later than that. It was an embarrassment of riches. Andy gets on stage, and as he told you when you met with him up at his home in Seattle, they're going to have about a thousand major new features and updates, and I think Andy went through a couple hundred of them up on stage. This is a group of true believers, and it was a packed keynote; people started streaming in over an hour ahead of time, because only 10,000 could fit in the main tent. They had other remote locations where you could go get mimosas, bloody Marys, or coffee while watching. It just goes to show: it's my fourth year here at the show, and you'd think, oh yeah, another tech show, you're going to get keynotes, they're going to make some announcements, yawn. No. Amazon impresses every year, and they delivered this year. Andy might not be a showman, but he was punching at Larry Ellison and Oracle quite a bit, he got huge ovations (like every time they announced a new compute instance), and there was a little bit of show flare at the end, certainly going into the database market. But they're also making some really good infrastructure enhancements with the new services.
>> What was your highlight? Squinting through all the great announcements, what was the most significant, most important story this morning? Which ones did you like best?
>> Oh boy, John, I have to pick one? I mean, here's a few. Number one: there's some pushback from people in the community that, oh, you know, they announced another ton of news, compute instances, all these different storage configurations; aren't we supposed to be making things simple?
And that's where they had one: Amazon Lightsail, which gives you virtual private servers in seconds. It really goes after a simple, low-cost model (DigitalOcean is the leader in that space), starting at $5 a month, John. Very exciting, and a lot of people are really wondering where this could go. Every year Amazon looks at a number of competitors and says, we see this opportunity, we can go after it. And John, this is not a high-margin business. Usually it's, oh, okay, database, I understand there's huge margin there, and the storage market, of course. But Lightsail at $5 a month: they make it up in volume, and it's super fast.
>> It's the Amazon playbook: drive the price down as low as possible and then shift the value up the stack with the analytics. And Aurora: Andy Jassy said it's the fastest growing service in the history of Amazon. Last year he said Redshift was, and Aurora has surpassed Redshift. They announced PostgreSQL compatibility on Aurora, another big, significant customer request. On and on; the database seems to be the lock-in spec they're trying to undo from Oracle, and they're not stopping. The rhetoric was at an all-time high, John. The picture of Larry Ellison popped up on screen...
>> We know the long pole in the tent for enterprises is the applications. Making any changes there, doing any refactoring or tinkering, those are hard things to do. But we've heard a lot from Amazon this week about how they're helping with migration, how they're giving options, how they're giving bridges, things like VMware on AWS to bridge over from where you are. You can lift and shift it, you can move it, you can rewrite it; lots of options there. And Amazon just has so many services and so many customers: thousands of systems integrators, thousands of ISVs, and really big enterprises making statements up on stage. When you get Workday up on stage, John, when you get McDonald's up on stage, it's impressive.
>> Some big-name accounts, no doubt about it. Stu, I want to get your thoughts on James Hamilton. Again, Amazon's got so many announcements that some companies would build an entire conference keynote around maybe one or two of what they've done out of the many here. Also of note: there have been over 150 partner announcements, so the ecosystem is growing. Before we get to Hamilton, I want to talk about that ecosystem. This feels a lot like VMware in 2011. I was kind of joking with Sanjay Poonen of VMware, who was just on theCUBE with us, saying, what do you think about VMworld this year? I mean, re:Invent... I was kind of tongue in cheek, I wanted to zing him a little bit. But Stu, this feels like...
>> So John, I'm an infrastructure guy, and I want to talk about James Hamilton, but one thing we've got to cover first: Greengrass. You know, Greengrass is how Amazon is taking their serverless architecture, really Lambda, beyond the cloud. So how do I get that kind of hybrid edge? We talked about it a little bit with Sanjay. But number one, I can start pulling VMware into AWS.
Number two, I can now get my Lambda services out on the edge. They talked about some IoT plays, and they talked about the Snowball Edge, which is going to let me have compute and storage down at that edge. I've seen huge excitement at this show on the serverless piece; it's really quick for developers to work with. Twenty-five thousand Amazon Echo Dots were handed out, and I've already talked to people who are writing functions for them and figuring out how they can play with it. And gosh, we haven't even talked about the AI, John, with voice and images. How many hours do we have, John?
>> We'll get there. Let's stay on Greengrass for a minute, because if you think about what that's about, I want to get your thoughts on the impact of Greengrass. Obviously Lambda, and it's got a little edge piece with Snowball tied to it. You know, Greengrass and "Green Grass and High Tides" forever, the old song by the Southern rock band the Outlaws, back in the day. This is a significant announcement. What is the impact of it?
>> Yeah, well, John, the grass is greener in the cloud, right? So now we're going to bring the green grass...
>> And when the snowball melts, it extends into the green grass.
>> So we're going to be riffing on this all day. David Floyer, our CTO at Wikibon, has been saying for a while that while cloud is great for data, the problem we have is that IoT is going to have most of its data out on the edge, and we know the physics of moving large amounts of data is really tough, especially when it's spread out. Things like sensors, things like wind farms: getting the networking over that last mile can be difficult. That's where things like Greengrass are going to be able to play: how can I take that cloud type of compute and put it on the edge? It really has the potential to be a game changer. John, we've talked about what hybrid means, and we'll see a lot of buzz in the industry about what Microsoft's doing with Azure Stack, and lots of other pieces. But Greengrass gives us a new model of programming. It gives the developers, it gives me, you know, the ability to use Arm processors out on the edge, and we can talk about how that fits with James Hamilton, too.
>> We are inside the hall, next to theCUBE studio; there's so much content we actually had to set up a separate set. Stu, I want to get your thoughts (obviously we could go on forever) on the significant innovation on multiple fronts for Amazon. You mentioned Greengrass, Snowball, multiple instances, and certainly they've got all the analytics at the top of the stack with Redshift, streaming, and other stuff; the list goes on and on. But you look at what they're doing with Greengrass and Snowball, and then you look at what James Hamilton talked about yesterday: they're innovating down to the actual physical chip level. They're doing stuff with the network routes; they control the packet, and no one else touches the packets. They are significantly building the next global infrastructure backbone, for themselves, to power the world. To me, this was a subtle talk that James gave, with a ton of nuance in it. Your thoughts on last night's really Epic presentation?
I know we're going to have a sit-down exclusive interview with James Hamilton, with Rob Hoff, our new editor-in-chief at SiliconANGLE, but Stu, give us a preview. What blew you away? What got you excited? I mean, it was certainly a geek's dream.
>> Yeah, John, James Hamilton is just one of those: you talk about tech athletes, and he's one of the real heroes in this space that so many of us look up to. It's been one of the real pleasures of my career with theCUBE that I've gotten to speak to James a few times. The first article I wrote, three years ago, was about what James Hamilton has done: it's hyper-optimization. The misconception people had about cloud is, oh, it's just white boxes, they're taking standard stuff. What James always talks about is how to really grow and innovate at scale, and that means they build for their environments and get down to every piece of the environment, all the software and all the hardware; they either customize it or make their own. So, you know, the big monitor...
>> And Stu, to your point, for their own use cases: the Prime days and Black Fridays, those spike days. He was talking about how they used to have to provision months and months in advance for some estimated peak, spinning up literally thousands of servers.
>> Yeah. So John, Amazon doesn't make a lot of acquisitions, but one they made is Annapurna Labs, so they've got their own custom silicon, which really allows them to control their build-out and focus on things like performance. James talked about how they're really innovating on the network side. He was very early with 25 gigabit Ethernet, which really drove down some of the costs and gave them huge bandwidth advantages, kind of leading the way in the industry. The thing we've been poking at a bit is that while Amazon leverages a lot of open source, they don't tend to give back as much. They've got the big MXNet announcement for how they're going to be involved in machine learning, and that's good to see. And they hired Adrian Cockcroft, who lots of us knew from his Netflix days and from when he was a venture capitalist, and he's going to be driving a lot of the open source activity. But James, you know, kind of went through everything from...
>> By the way, on your point about open source: I said it on theCUBE and I'll say it again, and you mark my words. If Amazon does not start thinking about the open source equation, they could see a revolt that no one's ever seen before in the tech industry, and that is from the open source community, which has been a tier-one contributor to innovation for a long time. There's a difference between using open source for a specific point application, like Facebook does, or Google for search, and building on open source to build a company that takes territory from others. If you do that, there will be a revolt. Stu, do you agree? Am I off?
>> Uh, "revolt" might be a little strong, but absolutely, we already see some pushback there, and any time a company gets large power in the marketplace, you see pushback. We saw it with Oracle, with Microsoft; we see it with VMware. And I think, Amazon, here's the point: Andy Jassy talks about how they're making meaningful contributions, and I expect Adrian to make that much more visible.
>> We'll have to get into some of the James Hamilton stuff at a later date; more on that after we sit down with him and Rob. You and I will hit the James Hamilton analysis later. Final thoughts: you were giving me some help before we came on, about me saying I'm bullish on VMware's relationship with AWS. And you said, "Really?" And I said, I am, because I'm a big fan of VMware, and also of AWS, but for their customers, for VMware customers, this is a good thing. Now, you might have some thoughts on execution. So what's your why? Why did you roll your eyes when I said that?
>> So, John, you know I have lots of love for the VMware community; I've spent lots of time in that space, and it's good to see VMware working with the public clouds. However, I think the balance of power really shifts to the side of Amazon being in control here. And there's a lot of nuance: where are the services, where is the value, and what's going to be good for the customer. Amazon's really good at listening, and, you know, there's this embarrassment of riches that they deliver, right?
>> A real summary: bottom line, what happened this morning, in your mind, abstracted all the way up in one soundbite?
>> They rolled a truck out on stage, John: the Snowmobile, with a hundred petabytes of storage in a single truck, to move exabytes of information. Something where we were like, this is amazing. It's the maturation of the hybrid message, different from what people have been talking about with hybrid: where SaaS lives, all the ISVs, where's the data, where's the application. Amazon's in a really good position, John. There's a big and growing ecosystem here, but there are huge battles, which I know we're going to get into, out in the marketplace. Who's going to win voice? You know, there's Apple, there's Microsoft...
>> Because everyone's jockeying for position. You've got Google, you've got Oracle, you've got IBM, you've got Microsoft, all looking at AWS and saying, how do we change the game on them? And we'll be covering it on theCUBE. We are here in Las Vegas, studio B, with three days of wall-to-wall CUBE coverage. I'm John Furrier with Stu Miniman, breaking down day one keynotes and analysis. Thanks for watching. We'll be right back. Stay tuned to theCUBE at siliconangle.tv, and go to siliconangle.com for all the special exclusive stories from re:Invent, specifically with Andy Jassy, James Hamilton, and more. Thanks for watching.
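To make the Greengrass model discussed above concrete: the programming unit Stu describes is the standard Lambda handler, which Greengrass deploys to run on edge hardware against local events instead of in an AWS region. A minimal sketch follows; the event shape, sensor names, and threshold are invented for illustration, not taken from any AWS example.

```python
import json

TEMP_LIMIT_C = 75.0  # hypothetical alert threshold

def handler(event, context):
    """Standard AWS Lambda entry point. Under Greengrass the same
    function runs on the edge device, invoked by local IoT messages,
    so it keeps working even when the link to the cloud is down."""
    reading = json.loads(event["body"]) if "body" in event else event
    temp = float(reading.get("temperature_c", 0.0))

    if temp > TEMP_LIMIT_C:
        # A real deployment might publish to a local MQTT topic or
        # actuate hardware here; print keeps the sketch self-contained.
        print(f"ALERT: {reading.get('sensor_id', 'unknown')} at {temp} C")
        return {"statusCode": 200, "body": json.dumps({"alert": True})}
    return {"statusCode": 200, "body": json.dumps({"alert": False})}

# Local smoke test -- Lambda or Greengrass would invoke handler() for us.
if __name__ == "__main__":
    print(handler({"sensor_id": "turbine-7", "temperature_c": 81.2}, None))
```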