Chip Childers, Cloud Foundry Foundation | Cloud Foundry Summit 2018
>> Announcer: From Boston, Massachusetts, it's theCUBE! Covering Cloud Foundry Summit 2018. Brought to you by the Cloud Foundry Foundation. >> I'm Stu Miniman and this is theCUBE's coverage of Cloud Foundry Summit 2018, here in beautiful Boston, Massachusetts. Happy to welcome back to the program Chip Childers, who is the CTO of the Cloud Foundry Foundation. Chip, you started off this morning saying the runners this morning got a taste of the Boston Marathon. >> They did, they did! >> It's raining, it's cold, it's miserable. >> Yesterday was beautiful. >> At least there was less wind. >> Yesterday was absolutely beautiful. So we kicked off the summit, beautiful sun, but then we had our Fun Run this morning. >> As a local, I do apologize for the weather. Normally April's a great time. We want more tech coverage here in the area. More tech shows. We're in the center of a great tech hub, here in the Boston Seaport. We've talked to a couple of Boston startups, you know, here at the show. And, you know, great ecosystem if you go there. Thank you for bringing your show here. >> Absolutely, happy to be here. >> All right, so, last time we caught up was a year ago at the show. And I think it was, what, 213 working days or something? I think Molly said. >> Something like that, something like that, yeah. >> The good thing is in our industry, nothing's changing, we can talk about the same stuff as last year. >> Leisurely pace. >> No concern, let's just sit back and, you know, talk about our favorite pop culture references. Chip, what's hot on your plate? And what are you hearing from the users in the community? >> Sure. So this year the theme, our events team came up with a very fun pun, which is Running at Scale. It means two things. One, the Boston Marathon was on Monday, but two, it really does represent the stories that we're getting from our users, the customers, and the distributions, those that use the open source directly. So not only are we seeing a broadening of adoption across new organizations, but they're getting really deep into using it. We fielded a survey, a user survey, just did our second run of it. In fact we didn't have this data back in Santa Clara last year. So it's been less than a year since the 2017 one. And what we found was that there was a 21 point swing in those companies that were using Cloud Foundry with more than 50 developers, alright, so 50 developers and higher. When you really talk to the interesting, large scale Fortune 500 companies, they're talking thousands of developers that are working on the platform, being productive, and that truly is kind of what this event is about for us. >> I grew up around the infrastructure stuff, and scale means a lot of things to a lot of people, but I had a great discussion with Dr. Nick just before, talking about how if you were to build your kind of utopian environment... You look at some of the hyper-scale companies, the Facebooks and Googles of the world, and the thing is they're at such a scale that if they don't have good automation, and don't have, you know, really the distributed architectures that we're all talking about and things like that, there's no way that they could run their businesses. >> Yeah, and the reality is a lot of the businesses that aren't Google, aren't Facebook, they have to be able to think about that level of scale. To me it really boils down to three basic principles, and to me this is the best definition of what Cloud native means.
Whether you're talking about a platform, whether you're talking about how you design your applications, it's simple patterns, highly automated, which can be scaled with ease, right? And through that you can do really amazing things with software, but it has to be easily scaled, it has to be easily managed, and you do that through the simplicity of the patterns that you apply. >> Yeah, and being simple is difficult. >> Yes. >> How often do we have arguments in the industry, it's like, well, let's throw an abstraction layer in there, do an overlay or underlay, but you know, really building kind of distributed systems is a little bit different. >> It is a little bit different. So one of the things that the Cloud Foundry ecosystem has is a rich history of iterating towards a better and better developer experience. At its heart, the Cloud Foundry ecosystem of distributions, and tools, and the different products we have, they're all about helping the developer be a better developer in the context of their organization. So we've been iterating on that experience and just doing incremental constant improvement and change, and we're very proud of that productivity, right? And that's really what drives these organizations to say look, this is a platform that is operated very easily with small teams. I think you've spoken with a couple companies, and if you ever ask them how many operators do you have to handle thousands of engineers, tens of thousands of applications, they say, well, maybe ten. >> The T-Mobile example is >> Great example >> Ten to fifteen operators with 17,000 developers, so >> Chip: Yep, yep >> It's funny 'cause I remember we used to talk about, you know, in the enterprise, how many servers can a single admin handle, and then if you go to the hyper-scale ones it was three orders of magnitude different. But in the hyper-scale ones they didn't really have server people, they had people that brought in servers, and people threw them in the wood chipper when they were done >> Chip: Absolutely >> And they didn't manage them. It was the old cattle versus pets analogy that we talked about in the other room. It's just totally different mindsets, is how we think about this. For me, it was, in the enterprise, you know, we harden the hardware, we think about this, and in the software world it's, you know, no no, I built it in the application layer, because one of my favorite lines I use is, you know, hardware will eventually fail, and software will eventually work, right? >> Absolutely. I think that's the difference between... So application centric thinking leads you to... necessarily, you have to have infrastructure to run it, right? My favorite thing is this whole serverless term is absolutely ridiculous if anybody understands it, but there's a little bit behind it, which is, in fact I'd argue Cloud Foundry's fundamentally serverless, because when you push code into it, you don't care what operating system's underneath it, right? All you care about is the fact that you've written some code in Java or in Node.js or in Ruby, you're handing it to a platform, it deals with all of the details of building a container image, scaling it, managing it, pulling in dependencies, you don't care what underlying operating system is there, and then that ten person platform operations team, in the Cloud Foundry world, they have the benefit of upstream projects actually producing the operating system image that they can consume, within hours of major vulnerabilities being announced.
>> I love actually, at this show you've got a containers and serverless track >> We do >> And I'm an infrastructure guy by background, and when we went to virtualization we went a little bit up the stack, I don't think about servers, I'm trying to get closer to that application. Love for you to comment on this: Cloud Foundry helps give some stability and control at that infrastructure level, but it's still involved with infrastructure, whether in my own data center, >> Chip: Yep >> or a hosted data center, or I know what cloud I'm on. When I start going up to, like, serverless, I'm a little bit higher up the stack, and that's why they can live together, >> Yeah, yeah >> And it's more closely tied to the application than it is to the infrastructure, so maybe you can tease that out for us a little. >> Yeah, so I think one of the main things that we've heard from the user community, and this is actually coming from users of a number of the different distributions. They're saying, look, there are roughly, today, two different modes that we care about for cloud native application workloads. And this might expand to include functions as a service, but predominantly there's two. There's the custom software that we write, which the PaaS experience is great for, and then there's the ISV delivered software, where today, increasingly, the medium of software delivery is becoming the container image, whether it's an OCI container, whether it's a Docker image, ISVs ship software as container images, and you need a great place to land that. So those two abstractions, the PaaS, just hand the system your code, or the container service, just hand it a container image, both of them work really well together, and part of what we're trying to do as a community, a technical community, is we're evolving those integrations so that we can work really well with the Kubernetes ecosystem. There are different options for how these things might be stacked, depending on the vendor that you're talking to. I think mostly that's immaterial to the customers, I think mostly the customers care about having those two experiences be unified from their developer or app owner perspective. >> When you come to this show, there's more than just Cloud Foundry. There's a lot of other projects >> Chip: For sure >> That are coming on to the space. Give us a little viewpoint as to how the foundation looks at this. What's the charter? It fits under the Linux Foundation, there's so many different pieces, some kind of bleed into what the CNCF is doing, and just try to help map out >> Chip: Yeah >> how some of these pieces fit, and it's this great toolbox that we've talked about in open source. I love how the Zipcar guy got up and he's like, I use all the peripheral stuff, and none of the core stuff >> Right >> And that's okay >> Absolutely, that's the fun of open source. So there's a couple ways to look at this. So first, the open source communities collectively. There's a lot of innovation going on in this space, obviously. What the Cloud Foundry ecosystem generally does, historically has done, and will continue to do, is that we are focused on the user needs, first and foremost. And what our technical project teams do is they look at what's available in the broader open source ecosystem. They adopt and integrate what makes sense, and where we have to build something ourselves, simply because there isn't an equivalent, or it's necessary for technical reasons, we'll build that software. But our architecture has changed many times. In fact, since 2015, right.
It hasn't been that many years, as you said, we move slow in this industry (Stu laughs) We've changed this architecture constantly. The upstream projects release at a minimum of twice a month. That's a pretty high velocity. And it's a big coordinated release. So we're going to continue to evolve the architecture, to bring in new components, and this is where CNCF relates. We've integrated Envoy, which is a CNCF project. We're now bringing in Kubernetes, in a couple of different ways. We're working closely with Istio, which is not a CNCF project, yet. But it looks like it might head that way. Service mesh capabilities. We were an early adopter of the Container Networking Interface. Another Linux Foundation effort was the Open Container Initiative, right. Seeded from some code from Docker, and again, we were one of the earliest platforms to adopt that, outside of Docker. So we really look at the entire spectrum of open source software as a rich market of componentry that can be brought together. And we bring it together so that all these great users that you're talking to can go along this journey, and think of it almost as a rationalization of the innovative chaos that's occurring. So we rationalize that. Our job is to rationalize, our distributions use that rationalization, and then all of the users get to take advantage of new things that come up. But also we take what gets integrated very seriously, because it has to reach a point of maturity. T-Mobile again, running their whole business on Cloud Foundry. Comcast, running their whole business on Cloud Foundry. US Air Force, fundamentally running their air traffic control, right, how do they get fuel to the jets, on Cloud Foundry. So we take that seriously. And so it's this combination of harvesting innovation from where we can harvest it, bringing it all together, being very thoughtful about how we bring it together, and then the distributions get the advantage of saying, here's a stable core that's going to evolve and take us into the future. >> Chip, I've loved the discussion with real customers, doing digital transformation. What that means for them. How they're moving their business forward. Want to give you the final word, for those that couldn't come to the show, I know a lot of the stuff's online, there's a lot of information out there, anything in particular you want to call out, or say hey, this is cool, interesting, or exciting that you'd want to point to? >> Yeah, I actually... There are a lot of things, but the one thing that I'll point to is, as a US citizen, I'm particularly proud of some of the work that's happening in the US Government. Through 18F, with cloud.gov as an example, but if I step back even further, Cloud Foundry is serving as a vehicle for collaboration across multiple nations right now. We're seeing Australia, we're seeing the United Kingdom, Netherlands, Canada, South Korea, all of these national governments are trying to figure out how to change citizen engagement to follow the lead of the startups, which are the internet companies, at the same time that these large Fortune 500 companies are also trying to digitally transform. Governments are trying to do the same thing. So we had a, we're almost done for the day here, but there was almost a full day track of governments talking about their use of the tech, talking about that same digital transformation journey. So to me that's actually really inspiring to see that happen >> Alright, well, Chip Childers.
He was a dancing stick figure >> Chip: I was! >> In the keynote this morning, but here with us on theCUBE. Thank you so much for joining once again, and thank you to the foundation for helping us bring this program to our audience. >> Chip: We're happy to have you here. >> I'm Stu Miniman, and this is theCUBE. Thanks for watching (bright popping music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Comcast | ORGANIZATION | 0.99+ |
Cloud Foundry Foundation | ORGANIZATION | 0.99+ |
Stu Minamin | PERSON | 0.99+ |
Ten | QUANTITY | 0.99+ |
Monday | DATE | 0.99+ |
2017 | DATE | 0.99+ |
Santa Clara | LOCATION | 0.99+ |
Java | TITLE | 0.99+ |
ORGANIZATION | 0.99+ | |
ORGANIZATION | 0.99+ | |
21 point | QUANTITY | 0.99+ |
Molly | PERSON | 0.99+ |
thousands | QUANTITY | 0.99+ |
T-Mobile | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
Boston Seaport | LOCATION | 0.99+ |
2015 | DATE | 0.99+ |
Boston, Massachusetts | LOCATION | 0.99+ |
Chip Childers | PERSON | 0.99+ |
50 developers | QUANTITY | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
US Air Force | ORGANIZATION | 0.99+ |
Nick | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
17000 developers | QUANTITY | 0.99+ |
Chip | PERSON | 0.99+ |
One | QUANTITY | 0.99+ |
Ruby | TITLE | 0.99+ |
Nojass | TITLE | 0.99+ |
ten | QUANTITY | 0.99+ |
less than a year | QUANTITY | 0.99+ |
Yesterday | DATE | 0.99+ |
Boston | LOCATION | 0.99+ |
both | QUANTITY | 0.99+ |
Cloud Foundry | ORGANIZATION | 0.99+ |
more than 50 developers | QUANTITY | 0.99+ |
two experiences | QUANTITY | 0.99+ |
Facebooks | ORGANIZATION | 0.99+ |
two things | QUANTITY | 0.99+ |
Googles | ORGANIZATION | 0.98+ |
Cloud Foundry Summit 2018 | EVENT | 0.98+ |
ten person | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
Boston Marathon | EVENT | 0.98+ |
two abstractions | QUANTITY | 0.98+ |
US | LOCATION | 0.98+ |
CNCF | ORGANIZATION | 0.97+ |
April | DATE | 0.97+ |
second run | QUANTITY | 0.97+ |
this year | DATE | 0.97+ |
today | DATE | 0.97+ |
twice a month | QUANTITY | 0.96+ |
Linux | TITLE | 0.95+ |
Australia | LOCATION | 0.95+ |
one thing | QUANTITY | 0.95+ |
three basic principles | QUANTITY | 0.95+ |
Canada | LOCATION | 0.94+ |
two different modes | QUANTITY | 0.94+ |
Netherlands | LOCATION | 0.94+ |
three orders | QUANTITY | 0.93+ |
US Government | ORGANIZATION | 0.93+ |
United Kingdom | LOCATION | 0.92+ |
Dr. | PERSON | 0.92+ |
Cloud Foundry | TITLE | 0.92+ |
one | QUANTITY | 0.91+ |
theCUBE | ORGANIZATION | 0.91+ |
couple companies | QUANTITY | 0.9+ |
this morning | DATE | 0.88+ |
tens of thousands of applications | QUANTITY | 0.88+ |
213 working days | QUANTITY | 0.88+ |
thousands of developers | QUANTITY | 0.84+ |
Istio | ORGANIZATION | 0.84+ |
fifteen operators | QUANTITY | 0.83+ |
Kubernetes | TITLE | 0.83+ |
South Korea | LOCATION | 0.81+ |
Docker | TITLE | 0.81+ |
Chip Coyle, Infor | Inforum 2017
>> Announcer: Live from the Javits Center in New York City, it's theCUBE. Covering Inforum 2017, brought to you by Infor. >> Welcome back to theCUBE's coverage of Inforum 2017, I am your host, Rebecca Knight, along with my co-host, Dave Vellante. We are joined by Chip Coyle. He is Infor's CMO. Thanks so much for sitting down with theCUBE today. >> Thank you for having me. >> So we just kicked off the show, the general session, Charles Phillips, a lot of other Infor executives up there on the main stage talking. Lay it out for us. How many people are here? What are sort of the big themes that you're trying to get across here? >> Yeah, well, first of all it's great for Infor to be having our conference here at the Javits Center. It's about 10 blocks from our home-- >> Rebecca: Your own back yard. >> In New York City, and so this year, we've got nearly 7,000 attendees over the course of the week. Many component programs as we do every year, with our partner summit, with our various conferences for the different individual customer constituencies, an executive forum, and of course, a big customer appreciation event happening tomorrow night. >> You've also made some big announcements. I'm talking mostly about Coleman AI, and Birst. I want you, if you can, to unpack those for our viewers a little bit. >> Yeah, I would say the theme of the conference this year is the age of networked intelligence. And what does that mean? Well, we've had, for the last several years, a layered strategy in our business, starting at the foundation with very deep industry functional applications, purpose built for the different industries. We've taken all of that technology and moved it to the cloud, so that you get the benefits of the efficiencies and the network capability of taking your applications to the cloud. We recently, a year ago, acquired GT Nexus, which expands our capability, in a broader sense, to a commerce network, and we're able to incorporate that into our traditional applications in different industries. And then, just a couple of months ago, we acquired a business intelligence software company, Birst, which brings some really great technology for business intelligence that we can layer on top of all of our applications in this network environment. And then finally, today, the big announcement was Coleman, as you said, and that was to take our new artificial intelligence platform and really create just profound new ways that the workers in the different industries and their different companies across the networked enterprise can interact in a business setting, much like people do in a commercial setting today. >> Can you, Chip, talk about the evolution of the brand promise? So when we first met Infor, at AWS re:Invent, it was like, who was Infor? Trying to educate people on who Infor is. And so I felt like last year was your sort of stamp of this is how Infor and why Infor is relevant, and now there seems to be sort of an undertone of innovation. So can you talk about the evolution of the brand and what you see as the brand promise? >> Well, we are very consistent in our branding and positioning of Infor as really the first industry cloud company. We're the ones who have been, at an accelerated pace, bringing the most deep, industry-rich, functional applications to the cloud.
And that has created a great layer now for all of these future innovations that we have talked about today, with the benefits of business intelligence enabled applications built right in, so that you can truly have all the information you need at the right time, for the right purpose, to make immediate business decisions. And then the potential and capability of artificial intelligence on top of that. >> As the chief marketing officer, can you talk a little bit about how these innovations change how you do your job, and how they make your life easier, in terms of making the right decision at the right time, making the decision better, having the right data? >> Yeah, well some of the other announcements that we're making this week actually are in my particular line of business, which is marketing, and one of those, for example, is we're broadening our Infor CRM suite with a link to LinkedIn's Sales Navigator. So that brings a whole set of important data about customers, to enable better customer interactions for our customers. So that's something that we look to be using in our business, along with Marketo, which is a new business partner, as the engine, or the marketing automation platform, to fuel our marketing business. So that's how it's impacting me directly in what I do. >> So I wonder if you could help us sort of debunk some of the myths. So Oracle would say enterprise apps aren't moving to the cloud, and we are the company to move them to the cloud, and we're the only company that can move them to the cloud. You know, SAP, it's got sort of some cloud going on, but most of the stuff remains on prem. We heard today 55% of your revenue comes from cloud. And we know you made a decision years ago to run on AWS. Help us understand, I mean these are core, hard core enterprise apps that are running in the cloud. So help us debunk some of those myths and add some color to that. >> The traditional process of rolling out major enterprise applications in an enterprise is completely changing. And it's also changing because of the capabilities of the cloud. And the approach that Infor takes, which is very easy to assemble and configure with our ION technology and collaboration technology, such as Mingle, is to put these applications in place in a much faster way for our customers than some of the traditional players in the ERP market have been accustomed to do. And they just don't have the current technology approach or foundation to be able to move quickly to the cloud, as we do at Infor. >> In talking about Infor, you talked a little bit about the brand evolution, how are you getting the word out? Infor is really a sleeping giant in the technology industry. How are you getting your name out there? >> Well, one thing that we want to do with our brand is show, well, first of all, introduce Infor to the world at large that hasn't heard of us. And the way that we want to do that is by showing what kind of benefits we can give to customers in different industries. So we just recently launched our first-ever TV commercials. They have run on shows like Meet the Press, and some of the CNBC and MSNBC shows. Incidentally, all of that was developed entirely, 100% in house, with Hook and Loop, our in-house creative agency. So we're very proud of that. We're looking to do more of that with TV.
We also have a relationship with the Brooklyn Nets here in New York, where on the business side we're enabling them with performance and team analytics, with a whole slew of applications of that, with biometric readings and imagery when they're moving around on the court. That can then be used to help fine tune and make decisions on which personnel to use, which, what are the best players to be able to, say, shoot a free throw after one day of rest versus two days of rest. That level of analytics. So we, in that partnership with the Nets, are also, in a branding way, going to be on the Nets jersey starting this September with an Infor patch on the jersey. And we're announcing that also this week. >> Awesome. This is definitely a New York theme here. We're here at the Javits Center, Brooklyn Nets, Hudson Yards, another huge project that you guys are intimately involved in. Not a lot of vendors are explicitly mentioned in that. Maybe talk about that a little bit. >> Well, Hudson Yards as a development is unique in that it is really a completely self-contained city in all respects, where the concept is to be able to network the data and information of anybody within that city, with respect to where they live in the high-rises, where they shop in the retail stores or grocery stores, where they eat in the restaurants, and where they work with all of the businesses that are locating there, too. So that gives you so much potential to rethink how information can enable just the way that you move about, even in the city. From keyless entry into facilities, to voice-activated tasks, like, can you please restock the groceries in my refrigerator in my condo. So there are so many ways that that can be a broad showcase for the true smart city of the future. >> These are high-end clientele. This is very New York. I want to shift gears and talk about the ecosystem a little bit. There's a few names that I, maybe they were here before, but I hadn't seen them, at least prominently, certainly IBM, you mentioned Marketo, a great interesting partner, hot company, and some of the SIs are sort of coming out of the woodwork. >> Chip: Yes. >> Now when you think about your strategy for sort of micro verticals, the SIs, I always say, they love to eat at the trough. And if there's not a lot of customizations, they're not interested. However, you've attracted them, because you've now got a substantial enough estate. So talk about that evolution of the ecosystem. >> We're proud to have as our diamond sponsors this year AVAAP, as well as Marketo. And AVAAP has been a longstanding implementation partner for us, in expanding areas. Their heritage is with Lawson in health care, and they're doing a lot of implementations across our business in all geographies, in all industries. But what's new this year is we also have attracted some of the big SIs, such as Deloitte and Accenture, Capgemini, Grant Thornton. So they have all come in as sponsors and we're really on the cusp of some big and bigger and better things with them in the different businesses. >> The other thing I wanted to ask you about is Infor has a unique way of attracting interesting speakers. I've done probably five or six thousand interviews in the last five or six years, and some of the most interesting have been at Inforum. Deborah Norville came on in New Orleans, last year Lara Logan, Naomi Tutu, Karina Hollekim, three amazing women interviews. >> Rebecca: This year Susan Rice.
>> This year Susan Rice was here, so what's that all about? They're not techies, they're just interesting people. What are you trying to do there? >> Well, we have a program, the Women's Infor Network, WIN, that was created by Pam Murphy, our chief operating officer, and starting a few Inforums ago, we wanted to use Inforum as a platform to showcase innovative women in the world. And it's a little bit of a departure from our product and technology messages. And this year, we've got, as you mentioned, some great inspiring women, like Jill Biden, the former first, vice president-- >> Rebecca: Second lady. >> And also, Susan Rice, as you mentioned. So, it's going to be, it's always a very popular session. >> Yes, and we're looking forward to having those women on theCUBE, too, tomorrow. >> Chip: Absolutely. >> Chip, thanks so much for joining us, it's been a pleasure. >> Thank you for having me. >> I'm Rebecca Knight, for Dave Vellante. We'll have more from Inforum 2017 after this. (techno music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Susan Rice | PERSON | 0.99+ |
Karina Hollekim | PERSON | 0.99+ |
Deborah Norville | PERSON | 0.99+ |
Rebecca Knight | PERSON | 0.99+ |
Naomi Tutu | PERSON | 0.99+ |
Deloitte | ORGANIZATION | 0.99+ |
Pam Murphy | PERSON | 0.99+ |
Rebecca | PERSON | 0.99+ |
Charles Philips | PERSON | 0.99+ |
New Orleans | LOCATION | 0.99+ |
Jill Biden | PERSON | 0.99+ |
five | QUANTITY | 0.99+ |
two days | QUANTITY | 0.99+ |
Lara Logan | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Chip Coyle | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
New York | LOCATION | 0.99+ |
New York City | LOCATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Hudson Yards | ORGANIZATION | 0.99+ |
Capgemini | ORGANIZATION | 0.99+ |
Brooklyn Nets | ORGANIZATION | 0.99+ |
Accenture | ORGANIZATION | 0.99+ |
100% | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
ORGANIZATION | 0.99+ | |
tomorrow night | DATE | 0.99+ |
today | DATE | 0.99+ |
MSNBC | ORGANIZATION | 0.99+ |
Meet the Press | TITLE | 0.99+ |
a year ago | DATE | 0.99+ |
55% | QUANTITY | 0.99+ |
one day | QUANTITY | 0.99+ |
CNBC | ORGANIZATION | 0.99+ |
this year | DATE | 0.99+ |
this week | DATE | 0.99+ |
Hook | ORGANIZATION | 0.99+ |
SAP | ORGANIZATION | 0.99+ |
Burst | ORGANIZATION | 0.99+ |
Nets | ORGANIZATION | 0.99+ |
tomorrow | DATE | 0.98+ |
This year | DATE | 0.98+ |
first | QUANTITY | 0.97+ |
theCUBE | ORGANIZATION | 0.97+ |
Infor | ORGANIZATION | 0.97+ |
Grant Thornton | ORGANIZATION | 0.97+ |
Chip | PERSON | 0.97+ |
Javits Center | LOCATION | 0.97+ |
Javits Center | ORGANIZATION | 0.96+ |
one thing | QUANTITY | 0.95+ |
Marketo | TITLE | 0.95+ |
Inforum | ORGANIZATION | 0.95+ |
Marketo | ORGANIZATION | 0.92+ |
six thousand interviews | QUANTITY | 0.92+ |
GT Nexus | ORGANIZATION | 0.91+ |
one | QUANTITY | 0.91+ |
first industry | QUANTITY | 0.9+ |
about 10 blocks | QUANTITY | 0.9+ |
Second | QUANTITY | 0.89+ |
Chip Childers, Cloud Foundry Foundation - Cloud Foundry Summit 2017 - #CloudFoundry - #theCUBE
>> Narrator: Live, from Santa Clara in the heart of Silicon Valley, it's theCUBE. Covering Cloud Foundry Summit 2017. Brought to you by the Cloud Foundry Foundation and Pivotal. >> Hi, this is Stu Miniman, joined with my cohost, John Troyer. Happy to welcome to the program a first-time guest, Chip Childers, who's the CTO of the Cloud Foundry Foundation. Chip, fresh off the keynote stage, >> Yep. >> how's everything going? >> It's going great. We're really happy with the turnout of the conference. We are really happy with the number of large enterprises that are here to share their story. The really active vendor ecosystem around the project. It's great. It's a wonderful event so far. >> Yeah, I was looking back, I think the last time I came to the Cloud Foundry Show, it was before the Foundation existed. We were in the Hilton in San Francisco, it was obviously a way smaller group. Tell us kind of the goals of the Foundation, doing the event, bringing the community in. >> Yeah, you can think about our goals as being, of course, we're the stewards of the intellectual property, the actual software that the vendors distribute. We see our role in the ecosystem as being really two key things. One: we're focused on supporting the users, the customers, and the direct users of the open source software. That's first and foremost. Second though, we want to make sure there is a really robust market ecosystem that is wrapped around this project, right. Both in terms of the distributions, the regional providers that offer Cloud Foundry based services, but also large system integrators that are helping those customers go through digital transformation. Re-platform applications, you know, really figure out their way through this process. So, it's all about supporting the users and then supporting the market around it. >> Yeah, as we go to a lot of these events, you know, there are certain themes that emerge. There were two big ones that both showed up in what you did in the keynote. Number one is Multicloud, number two is you've got all of these various open source pieces, >> Chip: Yep. >> you know, what fits together, what interlocks together, you know, which ones sit side by side. Why don't we start with kind of the open source piece first? Because you're heavily involved in a lot of those. Cloud Foundry, you know, what are the new pieces that are bolting on, or sitting on top, or digging into it, and what's going on there? >> You know, I think first I want to start with a basic philosophy of our upstream community. There are billions of dollars that rely on this platform today. And that continues to grow. Right, because we're showing up in Fortune 500, Global 2000, as well as lots of small start-ups that are using Cloud Foundry to get code shipped faster. So our community that builds the upstream software spends a lot of time being very thoughtful about their technical decisions. So what we release, and what gets productized by the downstreams, is a complete system. From the operating system all the way up to and including the various programming languages and frameworks and everything in between. And because we release a complete platform, at a really high velocity, and so many people rely on its quality, we're very thoughtful about when is the right time to build our own, when should we adopt and embrace and continue to support another open source project, so we spend a lot of time really thinking about that.
And the areas today that I highlight around specific collaborations include the Open Service Broker API, which we actually spun out of being just a Cloud Foundry implementation. And we embraced other communities, and found a way to share the governance of that. So we move forward as a big industry together. >> Stu: Yeah, and speaking on that a little bit more. Very interesting to see. I saw Red Hat, for instance, speaking with OpenShift, Kubernetes is there. So, how should customers think about this? Are the PaaS wars over? Now you can choose all the pieces that you want? Or, it's probably oversimplifying it. >> I think it's oversimplifying it, it depends. You can go try to build your own platform if you want, through a number of components, or you can just use something like Cloud Foundry, that has solved for that. But the important thing is that we have specifically designed Cloud Foundry to allow for the backing services to come from anywhere. And so, it's both a differentiator for the various distributions of Cloud Foundry, but also an opportunity for Cloud providers, and even more importantly, it's an opportunity for the enterprise users that live in complex worlds, right? They're going to have multiple platforms, they're going to be at multiple levels of abstraction, from VMs to containers, you know, to the PaaS abstraction, even event-driven frameworks. We want that all to work really well together. Regardless of the choices you make, because that's what's most valuable to the customers. >> Okay, the other piece, networking, you talked about. Why don't you share. >> Yeah, yeah so, besides the Service Broker API, we've added support for what's called Container to Container Networking. I don't necessarily need to dig into the details there, but let's just say that when you're building microservices, the application that the user is experiencing is actually a combination of a lot of different applications. They all talk to each other and rely on each other. So we want to make sure there's a policy-based framework for describing how the web tier is going to talk to the authentication service, or is going to talk to the booking service, or the inventory service. They all need to have rules about how they communicate with each other. And we want to do that in the most efficient way possible. So we've adopted the Container Networking Interface as the standard plugin, and that is now at CNCF, the Cloud Native Computing Foundation. We think it's the right abstraction, we think it's great. It gives us access to all the fascinating work that is going on around software networking, overlay networking, an industry standard API plugin to our policy-driven framework. >> Along the same theme, Kubo, a big new project, also kind of an integration of some Cloud Foundry concepts with a broader ecosystem, in this case another CNCF project, Kubernetes. Could you speak a little bit to that? >> The Kubernetes community is doing a great job creating a great container driven experience. You know, that abstraction is all about the container. It's not about, you know, the code. So it's different than Cloud Foundry. There are workloads that make sense to run in one or the other. And we want to make sure that they run really well. Right, so the problem that we're solving with the Kubo project is what deploys Kubernetes? What supports Kubernetes if there is an infrastructure outage and a node goes offline?
Right, because it does a great job of restarting containers, but if you have ten nodes in a cluster and now you're down to nine, that's a problem. So what BOSH does is it takes care of solving the node outage level problem. You can also do rolling upgrades that are seamless, no downtime for the Kubernetes cluster. It brings a level of operational maturity to the Kubernetes users that they may not have had otherwise. >> Chip, can you bring us inside a little bit the creation of Kubo, is that something that the market and customers drove towards you? I talked to a couple other Cloud Foundry ecosystem members that were doing some other ways of integrating in Kubernetes. So what led to this way of deploying it with BOSH? >> Yeah, absolutely, so it came out of a direct collaboration between Pivotal and Google. And it was driven based on Pivotal customer demand. It also, if you speak with people from Google that are involved in the project, they also see it as a need for the Kubernetes ecosystem. So it's driven based on real-world large financial services companies that wanted to have the multiple abstractions available, and they wanted to do it with a common operational platform that is proven and mature, that they've already adopted. And then as that collaboration bore fruit, and the project was announced by Pivotal and Google several months back, they realized that they needed to move it to a vendor neutral location so that we can continue to expand the community that can work on it, that can build up the story. >> The other topic I raised at the beginning of the interview was Multicloud. So in a panel, Microsoft, Google, MTC for Amazon was there. All of the Cloud guys are going to tell you we have the best platform and can do the best things for you. >> Of course they do. >> How do you balance the "We want to live in a multicloud world" and be able to go there, versus "Oh, I'm going to take standard plus and get in a little bit deeper to make sure that we're stickier with the customers there." What role does Cloud Foundry play? What have you seen in the marketplace for that? >> Well, the public cloud providers are, if you look at the services that they offer, you can roughly categorize them with two things. One, are the infrastructure building blocks. Two, are the higher level services, like their database capabilities, their analytics capabilities, log aggregation, you know, and they all have a portfolio that varies, some have specific things that are very similar. So when we talk about Multicloud, we talk about Cloud Foundry as a way to make use of those common capabilities. Now, they're going to differentiate based on speeds and feeds, availability, whatever they choose to, but you can then as a user have choice. And then secondarily, that Open Service Broker initiative is really about saying "great, there's also all these really valuable additional capabilities that, as a user, I may choose to integrate with a Google machine learning service, or I may choose to integrate with a wonderful Microsoft capability, or an Amazon capability." And we just want to make that easy for a developer to make that choice. >> Chip, Cloud Foundry was very early in terms of a concept of a platform of services, let's not call it platform as a service right now. But you know, this platform that's going to make developers' lives easier, multi-target, Multicloud we call it now, from your laptop to anywhere.
And it's been a really interesting discussion over the last couple years as this parallel container thread has come up, with Kubernetes and Mesosphere and all the orchestration tools, and the focus has been on orchestration tools. And I've always thought Cloud Foundry was kind of way ahead of the game in saying "wait a minute, there's a set of services that you're going to have for full life-cycles, day two operations, at scale, that you all are going to have to pull together from components." As we're doing this interview here, and this year at Cloud Foundry Summit, is there anything that you think people don't kind of realize, that over and over again people who are using Cloud Foundry go, "Wow, I'm really glad I had logging or identity management," or what are some of the frameworks that people sometimes don't realize are in there that actually are a huge time-saver? >> Yeah, there are a lot of operational capabilities in the Cloud Foundry platform. When you include both our BOSH layer, as well as the elastic runtime, which is the developer-centric experience-- >> John: Anything that people don't often realize is in there? >> Well, I think that the right way to think of it is, it's all the things you need in one application, right? So we've been doing this for years as developers. In the application operations teams, we've been doing it. We've just been doing it via a bunch of tickets, we've been doing it via a bunch of scripts. What Cloud Foundry does is it takes all of those capabilities you need to really trust a platform to operate something on your behalf, and gives you the right view into it, right? The appropriate telemetry, log aggregation, and knowing that there's going to be health monitoring there. It makes it really easy. Right, so we were talking earlier about the haiku that Onsi Fakhouri from Pivotal had authored, it's appropriate. It's a promise that a platform makes. And platforms are designed to let a user trust that the declarative nature of asking a platform to do X, Y, or Z will be delivered. >> Chip, we've been hearing Pivotal talk a lot about Spring, when Cloud Foundry's involved. Is it so much so that the Foundation needs to be behind that, or support that? How does that interact and work? >> Well, we're super supportive of all the languages and the framework communities that are out there. You know, even if you pick a particular vendor, Pivotal in this case has a very strong investment in Spring, Spring Cloud, Spring Boot, they're doing really amazing things. But that's also, it's our software, you know, they steward that community, so all the other vendors actually get the advantage of that. Let's take .NET and Microsoft. Microsoft open sourced .NET. So now you can run .NET applications on Linux. Their embrace of the container details and the APIs in their operating system is making it so that now it can also run on Windows. So the whole Microsoft technology stack, languages and frameworks, they matter quite a bit to the enterprise as well. So we see ourselves as supportive of all of these communities, right? Even ones like the Ruby community. When there's an enterprise developer that chooses to use something like Ruby, with the Ruby on Rails framework, if they use Cloud Foundry, they're getting the latest and greatest version of that language and framework, they know that it's secure, they know that it's going to be patched for them. So it's actually a great experience for that developer that's working with the language.
So, we like to support all of them, we're big fans of any that work really well with the platform and maybe integrate deeper. But it's a polyglot platform. >> We want to give you the final word. For people walking away from Cloud Foundry Summit 2017, what would you want them to take away? >> Yeah, the simple takeaway that I can give you is that this is an absolutely enterprise grade open source ecosystem. And you don't hear that often, right? Because normally we talk about products being enterprise grade. >> Did somebody say in the keynote enterprise grade means that there's a huge salesforce that's going to try to sell you stuff? (Chip laughs) Well, that's coming from the buying side of the market for years. And you know, it was a bit of a joke. What is "enterprise grade?" Well, it means that there's a piece of paper that says this product will cost x dollars and the salesperson is offering it to you. So of course it's going to be enterprise grade. But really, we see it as four key things, right? It's about security, it's about being well-integrated, it's about being able to scale to the needs of even the largest enterprises, and it's also about that great developer experience. So, Cloud Foundry is an ecosystem, and all of our downstream distributions get the advantage of this really robust and mature technical community that is producing this software. >> Chip, really appreciate you sharing all the updates with us, and appreciate the foundation's support to bring theCUBE here. We'll be back with lots more coverage here from Cloud Foundry Summit 2017, you're watching theCUBE. (techno music)
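The Open Service Broker API that Chip references in this interview is concrete enough to sketch. A broker is simply an HTTP service the platform calls to discover and provision backing services. The snippet below is a minimal, hypothetical broker written with Flask; the service and plan names and IDs are invented for illustration, and a real broker would also need authentication, bindings, deprovisioning, and asynchronous operation support.

```python
# Minimal sketch of an Open Service Broker API broker (hypothetical names and IDs).
# Assumes Flask is available; real brokers add auth, bindings, deprovisioning, async ops.
from flask import Flask, jsonify

app = Flask(__name__)

CATALOG = {
    "services": [{
        "id": "example-db-service-id",      # made-up identifiers for illustration
        "name": "example-db",
        "description": "A hypothetical database service",
        "bindable": True,
        "plans": [{
            "id": "example-small-plan-id",
            "name": "small",
            "description": "A single small instance"
        }]
    }]
}

@app.route("/v2/catalog", methods=["GET"])
def catalog():
    # The platform calls this to discover which services and plans the broker offers.
    return jsonify(CATALOG)

@app.route("/v2/service_instances/<instance_id>", methods=["PUT"])
def provision(instance_id):
    # A real broker would create the backing resource (database, queue, etc.) here.
    return jsonify({}), 201

if __name__ == "__main__":
    app.run(port=8080)
```

Because the same small contract can be implemented by any service provider and consumed by multiple platforms, it is the mechanism behind the multicloud point made above: the platform stays the same while the backing services come from anywhere.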
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John Troyer | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Chip Childers | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Cloud Foundry Foundation | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
Cloud Native Computing Foundation | ORGANIZATION | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
Bosh | ORGANIZATION | 0.99+ |
Second | QUANTITY | 0.99+ |
CNCF | ORGANIZATION | 0.99+ |
San Francisco | LOCATION | 0.99+ |
Pivotal | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
Both | QUANTITY | 0.99+ |
Cloud Foundry | TITLE | 0.99+ |
Two | QUANTITY | 0.99+ |
Ruby on Rails | TITLE | 0.99+ |
first | QUANTITY | 0.99+ |
Cloud Foundry Show | EVENT | 0.99+ |
Hilton | LOCATION | 0.99+ |
Kubo | ORGANIZATION | 0.98+ |
Santa Clara | LOCATION | 0.98+ |
Chip | PERSON | 0.98+ |
Ruby | TITLE | 0.98+ |
Stu | PERSON | 0.98+ |
MTC | ORGANIZATION | 0.98+ |
Spring Boot | TITLE | 0.98+ |
one application | QUANTITY | 0.98+ |
two things | QUANTITY | 0.97+ |
Kubernetes | TITLE | 0.97+ |
first-time | QUANTITY | 0.97+ |
billions of dollars | QUANTITY | 0.97+ |
nine | QUANTITY | 0.97+ |
two key things | QUANTITY | 0.97+ |
ten nodes | QUANTITY | 0.96+ |
One | QUANTITY | 0.96+ |
Spring Cloud | TITLE | 0.96+ |
Narrator: Live | TITLE | 0.96+ |
Cloud Foundry Summit | EVENT | 0.95+ |
Global 2000 | ORGANIZATION | 0.94+ |
Cloud Foundry Summit 2017 | EVENT | 0.94+ |
Windows | TITLE | 0.94+ |
this year | DATE | 0.93+ |
four key | QUANTITY | 0.92+ |
today | DATE | 0.92+ |
Spring | TITLE | 0.92+ |
#theCUBE | ORGANIZATION | 0.91+ |
Linux | TITLE | 0.91+ |
Kubernetes | ORGANIZATION | 0.9+ |
Fortune 500 | ORGANIZATION | 0.9+ |
several months back | DATE | 0.9+ |
Dot Net | ORGANIZATION | 0.88+ |
Multicloud | ORGANIZATION | 0.86+ |
Onsi Fakhouri | PERSON | 0.86+ |
theCUBE | ORGANIZATION | 0.86+ |
Kuber | ORGANIZATION | 0.83+ |
Open Shift | TITLE | 0.82+ |
Robert Nishihara, Anyscale | AWS Startup Showcase S3 E1
(upbeat music) >> Hello everyone. Welcome to theCUBE's presentation of the "AWS Startup Showcase." The topic this episode is AI and machine learning, top startups building foundational model infrastructure. This is season three, episode one of the ongoing series covering exciting startups from the AWS ecosystem. And this time we're talking about AI and machine learning. I'm your host, John Furrier. I'm excited I'm joined today by Robert Nishihara, who's the co-founder and CEO of a hot startup called Anyscale. He's here to talk about Ray, the open source project, and Anyscale's infrastructure for foundation models as well. Robert, thank you for joining us today. >> Yeah, thanks so much as well. >> I've been following your company since the founding pre-pandemic, and you guys really had a great vision, scaled up, and are in a perfect position for this big wave that we all see with ChatGPT and OpenAI that's gone mainstream. Finally, AI has broken out through the ropes and now gone mainstream, so I think you guys are really well positioned. I'm looking forward to talking with you today. But before we get into it, introduce the core mission for Anyscale. Why do you guys exist? What is the North Star for Anyscale? >> Yeah, like you mentioned, there's a tremendous amount of excitement about AI right now. You know, I think a lot of us believe that AI can transform just about every industry. So one of the things that was clear to us when we started this company was that the amount of compute needed to do AI was just exploding. Like, to actually succeed with AI, companies like OpenAI or Google, or you know, these companies getting a lot of value from AI, were not just running these machine learning models on their laptops or on a single machine. They were scaling these applications across hundreds or thousands or more machines and GPUs and other resources in the Cloud. And so to actually succeed with AI, and this has been one of the biggest trends in computing, maybe the biggest trend in computing in, you know, in recent history, the amount of compute has been exploding. And so to actually succeed with that AI, to actually build these scalable applications and scale the AI applications, there's a tremendous software engineering lift to build the infrastructure to actually run these scalable applications. And that's very hard to do. So one of the reasons many AI projects and initiatives fail, or don't make it to production, is the need for this scale, the infrastructure lift, to actually make it happen. So our goal here with Anyscale and Ray is to make that easy, is to make scalable computing easy. So that as a developer or as a business, if you want to do AI, if you want to get value out of AI, all you need to know is how to program on your laptop. Like, all you need to know is how to program in Python. And if you can do that, then you're good to go. Then you can do what companies like OpenAI or Google do and get value out of machine learning. >> That programming example of how easy it is with Python reminds me of the early days of Cloud, when infrastructure as code was talked about: it was just making the infrastructure programmable with code. That's super important. That's what AI people want: to program AI first. That's the new trend. And I want to understand, if you don't mind explaining, the relationship that Anyscale has to these foundational models and in particular the large language models, also called LLMs, as seen with OpenAI and ChatGPT.
Before you get into the relationship that you have with them, can you explain why the hype around foundational models? Why are people going crazy over foundational models? What is it and why is it so important? >> Yeah, so foundational models and foundation models are incredibly important because they enable businesses and developers to get value out of machine learning, to use machine learning off the shelf with these large models that have been trained on tons of data and that are useful out of the box. And then, of course, you know, as a business or as a developer, you can take those foundational models and repurpose them or fine tune them or adapt them to your specific use case and what you want to achieve. But it's much easier to do that than to train them from scratch. And I think there are three, for people to actually use foundation models, there are three main types of workloads or problems that need to be solved. One is training these foundation models in the first place, like actually creating them. The second is fine tuning them and adapting them to your use case. And the third is serving them and actually deploying them. Okay, so Ray and Anyscale are used for all of these three different workloads. Companies like OpenAI or Cohere that train large language models, or open source versions like GPT-J, do that on top of Ray. There are many startups and other businesses that fine tune, that, you know, don't want to train the large underlying foundation models, but that do want to fine tune them, do want to adapt them to their purposes, and build products around them and serve them; those are also using Ray and Anyscale for that fine tuning and that serving. And so the reason that Ray and Anyscale are important here is that, you know, building and using foundation models requires a huge scale. It requires a lot of data. It requires a lot of compute, GPUs, TPUs, other resources. And to actually take advantage of that and actually build these scalable applications, there's a lot of infrastructure that needs to happen under the hood. And so you can either use Ray and Anyscale to take care of that and manage the infrastructure and solve those infrastructure problems, or you can build the infrastructure and manage the infrastructure yourself, which you can do, but it's going to slow your team down. You know, many of the businesses we work with simply don't want to be in the business of managing infrastructure and building infrastructure. They want to focus on product development and move faster. >> I know you got a keynote presentation we're going to go to in a second, but I think you hit on something I think is the real tipping point: doing it yourself, hard to do. These are things where opportunities are, and the Cloud did that with data centers. Turned a data center and made it an API. The heavy lifting went away and went to the Cloud so people could be more creative and build their product. In this case, build their creativity. Is that kind of what's the big deal? Is that kind of a big deal happening, that you guys are taking the learnings and making that available so people don't have to do that? >> That's exactly right. So today, if you want to succeed with AI, if you want to use AI in your business, infrastructure work is on the critical path for doing that. To do AI, you have to build infrastructure. You have to figure out how to scale your applications. That's going to change.
We're going to get to the point, and you know, with Ray and Anyscale, we're going to remove the infrastructure from the critical path so that as a developer or as a business, all you need to focus on is your application logic, what you want the program to do, what you want your application to do, how you want the AI to actually interface with the rest of your product. Now the way that will happen is that the infrastructure work will still happen. It'll just be under the hood and taken care of by Ray and Anyscale. And so I think something like this is really necessary for AI to reach its potential, for AI to have the impact and the reach that we think it will, you have to make it easier to do. >> And just for clarification to point out, if you don't mind explaining the relationship of Ray and Anyscale real quick just before we get into the presentation. >> So Ray is an open source project. We created it. We were at Berkeley doing machine learning. We started Ray to provide a simple open source tool for building and running scalable applications. And Anyscale is the managed version of Ray, basically we will run Ray for you in the Cloud, provide a lot of tools around the developer experience and managing the infrastructure and providing more performance and superior infrastructure. >> Awesome. I know you got a presentation on Ray and Anyscale and you guys are positioning it as the infrastructure for foundational models. So I'll let you take it away and then when you're done presenting, we'll come back, I'll probably grill you with a few questions and then we'll close it out, so take it away. >> Robert: Sounds great. So I'll say a little bit about how companies are using Ray and Anyscale for foundation models. The first thing I want to mention is just why we're doing this in the first place. And the underlying observation, the underlying trend here, and this is a plot from OpenAI, is that the amount of compute needed to do machine learning has been exploding. It's been growing at something like 35 times every 18 months. This is absolutely enormous. And other people have written papers measuring this trend and you get different numbers. But the point is, no matter how you slice and dice it, it's an astronomical rate. Now if you compare that to something we're all familiar with, like Moore's Law, which says that, you know, the processor performance doubles every roughly 18 months, you can see that there's just a tremendous gap between the needs, the compute needs of machine learning applications, and what you can do with a single chip, right. So even if Moore's Law were continuing strong and you know, doing what it used to be doing, even if that were the case, there would still be a tremendous gap between what you can do with the chip and what you need in order to do machine learning. And so given this graph, what we've seen, and what has been clear to us since we started this company, is that doing AI requires scaling. There's no way around it. It's not a nice to have, it's really a requirement. And so that led us to start Ray, which is the open source project that we started to make it easy to build these scalable Python applications and scalable machine learning applications. And since we started the project, it's been adopted by a tremendous number of companies.
Companies like OpenAI, which use Ray to train their large models like ChatGPT, companies like Uber, which run all of their deep learning and classical machine learning on top of Ray, companies like Shopify or Spotify or Instacart or Lyft or Netflix, ByteDance, which use Ray for their machine learning infrastructure. Companies like Ant Group, which makes Alipay, you know, they use Ray across the board for fraud detection, for online learning, for detecting money laundering, you know, for graph processing, stream processing. Companies like Amazon, you know, run Ray at a tremendous scale and just petabytes of data every single day. And so the project has seen just enormous adoption since, over the past few years. And one of the most exciting use cases is really providing the infrastructure for building training, fine tuning, and serving foundation models. So I'll say a little bit about, you know, here are some examples of companies using Ray for foundation models. Cohere trains large language models. OpenAI also trains large language models. You can think about the workloads required there are things like supervised pre-training, also reinforcement learning from human feedback. So this is not only the regular supervised learning, but actually more complex reinforcement learning workloads that take human input about what response to a particular question, you know is better than a certain other response. And incorporating that into the learning. There's open source versions as well, like GPTJ also built on top of Ray as well as projects like Alpa coming out of UC Berkeley. So these are some of the examples of exciting projects in organizations, training and creating these large language models and serving them using Ray. Okay, so what actually is Ray? Well, there are two layers to Ray. At the lowest level, there's the core Ray system. This is essentially low level primitives for building scalable Python applications. Things like taking a Python function or a Python class and executing them in the cluster setting. So Ray core is extremely flexible and you can build arbitrary scalable applications on top of Ray. So on top of Ray, on top of the core system, what really gives Ray a lot of its power is this ecosystem of scalable libraries. So on top of the core system you have libraries, scalable libraries for ingesting and pre-processing data, for training your models, for fine tuning those models, for hyper parameter tuning, for doing batch processing and batch inference, for doing model serving and deployment, right. And a lot of the Ray users, the reason they like Ray is that they want to run multiple workloads. They want to train and serve their models, right. They want to load their data and feed that into training. And Ray provides common infrastructure for all of these different workloads. So this is a little overview of what Ray, the different components of Ray. So why do people choose to go with Ray? I think there are three main reasons. The first is the unified nature. The fact that it is common infrastructure for scaling arbitrary workloads, from data ingest to pre-processing to training to inference and serving, right. This also includes the fact that it's future proof. AI is incredibly fast moving. And so many people, many companies that have built their own machine learning infrastructure and standardized on particular workflows for doing machine learning have found that their workflows are too rigid to enable new capabilities. 
If they want to do reinforcement learning, if they want to use graph neural networks, they don't have a way of doing that with their standard tooling. And so Ray, being future proof and being flexible and general, gives them that ability. Another reason people choose Ray and Anyscale is the scalability. This is really our bread and butter. This is the reason, the whole point of Ray, you know, making it easy to go from your laptop to running on thousands of GPUs, making it easy to scale your development workloads and run them in production, making it easy to scale, you know, training, to scale data ingest, pre-processing and so on. So scalability and performance, you know, are critical for doing machine learning and that is something that Ray provides out of the box. And lastly, Ray is an open ecosystem. You can run it anywhere. You can run it on any Cloud provider. Google, you know, Google Cloud, AWS, Azure. You can run it on your Kubernetes cluster. You can run it on your laptop. It's extremely portable. And not only that, it's framework agnostic. You can use Ray to scale arbitrary Python workloads. You can use it to scale, and it integrates with, libraries like TensorFlow or PyTorch or JAX or XGBoost or Hugging Face or PyTorch Lightning, right, or Scikit-learn or just your own arbitrary Python code. It's open source. And in addition to integrating with the rest of the machine learning ecosystem and these machine learning frameworks, you can use Ray along with all of the other tooling in the machine learning ecosystem. That's things like Weights & Biases or MLflow, right. Or you know, different data platforms like Databricks, you know, Delta Lake or Snowflake, or tools for model monitoring, for feature stores, all of these integrate with Ray. And that's, you know, Ray provides that kind of flexibility so that you can integrate it into the rest of your workflow. And then Anyscale is the scalable compute platform that's built on top, you know, that provides Ray. So Anyscale is a managed Ray service that runs in the Cloud. And what Anyscale does is it offers the best way to run Ray. And if you think about what you get with Anyscale, there are fundamentally two things. One is about moving faster, accelerating the time to market. And you get that by having the managed service so that as a developer you don't have to worry about managing infrastructure, you don't have to worry about configuring infrastructure. It also provides, you know, optimized developer workflows. Things like easily moving from development to production, things like having the observability tooling, the debuggability to actually easily diagnose what's going wrong in a distributed application. So things like the dashboards and the other kinds of tooling for collaboration, for monitoring and so on. And then on top of that, so that's the first bucket, developer productivity, moving faster, faster experimentation and iteration. The second reason that people choose Anyscale is superior infrastructure. So this is things like, you know, cost efficiency, being able to easily take advantage of spot instances, being able to get higher GPU utilization, things like faster cluster startup times and auto scaling. Things like just overall better performance and faster scheduling. And so these are the kinds of things that Anyscale provides on top of Ray. It's the managed infrastructure. It's fast, it's like the developer productivity and velocity as well as performance. So this is what I wanted to share about Ray and Anyscale.
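For readers who want to see what the Ray core primitives Robert describes look like in code, here is a minimal sketch, assuming a recent Ray release; `@ray.remote` and `ray.get` are real Ray APIs, while the `score_batch` task and `Counter` actor are made-up examples, not anything from the presentation.

```python
# A minimal sketch of the Ray core primitives described above:
# taking an ordinary Python function or class and running it on a cluster.
# @ray.remote and ray.get() are real Ray APIs; the task and actor below
# (score_batch, Counter) are hypothetical examples.
import ray

ray.init()  # connects to an existing cluster, or starts a local one

@ray.remote
def score_batch(batch):
    # stand-in for real work, e.g. preprocessing or batch inference
    return sum(batch) / len(batch)

@ray.remote
class Counter:
    # a stateful "actor": a Python class scheduled somewhere in the cluster
    def __init__(self):
        self.total = 0

    def add(self, value):
        self.total += value
        return self.total

# Tasks run in parallel across whatever nodes the cluster has.
futures = [score_batch.remote(list(range(n, n + 10))) for n in range(8)]
print(ray.get(futures))

counter = Counter.remote()
print(ray.get(counter.add.remote(5)))
```

The scalable libraries Robert mentions (data ingest, training, tuning, serving) are layered on top of exactly these task and actor primitives.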
>> John: Awesome. >> Provide that context. But John, I'm curious what you think. >> I love it. I love the, so first of all, it's a platform because that's the platform architecture right there. So just to clarify, this is an Anyscale platform, not- >> That's right. >> Tools. So you got tools in the platform. Okay, that's key. Love that managed service. Just curious, you mentioned Python multiple times, is that because of PyTorch and TensorFlow, or Python's the most friendly with machine learning, or is it because it's very common amongst all developers? >> That's a great question. Python is the language that people are using to do machine learning. So it's the natural starting point. Now, of course, Ray is actually designed in a language agnostic way and there are companies out there that use Ray to build scalable Java applications. But for the most part right now we're focused on Python and being the best way to build these scalable Python and machine learning applications. But, of course, down the road there always is that potential. >> So if you're slinging Python code out there and you're watching that, you're watching this video, get on the Anyscale bus quickly. Also, I just, while you were giving the presentation, I couldn't help, since you mentioned OpenAI, which by the way, congratulations 'cause they've had great scale, I've noticed in their rapid growth 'cause they were the fastest company to get to that number of users of anyone in the history of the computer industry, so major success. OpenAI and ChatGPT, huge fan. I'm not a skeptic at all. I think it's just the beginning, so congratulations. But I actually typed into ChatGPT, what are the top three benefits of Anyscale, and it came up with scalability, flexibility, and ease of use. Obviously, scalability is what you guys are called. >> That's pretty good. >> So that's what they came up with. So they nailed it. Did you have an inside prompt training it there? Only kidding. (Robert laughs) >> Yeah, we hard coded that one. >> But that's the kind of thing that came up really, really quickly. If I asked it to write a sales document, it probably will, but this is the future interface. This is why people are getting excited about the foundational models and the large language models, because it's allowing the interface with the user, the consumer, to be more human, more natural. And this clearly will be in every application in the future. >> Absolutely. This is how people are going to interface with software, how they're going to interface with products in the future. It's not just something, you know, not just a chat bot that you talk to. This is going to be how you get things done, right. How you use your web browser or how you use, you know, how you use Photoshop or how you use other products. Like you're not going to spend hours learning all the APIs and how to use them. You're going to talk to it and tell it what you want it to do. And of course, you know, if it doesn't understand it, it's going to ask clarifying questions. You're going to have a conversation and then it'll figure it out. >> This is going to be one of those things, we're going to look back at this time, Robert, and say, "Yeah, from that company, that was the beginning of that wave." And just like AWS and Cloud Computing, the folks who got in early really were in position when, say, the pandemic came.
So getting in early is a good thing and that's what everyone's talking about is getting in early and playing around, maybe replatforming or even picking one or few apps to refactor with some staff and managed services. So people are definitely jumping in. So I have to ask you the ROI cost question. You mentioned some of those, Moore's Law versus what's going on in the industry. When you look at that kind of scale, the first thing that jumps out at people is, "Okay, I love it. Let's go play around." But what's it going to cost me? Am I going to be tied to certain GPUs? What's the landscape look like from an operational standpoint, from the customer? Are they locked in and the benefit was flexibility, are you flexible to handle any Cloud? What is the customers, what are they looking at? Basically, that's my question. What's the customer looking at? >> Cost is super important here and many of the companies, I mean, companies are spending a huge amount on their Cloud computing, on AWS, and on doing AI, right. And I think a lot of the advantage of Anyscale, what we can provide here is not only better performance, but cost efficiency. Because if we can run something faster and more efficiently, it can also use less resources and you can lower your Cloud spending, right. We've seen companies go from, you know, 20% GPU utilization with their current setup and the current tools they're using to running on Anyscale and getting more like 95, you know, 100% GPU utilization. That's something like a five x improvement right there. So depending on the kind of application you're running, you know, it's a significant cost savings. We've seen companies that have, you know, processing petabytes of data every single day with Ray going from, you know, getting order of magnitude cost savings by switching from what they were previously doing to running their application on Ray. And when you have applications that are spending, you know, potentially $100 million a year and getting a 10 X cost savings is just absolutely enormous. So these are some of the kinds of- >> Data infrastructure is super important. Again, if the customer, if you're a prospect to this and thinking about going in here, just like the Cloud, you got infrastructure, you got the platform, you got SaaS, same kind of thing's going to go on in AI. So I want to get into that, you know, ROI discussion and some of the impact with your customers that are leveraging the platform. But first I hear you got a demo. >> Robert: Yeah, so let me show you, let me give you a quick run through here. So what I have open here is the Anyscale UI. I've started a little Anyscale Workspace. So Workspaces are the Anyscale concept for interactive developments, right. So here, imagine I'm just, you want to have a familiar experience like you're developing on your laptop. And here I have a terminal. It's not on my laptop. It's actually in the cloud running on Anyscale. And I'm just going to kick this off. This is going to train a large language model, so OPT. And it's doing this on 32 GPUs. We've got a cluster here with a bunch of CPU cores, bunch of memory. And as that's running, and by the way, if I wanted to run this on instead of 32 GPUs, 64, 128, this is just a one line change when I launch the Workspace. And what I can do is I can pull up VS code, right. Remember this is the interactive development experience. I can look at the actual code. Here it's using Ray train to train the torch model. 
We've got the training loop and we're saying that each worker gets access to one GPU and four CPU cores. And, of course, as I make the model larger, this is using deep speed, as I make the model larger, I could increase the number of GPUs that each worker gets access to, right. And how that is distributed across the cluster. And if I wanted to run on CPUs instead of GPUs or a different, you know, accelerator type, again, this is just a one line change. And here we're using Ray train to train the models, just taking my vanilla PyTorch model using Hugging Face and then scaling that across a bunch of GPUs. And, of course, if I want to look at the dashboard, I can go to the Ray dashboard. There are a bunch of different visualizations I can look at. I can look at the GPU utilization. I can look at, you know, the CPU utilization here where I think we're currently loading the model and running that actual application to start the training. And some of the things that are really convenient here about Anyscale, both I can get that interactive development experience with VS code. You know, I can look at the dashboards. I can monitor what's going on. It feels, I have a terminal, it feels like my laptop, but it's actually running on a large cluster. And I can, with however many GPUs or other resources that I want. And so it's really trying to combine the best of having the familiar experience of programming on your laptop, but with the benefits, you know, being able to take advantage of all the resources in the Cloud to scale. And it's like when, you know, you're talking about cost efficiency. One of the biggest reasons that people waste money, one of the silly reasons for wasting money is just forgetting to turn off your GPUs. And what you can do here is, of course, things will auto terminate if they're idle. But imagine you go to sleep, I have this big cluster. You can turn it off, shut off the cluster, come back tomorrow, restart the Workspace, and you know, your big cluster is back up and all of your code changes are still there. All of your local file edits. It's like you just closed your laptop and came back and opened it up again. And so this is the kind of experience we want to provide for our users. So that's what I wanted to share with you. >> Well, I think that whole, couple of things, lines of code change, single line of code change, that's game changing. And then the cost thing, I mean human error is a big deal. People pass out at their computer. They've been coding all night or they just forget about it. I mean, and then it's just like leaving the lights on or your water running in your house. It's just, at the scale that it is, the numbers will add up. That's a huge deal. So I think, you know, compute back in the old days, there's no compute. Okay, it's just compute sitting there idle. But you know, data cranking the models is doing, that's a big point. >> Another thing I want to add there about cost efficiency is that we make it really easy to use, if you're running on Anyscale, to use spot instances and these preemptable instances that can just be significantly cheaper than the on-demand instances. And so when we see our customers go from what they're doing before to using Anyscale and they go from not using these spot instances 'cause they don't have the infrastructure around it, the fault tolerance to handle the preemption and things like that, to being able to just check a box and use spot instances and save a bunch of money. 
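The demo pairs a per-worker PyTorch training loop with a scaling configuration that sets the worker count and the one GPU plus four CPU cores each worker gets. Here is a rough sketch of that pattern, assuming Ray's `TorchTrainer` and `ScalingConfig`; exact module paths vary across Ray versions, and the training function is a stub rather than the OPT/DeepSpeed job shown on screen.

```python
# A rough sketch of the Ray Train pattern shown in the demo: a per-worker
# training loop plus a ScalingConfig saying how many workers to launch and
# what each one gets (1 GPU, 4 CPU cores). Scaling from 32 to 64 GPUs is
# the one-line num_workers change Robert mentions. Module paths differ a
# bit across Ray versions; train_loop is a placeholder, not the real job.
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer

def train_loop(config):
    # In the real demo this builds a Hugging Face / DeepSpeed OPT model;
    # here it is just a stub to show where the per-worker logic lives.
    pass

trainer = TorchTrainer(
    train_loop_per_worker=train_loop,
    scaling_config=ScalingConfig(
        num_workers=32,              # change to 64 or 128 to scale out
        use_gpu=True,
        resources_per_worker={"CPU": 4, "GPU": 1},
    ),
)
result = trainer.fit()
```

Switching from GPUs to CPUs or another accelerator type is, as in the demo, a change to this one configuration block rather than to the training code itself.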
>> You know, this was my whole, my feature article at Reinvent last year when I met with Adam Selipsky, this next gen Cloud is here. I mean, it's not auto scale, it's infrastructure scale. It's agility. It's flexibility. I think this is where the world needs to go. Almost what DevOps did for Cloud and what you were showing me that demo had this whole SRE vibe. And remember Google had site reliability engines to manage all those servers. This is kind of like an SRE vibe for data at scale. I mean, a similar kind of order of magnitude. I mean, I might be a little bit off base there, but how would you explain it? >> It's a nice analogy. I mean, what we are trying to do here is get to the point where developers don't think about infrastructure. Where developers only think about their application logic. And where businesses can do AI, can succeed with AI, and build these scalable applications, but they don't have to build, you know, an infrastructure team. They don't have to develop that expertise. They don't have to invest years in building their internal machine learning infrastructure. They can just focus on the Python code, on their application logic, and run the stuff out of the box. >> Awesome. Well, I appreciate the time. Before we wrap up here, give a plug for the company. I know you got a couple websites. Again, go, Ray's got its own website. You got Anyscale. You got an event coming up. Give a plug for the company looking to hire. Put a plug in for the company. >> Yeah, absolutely. Thank you. So first of all, you know, we think AI is really going to transform every industry and the opportunity is there, right. We can be the infrastructure that enables all of that to happen, that makes it easy for companies to succeed with AI, and get value out of AI. Now we have, if you're interested in learning more about Ray, Ray has been emerging as the standard way to build scalable applications. Our adoption has been exploding. I mentioned companies like OpenAI using Ray to train their models. But really across the board companies like Netflix and Cruise and Instacart and Lyft and Uber, you know, just among tech companies. It's across every industry. You know, gaming companies, agriculture, you know, farming, robotics, drug discovery, you know, FinTech, we see it across the board. And all of these companies can get value out of AI, can really use AI to improve their businesses. So if you're interested in learning more about Ray and Anyscale, we have our Ray Summit coming up in September. This is going to highlight a lot of the most impressive use cases and stories across the industry. And if your business, if you want to use LLMs, you want to train these LLMs, these large language models, you want to fine tune them with your data, you want to deploy them, serve them, and build applications and products around them, give us a call, talk to us. You know, we can really take the infrastructure piece, you know, off the critical path and make that easy for you. So that's what I would say. And, you know, like you mentioned, we're hiring across the board, you know, engineering, product, go-to-market, and it's an exciting time. >> Robert Nishihara, co-founder and CEO of Anyscale, congratulations on a great company you've built and continuing to iterate on and you got growth ahead of you, you got a tailwind. I mean, the AI wave is here. 
I think OpenAI and ChatGPT, a customer of yours, have really opened up the mainstream visibility into this new generation of applications, user interface, roll of data, large scale, how to make that programmable so we're going to need that infrastructure. So thanks for coming on this season three, episode one of the ongoing series of the hot startups. In this case, this episode is the top startups building foundational model infrastructure for AI and ML. I'm John Furrier, your host. Thanks for watching. (upbeat music)
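Robert lists serving and deployment as the third foundation-model workload Ray covers. As a companion to the sketches above, here is a minimal Ray Serve example; `@serve.deployment` and `serve.run` are real Ray Serve APIs, while the `EchoModel` class and its canned response are placeholders rather than a real LLM server.

```python
# A minimal sketch of the serving workload mentioned in the interview,
# using Ray Serve. @serve.deployment and serve.run are real APIs; EchoModel
# and its echoed reply stand in for an actual fine-tuned model behind an
# HTTP endpoint.
from ray import serve
from starlette.requests import Request

@serve.deployment(num_replicas=2)
class EchoModel:
    async def __call__(self, request: Request) -> dict:
        payload = await request.json()
        prompt = payload.get("prompt", "")
        # A real deployment would run the model here.
        return {"completion": f"echo: {prompt}"}

app = EchoModel.bind()
serve.run(app)  # serves on http://127.0.0.1:8000/ by default
```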
Breaking Analysis: How Palo Alto Networks Became the Gold Standard of Cybersecurity
>> From "theCUBE" Studios in Palo Alto and Boston, bringing you data-driven insights from "theCUBE" and ETR. This is "Breaking Analysis" with Dave Vellante. >> As an independent pure play company, Palo Alto Networks has earned its status as the leader in security. You can measure this in a variety of ways. Revenue, market cap, execution, ethos, and most importantly, conversations with customers generally, and CISOs specifically, who consistently affirm this position. The company's on track to double its revenues in fiscal year 23 relative to fiscal year 2020. Despite macro headwinds, which are likely to carry through next year, Palo Alto owes its position to a clarity of vision and strong execution on a TAM expansion strategy through acquisitions and integration into its cloud and SaaS offerings. Hello and welcome to this week's "Wikibon Cube Insights" powered by ETR. In this Breaking Analysis, and ahead of Palo Alto Ignite, the company's user conference, we bring you the next chapter on top of last week's cybersecurity update. We're going to dig into the ETR data on Palo Alto Networks as we promised, and provide a glimpse of what we're going to look for at "Ignite" and posit what Palo Alto needs to do to stay on top of the hill. Now, the challenges for cybersecurity professionals: dead simple to understand. Solving it, not so much. This is a taxonomic eye test, if you will, from Optiv. It's one of our favorite artifacts to make the point that the cybersecurity landscape is a mosaic of stovepipes. Security professionals have to work with dozens of tools, many legacy combined with shiny new toys, to try and keep up with the relentless pace of innovation catalyzed by incredibly capable, well-funded and motivated adversaries. Cybersecurity is an anomalous market in that the leaders have low single digit market shares. Think about that. Cisco at one point held 60% market share in the networking business and it's still deep into the 40s. Oracle captures around 30% of database market revenue. EMC in storage at its peak had more than 30% of that market. Even Dell's PC market shares, you know, are in the mid 20s or even over that from a revenue standpoint. So cybersecurity from a market share standpoint is even more fragmented perhaps than the software industry. Okay, you get the point. So despite its position as the number one player, Palo Alto might have maybe three, maybe 4% of the total market, depending on what you use as your denominator, but just a tiny slice. So how is it that we can sit here and declare Palo Alto as the undisputed leader? Well, we probably wouldn't go that far. They probably have quite a bit of competition. But this CISO from a recent ETR round table discussion with our friend Eric Bradley summed up Palo Alto's allure, we thought, pretty well. The question was, why Palo Alto Networks? Here's the answer. Because of its completeness as a platform, its ability to integrate with its own products or those they acquire, integrate and then rebrand as their own. We've looked at other vendors, we just didn't think they were as mature, and we already had implemented some of the Palo Alto tools like the firewalls and stuff, and we thought why not go holistically with the vendor, a single throat to choke, if you will, if stuff goes wrong. And I think that was probably the primary driver, and familiarity with the tools and the resources that they provided. Now here's another stat from ETR's Eric Bradley. He gave us a glimpse of the January survey that's in the field now.
The percent of IT buyers stating that they plan to consolidate redundant vendors went from 34% in the October survey and now stands at 44%. So we feel this bodes well for consolidators like Palo Alto Networks. And the same is true of Microsoft's kind of good enough approach. It should also be true for CrowdStrike, although last quarter we saw softness reported on in their SMB market, whereas interestingly MongoDB actually saw consistent strength from its SMB and its self-serve. So that's something that we're watching very closely. Now, Palo Alto Networks has held up better than most of its peers in the stock market. So let's take a look at that real quick. This chart gives you a sense of how well. It's a one year comparison of Palo Alto with the BUG ETF. That's the cyber basket that we like to use for comparison, along with CrowdStrike, Zscaler, and Okta. Now remember Palo Alto, they didn't run up as much as CrowdStrike, ZS and Okta during the pandemic, but you can see it's now down quote unquote only 9% for the year. Whereas the cyber basket ETF is off 27%, roughly in line with the NASDAQ. We're now showing that CrowdStrike is down 44%, Zscaler down 61% and Okta off a whopping 72% in the past 12 months. Now as we've indicated, Palo Alto is making a strong case for consolidating point tools, and we think it will have a much harder time getting customers to switch off of big platforms like Cisco, who's another leader in network security. But based on the fragmentation in the market there's plenty of room to grow in our view. We asked Breaking Analysis contributor Chip Symington for his take on the technicals of the stock and he said that despite Palo Alto's leadership position it doesn't seem to make much difference these days. It's all about interest rates. And even though this name has performed better than its peers, it looks like the stock wants to keep testing its 52 week lows, but he thinks Palo Alto got oversold during the last big selloff. And the fact that the company's free cash flow is so strong probably keeps it at the 150 level or above, maybe bouncing around there for a while. If it breaks through that to the downside, its next test is at that low of around the 140 level. So thanks for that, Chip. Now having got that out of the way, as we said on the previous chart, Palo Alto has strong opinions, and its founder and CTO, Nir Zuk, is extremely clear on that point of view. So let's take a look at how Palo Alto got to where it is today and how we think you should think about its future. The company was founded around 18 years ago as a network security company focused on what they called NextGen firewalls. Now, what Palo Alto did was different. They didn't try to stuff a bunch of functionality inside of a hardware box. Rather they layered network security functions on top of their firewalls and delivered value as a service through software, running at the time in its own cloud. So pretty obvious today, but forward thinking for the time, and now they've moved to a more true cloud native platform and much more activity in the public cloud. In February, 2020, right before the pandemic, we reported on the divergence in market values between Palo Alto and Fortinet and we cited some challenges that Palo Alto was having transitioning to a cloud native model. And at the time we said we were confident that Palo Alto would make it through the knothole. And you could see from the previous chart that it has. So the company's architectural approach was to do the heavy lifting in the cloud.
And this eliminates the need for customers to deploy sensors on prem or proxies on prem or sandboxes on prem. Sandboxes, you know, for instance, are vulnerable to overwhelming attacks. Think about it, if your sandbox is on prem you're not going to be updating that every day. No way. You're probably not going to update it even every week or every month. And if the capacity of your sandbox is, let's say, 20,000 files an hour, you know a hacker's just going to turn up the volume, it'll overwhelm you. They'll send a hundred thousand email attachments into your sandbox and they'll choke you out, and then they'll have the run of the house while you're trying to recover. Now the cloud doesn't completely prevent that, but what it does, it definitely increases the hacker's cost. So they're going to probably hit some easier targets, and that's kind of the objective of security firms. You know, increase the denominator on the ROI. All right, the next thing that Palo Alto did is start acquiring aggressively, I think we counted 17 or 18 acquisitions, to expand the TAM beyond network security into endpoint, CASB, PaaS security, IaaS security, container security, serverless security, incident response, SD WAN, CICD pipeline security, attack surface management, supply chain security, just recently with the acquisition of Cider Security. And Palo Alto by all accounts takes the time to integrate these into its cloud and SaaS platform called Prisma, unlike many acquisitive companies in the past (EMC was a really good example) where you ended up with kind of a Franken portfolio. Now all this leads us to believe that Palo Alto wants to be the consolidator and is in a good position to do so. But beyond that, as multi-cloud becomes more prevalent and more of a strategy, customers tell us they want a consistent experience across clouds. And it's going to be the same, by the way, with IoT, sort of the next wave here. Customers don't want another stovepipe. So we think Palo Alto is in a good position to build what we call the security supercloud, that layer above the clouds that brings a common experience for devs and operational teams. So of course the obvious question is this: can Palo Alto Networks continue on this path of acquire and integrate and still maintain best of breed status? Can it? Will it? Does it even have to? As Holger Mueller of Constellation Research and I talk about all the time, integrated suites seem to always beat best of breed in the long run. We'll come back to that. Now, this next graphic that we're going to show you underscores this question about portfolio. Here's a picture, and I don't expect you to digest it all, but it's a screen grab of Palo Alto's product and solutions portfolios: network, cloud, network security rather, cloud security, SASE, CNAPP, endpoint, Unit 42, which is their threat intelligence platform, and every imaginable security service and solution for customers. Well, maybe not every, I'm sure there's more to come, like supply chain with the recent Cider acquisition, and maybe more IoT beyond ZingBox, an earlier acquisition, but we're sure there will be more in the future, both organic and inorganic. Okay, let's bring in more of the ETR survey data. For those of you who don't know ETR, they are the number one enterprise data platform, surveying thousands of end customers every quarter, with additional drill down surveys and customer round tables, just an awesome SaaS enabled platform.
And here's a view that shows net score, or spending momentum, on the vertical axis, and pervasion, or presence within the ETR data set, on the horizontal axis. You see that red dotted line at 40%. Anything at or over that indicates a highly elevated net score. And as you can see, Palo Alto is right on that line, just under. And I'll give you another glimpse: it looks like Palo Alto, despite the macro, may even just edge up a bit in the next survey based on the glimpse that Eric gave us. Now those colored bars in the bottom right corner, they show the breakdown of Palo Alto's net score and underscore the methodology that ETR uses. The lime green is new customer adoptions, that's 7%. The forest green at 38% represents the percent of customers that are spending 6% or more on Palo Alto solutions. The gray, at 48%, is flat spending, plus or minus 5%. The pinkish at 5% is spending down on Palo Alto Networks products by 6% or worse. And the bright red at only 2% is churn or defections. Very low single digit numbers for Palo Alto, that's a real positive. What you do is you subtract the red from the green and you get a net score of 38%, which is very good for a company of Palo Alto's size. And we'll note this is based on just under 400 responses in the ETR survey that are Palo Alto customers, out of around 1300 in the total survey. It's a really good representation of Palo Alto. And you can see the other leading companies like CrowdStrike, Okta, Zscaler, Fortinet, Cisco, they loom large with similar aspirations. Well, maybe not so much Okta. They don't necessarily want to rule the world. They want to rule identity, and of course the ever ubiquitous Microsoft is in the upper right. Now drilling deeper into the ETR data, let's look at how Palo Alto has progressed over the last three surveys in terms of market presence in the survey. This view of the data shows presence in the data going back to October, 2021, that's the gray bars. The blue is July 22 and the yellow is the latest survey from October, 2022. Remember, the January survey is currently in the field. Now the leftmost set of data there shows size of company. The middle set of data shows a select number of industries, and the rightmost shows geographic region. Notice anything? Yes, Palo Alto is up across the board relative to both this past summer and last fall. So that's pretty impressive. Palo Alto Networks CEO Nikesh Arora stressed on the last earnings call that the company is seeing somewhat elongated deal approvals and sometimes splitting up the size of deals. He stressed that certain industries like energy, government and financial services continue to spend. But we would expect even a pullback there as companies get more conservative. But the point is that Nikesh talked about how they're hiring more sales pros to work the pipeline, because they understand that they have to work harder to pull deals forward 'cause they got to get more approvals and they got to increase the volume that's coming through the pipeline to account for the possibility that certain companies are going to split up the deals, you know, large deals they want to split into smaller bite-size chunks. So they're really going hard after the go-to-market expansion to account for that. All right, so we're going to wrap by sharing what we expect and what we're going to probe for at Palo Alto Ignite next week. Lisa Martin and I will be hosting "theCUBE" and here's what we'll be looking for.
First, it's a four day event at the MGM, with the meat of the program on days two and three. Day two has the big keynote. That's when we'll start our broadcasting, we're going for two days. Now, we've never done Palo Alto Ignite before, but our understanding is it's a pretty technically oriented crowd that's going to be eager to hear what CTO and founder Nir Zuk has to say. As well, CEO Nikesh Arora, and in addition longtime friend of "theCUBE" and current president BJ Jenkins, are going to be speaking. Wendy Whitmore, who runs Unit 42, will be there along with several other high profile Palo Alto execs, and Thomas Kurian from Google is a featured speaker. Lee Claridge, who is Palo Alto's chief product officer, we think is going to be giving the audience heavy doses of Prisma Cloud and Cortex enhancements. Now, Cortex, you might remember, came from an acquisition and does threat detection and attack surface management. And we're going to hear a lot, we think, about security automation. So we'll be listening for how Cortex has been integrated and what kind of uptake it's getting. We've done some, you know, modeling from the ETR data. The guys have done some modeling of Cortex, and you know, it looks like it's got a lot of upside, and through the Palo Alto go-to-market machine, you know, it could really pick up momentum. That's something that we'll be probing for. Now, one of the other things that we'll be watching is pricing. We want to talk to customers about their spend optimization, their spending patterns, their vendor consolidation strategies. Look, Palo Alto is a premium offering. It charges for value. It's expensive. So we also want to understand what kind of switching costs customers are willing to absorb and how onerous they are, and what does the business case look like? How are they thinking about that business case? We also want to understand and really probe on how will Palo Alto maintain best of breed as it continues to acquire and integrate to expand its TAM and appeal as that one-stop shop. You know, can it do that, as we talked about before? And will it do that? There's also an interesting tension going on, sort of changing subjects here, in security. There's a guy named Edward Hellekey who's been in "theCUBE" before. He hasn't been in "theCUBE" in a while, but he's a security pro who has educated us on the nuances of protecting data privacy, public policy, how it varies by region and how complicated it is relative to security. Because with security, technically you have to show a chain of custody that proves unequivocally, for example, that data has been deleted or scrubbed, or that metadata doesn't include any residual private data that violates the laws, the local laws. And the tension is this: you need good data, and lots of it, to have good security, really the more the better. But government policy is often at odds and a major blocker to sharing data, and it's getting more so. So we want to understand this tension and how companies like Palo Alto are dealing with it. Are customers testing public policy in courts? We think not quite yet. Are governments making exceptions in policies like GDPR that favor security over data privacy? What are the trade-offs there? And finally, one theme of this Breaking Analysis is what does Palo Alto have to do to stay on top? And we would sum it up with three words. Ecosystem, ecosystem, ecosystem.
And we said this at CrowdStrike Falcon in September: the one concern we had was the pace of ecosystem development for CrowdStrike. Is collaboration possible with competitors? Is it being adopted aggressively? Is Palo Alto being adopted aggressively by global system integrators? What's the uptake there? What about developers? Look, the hallmark of a cloud company, which Palo Alto is, a cloud security company, is a thriving ecosystem that has entries into and exits from its platform. So we'll be looking at what that ecosystem looks like, how vibrant and inclusive it is, where the public clouds fit, and whether Palo Alto Networks can really become the security supercloud. Okay, that's a wrap. Stop by next week. If you're in Vegas, say hello to the "theCUBE" team. We have an unbelievable lineup on the program. Now if you're not there, check out our coverage on theCube.net. I want to thank Eric Bradley for sharing a glimpse, on short notice, of the upcoming survey from ETR, and his thoughts. And as always, thanks to Chip Symington for his sharp comments. Want to thank Alex Morrison, who's on production and manages the podcast, Ken Schiffman as well in our Boston studio, Kristen Martin and Cheryl Knight, who help get the word out on social and of course in our newsletters, and Rob Hof, our editor in chief over at SiliconANGLE, who does some awesome editing, thank you to all. Remember, all these episodes are available as podcasts. Wherever you listen, all you got to do is search "Breaking Analysis" podcasts. I publish each week on wikibon.com and siliconangle.com, where you can email me at david.valante@siliconangle.com or DM me at D Vellante or comment on our LinkedIn post. And please do check out etr.ai. They've got the best survey data in the enterprise tech business. This is Dave Vellante for "theCUBE" Insights powered by ETR. Thanks for watching. We'll see you next week at "Ignite" or next time on "Breaking Analysis". (upbeat music)
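Earlier in the segment Dave walks through ETR's net score breakdown for Palo Alto Networks. As a quick companion, here is that arithmetic as a small sketch; the function and variable names are ours, and the percentages are the ones quoted in the commentary above.

```python
# ETR net score methodology as described above: the share of customers
# adding the platform or increasing spend, minus the share decreasing
# spend or churning. The numbers are the Palo Alto Networks breakdown
# quoted in the segment; the helper function is only our illustration.
def net_score(new, increasing, flat, decreasing, churning):
    assert abs(new + increasing + flat + decreasing + churning - 100) < 1e-9
    return (new + increasing) - (decreasing + churning)

palo_alto = dict(new=7, increasing=38, flat=48, decreasing=5, churning=2)
print(net_score(**palo_alto))  # -> 38, matching the ~38% net score cited
```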
Peter Del Vecchio, Broadcom and Armando Acosta, Dell Technologies | SuperComputing 22
(upbeat music) (logo swooshing) >> Good morning and welcome back to Dallas, ladies and gentlemen, we are here with theCUBE Live from Supercomputing 2022. David, my cohost, how are you doing? Exciting, day two, feeling good? >> Very exciting. Ready to start off the day. >> Very excited. We have two fascinating guests joining us to kick us off. Please welcome Pete and Armando. Gentlemen, thank you for being here with us. >> Thank you for having us. >> Thank you for having us. >> I'm excited that you're starting off the day because we've been hearing a lot of rumors about Ethernet as the fabric for HPC, but we really haven't done a deep dive yet during the show. You all seem all in on Ethernet. Tell us about that. Armando, why don't you start? >> Yeah, I mean, when you look at Ethernet, customers are asking for flexibility and choice. So when you look at HPC, InfiniBand's always been around, right? But when you look at where Ethernet's coming in, it's really the commercial and enterprise customers. And not everybody wants to be in the top 500; what they want to do is improve their job time and improve their latency over the network. And when you look at Ethernet, you kind of look at the sweet spot between 8, 12, 16, 32 nodes, that's a perfect fit for Ethernet in that space and those types of jobs. >> I love that. Pete, you want to elaborate? >> Yeah, sure. I mean, I think one of the biggest things you find with Ethernet for HPC is that, if you look at where the different technologies have gone over time, you've had old technologies like ATM, SONET, FDDI, and pretty much everything has now kind of converged toward Ethernet. I mean, there's still some technologies such as InfiniBand, Omni-Path, that are out there. But basically, they're single source at this point. So what you see is that there is a huge ecosystem behind Ethernet. And you see also the fact that Ethernet is used in the rest of the enterprise, is used in the cloud data centers, so that it is very easy to integrate HPC based systems into those systems. So as you move HPC out of academia into enterprise, into cloud service providers, it's much easier to integrate it with the same technology you're already using in those data centers, in those networks. >> So what's the state of the art for Ethernet right now? What's the leading edge? What's shipping now and what's in the near future? You're with Broadcom, you guys designed this stuff. >> Pete: Yeah. >> Savannah: Right. >> Yeah, so leading edge right now, got a couple things-- >> Savannah: We love a good stage prop here on theCUBE. >> Yeah, so this is Tomahawk 4. So this is what is in production, it's shipping in large data centers worldwide. We started sampling this in 2019, started going into data centers in 2020. And this is 25.6 terabits per second. >> David: Okay. >> Which matches any other technology out there. Like if you look at, say, InfiniBand, the highest they have right now that's just starting to get into production is 25.6 T. So state of the art right now is what we introduced, we announced this in August. This is Tomahawk 5, so this is 51.2 terabits per second. So double the bandwidth of any other technology that's out there. And the important thing about networking technology is when you double the bandwidth, you don't just double the efficiency, it actually winds up being a factor of six in efficiency. >> Savannah: Wow. >> 'Cause if you want, I can go into that, but... >> Why not?
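Pete quotes a factor-of-six efficiency gain from doubling switch bandwidth. One plausible place a more-than-2x payoff can come from, purely as our illustration and not necessarily the calculation he has in mind, is radix: at a fixed 400 gig port speed, a 51.2 Tb/s chip has twice the ports of a 25.6 Tb/s chip, which shrinks the switch count and raises the ceiling of a simple two-tier leaf-spine fabric.

```python
# Illustrative only: one way doubling switch bandwidth pays back more than
# 2x. At a fixed 400 gig port speed, 25.6 Tb/s is a 64-port switch and
# 51.2 Tb/s is a 128-port switch. Compare a simple non-blocking two-tier
# leaf-spine fabric built from each. This is our sketch of the radix
# effect, not necessarily the factor-of-six math quoted above.
import math

def two_tier_switches(end_nodes, radix):
    assert end_nodes <= radix * radix // 2, "cluster needs a third tier"
    leaves = math.ceil(end_nodes / (radix // 2))   # half of each leaf faces servers
    spines = math.ceil(leaves * (radix // 2) / radix)
    return leaves + spines

for radix in (64, 128):
    print(radix,
          "max two-tier nodes:", radix * radix // 2,
          "switches for 2,048 nodes:", two_tier_switches(2048, radix))
# 64-port:  max 2,048 nodes, 96 switches for a 2,048-node cluster
# 128-port: max 8,192 nodes, 48 switches for the same cluster
```

So the same cluster needs roughly half the switches, with fewer hops and optics, on top of the 2x per-port bandwidth.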
Well, what I want to know, please tell me that in your labs, you have a poster on the wall that says T five, with some like Terminator kind of character. (all laughs) 'Cause that would be cool. If it's not true, just don't say anything. I'll just... >> Pete: This can actually shift into a terminator. >> Well, so this is from a switching perspective. >> Yeah. >> When we talk about the end nodes, when we talk about creating a fabric, what's the latest in terms of, well, the NICs that are going in there, what speed are we talking about today? >> So as far as SerDes speeds, it tends to be 50 gigabits per second. >> David: Okay. >> Moving to a hundred gig PAM-4. >> David: Okay. >> And we do see a lot of NICs in the 200 gig Ethernet port speed. So that would be four lanes, 50 gig. But we do see that advancing to 400 gig fairly soon, 800 gig in the future. But I'd say state of the art right now, what we're seeing for the end node tends to be 200 gig E based on 50 gig PAM-4. >> Wow. >> Yeah, that's crazy. >> Yeah, that is great. My mind is actively blown. I want to circle back to something that you brought up a second ago, which I think is really astute. When you talked about HPC moving from academia into enterprise, you're both seeing this happen, where do you think we are on the adoption curve and sort of in that cycle? Armando, do you want to go? >> Yeah, well, if you look at the market research, they're actually telling you it's 50/50 now. So Ethernet is at the level of 50%, InfiniBand's at 50%, right? >> Savannah: Interesting. >> Yeah, and so what's interesting to us, customers are coming to us and saying, hey, we want to see flexibility and choice and, hey, let's look at Ethernet and let's look at InfiniBand. But what is interesting about this is that we're working with Broadcom, we have their chips in our lab, we have their switches in our lab. And really what we're trying to do is make it simple and easy to configure the network for, essentially, MPI. And so the goal here with our validated designs is really to simplify this. So if you have a customer that says, hey, I've been on InfiniBand but now I want to go Ethernet, there's going to be some learning curves there. And so what we want to do is really simplify that so that we can make it easy to install, get the cluster up and running, and they can actually get some value out of the cluster. >> Yeah, Pete, talk about that partnership. What does that look like? I mean, are you working with Dell before the T six comes out? Or do you just say what would be cool is we'll put this in the T six? >> No, we've had a very long partnership, both on the hardware and the software side. Dell's been an early adopter of our silicon. We've worked very closely on SAI and SONiC on the operating system, and they provide very valuable feedback for us on our roadmap. So before we put out a new chip, and we have actually three different product lines within the switching group within Broadcom, we've then gotten very valuable feedback on the hardware and on the APIs, on the operating system that goes on top of those chips. So that way when it comes to market, Dell can take it and deliver the exact features that they have in the current generation to their customers to have that continuity. And also they give us feedback on the next gen features they'd like to see again, in both the hardware and the software.
Look, you start talking about the largest supercomputers, most powerful supercomputers that exist today, and you start looking at the specs and there might be two million CPUs, 2 million CPU cores. Exoflap of performance. What are the outward limits of T five in switches, building out a fabric, what does that look like? What are the increments in terms of how many... And I know it's a depends answer, but how many nodes can you support in a scale out cluster before you need another switch? Or what does that increment of scale look like today? >> Yeah, so this is 51.2 terabytes per second. Where we see the most common implementation based on this would be with 400 gig Ethernet ports. >> David: Okay. >> So that would be 128, 400 gig E ports connected to one chip. Now, if you went to 200 gig, which is kind of the state of the art for the nicks, you can have double that. So in a single hop, you can have 256 end nodes connected through one switch. >> Okay, so this T five, that thing right there, (all laughing) inside a sheet metal box, obviously you've got a bunch of ports coming out of that. So what's the form factor look like for where that T five sits? Is there just one in a chassis or you have.. What does that look like? >> It tends to be pizza boxes these days. What you've seen overall is that the industry's moved away from chassis for these high end systems more towardS pizza boxes. And you can have composable systems where, in the past you would have line cards, either the fabric cards that the line cards are plug into or interfaced to. These days what tends to happen is you'd have a pizza box and if you wanted to build up like a virtual chassis, what you would do is use one of those pizza boxes as the fabric card, one of them as the line card. >> David: Okay. >> So what we see, the most common form factor for this is they tend to be two, I'd say for North America, most common would be a 2RU, with 64 OSFP ports. And often each of those OSFP, which is an 800 gig E or 800 gig port, we've broken out into two 400 gig ports. >> So yeah, in 2RU, and this is all air cooled, in 2RU, you've got 51.2 T. We do see some cases where customers would like to have different optics and they'll actually deploy 4RU, just so that way they have the phase-space density. So they can plug in 128, say QSFP 112. But yeah, it really depends on which optics, if you want to have DAK connectivity combined with optics. But those are the two most common form factors. >> And Armando, Ethernet isn't necessarily Ethernet in the sense that many protocols can be run over it. >> Right. >> I think I have a projector at home that's actually using Ethernet physical connections. But, so what are we talking about here in terms of the actual protocol that's running over this? Is this exactly the same as what you think of as data center Ethernet, or is this RDMA over converged Ethernet? What Are we talking about? >> Yeah, so RDMA, right? So when you look at running, essentially HPC workloads, you have the NPI protocol, so message passing interface, right? And so what you need to do is you may need to make sure that that NPI message passing interface runs efficiently on Ethernet. And so this is why we want to test and validate all these different things to make sure that that protocol runs really, really fast on Ethernet. 
If you look at MPI, officially it was built to, hey, it was designed to run on InfiniBand, but now, with the great work Broadcom is doing, we can make that work on Ethernet and get the same performance, so that's huge for customers. >> Both of you get to see a lot of different types of customers. I kind of feel like you're a little bit of a looking-into-the-crystal-ball type, because you essentially get to see the future, knowing what people are trying to achieve moving forward. Talk to us about the future of Ethernet in HPC in terms of AI and ML, where do you think we're going to be next year or 10 years from now? >> You want to go first or you want me to go first? >> I can start, yeah. >> Savannah: Pete feels ready. >> So what I see, I mean, with Ethernet, starting off on the switch side, is that we've consistently doubled the bandwidth every 18 to 24 months. >> That's impressive. >> Pete: Yeah. >> Nicely done, casual, humble brag there. That was great, I love that. I'm here for you. >> I mean, I think that's one of the benefits of Ethernet, is the ecosystem, the trajectory, the roadmap we've had. I mean, you don't see that in any other networking technology. >> David: Moore who? (all laughing) >> So I see that trajectory continuing as far as the switches doubling in bandwidth. I think the protocols are evolving, especially again, as you're moving away from academia into the enterprise, into cloud data centers, you need to have a combination of protocols. So you'll probably focus still on RDMA for the supercomputing, the AI/ML workloads. But we do see that as you have a mix of the applications running on these end nodes, maybe they're interfacing to the CPUs for some processing, you might use a different mix of protocols. So I'd say it's going to be doubling of bandwidth over time, evolution of the protocols. I mean, I expect that RoCE is probably going to evolve over time depending on the AI/ML and the HPC workloads. I think also there's a big change coming as far as the physical connectivity within the data center. Like one thing we've been focusing on is co-packaged optics. So right now, with this chip, all the balls on the back here, those are electrical connections. >> How many are there, by the way? 9,000 plus on the back of that-- >> 9,352. >> I love how specific it is. It's brilliant. >> Yeah, so right now, all the SerDes, all the signals are coming out electrically, but we've actually shown, we actually have a version of Tomahawk 4 at 25.6 T that has co-packaged optics. So instead of having electrical output, you actually have optics directly out of the package. And we'll have a version of Tomahawk 5. >> Nice. >> Where it's actually even a smaller form factor than this, where instead of having the electrical output from the bottom, you actually have fibers that plug directly into the sides. >> Wow. Cool. >> So I see the bandwidth, the radix increasing, protocols, different physical connectivity. So I think there's a lot of things throughout, and the protocol stack's also evolving. So a lot of excitement, a lot of new technology coming to bear. >> Okay, you just threw a carrot down the rabbit hole. I'm only going to chase this one, okay? >> Peter: All right. >> So I think of individual discrete physical connections to the back of those balls. >> Yeah. >> So if there's 9,000, fill in the blank, that's how many connections there are.
How do you do that many optical connections? What's the mapping there? What does that look like? >> So what we've announced for Tomahawk 5 is it would have FR4 optics coming out. So you'd actually have 512 fiber pairs coming out. So basically on all four sides, you'd have these fiber ribbons that come in and connect. There's actually fibers coming out of the sides there. We wind up having, actually, I think in this case, we would actually have 512 channels and it would wind up being on 128 actual fiber pairs because-- >> It's miraculous, essentially. >> Savannah: I know. >> Yeah. So a lot of people are going to be looking at this and thinking in terms of InfiniBand versus Ethernet. I think you've highlighted some of the benefits of specifically running Ethernet moving forward, as HPC, which sort of just trails slightly behind supercomputing as we define it, becomes more pervasive, AI/ML. What are some of the other things that maybe people might not immediately think about when they think about the advantages of running Ethernet in that environment? Is it about connecting the HPC part of their business into the rest of it? What are the advantages? >> Yeah, I mean, that's a big thing. I think, and one of the biggest things that Ethernet has again, is that the data centers, the networks within enterprises, within clouds right now are run on Ethernet. So now, if you want to add services for your customers, the easiest thing for you to do is drop in clusters that are connected with the same networking technology. So I think one of the biggest things there is that if you look at what's happening with some of the other proprietary technologies, I mean, in some cases they'll have two different types of networking technologies before they interface to Ethernet. So now you've got to train your technicians, you train your sysadmins on two different network technologies. You need to have all the debug technology, all the interconnect for that. So here, the easiest thing is you can use Ethernet, it's going to give you the same performance and actually, in some cases, we've seen better performance than we've seen with Omni-Path, better than with InfiniBand. >> That's awesome. Armando, we didn't get to you, so I want to make sure we get your future hot take. Where do you see the future of Ethernet here in HPC? >> Well, Pete hit on a big thing, which is bandwidth, right? So when you look at training a model, okay? When you go and train a model in AI, you need to have a lot of data in order to train that model, right? So what you do is essentially, you build a model, you choose whatever neural network you want to utilize. But if you don't have a good data set that's trained over that model, you can't essentially train the model. So if you have bandwidth, you want big pipes, because you have to move that data set from the storage to the CPU. And essentially, if you're going to do it maybe on CPU only, but if you do it on accelerators, well, guess what? You need a big pipe in order to get all that data through. And here's the deal, the bigger the pipe you have, the more data, the faster you can train that model. So the faster you can train that model, guess what? The faster you get to some new insight, maybe it's a new competitive advantage, maybe it's some new way you design a product, but that's a benefit of speed, you want faster, faster, faster. >> It's all about making it faster and easier-- for the users. >> Armando: It is. >> I love that.
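(Another quick aside: Armando's bigger-pipe argument is easy to make concrete. The sketch below is purely illustrative, not a benchmark; the 100 terabyte dataset is a made-up number, and the math ignores protocol overhead, storage throughput and congestion, all of which matter in practice.)

```python
# Rough time-to-stage-a-training-dataset estimate at different link speeds.
# Illustrative only: ignores protocol overhead, storage limits and congestion.

def transfer_hours(dataset_tb: float, link_gbps: float) -> float:
    """Hours to move dataset_tb terabytes over a link running at link_gbps Gb/s."""
    bits = dataset_tb * 8e12            # terabytes -> bits
    seconds = bits / (link_gbps * 1e9)  # bits / (bits per second)
    return seconds / 3600

DATASET_TB = 100  # hypothetical 100 TB training set
for gbps in (100, 200, 400, 800):
    print(f"{gbps:>3} GbE: ~{transfer_hours(DATASET_TB, gbps):.1f} hours")

# Each doubling of the port speed roughly halves the staging time,
# which is the faster-time-to-insight point being made above.
```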
Last question for you, Pete, just because you've said Tomahawk seven times, and I'm thinking we're in Texas, steaks, there's a lot going on with that. >> Making me hungry. >> I know, exactly. I'm sitting out here thinking, man, I did not have a big enough breakfast. How did you come up with the name Tomahawk? >> So Tomahawk, I think it just came from a list. So we have a Trident product line. >> Savannah: Ah, yes. >> Which is a missile product line. And Tomahawk is kind of like the bigger and badder missile, so. >> Savannah: Love this. Yeah, I mean-- >> So do you let your engineers name it? >> Had to ask. >> It's collaborative. >> Okay. >> We want to make sure everyone's in sync with it. >> So it's just not the Aquaman trident. >> Right. >> It's the steak Tomahawk. I think we're good now. >> Now that we've cleared that-- >> Now we've cleared that up. >> Armando, Pete, it was really nice to have you both. Thank you for teaching us about the future of Ethernet and HPC. David Nicholson, always a pleasure to share the stage with you. And thank you all for tuning in to theCUBE live from Dallas. We're here talking all things HPC and supercomputing all day long. We hope you'll continue to tune in. My name's Savannah Peterson, thanks for joining us. (soft music)
Breaking Analysis: Snowflake caught in the storm clouds
>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> A better than expected earnings report in late August got people excited about Snowflake again, but the negative sentiment in the market has weighed heavily on virtually all growth tech stocks, and Snowflake is no exception. As we've stressed many times, the company's management is on a long-term mission to dramatically simplify the way organizations use data. Snowflake is tapping into a multi-hundred-billion-dollar total available market and continues to grow at a rapid pace. In our view, Snowflake is embarking on its third major wave of innovation, data apps, while its first and second waves are still bearing significant fruit. Now for short-term traders focused on the next 90 or 180 days, that probably doesn't matter. But those taking a longer view are asking, "Should we still be optimistic about the future of this high flyer or is it just another overhyped tech play?" Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. Snowflake's quarter just ended. And in this Breaking Analysis we take a look at the most recent survey data from ETR to see what clues and nuggets we can extract to predict the near-term future and the long-term outlook for Snowflake, which is going to announce its earnings at the end of this month. Okay, so you know the story. If you've been an investor in Snowflake this year, it's been painful. We said at IPO, "If you really want to own this stock on day one, just hold your nose and buy it." But like most IPOs, we said there would likely be a better entry point in the future, and not surprisingly that's been the case. Snowflake IPOed at a price of 120, which you couldn't touch on day one unless you got in on a friends and family deal. And if you did, you're still up 5% or so. So congratulations. But at one point last year you were up well over 200%. That's been the nature of this volatile stock, and I certainly can't help you with the timing of the market. But longer term, Snowflake is targeting $10 billion in revenue for fiscal year 2028. A big number. Is it achievable? Is it big enough? Tell you what, let's come back to that. Now shorter term, our expert trader and Breaking Analysis contributor Chip Simonton said he got out of the stock a while ago after having taken a shot at what turned out to be a bear market rally. He pointed out that the stock had been bouncing around the 150 level for the last few months and broke that to the downside last Friday. So he'd expect 150 is where the stock is going to find resistance on the way back up, but there's no sign of support right now. He said maybe at 120, which was the July low and of course the IPO price that we just talked about. Now, perhaps earnings will be a catalyst when Snowflake announces on November 30th, but until the mentality toward growth tech changes, nothing's likely to change dramatically, according to Simonton. So now that we have that out of the way, let's take a look at the spending data for Snowflake in the ETR survey. Here's a chart that shows the time series breakdown of Snowflake's net score going back to the October 2021 survey. Now at that time, Snowflake's net score stood at a robust 77%. And remember, net score is a measure of spending velocity. It's a proprietary metric that ETR derives from a quarterly survey of IT buyers, asking the respondents, "Are you adopting the platform new? Are you spending 6% or more?
Is your spending flat? Is your spending down 6% or worse? Or are you leaving the platform, decommissioning it?" You subtract the percentage of customers that are spending less or churning from those that are spending more or adopting, and you get a net score. And that's expressed as a percentage of customers responding. In this chart we show Snowflake's Ns out of the total survey, which ranges between 1,200 and 1,400 respondents each quarter. And in the very last row, we show the number of Snowflake respondents coming into the survey from the Fortune 500 and the Global 2000. Those are two very important Snowflake constituencies. Now what this data tells us is that Snowflake exited 2021 with very strong momentum and a net score of 82%, which is off the charts, and it was actually accelerating from the previous survey. Now by April that sentiment had flipped and Snowflake came down to earth with a 68% net score. Still highly elevated relative to its peers, but meaningfully down. Why was that? Because we saw a drop in new adds and an increase in flat spend. Then into the July and most recent October surveys, you saw a significant drop in the percentage of customers that were spending more. Now, notably, the percentage of customers who are contemplating adding the platform is actually staying pretty strong, but it is off a bit this past survey. And combined with a slight uptick in planned churn, net score is now down to 60%. That uptick, from 0% to 1% and then 3%, is still small, but that net score at 60% is still 20 percentage points higher than our highly elevated benchmark of 40%, as you recall from listening to earlier Breaking Analysis episodes. That 40% range is what we consider a milestone. Anything above that is actually quite strong. But again, Snowflake is down. And coming back to churn, while 3% churn is very low, in previous quarters we've seen Snowflake at 0% or 1% decommissions. Now the last thing to note in this chart is the meaningful uptick in survey respondents citing that they're using the Snowflake platform. That's up to 212 in the survey. So look, it's hard to imagine that Snowflake doesn't feel the softening in the market like everyone else. Snowflake is guiding for around 60% growth in product revenue against the tough compare from a year ago with a 2% operating margin. So like every company, the reaction of the street is going to come down to how accurate or conservative the guide is from their CFO. Now, earlier this year, Snowflake acquired a company called Streamlit for around $800 million. Streamlit is an open source Python library and it makes it easier to build data apps with machine learning, obviously a huge trend. And like Snowflake, generally its focus is on simplifying the complex, in this case making data science easier to integrate into data apps that business people can use. So we were excited this summer in the July ETR survey to see that they added some nice data on Streamlit, which we're showing here in comparison to Snowflake's core business on the left hand side. That's the data warehousing; the Streamlit piece is on the right hand side. And we show again net score over time from the previous survey for Snowflake's core database and data warehouse offering, again on the left, as compared to Streamlit on the right. Snowflake's core product had 194 responses in the October '22 survey; Streamlit had an N of 73, which is up from 52 in the July survey.
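Before we go further, to make the net score arithmetic concrete, here's a minimal sketch of the calculation as described above. The five survey buckets are ETR's; the example percentages below are hypothetical, chosen only to land near the Snowflake figures we discussed, so treat them as an illustration rather than actual survey output.

```python
# Minimal sketch of the ETR net score calculation described above.
# Buckets: adopting new, spending 6% or more, flat, down 6% or worse, churning.
# The example percentages are hypothetical, not actual ETR survey output.

def net_score(adopting, more, flat, less, churn):
    """Net score = (% adopting + % spending more) - (% spending less + % churning)."""
    total = adopting + more + flat + less + churn
    assert abs(total - 100) < 1e-6, "buckets should sum to 100% of respondents"
    return (adopting + more) - (less + churn)

# A hypothetical distribution for a highly elevated name:
print(net_score(adopting=20, more=45, flat=30, less=2, churn=3))  # prints 60

# Anything above the 40% line is what we consider highly elevated; the decline
# discussed above comes from fewer customers in the "spending more" bucket,
# not from a spike in churn.
```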
So a significant uptick of people responding that they're adopting Streamlit. That was pretty impressive to us. And it's hard to see, but the net score stayed pretty constant for Streamlit at 51%. It was 52%, I think, in the previous quarter, well over that magic 40% mark. But when you blend it with Snowflake, it does sort of bring things down a little bit. Now there are two key points here. One is that the acquisition seems to have gained exposure right out of the gate, as evidenced by the large number of responses. And two, the spending momentum. Again, while it's lower than Snowflake overall, and when you blend it with Snowflake it does pull it down, it's very healthy and steady. Now let's do a little peer comparison with some of our favorite names in this space. This chart shows net score or spending velocity on the Y-axis, and overlap or presence, pervasiveness if you will, in the data set on the X-axis. That red dotted line again is that 40% highly elevated net score that we like to talk about. And the inserted table informs us as to how the companies are plotted, where the dots sit, the net scores, the Ns. And we're comparing a number of database players, although just a caution, Oracle includes all of Oracle, including its apps. But we just put it in there for reference because it is the leader in database. Right off the bat, Snowflake jumps out with a net score of 64%. The 60% from the earlier chart, again, included Streamlit. So you can see its core database, data warehouse business actually is higher than the total company average that we showed you before, 'cause Streamlit is blended in there. So when you separate it out, Streamlit is right on top of Databricks. Isn't that ironic? Only Snowflake and Databricks in this selection of names are above the 40% level. You see Mongo and Couchbase, you know, they're solid, and Teradata Cloud actually showing pretty well compared to some of the earlier survey results. Now let's isolate on the database data platform sector and see how that shapes up. And for this analysis, same XY dimensions, we've added the big giants, AWS and Microsoft and Google. And notice that those three plus Snowflake are just at or above the 40% line. Snowflake continues to lead by a significant margin in spending momentum and it keeps creeping to the right. That's that N that we talked about earlier. Now here's an interesting tidbit. Snowflake is often asked, and I've asked them myself many times, "How are you faring relative to AWS, Microsoft and Google, these big whales with Redshift and Synapse and Big Query?" And Snowflake has been telling folks that 80% of its business comes from AWS. And when Microsoft heard that, they said, "Whoa, wait a minute, Snowflake, let's partner up." 'Cause Microsoft is smart, and they understand that the market is enormous. And if they could do better with Snowflake, one, they may steal some business from AWS. And two, even if Snowflake is winning against some of the Microsoft database products, if it wins on Azure, Microsoft is going to sell more compute and more storage, more AI tools, more other stuff to these customers. Now AWS is really aggressive from a partnering standpoint with Snowflake. They're openly negotiating, not openly, but they're negotiating better prices. They're realizing that when it comes to data, the cheaper that you make the offering, the more people are going to consume. At scale, economies and operating leverage are really powerful things that kick in at volume.
Now Microsoft, they're coming along, they obviously get it, but Google is seemingly resistant to that type of go-to-market partnership. Rather than lean into Snowflake as a great partner, Google's field force is kind of fighting them. Google itself at Cloud Next heavily messaged what they call the open data cloud, which is a direct rip-off of Snowflake. So what can we say about Google? They continue to be kind of behind the curve when it comes to go-to-market. Now just a brief aside on the competitive posture. I've seen Slootman, Frank Slootman, CEO of Snowflake, in action with his prior companies and how he depositioned the competition. At Data Domain, he eviscerated a company called Avamar with what he called their expensive and slow post-process architecture. I think he actually called it garbage, if I recall, at one conference I heard him speak at. And he sort of destroyed BMC when he was at ServiceNow, kind of positioning them as the equivalent of the Department of Motor Vehicles. And so it's interesting to hear how Snowflake openly talks about the data platforms of AWS, Microsoft, Google, and Databricks. I'll give you this sort of short bumper sticker. Redshift is just an on-prem database that AWS morphed to the cloud, which by the way is kind of true. They actually did a brilliant job of it, but it's basically a fact. Microsoft's Synapse, a collection of legacy databases, which also kind of morphed to run in the cloud. And even Big Query, which is considered cloud native by many if not most, is being positioned by Snowflake as originally an on-prem database to support Google's ad business, maybe. And Databricks is for those people smart enough to get into Berkeley that love complexity. And now Snowflake doesn't, they don't mention Berkeley as far as I know. That's my addition. But you get the point. And the interesting thing about Databricks and Snowflake is a while ago on theCUBE I said that there was a new workload type emerging around data where you have the AWS cloud, Snowflake obviously for the cloud database and Databricks for the data science and ML. You bring those things together and there's this new workload emerging that's going to be very powerful in the future. And it's interesting to see now the aspirations of all three of these platforms are colliding. That's quite a dynamic, especially when you see both Snowflake and Databricks putting venture money in and getting their hooks into the loyalties of the same companies, like dbt Labs and Collibra. Anyway, Snowflake's posture is that we are the pioneer in cloud native data warehouse, data sharing and now data apps. And our platform is designed for business people that want simplicity. The other guys, yes, they're formidable, but we Snowflake have an architectural lead and of course we run in multiple clouds. So it's pretty strong positioning, or depositioning, you have to admit. Now I'm not sure I agree with the Big Query knock completely. I think that's a bit of a stretch, but Snowflake, as we see in the ETR survey data, is winning. So in thinking about the longer term future, let's talk about what's different with Snowflake, where it's headed and what the opportunities are for the company. Snowflake put itself on the map by focusing on simplifying data analytics. What's interesting about that is the company's founders are, as you probably know, from Oracle.
And rather than focusing on transactional data, which is Oracle's sweet spot, the stuff they worked on when they were at Oracle, the founders said, "We're going to go somewhere else. We're going to attack the data warehousing problem and the data analytics problem." And they completely reimagined the database and how it could be applied to solve those challenges, and reimagined what was possible if you had virtually unlimited compute and storage capacity. And of course Snowflake became famous for separating the compute from storage and being able to completely shut down compute so you didn't have to pay for it when you're not using it. And the ability to have multiple clusters hit the same data without making endless copies, and a consumption/cloud pricing model. And then of course everyone on the planet realized, "Wow, that's a pretty good idea." Every venture capitalist in Silicon Valley has been funding companies to copy that move. And that today has pretty much become mainstream and table stakes. But I would argue that Snowflake not only had the lead, but when you look at how others are approaching this problem, it's not necessarily as clean and as elegant. Some of the startups, the early startups, I think get it and maybe had an advantage of starting later, which can be a disadvantage too. But AWS is a good example of what I'm saying here. Its version of separating compute from storage was an afterthought and it's good, it's... Given what they had, it was actually quite clever and customers like it, but it's more of a, "Okay, we're going to tier to storage to lower cost, we're going to sort of dial down the compute, not completely, we're not going to shut it off, we're going to minimize the compute required." It's really not true separation like, for instance, Snowflake has. But having said that, we're talking about competitors with lots of resources and cohort offerings. And so I don't want to make this necessarily all about the product, but all things being equal, architecture matters, okay? So that's the cloud S-curve, the first one we're showing. Snowflake's still on that S-curve, and in and of itself it's got legs, but it's not what's going to power the company to 10 billion. The next S-curve we denote is the multi-cloud in the middle. And now while 80% of Snowflake's revenue is AWS, Microsoft is ramping up and Google, well, we'll see. But the interesting part of that curve is data sharing, and this idea of data clean rooms. I mean it really should be called the data sharing curve, but I have my reasons for calling it multi-cloud. And this is all about network effects and data gravity, and you're seeing this play out today, especially in industries like financial services and healthcare and government that are highly regulated verticals where folks are super paranoid about compliance. They're not going to share data if they're going to get sued for it, if they're going to be on the front page of the Wall Street Journal for some kind of privacy breach. And what Snowflake has done is said, "Put all the data in our cloud." Now, of course that triggers a lot of people because it's a walled garden, okay? It is. That's the trade off. It's not the Wild West, it's not Windows, it's Mac, it's more controlled. But the idea is that as different parts of the organization or even partners begin to share data that they need, it's got to be governed, it's got to be secure, it's got to be compliant, it's got to be trusted.
So Snowflake introduced the idea of, they call these things stable edges. I think that's the term that they use. And they track a metric around stable edges. And so a stable edge, or think of it as a persistent edge, is an ongoing relationship between two parties that lasts for some period of time, more than a month. It's not just a one-shot deal, a one-and-done type of, "Oh, the guys shared it for a day, done. I sent you an FTP, it's done." No, it's got to have trajectory over time. Four weeks or six weeks or some period of time that's meaningful. And that metric is growing. Now there's sort of a different metric that they track. I think around 20% of Snowflake customers are actively sharing data today, and then they track the number of those edge relationships that exist. So that's something that's unique. Because again, most data sharing is all about making copies of data. That's great for storage companies, it's bad for auditors, and it's bad for compliance officers. And that trend is just starting out, that middle S-curve, it's going to kind of hit the base of that steep part of the S-curve and it's going to have legs through this decade, we think. And then finally the third wave that we show here is what we call super cloud. That's why I called it multi-cloud before, so it could invoke super cloud. The idea is that you've built a PaaS layer that is purpose-built for a specific objective, and in this case it's building data apps that are cloud native, shareable and governed. And it's a long-term trend that's going to take some time to develop. I mean, application development platforms can take five to 10 years to mature and gain significant adoption, but this one's unique. This is a critical play for Snowflake. If it's going to compete with the big cloud players, it has to have an app development framework like Snowpark. It has to accommodate new data types like transactional data. That's why it announced this thing called Unistore last June at Snowflake Summit. And the pattern that's forming here is Snowflake is building layer upon layer with its architecture at the core. It's not, currently anyway, going out and saying, "All right, we're going to buy a company that's got another billion dollars in revenue and that's how we're going to get to 10 billion." So it's not buying its way into new markets through revenue. It's actually buying smaller companies that can complement Snowflake and that it can turn into revenue for growth that fits into the data cloud. Now as to the 10 billion by fiscal year 28, is that achievable? That's the question. Yeah, I think so. With the momentum, resources, go-to-market, product and management prowess that Snowflake has? Yes, it's definitely achievable. And one could argue that $10 billion is too conservative. Indeed, Snowflake CFO Mike Scarpelli will fully admit his forecast is built on existing offerings. He's not including revenue, as I understand it, from all the new stuff that's in the pipeline, because he doesn't know what it's going to look like. He doesn't know what the adoption is going to look like. He doesn't have data on that adoption, not just yet anyway. And now of course things can change quite dramatically. It's possible that his forecasts for existing businesses don't materialize, or competition picks them off, or a company like Databricks actually is able in the longer term to replicate the functionality of Snowflake with open source technologies, which would be a very competitive source of innovation.
But in our view, there's plenty of room for growth, the market is enormous, and the real key is, can and will Snowflake deliver on the promises of simplifying data? Of course we've heard this before from data warehouses, data marts and data lakes and master data management and ETL and data movers and data copiers and Hadoop and a raft of technologies that have not lived up to expectations. And we've also, by the way, seen some tremendous successes in the software business with the likes of ServiceNow and Salesforce. So will Snowflake be the next great software name and hit that 10 billion magic mark? I think so. Let's reconnect in 2028 and see. Okay, we'll leave it there today. I want to thank Chip Simonton for his input to today's episode. Thanks to Alex Myerson who's on production and manages the podcast. Ken Schiffman as well. Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our editor in chief over at SiliconANGLE. He does some great editing for us. Check it out for all the news. Remember, all these episodes are available as podcasts. Wherever you listen, just search Breaking Analysis podcast. I publish each week on wikibon.com and siliconangle.com. Or you can email me to get in touch, david.vellante@siliconangle.com. DM me @dvellante or comment on our LinkedIn post. And please do check out etr.ai, they've got the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching, thanks for listening, and we'll see you next time on Breaking Analysis. (upbeat music)
Breaking Analysis: UiPath is a Rocket Ship Resetting its Course
>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Like a marathon runner pumped up on adrenaline, UiPath sprinted to the lead in what is surely going to be a long journey toward enabling the modern automated enterprise. Now, in doing so the company has established itself as a leader in enterprise automation, while at the same time it got out over its skis on critical execution items and it disappointed investors along the way. In our view, the company has plenty of upside potential, but will have to slog through its current challenges, including restructuring its go-to-market, prioritizing investments, balancing growth with profitability, and dealing with a very difficult macro environment. Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, and ahead of Forward 5, UiPath's big customer event, we once again dig into RPA and automation leader UiPath to share our most current data and view of the company's prospects relative to the competition and the market overall. Now, since the pandemic, four sectors have consistently outperformed in the overall spending landscape in the ETR dataset: cloud, containers, machine learning/AI, and robotic process automation. For the first time in a long time, ML/AI and RPA have dropped below the elevated 40% line shown in this ETR graph with the red dotted line. The data here plots the net score or spending momentum for each sector. We put in video conferencing and added it simply to provide height to the vertical axis. Now, you see those squiggly lines, they show the pattern for ML/AI and RPA, and they demonstrate the downward trajectory over time, with only the most current period dropping below the 40% net score mark. While this is not surprising, it underscores one component of the macro headwinds facing all companies generally and UiPath specifically, that is the discretionary nature of certain technology investments. This has been a topic of conversation on theCUBE since the spring, spanning data players like Mongo and Snowflake, the cloud, security, and other sectors. The point is ML/AI and RPA appear to be more discretionary than certain sectors, including cloud. Containers most likely benefit from the fact that much of the activity is spending on internal resources, staff like developers, as much of the action in containers is free and open source. Now, security is not shown on this graphic, but as we reported extensively last week at CrowdStrike's Fal.Con conference, security is somewhat less discretionary than other sectors. Now, as it relates to the big four that we've been highlighting since the pandemic hit, we're starting to see priorities shift from strategic investments like AI and automation to more tactical areas to keep the lights on. UiPath has not been immune to this downward pressure, but the company is still able to show some impressive metrics. Here's a snapshot chart from its investor deck. For the first time, UiPath's ARR has surpassed $1 billion. The company now has more than 10,000 customers, with a large number generating more than $100,000 in ARR. While not shown in this data, UiPath reported this month in its second quarter close that it had 191 customers with $1 million or more in ARR, which is up 13% sequentially from its Q1. As well, the company's NRR is over 130%, which is very solid and underscores the low churn that we've previously reported for the company.
But with that increased ARR comes slower growth. Here's some data we compiled that shows the dramatic growth in ARR, the blue bars, compared with the rapid deceleration in growth, the orange line on the right-hand axis there. For the first time, UiPath's ARR growth dipped below 50% last quarter. Now, we've projected 34% and 25% respectively for the company's Q3 and Q4, which is slightly higher than the upper range of the guidance from UiPath's CFO, Ashim Gupta, on the last earnings call. That still puts UiPath exiting its fiscal year at a 25% ARR growth rate. While it's not unexpected that a company reaching the $1 billion ARR milestone will begin to show slower growth, net new ARR is well off its fiscal year '22 levels. The other, perhaps more concerning, factor is that the company, despite strong 80% gross margins, remains unprofitable and free cash flow negative. New co-CEO Rob Enslin has emphasized the focus on profitability, and we'd like to see a consistent and more disciplined Rule of 40, or Rule of 45 to 50, type of performance going forward. As a result of this decelerating growth, lowered guidance stemming from significant macro challenges including currency fluctuations and weaker demand, especially in Europe, and inconsistent performance, the stock, as shown here, has been on a steady decline. It's facing what all growth stocks are facing, you know, challenges relative to inflation, rising interest rates, and looming recession, but as seen here, UiPath has significantly underperformed relative to the tech-heavy NASDAQ. UiPath has admitted to execution challenges, and it has brought in an expanded management team to facilitate its sales transition and its desire to become a more strategic platform play versus a tactical point product. Now, adding to the challenge of foreign exchange issues, as we've previously reported, unlike most high-flying tech companies from Silicon Valley, UiPath has a much larger proportion of its business coming from locations outside of the United States, around 50% of its revenue, in fact. Because it prices in local currencies, when you convert back to appreciated dollars, there are less of them, and that weighs down on revenue. Now, we asked Breaking Analysis contributor Chip Simonton for his take on this stock, and he told us, "From a technical standpoint, there's really not much you can say, it just looks like a falling knife. It's trading at an all time low, but that doesn't mean it can't go lower. New management with a good product is always a positive with a stock like this, but this is just a bad environment for UiPath and all growth stocks really, and," he added, "95% of money managers have never operated in this type of environment before. So that creates more uncertainty. There will be a bottom, but picking it in this high-inflation, high-interest rate world hasn't worked too well lately. There's really no floor to these stocks that don't have earnings, until you start to trade to cash levels." Well, okay, let's see. UiPath has $1.6 billion in cash on the balance sheet and no debt, so with its current $7 billion valuation, we're a long ways off from that target, the cash value. You have to go back to April 2019 and UiPath's Series D to find a $7 billion valuation. So Simonton says, "The stock still could go lower." The valuation range for this stock has been quite remarkable, from around $50 billion last May to $7 billion today. That's quite a swing. And the spending data from ETR sort of supports this story.
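As a quick aside, the Rule of 40 mentioned above is simple arithmetic: revenue growth rate plus profit or free cash flow margin should total at least 40 (or 45 to 50 for the stricter bar). A minimal sketch, using purely illustrative numbers rather than UiPath's actual figures:

```python
def rule_of_40(growth_pct, margin_pct, threshold=40):
    """Growth rate plus profit/FCF margin, compared against the threshold."""
    score = growth_pct + margin_pct
    return score, score >= threshold

# Illustrative only: 30% growth with a -5% FCF margin falls short of the bar.
print(rule_of_40(30, -5))   # (25, False)
# 35% growth with a +15% margin clears it comfortably.
print(rule_of_40(35, 15))   # (50, True)
```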
This graphic here shows the net score or spending momentum granularity for UiPath. The lime green is new additions to the platform. The forest green is spending 6% or more. The gray is flat spending. The pink is spending down 6% or worse. And the bright red is churn. Subtract the red from the green and you get net score, which is that blue line. The yellow line is pervasiveness within the data set. Now, that yellow line is skewed somewhat because of Microsoft citations. There's a belief from some that competition from Microsoft is the reason for UiPath's troubles, but Microsoft is really delivering RPA for individuals and isn't an enterprise automation platform, at least not today. But it's Microsoft, so you can't discount their presence in the market. And it probably is having some impact, but we think there are many other factors weighing on UiPath. Now, this is data through the July survey, but taking a glimpse at the early October returns, they're trending with the arrows, meaning less green, more gray and red, which is going to lower UiPath's overall net score. That's consistent with the macro headwinds and the business performance that it's been seeing. Now, nonetheless, UiPath continues to get high marks from its customers, and relative to its peers it maintains a leadership position. So this chart from ETR shows net score or spending velocity on the vertical axis, and overlap or presence in the dataset on the horizontal axis. Microsoft continues to have a big presence and, as we mentioned, somewhat skews the data. UiPath has maintained its lead relative to Automation Anywhere on the horizontal axis, and remains ahead of the legacy pack of business process and other RPA vendors. Celonis has popped up in the ETR data set recently as a process mining player and has a pretty high net score. It's a critical space UiPath has entered via its acquisition of ProcessGold back in October 2019. Now, you can also see what we did is we added in the Gartner Magic Quadrant for robotic process automation. We didn't blow it up here, but we circled the position of UiPath. You can see it's leading on both the vertical and the horizontal axis, ahead of Automation Anywhere as well as Microsoft and others. Now, we're still not seeing the likes of SAP, ServiceNow, and Salesforce showing up in the ETR data, but these enterprise software vendors are in a reasonable position to capitalize on automation opportunities within their installed bases. This is why it's so important that UiPath transitions to an enterprise-wide horizontal play that can cut across multiple ERP, CRM, HCM, and service management platforms. While the big software companies can add automation to their respective stovepipes, and they're doing that, UiPath's opportunity is to bring automation that enables enterprises to build on top of and across these SaaS platforms that most companies are running. Now, on the chart, you see the red arrows slanting down. That signifies the expected trend from the upcoming October ETR survey, which is currently in the field and will run through early next month. Suffice it to say that there is downward spending pressure across the board, and we would expect most of these names, including UiPath, to dip below the 40% dotted line. Now, as it relates to the conversation about platform versus product, let's dig into that a bit more. Here's a graphic from UiPath's investor deck that underscores the move from product to platform.
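Before getting into that platform graphic, a quick aside on the net score arithmetic used in these ETR charts, as just described: add the two shades of green, subtract the pink and the bright red, and ignore the flat gray. A minimal sketch with made-up percentages:

```python
def net_score(new_adoption, spend_up, flat, spend_down, churn):
    """ETR-style net score: the greens minus the reds; flat spending is ignored."""
    total = new_adoption + spend_up + flat + spend_down + churn
    assert abs(total - 100) < 1e-6, "survey shares should sum to 100%"
    return (new_adoption + spend_up) - (spend_down + churn)

# Made-up example: 12% new, 40% increasing, 38% flat, 6% decreasing, 4% churn.
print(net_score(12, 40, 38, 6, 4))  # 42 -> above the elevated 40% line
```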
UiPath has expanded its platform from its initial on-prem point product, focused on automating tasks for individuals and back offices, to a cloud-first platform approach. The company has added in technology from a number of acquisitions and added organically to those. These include the previously mentioned ProcessGold for process discovery, process documentation from the acquisition of StepShot, API automation via the acquisition of Cloud Elements, and its more recent acquisition of Re:infer, a natural language processing specialist. Now, we expect the platform to be a big focus of discussion at Forward 5 next week in Las Vegas. So let's close in on our expectations for the three-day event next week at the Venetian. UiPath's user conference has grown over the years, and the Venetian event should by far be the biggest and most heavily attended in the company's history. We expect UiPath to really emphasize the role of automation, specifically in the context of digital transformation, and how UiPath has evolved, again, from point product to platform to support digital transformation. Expect a focus on platform maturity. When UiPath announced its platform intentions back in 2019, which was the last physical face-to-face customer event prior to COVID, it essentially was laying out a statement of direction. And over the past three years, it has matured the platform and taken it from vision to reality. You know, I said the last event, actually, the last event was 2021. Of course, theCUBE was there at the Bellagio in Las Vegas. But prior to that, 2019 is when they laid out that platform vision. Now, in conjunction with this evolution, the company has evolved its partnerships, pairing up with the likes of Snowflake and the data cloud, CrowdStrike, to provide better security, and, of course, the big global system integrators, to help implement enterprise automation. And this is where we expect to hear a lot from customers. I've heard there'll be over 100 speaking at the show about the outcomes and how they're digitally transforming. Now, I mentioned earlier that we haven't seen the big ERP and enterprise software companies show up yet in the ETR data, but believe me, they're out there, they're selling automation and RPA, and they're competing. So expect UiPath to position itself and deposition those companies, positioning UiPath as a layer above these bespoke platforms, shown here on number four, with process discovery and task discovery, building automation across enterprise apps, and operationalizing process workflows as a horizontal play. And I'm sure there'll be some new graphics on this platform that we can share after the event that will emphasize this positioning. And finally, as we showed earlier in the platform discussion, we expect to hear a lot about the new platform capabilities and use cases, and not just RPA, but process mining, testing and test automation, which is a new vector of growth for UiPath, and document processing. And also, we expect UiPath to address its low code development capabilities to expand the number of people in the organization that can create automations. Those domain experts are who we're talking about here, people that deeply understand the business but aren't software engineers. Enabling them is going to be really important, and we expect to hear more about that. And we expect this conference to set the tone for a new chapter in UiPath's history. It's the company's second in-person gathering; the first one was last October.
So really, this is going to be sort of a build upon that. Among in-person events, UiPath was one of the first to bring back its physical event last year, when a lot of people were still concerned about traveling, but we expect this one to be bigger than what was at the Bellagio. UiPath got a lot of customers there, but I think they're going to really up the game in terms of attendance this year. And really, that comparison is unfair because, again, it was sort of the middle of COVID last year. But anyway, we expect a new operations and go-to-market oriented focus from co-CEO Rob Enslin and new sales management, and we're going to be, you know, hearing from them. The so-called adult supervision has really been lacking at UiPath, historically. Daniel Dines will no doubt continue to have a big presence at the event and at the company. He's not a figurehead by any means. He's got a deep understanding of the product and the market, and we'll be interviewing both Daniel and Rob Enslin on theCUBE to find out how they see the future. So tune in next week, or if you're in Las Vegas, definitely stop by theCUBE. If you're not, go to thecube.net, where you'll be able to watch all of our coverage. Okay, we're going to leave it there today. I want to thank Chip Simonton again for his input to today's episode. Thanks to Alex Morrison, who's on production and manages our podcasts, and Ken Schiffman as well, from our Boston office, our Boston studio. Kristen Martin and Cheryl Knight helped get the word out on social media and in our newsletters. And Rob Hof is our editor in chief over at SiliconANGLE, who does some great editing. Thanks all. Remember, these episodes are all available as podcasts wherever you listen. All you got to do is search Breaking Analysis podcast. I publish each week on wikibon.com and siliconangle.com, and you can email me at david.vellante@siliconangle.com or DM me @dvellante. If you got anything interesting, I'll respond. If not, please keep trying. Or comment on my LinkedIn post, and please do check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, and we'll see you next time on Breaking Analysis. (gentle techno music)
Breaking Analysis: How CrowdStrike Plans to Become a Generational Platform
>> From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> In just over 10 years, CrowdStrike has become a leading independent security firm with more than $2 billion in annual recurring revenue, nearly 60% ARR growth, an approximately $40 billion market capitalization, very high retention rates, low churn, and a path to $5 billion in revenue by mid decade. The company has joined Palo Alto Networks as a gold standard, pure play cyber security firm. It has achieved this lofty status with an architecture that goes beyond a point product, with outstanding go-to-market and financial execution, some sharp acquisitions, and an ever increasing total available market. Hello, and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, and ahead of Fal.Con, CrowdStrike's user conference, we take a deeper look into CrowdStrike, its performance, its platform, and survey data from our partner ETR. Now, the general consensus is that spending on cyber is non-discretionary and has held up better than other technology sectors. While this is generally true, as this data shows, it's nuanced. Let's explore this a bit. First, this is a year-to-date chart of the stock performance of CrowdStrike relative to Palo Alto, the BUG ETF, which is a cyber index, the NASDAQ, and SentinelOne, a relatively recent IPO entrant to the public markets. Now, as you can see, the security sector, as evidenced by the orange line, that cyber ETF, is holding up better than the overall NASDAQ, which is off 28% year-to-date. Palo Alto has held up incredibly well, the best, being off only around 4% year-to-date, whereas CrowdStrike is off in the double digits this year. But as we talked about in one of our last Breaking Analysis episodes on cyber, it's up from its lows this past May. Now, CrowdStrike had a very nice beat and raise on August 30th, but the stock didn't respond well initially. We asked Breaking Analysis contributor Chip Simonton for his technical take, and he stated that CrowdStrike has bounced around for the last three months in its current range. He said that cyber stocks have held up better than the rest of the market, as we're showing, and now might be a good time to take a shot, but he is cautious. FedEx had a warning today of a global recession, and that's an obvious cause for concern. You know, maybe some of these quality cyber stocks like Palo Alto and CrowdStrike and Zscaler will outperform in a recession, but that play is not for the faint of heart. In fact, it's feeling like a longer, more drawn out tech lash than many had hoped, perhaps as much as 12 to 18 months of bouncing around with sellers still in control, is generally the sentiment from Simonton. So in terms of cyber spending being non-discretionary, we'd say it's less discretionary than other IT sectors, but the CISO still does not have an open wallet, as we've reported before. We've seen that spending momentum has decelerated in all sectors throughout the year. This is an across the board trend. Now, independent of the stock price, George Kurtz, CEO of CrowdStrike, he's running a marathon, not a sprint, and this company is running at a nice pace despite tough macro headwinds. The company is free cash flow positive and is in the black on a non-GAAP operating profit basis, and yet it's growing ARR at nearly 60%.
Frank Slootman uses the term inherent profitability, meaning that the company could drive more profits if it wanted to dial down expenses, especially go-to-market costs. But that would be a mistake for a company like CrowdStrike, in our opinion. While it has an impressive nearly 20,000 customers, there are hundreds of thousands of customers that CrowdStrike could penetrate. So like Snowflake and Slootman, Kurtz is not taking his foot off the gas. Now, the fundamental strength of CrowdStrike, and its secret sauce, is its architecture and platform, in our view, so let's take a deeper look. CrowdStrike believes that the unstoppable breach is a myth. Now, CISOs don't agree with that because they assume they're going to get breached, but that's CrowdStrike's point of view, so lofty vision. CrowdStrike's mission is to consolidate the patchwork of solutions by introducing modules that go beyond point products. CrowdStrike has more than 20 modules, I think 22, that span a range of capabilities, as shown in this table. Now, there are a few critical aspects of the CrowdStrike architecture that bear mentioning. First is the lightweight agent. That is fundamental. You know, we're used to thinking that agentless is good and agent is bad, but in this case a powerful but small, slim, easy to install and unobtrusive agent has its advantages, because it supports multiple CrowdStrike modules. The second point is that CrowdStrike from the beginning has been dogmatic about getting all the telemetry data into the cloud. It sort of shunned doing bespoke on prem so that all the data could be analyzed. So the more agents that CrowdStrike installs around the world, the more data it has access to, and the better its intelligence. Few companies have access to more data; perhaps Microsoft, given its scale and size, is an exception in that endpoint space. CrowdStrike has developed a purpose-built threat graph and analytics platform that allows it to quickly ingest key telemetry data in near real time and detect not only known malware, that's pretty straightforward, pretty much anybody could do that, but, using machine intelligence, also unknown malware and other potentially malicious behavior, using indicators of attack, or IOAs. Humio is shown here as a company that CrowdStrike bought for around $400 million in early 2021. It's the company's Splunk killer and will serve as an observability platform. It's really starting to take off; that's a great market for them to go after. CrowdStrike, to try to put it into sort of a summary, uses a three pronged approach. First is its next generation antivirus, meaning a SaaS-based solution that can do fast lookups on telemetry data, and that data lives in the cloud. And this leverages CrowdStrike's proprietary threat graph. Now, the second is endpoint detection and response. CrowdStrike sends all endpoint activity to the cloud and can process the data in real time. CrowdStrike EDR allows you to search data history, and it partners with threat intelligence platforms that push data into the CrowdStrike cloud. This increases CrowdStrike's observation space. It also has containment capabilities in EDR to fence off compromised systems. Now, the third leg of the stool is CrowdStrike's world class managed hunting approach.
Like many firms, CrowdStrike has a crack team of experts that is looking at the data, but CrowdStrike's advantage is the amount of data, that observation space we just talked about, and the near real time capabilities of the architecture, thanks to that proprietary database that they've developed. And all this is built in the cloud, so it enables global scale and, of course, agility. Now, let's dig into some of the survey data and take a look at what ETR respondents are saying about the spending momentum for CrowdStrike in context with its peers. Here's a very recent dataset, the preliminary data from the October dataset in ETR's survey. Eric Bradley, ETR's head of strategy, shared it with us; he runs the round tables and is a frequent Breaking Analysis contributor. This is an XY graph with net score, or spending momentum, on the vertical axis and the overlap, or pervasiveness in the survey, on the horizontal axis. That dotted red line at 40% indicates an elevated level of spending velocity. Anything above that we consider really impressive. Note the CrowdStrike progression since the pandemic started. The two notable points are, one, that CrowdStrike has remained consistently above that 40% mark and, two, it has made notable progress to the right. You can see that sort of squiggly line consistently increasing its share, with one little anomaly there in the early days, over a two-year period. The other call out here is Microsoft in the upper right. We circled Microsoft as usual. Microsoft messes up the data because it's such a dominant player and, as referenced earlier, has massive scale and very high quality telemetry from its endpoints. Unlike AWS, Microsoft is a direct competitor of CrowdStrike's. Nonetheless, the sector remains very strong with lots of players. Cyber is a large and expanding TAM with too many point tools that CrowdStrike is well positioned to consolidate, in our view. Now, here's a more narrow view of that same XY graph. What it does is take out Microsoft to kind of normalize the data a bit, and it compares a number of firms that specialize in endpoint along with CrowdStrike, such as Tanium, which also has a lightweight agent, by the way, and appears to be doing pretty well; SentinelOne, which did a relatively recent IPO that took off, though the stock hasn't done as well since, as you saw earlier; Carbon Black, which VMware bought for around $2 billion; and Cylance, which is the BlackBerry pivot. Now, we've also, for context, included Palo Alto and Cisco, because they are major players with a big presence in security and they've got solutions that compete with CrowdStrike. But you can see how CrowdStrike looms large, with a higher net score than these others, although Palo Alto is very impressive, as is Cisco, steady. And CrowdStrike also has a very steady posture, not just a looming presence on that X axis. Let's now take a look at XDR, extended detection and response. XDR is kind of a bit of a buzzword, but CrowdStrike seems to be taking the mantle and trying to sort of own the category and define it, in our view. It's a natural evolution of endpoint detection and response, EDR. In a recent ETR Roundtable hosted by our colleague Eric Bradley, the sentiment among several CIOs is that existing SIEM, security information and event management, platforms are inadequate, and some see XDR as a replacement for, or at least a strong complement to, SIEM. CISOs want a single view of their data. Hmm, you haven't heard that before.
They want help prioritizing potentially high impact breaches, and they want to automate the low level stuff, because the problem is sometimes too much information becomes information overload and you can't prioritize. So they want to consolidate platforms. They want better consistency. They have too many dashboards, too many stovepipes. They have difficulty scaling, and they have inconsistent telemetry data. As one CISO said, and it's a call out here, "If the regulatory requirement isn't there, I absolutely would get rid of my SIEM." So CrowdStrike, we feel, is in a good position to continue to gain share and disrupt this space. And that's what Dave Nicholson and I will be looking for next week when theCUBE is at Fal.Con, CrowdStrike's user conference. We'll be there for two days at the ARIA in Vegas. In addition to CrowdStrike's CEO, we'll hear from government cyber experts, we always hear that at security conferences, and the CEO of Mandiant. Google just the other day closed its $5 billion plus acquisition of Mandiant, which is a threat intelligence expert and MSSP. We're going to hear a lot about MSSPs, by the way. CrowdStrike has a growing MSSP base. We think that's a really interesting sector, because many companies don't have a SOC. As many as 50% of companies in the United States don't have a security operations center. So they need help, and that's where MSSPs come in. At the conference, there'll be a real focus on the Falcon platform, and we expect CrowdStrike to educate the audience on its multiple modules and how to take advantage of the capabilities beyond endpoint. And we'll also be watching for the ecosystem conversations. We saw this at re:Inforce, for example, where CrowdStrike and Okta were presenting together to show how these companies' products complement each other in the marketplace. Sometimes it gets confusing when you hear that CrowdStrike has an identity product. Okta, of course, is the identity specialist. So we'll be helping extract that signal from the noise, because a generational company must have a strong ecosystem. CrowdStrike is evolving, and our belief is that it has some work to do to create a stronger partner flywheel, and we're eager to dig into that next week. So if you're at the event, please do stop by theCUBE and say hello to Dave Nicholson and myself. Okay, we're going to leave it there today. Many thanks to Chip Simonton and Eric Bradley for their input and contributions to today's episode. Thanks to Alex Myerson, who does production and also manages our podcast, and Ken Schiffman as well, in our Boston studios. Kristen Martin and Cheryl Knight help get the word out on social media and our newsletters, and Rob Hof is our editor in chief over at SiliconANGLE. He does some wonderful editing, and I really appreciate that. Remember, all these episodes are available as podcasts wherever you listen, just search Breaking Analysis podcast. I publish each week on wikibon.com and siliconangle.com, and you can email me at david.vellante@siliconangle.com or DM me @dvellante or comment on our LinkedIn post. And please do check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, and we'll see you next time on Breaking Analysis. (upbeat music)
Breaking Analysis: How the cloud is changing security defenses in the 2020s
>> Announcer: From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> The rapid pace of cloud adoption has changed the way organizations approach cybersecurity. Specifically, the cloud is increasingly becoming the first line of cyber defense. As such, along with communicating to the board and creating a security aware culture, the chief information security officer must ensure that the shared responsibility model is being applied properly. Meanwhile, the DevSecOps team has emerged as the critical link between strategy and execution, while audit becomes the free safety, if you will, in the equation, i.e., the last line of defense. Hello, and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis, we'll share the latest data on hyperscale IaaS and PaaS market performance, along with some fresh ETR survey data. And we'll share some highlights and the puts and takes from the recent AWS re:Inforce event in Boston. But first, the macro. It's earnings season, and that's what many people want to talk about, including us. As we reported last week, the macro spending picture is very mixed and weird. Think back to a week ago when SNAP reported. A player like SNAP misses and the Nasdaq drops 300 points. Meanwhile, Intel, the great semiconductor hope for America, misses by a mile, cuts its revenue outlook by 15% for the year, and the Nasdaq was up nearly 250 points just ahead of the close. Go figure. Earnings reports from Meta, Google, Microsoft, ServiceNow, and some others underscored cautious outlooks, especially those exposed to the advertising revenue sector. But at the same time, Apple, Microsoft, and Google were, let's say, less bad than expected, and that brought a sigh of relief. And then there's Amazon, which beat on revenue, beat on cloud revenue, and gave positive guidance. The Nasdaq has seen its best month this month since the isolation economy, which Breaking Analysis contributor Chip Simonton attributes to what he calls an oversold rally. But there are many unknowns that remain. How bad will inflation be? Will the Fed really stop tightening after September? The Senate just approved a big spending bill along with corporate tax hikes, which generally don't favor the economy. And on Monday, August 1st, the market will likely realize that we are in the summer quarter, and there's some work to be done. Which is why it's not surprising that investors sold the Nasdaq at the close today, on Friday. Are people ready to call the bottom? Hmm, some maybe, but there's still lots of uncertainty. However, the cloud continues its march, despite some very slight deceleration in growth rates from the two leaders. Here's an update of our big four IaaS quarterly revenue data. The big four hyperscalers will account for $165 billion in revenue this year, slightly lower than what we had last quarter. We expect AWS to surpass $83 billion this year in revenue. Azure will be more than 2/3rds the size of AWS, a milestone for Microsoft. Both AWS and Azure came in slightly below our expectations, but still with very solid growth at 33% and 46% respectively. GCP, Google Cloud Platform, is the big concern. By our estimates, GCP's growth rate decelerated from 47% in Q1 and was 38% this past quarter. The company is struggling to keep up with the two giants.
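As a quick back-of-the-envelope check on those figures, here's the arithmetic. The numbers below are the episode's rounded estimates, not official company reporting, and the implied remainder for GCP and Alibaba is our own inference from them.

```python
# Back-of-the-envelope check on the big four IaaS estimates cited above.
# These are rounded estimates from the episode, not official reporting.
big_four_total = 165.0        # $B, the four hyperscalers combined this year
aws = 83.0                    # $B, AWS estimate
azure_floor = aws * (2 / 3)   # "more than 2/3rds the size of AWS" -> ~$55B floor

implied_rest = big_four_total - aws - azure_floor  # implied GCP + Alibaba combined
print(f"Azure floor: ~${azure_floor:.0f}B, implied GCP + Alibaba: ~${implied_rest:.0f}B")
```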
Remember, both GCP and Azure play a shell game and hide the ball on their IaaS numbers, so we have to use survey data and other means of estimating. But this is how we see the market shaping up in 2022. Now, before we leave the overall cloud discussion, here's some ETR data that shows the net score or spending momentum granularity for each of the hyperscalers. These bars show the breakdown for each company, with net score on the right and, in parentheses, net score from last quarter. Lime green is new adoptions, forest green is spending up 6% or more, the gray is flat, pink is spending down 6% or worse, and the bright red is replacement or churn. Subtract the reds from the greens and you get net score. One note: this is for each company's overall portfolio, so it's not just cloud. So it's a bit of a mixed bag, but there are a couple points worth noting. First, anything above 40%, or 40 as shown here in the chart, is considered elevated. AWS, as you can see, is well above that 40% mark, as is Microsoft. And if you isolate Microsoft's Azure, only Azure, it jumps above AWS's momentum. Google is just barely hanging on to that 40 line, and Alibaba is well below, with both Google and Alibaba showing much higher replacements, that bright red. But here's the key point. AWS and Azure have virtually no churn, no replacements in that bright red. And all four companies are experiencing single-digit numbers in terms of decreased spending within customer accounts. People may be moving some workloads back on-prem selectively, but repatriation is definitely not a trend to bet the house on, in our view. Okay, let's get to the main subject of this Breaking Analysis. TheCUBE was at AWS re:Inforce in Boston this week, and we have some observations to share. First, we had keynotes from Steven Schmidt, who used to be the chief information security officer at Amazon Web Services; now he's the CSO, the chief security officer, of Amazon overall. He dropped the I in his title. CJ Moses is the CISO for AWS. Kurt Kufeld of AWS also spoke, as did Lena Smart, who's the MongoDB CISO, and she keynoted and also came on theCUBE. We'll go back to her in a moment. The key point Schmidt made, one of them anyway, was that Amazon sees more data points in a day than most organizations see in a lifetime. Actually, it adds up to quadrillions over a fairly short period of time, I think it was within a month. That's quadrillion, with 15 zeros, by the way. Now, there was drill down focus on data protection and privacy; governance, risk, and compliance, GRC; identity, a big, big topic both within AWS and the ecosystem; network security; and threat detection. Those are the five really highlighted areas. Re:Inforce is really about bringing a lot of best practice guidance to security practitioners, like how to get the most out of AWS tooling. Schmidt made a very strong statement. He said, "I can assure you with 100% certainty that single controls and binary states will absolutely, positively fail." Hence the importance, of course, of layered security. We heard a little bit of chat about getting ready for the future and skating to the security puck, where quantum computing threatens to hack all of the existing cryptographic algorithms, and how AWS is trying to get in front of all that; a new set of algorithms has come out that AWS is testing. And, you know, we'll talk about that maybe in the future, but that's a ways off.
And by its prominent presence, the ecosystem was there in force to talk about its role in filling the gaps and picking up where AWS leaves off. We heard a little bit about ransomware defense, but surprisingly, at least in the keynotes, no discussion about air gaps, which we've talked about in previous Breaking Analysis episodes as a key factor. We heard a lot about services to help with threat detection and container security and DevOps, et cetera, but there really wasn't a lot of specific talk about how AWS is simplifying the life of the CISO. Now, maybe it's inherently assumed, as AWS did a good job stressing that security is job number one, and it's very credible and believable on that front. But you have to wonder if the world is getting simpler or more complex with cloud. And, you know, you might say, "Well, Dave, come on, of course it's better with cloud." But look, attacks are up, the threat surface is expanding, and new exfiltration records are being set every day. I think the hard truth is, the cloud is driving businesses forward and accelerating digital, and those businesses are now exposed more than ever. And that's why security has become such an important topic to boards and throughout the entire organization. Now, the other epiphany that we had at re:Inforce is that there are new layers and a new trust framework emerging in cyber. Roles are shifting, and as a direct result of the cloud, things are changing within organizations. And this first hit me in a conversation with long-time cyber practitioner, Wikibon colleague from our early Wikibon days, and friend, Mike Versace. And I spent two days testing the premise that Michael and I talked about. And here's an attempt to put that conversation into a graphic. The cloud is now the first line of defense. AWS specifically, but hyperscalers generally, provide the services, the talent, the best practices, and the automation tools to secure infrastructure and their physical data centers. And they're really good at it. The security inside of hyperscaler clouds is best of breed, it's world class. And that first line of defense does take some of the responsibility off of CISOs, but they have to understand and apply the shared responsibility model, where the cloud provider leaves it to the customer, of course, to make sure that the infrastructure they're deploying is properly configured. So in addition to creating a cyber aware culture and communicating up to the board, the CISO has to ensure compliance with and adherence to the model. That includes attracting and retaining the talent necessary to succeed. Now, on the subject of building a security culture, listen to this clip on one of the techniques that Lena Smart, remember, she's the CISO of MongoDB, uses to foster awareness and build security cultures in her organization. Play the clip. >> Having the Security Champion program, so that's just, it's like one of my babies. That, and helping underrepresented groups in MongoDB kind of get on in the tech world, are both really important to me. And so the Security Champion program is purely voluntary. We have over 100 members. And these are people, there's no bar to join, you don't have to be technical. If you're an executive assistant who wants to learn more about security, like my assistant does, you're more than welcome. We actually have people grade themselves when they join us. We give them a little tick box, like five is "I walk on security water," one is "I can spell security, but I'd like to learn more."
Mixing those groups together has been game-changing for us. >> Now, the next layer is really where it gets interesting. DevSecOps, you know, we hear about it all the time, shifting left. It implies designing security into the code at the dev level. Shift left and shield right is the kind of buzz phrase. But it's getting more and more complicated. So there are layers within the development cycle, i.e., securing the container, so the app code can't be threatened by backdoors or weaknesses in the containers. Then securing the runtime to make sure the code is maintained and compliant. Then the DevOps platform, so that change management doesn't create gaps and exposures and screw things up. And this is just for the application security side of the equation. What about the network and implementing zero trust principles, and securing endpoints, and machine to machine and human to app communication? So there's a lot of burden being placed on the DevOps team, and they have to partner with the SecOps team to succeed. Those guys are not security experts. And finally, there's audit, which is the last line of defense, or what I called at the open the free safety, for you football fans. They have to do more than just tick the box for the board. That doesn't cut it anymore. They really have to know their stuff and make sure that what they sign off on is real. And then you throw ESG into the mix, which is becoming more important, making sure the supply chain is green and also secure. So you can see, while much of this stuff has been around for a long, long time, the cloud is accelerating innovation and the pace of delivery. And so much is changing as a result. Now, next, I want to share a graphic that we shared last week, but with a little different twist. It's an XY graphic with net score or spending velocity on the vertical axis and overlap or presence in the dataset on the horizontal, with that magic 40% red line as shown. Okay, I won't dig into the data and draw conclusions 'cause we did that last week, but there are two points I want to make. First, look at Microsoft in the upper right hand corner. They are big in security and they're attracting a lot of dollars in the space. We've reported on this for a while. They're a five-star security company. And every time, from a spending standpoint in ETR data, that little methodology we use, every time I've run this chart, I've wondered, where the heck is AWS? Why aren't they showing up there? If security is so important to AWS, which it is, and its customers, why aren't they spending money with Amazon on security? And I asked this very question to Merritt Baer, who resides in the office of the CISO at AWS. Listen to her answer. >> It doesn't mean don't spend on security. There is a lot of goodness that we have to offer in ESS, external security services. But I think one of the unique parts of AWS is that we don't believe that security is something you should buy, it's something that you get from us. It's something that we do for you a lot of the time. I mean, this is the definition of the shared responsibility model, right? >> Now, maybe that's good messaging to the market. Merritt, you know, didn't say it outright, but essentially, Microsoft, they charge for security. At AWS, it comes with the package. But it does answer my question. And, of course, the fact is that AWS can subsidize all this with egress charges. Now, on the flip side of that, (chuckles) you've got Microsoft, you know, they're both, they're competing now. We can take CrowdStrike for instance.
Microsoft and CrowdStrike, they compete with each other head to head. So it's an interesting dynamic within the ecosystem. Okay, but I want to turn to a powerful example of how AWS designs in security, and that is the idea of confidential computing. Of course, AWS is not the only one, but we're coming off of re:Inforce, and I really want to dig into something that David Floyer and I have talked about in previous episodes. And we had an opportunity to sit down with Arvind Raghu and J.D. Bean, two security experts from AWS, to talk about this subject. So let's share what we learned and why we think it matters. First, what is confidential computing? That's what this slide is designed to convey. AWS would describe it this way: it's the use of special hardware and the associated firmware that protects customer code and data from any unauthorized access while the data is in use, i.e., while it's being processed. That's oftentimes a security gap. And there are two dimensions here. One is protecting the data and the code from operators on the cloud provider side, i.e., in this case, AWS. The other is protecting the data and code from the customers themselves, in other words, from admin level users or possible malicious actors on the customer side, where the code and data is being processed. And there are three capabilities that enable this. First, the AWS Nitro System, which is the foundation for virtualization. The second is Nitro Enclaves, which isolate environments, and then third, the Nitro Trusted Platform Module, TPM, which enables cryptographic assurances of the integrity of the Nitro instances. Now, we've talked about Nitro in the past, and we think it's a revolutionary innovation, so let's dig into that a bit. This is an AWS slide that was shared about how they protect and isolate data and code. On the left-hand side is a classical view of a virtualized architecture. You have a single host or a single server, and those white boxes represent processes on the main board, X86, which could be Intel or AMD or alternative architectures. And you have the hypervisor at the bottom, which translates instructions to the CPU, allowing direct execution from a virtual machine into the CPU. But notice, you also have blocks for networking, and storage, and security, and the hypervisor emulates or translates I/Os between the physical resources and the virtual machines, and it creates some overhead. Now, companies like VMware, and others, have done a great job of stripping out some of that overhead, but there's still overhead there. That's why people still like to run on bare metal. Now, while it's not shown in the graphic, there's an operating system in there somewhere which is privileged, so it's got access to these resources and it provides the services to the VMs. Now, on the right-hand side, you have the Nitro system. And you can see immediately the differences between the left and right, because the networking, the storage, the security, the management, et cetera have been separated from the hypervisor and that main board, which has the Intel, AMD, throw in Graviton and Trainium, you know, whatever XPUs are in use in the cloud. And you can see that orange Nitro hypervisor. That is a purpose-built, lightweight component for this system. And all the other functions are separated in isolated domains. So there's very strong isolation between the cloud software and the physical hardware running workloads, i.e., those white boxes on the main board.
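To make the Nitro Enclaves piece concrete from the customer side, here's a minimal sketch using boto3. The AMI ID, key pair, and region are placeholders, and m5.xlarge is assumed to be an enclave-capable Nitro instance type; this illustrates the customer-facing opt-in, not how AWS implements Nitro internally.

```python
# Minimal sketch: launching an EC2 instance with Nitro Enclaves enabled via
# boto3. The AMI ID, key name, and region are placeholders; the instance type
# must be an enclave-capable Nitro type (m5.xlarge is used here as an
# assumption). This shows the customer-facing opt-in only.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="m5.xlarge",          # assumed enclave-capable type
    KeyName="my-key-pair",             # placeholder key pair
    MinCount=1,
    MaxCount=1,
    EnclaveOptions={"Enabled": True},  # request an isolated enclave environment
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched enclave-enabled instance: {instance_id}")
```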
Now, this will run at practically bare metal speeds, and there are other benefits as well. One of the biggest is security. As we've previously reported, this came out of AWS's acquisition of Annapurna Labs, which we've estimated was picked up for a measly $350 million, which is a drop in the bucket for AWS to get such a strategic asset. And there are three enablers on this side. One is the Nitro cards, which are accelerators to offload the wasted work that's done in traditional architectures, typically by the X86. We've estimated 25% to 30% of core capacity and cycles is wasted on those offloads. The second is the Nitro security chip, which is embedded and extends the root of trust to the main board hardware. And finally, the Nitro hypervisor, which allocates memory and CPU resources. So the Nitro cards communicate directly with the VMs without the hypervisor getting in the way; it's not in the path. And all that data is encrypted while it's in motion, and of course, encryption at rest has been around for a while. We asked AWS about this. We presumed it was an Arm-based architecture and wanted to confirm that, or is it some other type, maybe a hybrid using X86 and Arm? They told us the following, quote: "The SoC, system on chips, for these hardware components are purpose-built and custom designed in-house by Amazon and Annapurna Labs. The same group responsible for other silicon innovations such as Graviton, Inferentia, Trainium, and AQUA. Now, the Nitro cards are Arm-based and do not use any X86 or X86/64 bit CPUs." Okay, so it confirms what we thought. So you may say, "Why should we even care about all this technical mumbo jumbo, Dave?" Well, a year ago, David Floyer and I published this piece explaining why Nitro and Graviton are secret weapons of Amazon that have been a decade in the making, and why everybody needs some type of Nitro to compete in the future. This is enabled by the Nitro innovations and the custom silicon that came out of the Annapurna acquisition. AWS has the volume economics to make custom silicon. Not everybody can do it. And it's leveraging the Arm ecosystem, the standard software, and the fabrication volume, the manufacturing volume, to revolutionize enterprise computing. Nitro, with alternative processor architectures like Graviton and others, enables AWS to be on a performance, cost, and power consumption curve that blows away anything we've ever seen from Intel. And Intel's disastrous earnings results that we saw this past week are a symptom of this mega trend that we've been talking about for years. In the same way that Intel and X86 destroyed the market for RISC chips thanks to PC volumes, Arm is blowing away X86 with volume economics that cannot be matched by Intel, thanks, of course, to mobile and edge. Our prediction is that these innovations and the Arm ecosystem are migrating and will migrate further into enterprise computing, which is Intel's stronghold. Now, that stronghold is getting eaten away by the likes of AMD, Nvidia, and of course Arm, in the form of Graviton and other Arm-based alternatives. Apple, Tesla, Amazon, Google, Microsoft, Alibaba, and others are all designing custom silicon, and doing so much faster than Intel can go from design to tape out, roughly cutting that time in half. And the premise of this piece is that every company needs a Nitro to enable alternatives to the X86, in order to support emergent workloads that are data rich and AI-based, and to compete from an economic standpoint.
So while at re:Inforce we heard that the impetus for Nitro was security, of course the Arm ecosystem and its ascendancy have enabled, in our view, AWS to create a platform that will set the tone for the enterprise computing market this decade and beyond. Okay, that's it for today. Thanks to Alex Morrison, who is on production and does the podcast. And Ken Schiffman, the newest member of our Boston Studio team, is also on production. Kristen Martin and Cheryl Knight help spread the word on social media and in the community. And Rob Hof is our editor in chief over at SiliconANGLE. He does some great, great work for us. Remember, all these episodes are available as podcasts. Wherever you listen, just search "Breaking Analysis" podcast. I publish each week on wikibon.com and siliconangle.com. Or you can email me directly at David.Vellante@siliconangle.com, or DM me @dvellante, or comment on my LinkedIn posts. And please do check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching. Be well, and we'll see you next time on "Breaking Analysis." (upbeat theme music)
Breaking Analysis: AWS re:Inforce marks a summer checkpoint on cybersecurity
>> From theCUBE Studios in Palo Alto and Boston bringing you data driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> After a two year hiatus, AWS re:Inforce is back on as an in-person event in Boston next week. Like the All-Star break in baseball, re:Inforce gives us an opportunity to evaluate the cyber security market overall, the state of cloud security and cross cloud security and more specifically what AWS is up to in the sector. Welcome to this week's Wikibon cube insights powered by ETR. In this Breaking Analysis we'll share our view of what's changed since our last cyber update in May. We'll look at the macro environment, how it's impacting cyber security plays in the market, what the ETR data tells us and what to expect at next week's AWS re:Inforce. We start this week with a checkpoint from Breaking Analysis contributor and stock trader Chip Simonton. We asked for his assessment of the market generally in cyber stocks specifically. So we'll summarize right here. We've kind of moved on from a narrative of the sky is falling to one where the glass is half empty you know, and before today's big selloff it was looking more and more like glass half full. The SNAP miss has dragged down many of the big names that comprise the major indices. You know, earning season as always brings heightened interest and this time we're seeing many cross currents. It starts as usual with the banks and the money centers. With the exception of JP Morgan the numbers were pretty good according to Simonton. Investment banks were not so great with Morgan and Goldman missing estimates but in general, pretty positive outlooks. But the market also shrugged off IBM's growth. And of course, social media because of SNAP is getting hammered today. The question is no longer recession or not but rather how deep the recession will be. And today's PMI data was the weakest since the start of the pandemic. Bond yields continue to weaken and there's a growing consensus that Fed tightening may be over after September as commodity prices weaken. Now gas prices of course are still high but they've come down. Tesla, Nokia and AT&T all indicated that supply issues were getting better which is also going to help with inflation. So it's no shock that the NASDAQ has done pretty well as beaten down as tech stocks started to look oversold you know, despite today's sell off. But AT&T and Verizon, they blamed their misses in part on people not paying their bills on time. SNAP's huge miss even after guiding lower and then refusing to offer future guidance took that stock down nearly 40% today and other social media stocks are off on sympathy. Meta and Google were off, you know, over 7% at midday. I think at one point hit 14% down and Google, Meta and Twitter have all said they're freezing new hires. So we're starting to see according to Simonton for the first time in a long time, the lower income, younger generation really feeling the pinch of inflation. Along of course with struggling families that have to choose food and shelter over discretionary spend. Now back to the NASDAQ for a moment. As we've been reporting back in mid-June and NASDAQ was off nearly 33% year to date and has since rallied. It's now down about 25% year to date as of midday today. But as I say, it had been, you know much deeper back in early June. But it's broken that downward trend that we talked about where the highs are actually lower and the lows are lower. That's started to change for now anyway. We'll see if it holds. 
But chip stocks, software stocks, and of course the cyber names have broken those down trends and have been trading above their 50 day moving averages for the first time in around four months. And again, according to Simonton, we'll see if that holds. If it does, that's a positive sign. Now remember on June 24th, we recorded a Breaking Analysis and talked about Qualcomm trading at a 12 X multiple with an implied 15% growth rate. On that day the stock was 124 and it surpassed 155 earlier this month. That was a really good call by Simonton. So looking at some of the cyber players here SailPoint is of course the anomaly with the Thoma Bravo 7 billion acquisition of the company holding that stock up. But the Bug ETF of basket of cyber stocks has definitely improved. When we last reported on cyber in May, CrowdStrike was off 23% year to date. It's now off 4%. Palo Alto has held steadily. Okta is still underperforming its peers as it works through the fallout from the breach and the ingestion of its Auth0 acquisition. Meanwhile, Zscaler and SentinelOne, those high flyers are still well off year to date, with Ping Identity and CyberArk not getting hit as hard as their valuations hadn't run up as much. But virtually all these tech stocks generally in cyber issues specifically, they've been breaking their down trend. So it will now come down to earnings guidance in the coming months. But the SNAP reaction is quite stunning. I mean, the environment is slowing, we know that. Ad spending gets cut in that type of market, we know that too. So it shouldn't be a huge surprise to anyone but as Chip Simonton says, this shows that sellers are still in control here. So it's going to take a little while to work through that despite the positive signs that we're seeing. Okay. We also turned to our friend Eric Bradley from ETR who follows these markets quite closely. He frequently interviews CISOs on his program, on his round tables. So we asked to get his take and here's what ETR is saying. Again, as we've reported while CIOs and IT buyers have tempered spending expectations since December and early January when they called for an 8% plus spending growth, they're still expecting a six to seven percent uptick in spend this year. So that's pretty good. Security remains the number one priority and also is the highest ranked sector in the ETR data set when you measure in terms of pervasiveness in the study. Within security endpoint detection and extended detection and response along with identity and privileged account management are the sub-sectors with the most spending velocity. And when you exclude Microsoft which is just dominant across the board in so many sectors, CrowdStrike has taken over the number one spot in terms of spending momentum in ETR surveys with CyberArk and Tanium showing very strong as well. Okta has seen a big dropoff in net score from 54% last survey to 45% in July as customers maybe put a pause on new Okta adoptions. That clearly shows in the survey. We'll talk about that in a moment. Look Okta still elevated in terms of spending momentum, but it doesn't have the dominant leadership position it once held in spend velocity. Year on year, according to ETR, Tenable and Elastic are seeing the biggest jumps in spending momentum, with SailPoint, Tanium, Veronis, CrowdStrike and Zscaler seeing the biggest jump in new adoptions since the last survey. 
Now on the downside, SonicWall, Symantec, Trellix, which is McAfee, Barracuda and TrendMicro are seeing the highest percentage of defections and replacements. Let's take a deeper look at what the ETR data tells us about the cybersecurity space. This is a popular view that we like to share, with net score or spending momentum on the Y axis and overlap or pervasiveness in the data on the X axis. It's a measure of presence in the data set; we used to call it market share. With the data, the dot positions, you see that little inserted table; that's how the dots are plotted. And it's important to note that this data is filtered for firms with at least 100 Ns in the survey. That's why some of the other ones that we mentioned might have dropped off. The red dotted line at 40% indicates highly elevated spending momentum, and there are several firms above that mark, including of course Microsoft, which is literally off the charts in both dimensions in the upper right. It's quite incredible actually. But for the rest of the pack, CrowdStrike has now taken back its number one net score position in the ETR survey. And CyberArk and Okta and Zscaler, Cloudflare and Auth0, now Okta through the acquisition, are all above the 40% mark. You can stare at the data at your leisure, but I'll just make three quick points. First, Palo Alto continues to impress and is as steady as she goes. Two, it's still a very crowded market and it's a complicated space. And three, there's lots of spending in different pockets. This market has too many tools and will continue to consolidate. Now I'd like to drill into a couple of firms' net scores and pick out some of the pure plays that are leading the way. This series of charts shows the net score, or spending velocity, granularity for Okta, CrowdStrike, Zscaler and CyberArk, four of the top pure plays in the ETR survey that also have over a hundred responses. Now the colors represent the following. Bright red is defections: we're leaving the platform. The pink is we're spending less, meaning we're spending 6% or worse. The gray is flat spend, plus or minus 5%. The forest green is spending more, i.e., 6% or more, and the lime green is we're adding the platform new. That red dotted line at the 40% net score mark is the same elevated level that we like to talk about. All four are above that target. Now that blue line you see there is net score. The yellow line is pervasiveness in the data. The data shown in each bar goes back 10 surveys, all the way back to January 2020. First I want to call out that all four again are seeing down trends in spending momentum, along with the whole market. That's that blue line. They're seeing that this quarter, again, the market is off overall. Everybody is kind of seeing that down trend for the most part. Very few exceptions. Okta is being hurt by fewer new additions, which is why we highlighted in red that red dotted area, that square that we put there in the upper right of that Okta bar. That lime green, new adds, are off as well. And the gray for Okta, flat spending, is noticeably up. So it feels like people are pausing a bit and taking a breather for Okta. And as we said earlier, perhaps with the breach earlier this year and the ingestion of the Auth0 acquisition, the company is seeing some friction in its business. Now, having said that, you can see Okta's yellow line, or presence in the data set, continues to grow. So it's a good proxy for market presence. So Okta remains a leader in identity. 
So again, I'll let you stare at the data if you want at your leisure, but despite some concerns on declining momentum, notice there's very little red at these companies when it comes to the ETR survey data. Now one more data slide, which brings us to our four star cyber firms. We started a tradition a few years ago where we sorted the ETR data by net score. That's the left hand side of this graphic. And we sorted by shared N, or presence in the data set. That's the right hand side. And again, we filtered by companies with at least 100 Ns and, oh, by the way, we've excluded Microsoft just to level the playing field. The red dotted line signifies the top 10. If a company cracks the top 10 in both spending momentum and presence, we give them four stars. So Palo Alto, CrowdStrike, Okta, Fortinet and Zscaler all made the cut this time. Now, as we pointed out in May, if you combined Auth0 with Okta, they jump to number two on the right hand chart in terms of presence. And they would lead the pure plays there, although it would bring down Okta's net score somewhat because, as you can see, Auth0's net score is lower than Okta's. So when you combine them it would drag that down a little bit, but it would give them bigger presence in the data set. Now, the other point we'll make is that Proofpoint and Splunk both dropped off the four star list this time as they both saw marked declines in net score, or spending velocity. They both got four stars last quarter. Okay. We're going to close on what to expect at re:Inforce this coming week. Re:Inforce, if you don't know, is AWS's security event. They first held it in Boston back in 2019. It's dedicated to cloud security. The past two years it has been virtual, and they announced at re:Invent that it would take place in Houston in June, which everybody said, that's crazy. Who wants to go to Houston in June? And it turns out nobody did, so they postponed the event, thankfully. And so now they're back in Boston, starting on Monday. Not that it's going to be much cooler in Boston. Anyway, Steven Schmidt had been the face of AWS security at all these previous events as the Chief Information Security Officer. Now he's dropped the I from his title and is now the Chief Security Officer at Amazon. So he went with Jassy to the mothership. Presumably he dropped the I because he deals with physical security now too, like at the warehouses. Not that he didn't have to worry about physical security at the AWS data centers. I don't know. Anyway, he and CJ Moses, who is now the new CISO at AWS, will be keynoting along with some others, including MongoDB's Chief Information Security Officer. So that should be interesting. Now, if you've been following AWS you'll know they like to break things down into, you know, a couple of security categories: identity, detection and response, data protection slash privacy slash GRC, which is governance, risk and compliance, and we would expect a lot more talk this year on container security. So you're also going to hear product updates, and they like to talk about how they're adding value to services, and they try to help customers understand how to apply services. Things like GuardDuty, which is their threat detection that has machine learning in it. They'll talk about Security Hub, which centralizes views and alerts and automates security checks. They have a service called Detective, which does root cause analysis, and they have tools to mitigate denial of service attacks. 
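As a concrete, hedged illustration of what applying a couple of those services can look like in code, here's a minimal sketch using Python and boto3. The region, the finding filter, and the required IAM permissions are assumptions, and both services expose far more options than shown.

```python
import boto3

REGION = "us-east-1"  # assumption; substitute your own region

# Enable GuardDuty threat detection in this account and region.
guardduty = boto3.client("guardduty", region_name=REGION)
detector = guardduty.create_detector(Enable=True)

# Turn on Security Hub, which centralizes findings and automates security checks.
securityhub = boto3.client("securityhub", region_name=REGION)
securityhub.enable_security_hub()

# Pull a small page of active findings as a starting point for triage.
findings = securityhub.get_findings(
    Filters={"RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}]},
    MaxResults=10,
)
for finding in findings["Findings"]:
    print(finding["Title"])
```

In practice these calls would sit behind infrastructure-as-code and organization-wide enablement rather than ad hoc scripts, which is exactly the kind of best-practice guidance AWS tends to preach at these events.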
And they'll talk about security in Nitro which isolates a lot of the hardware resources. This whole idea of, you know, confidential computing which is, you know, AWS will point out it's kind of become a buzzword. They take it really seriously. I think others do as well, like Arm. We've talked about that on previous Breaking Analysis. And again, you're going to hear something on container security because it's the hottest thing going right now and because AWS really still serves developers and really that's what they're trying to do. They're trying to enable developers to design security in but you're also going to hear a lot of best practice advice from AWS i.e, they'll share the AWS dogfooding playbooks with you for their own security practices. AWS like all good security practitioners, understand that the keys to a successful security strategy and implementation don't start with the technology, rather they're about the methods and practices that you apply to solve security threats and a top to bottom cultural approach to security awareness, designing security into systems, that's really where the developers come in, and training for continuous improvements. So you're going to get heavy doses of really strong best practices and guidance and you know, some good preaching. You're also going to hear and see a lot of partners. They'll be very visible at re:Inforce. AWS is all about ecosystem enablement and AWS is going to host close to a hundred security partners at the event. This is key because AWS doesn't do it all. Interestingly, they don't even show up in the ETR security taxonomy, right? They just sort of imply that it's built in there even though they have a lot of security tooling. So they have to apply the shared responsibility model not only with customers but partners as well. They need an ecosystem to fill gaps and provide deeper problem solving with more mature and deeper security tooling. And you're going to hear a lot of positivity around how great cloud security is and how it can be done well. But the truth is this stuff is still incredibly complicated and challenging for CISOs and practitioners who are understaffed when it comes to top talent. Now, finally, theCUBE will be at re:Inforce in force. John Furry and I will be hosting two days of broadcast so please do stop by if you're in Boston and say hello. We'll have a little chat, we'll share some data and we'll share our overall impressions of the event, the market, what we're seeing, what we're learning, what we're worried about in this dynamic space. Okay. That's it for today. Thanks for watching. Thanks to Alex Myerson, who is on production and manages the podcast. Kristin Martin and Cheryl Knight, they helped get the word out on social and in our newsletters and Rob Hoff is our Editor in Chief over at siliconangle.com. You did some great editing. Thank you all. Remember all these episodes they're available, this podcast. Wherever you listen, all you do is search Breaking Analysis podcast. I publish each week on wikibon.com and siliconangle.com. You can get in touch with me by emailing avid.vellante@siliconangle.com or DM me @dvellante, or comment on my LinkedIn post and please do check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching and we'll see you in Boston next week if you're there or next time on Breaking Analysis (soft music)
Breaking Analysis: Tech Spending Intentions are Holding Despite Macro Concerns
>> From theCUBE studios in Palo Alto in Boston bringing you data driven insights from theCUBE and ETR. This is breaking analysis with Dave Vellante. >> Despite fears of inflation, supply chain issues skyrocketing energy and home prices and global instability caused by the Ukraine crisis CIOs and IT buyers continue to expect overall spending to increase more than 6% in 2022. Now, while this is lower than our 8% prediction that we made earlier this year in January, it remains in line with last year's roughly six to 7% growth and is holding firm with the expectations reported by tech executives on the ETR surveys last quarter. Hello and welcome to this week's wiki bond cube insights powered by ETR in this breaking analysis, we'll update you on our latest look at tech spending with a preliminary take from ETR's latest macro drill down survey. We'll share some insights to which vendors have shown the biggest change in spending trajectory. And we'll tap our technical analysts to get a read on what they think it means for technology stocks going forward. The IT spending sentiment among IT buyers remains pretty solid. >> In the past two months, we've had conversations with dozens of CIOs, chief digital officers data executives, IT managers, and application developers, and across the board, they've indicated that for now at least their spending levels remain largely unchanged. The latest ETR drill down data which will share shortly, confirms these anecdotal checks. However, the interpretation of this data it's somewhat nuanced. Part of the reason for the spending levels being you know reasonably strong and holding up is inflation. Stuff costs more so spending levels are higher forcing IT managers to prioritize. Now security remains the number one priority and is less susceptible to cuts, cloud migration, productivity initiatives and other data projects remain top priorities. >> So where are CIO's robbing from Peter to pay Paul to focus on these priorities? Well, we've seen a slight uptick in certain speculative. IT projects being put on hold or frozen for a period of time. And according to ETR survey data we've seen some hiring freezes reported and this is especially notable in the healthcare sector. ETR also surveyed its buyer base to find out where they were adjusting their budgets and the strategies and tactics they were using to do so. Consolidating IT vendors was by far the most cited tactic. Now this makes sense as companies in an effort to negotiate better deals will often forego investments in newer so-called best of breed products and services, and negotiate bundles from larger suppliers. You know, even though they might not be as functional, the buyers >> can get a better deal if they bundle together from one of their larger suppliers. Think Microsoft or a Dell or other, you know, large companies. ETR survey respondents also cited cutting the cloud bill where discretionary spending was in play was another strategy or tactic that they were using. We certainly saw this with some of the largest snowflake customers this past quarter. Where even though they were still growing consumption rapidly certain snowflake customers dialed down their consumption and pushed spending off to future quarters. Now remember in the case of snowflake, anyway, customers negotiate consumption rates and their pricing based on a total commitment over a period of time. 
So while they may consume less in one quarter, over the lifetime of the contract Snowflake, as do many other cloud companies, has good visibility on the lifetime value of a deal. Now this next chart shows the latest ETR spending expectations among more than 900 respondents. The bars represent spending growth expectations from the December 2021 survey, that's the gray bars, the March 2022 survey in the blue, and the most recent June data, that's the yellow bars. So you can see spending expectations for the quarter are down slightly, in the mid 5% range. But overall for the year, expectations remain in the mid 6% range. Now it's down from 8%, 8.3% in December, where it looked like 2022 was going to really be a breakout year and have more momentum than even last year. Now, remember this was before Russia invaded Ukraine which occurred in mid-February of this year. So expectations were a little higher. So look, generally speaking CIOs have told us that their CFOs and CEOs have lowered their earnings outlooks and communicated that to Wall Street. They've told us that unless and until these revised forecasts appear at risk, they continue to expect their budget levels to remain pretty constant. Now there's still plenty of momentum and spending velocity on specific vendor platforms. Let's take a look at that. >> This chart shows the companies with the greatest spending momentum as measured by ETR's proprietary net score methodology. Net score essentially measures the net percent of customers spending more on a particular platform. That measurement is shown on the Y axis. The red dotted line that's inserted there at 40% we consider to be a highly elevated mark. And the green dots are companies in the ETR survey that are near or above that line. The X axis measures the presence in the data set, how much, you know, sort of pervasiveness, if you will, is in the data. It's kind of a proxy for market presence. Now, of course we all know Kubernetes is not a company, but it remains an area where organizations are spending lots of resources and time, particularly to modernize and mobilize applications. Snowflake remains the company which leads all firms in spending velocity, but as you'll see momentarily, despite its highest position relative to everybody else in the survey, it's still down from its previous levels in the high seventies and low 80% range. AWS is incredibly impressive because it has an elevated level but also a big presence in the data set in the survey. Same with Microsoft, same with ServiceNow, which also stands out. And you can see the other smaller vendors like HashiCorp, which is increasingly being seen as a strategic cross cloud enabler. They're showing spending momentum. The RPA vendors you see in there, Automation Anywhere and UiPath, are in the mix with numerous security companies: CrowdStrike, CyberArk, Netskope, Cloudflare, Tenable, Okta, Zscaler, Palo Alto Networks, SailPoint, Fortinet. A big number of cybersecurity firms are hovering at or above that 40% mark. You can see Pure Storage remains elevated, as do PagerDuty and Coupa. So plenty of good news here, despite the recent tech crash. So that was the good, here's the not so good. So >> there is no 40% line on this chart because all these companies are well below that line. Now this doesn't mean these companies are bad companies. They just don't have the spending velocity of the ones we showed earlier. A good example here is Oracle. Look how they stand out on the X axis with a huge market presence. 
And Oracle remains an incredibly successful company selling to high end customers and really owning that mission critical data and application space. And remember, ETR measures spending activity, but not actual spending dollars. So Oracle is skewed as a result because Oracle customers spend big bucks. But the fact is that Oracle has a large legacy install base that pulls down their growth rates. And that does show up in the ETR survey data. Broadcom is another example. They're one of the most successful companies in the industry, and they're not going after growth at all costs at all. They're going after EBITDA, and of course ETR doesn't measure EBITDA. So just keep that in mind as you look at this data. Now another way to look at the data in the survey is exploring the net score movement over the last period amongst companies. So how are they moving? What's happening to the net score over time? And this chart shows the year over year net score change for vendors that participate in at least three sectors within the ETR taxonomy. Remember, the ETR taxonomy has 12 to 15 different segments. So the names above or below the gray dotted line are those companies where the net score has increased or decreased meaningfully. So to the earlier chart, it's all relative, right? Look at Oracle. While having lower net scores, it has also shown a more meaningful improvement in net score than some of the others, as have SAP and Teradata. Now what's impressive to me here is how AWS, Microsoft, and Google are actually holding that dotted line, that gray line, pretty well despite their size. And the other two ironically interesting data points here are Broadcom and Nutanix. Now Broadcom, of course, as we've reported and dug into, is buying VMware, and of course most customers are concerned about getting hit with higher prices once Broadcom takes over. Well, Nutanix, despite its change in net score, is in a good position potentially to capture some of that VMware business. Just yesterday, I talked to a customer who told me he migrated his entire portfolio off VMware using Nutanix AHV, the Acropolis hypervisor. And that was in an effort to avoid the vTax specifically. Now this was a smaller customer, granted, and it's not representative of what I feel is Broadcom's ICP, the ideal customer profile, but look, Nutanix should benefit from the Broadcom acquisition if it can position itself to pick up the business that Broadcom really doesn't want, that kind of bottom of the pyramid. One person's trash is another's treasure, as they say, okay. And here's that same chart for companies that participate in less than three segments, so two or one of the segments in the ETR taxonomy. Only three names are seeing positive movement year over year in net score. SUSE, under the leadership of amazing CEO Melissa Di Donato, she's making moves. The company went public last year and acquired Rancher Labs in 2020. Look, we know that Red Hat is the big dog in Kubernetes, but since the IBM acquisition people have looked to SUSE as a possible alternative, and it's showing up in the numbers. It's a nice business. It's going to do more than 600 million this year in revenue, SUSE that is. It's got solid double digit growth in kind of the low teens. Its profitability is under pressure, but they're definitely a player that has found a niche and is worth watching. Then there's SolarWinds, what can I say there? I mean, maybe it's a dead cat bounce coming off the major breach that we saw a couple years ago. 
Some of its customers maybe just can't move off the platform. Constant Contact we really don't follow and don't really, you know, focus on them. So, not much to say there. Now look at all the high-priced earnings stocks, or infinite PE stocks, that have no E. Divide by zero or a negative number and boom, you have infinite PE. And look at how their net scores have dropped. We've reported extensively on Snowflake. They're still number one, as we showed you earlier, in net score, but big moves off their highs. Okta, Datadog, Zscaler, SentinelOne, Dynatrace, big downward moves, and you can see the rest. So this chart really speaks to the change in expectations from the COVID bubble. Despite the fact that many of these companies' CFOs would tell you that the pandemic wasn't necessarily a tailwind for them, it certainly seemed to be the case when you look back at some of the ETR data. But a big question in the community is what's going to happen to these tech stocks, these tech companies in the market? We reached out to both Eric Bradley of ETR, who used to be a technical analyst on Wall Street, and the long time trader and Breaking Analysis contributor, Chip Symington, to get a read on what they thought. First, you know, the market has been off 11 out of the past 12 weeks. And bear market rallies like what we're seeing today and yesterday, they happen from time to time, and it was kind of expected. Chair Powell's testimony was broadly viewed as a positive by the street because higher interest rates appear to be pushing commodity prices down. And a weaker consumer sentiment may point to a less onerous inflation outlook. That's good for the market. Chip Symington pointed out to Breaking Analysis a while ago that the NASDAQ has been on a trend line for the past six months where its highs are lower and the lows are lower, and that's a bad sign. And we're bumping up against that trend line here, meaning if it breaks through that trend it could be a buying signal, as he feels that tech stocks are oversold. He pointed to a recent bounce in semiconductors and cited the Qualcomm example. Here's a company trading at 12 times forward earnings with a sustained 14% growth rate over the next couple of years. And their cash flow is able to support their 2.4, 2% annual dividend. So overall Symington feels this rally was absolutely expected. He's cautious because we're still in a bear market, but he's beginning to turn bullish. And Eric Bradley added that he feels the market is building a base here and he doesn't expect a 1970s or early 1980s year-long sideways move because of all the money that's still in the system. You know, but it could bounce around for several months. And remember, with higher interest rates there are going to be more options other than equities, which for many years has not been the case. Obviously inflation and recession are like two looming towers that we're all watching closely, and they will ultimately determine if, when, and how this market turns around. Okay, that's it for today. Thanks to my colleagues, Stephanie Chan, who helps research Breaking Analysis topics sometimes, and Alex Myerson, who is on production and the podcast. Kristin Martin and Cheryl Knight, they help get the word out and do all of our newsletters. And Rob Hof is our Editor in Chief over at siliconangle.com and does some wonderful editing for Breaking Analysis. Thank you. Remember, all these episodes are available as podcasts wherever you listen. 
All you've got to do is search "Breaking Analysis" podcast. I publish each week on wikibon.com and siliconangle.com. And of course you can reach me by email at david.vellante@siliconangle.com, or DM me @dvellante, or comment on my LinkedIn posts, and please do check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights powered by ETR. Stay safe, be well. And we'll see you next time. (soft music)
Breaking Analysis: Are Cyber Stocks Oversold or Still too Pricey?
>> From theCUBE Studios in Palo Alto in Boston, bringing you data driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Cybersecurity stocks have been sending mixed signals as of late, mostly negative like much of tech, but some such as Palo Alto Networks, despite a tough go of it recently have held up better than most tech names. Others like CrowdStrike, had been out performing Broader Tech in March, but then flipped in May. Okta's performance was pretty much tracking along with CrowdStrike for most of the past several months, a little bit below, but then the Okta hack changed the trajectory of that name. Zscaler has crossed the critical billion dollar ARR revenue milestone, and now sees a path to five billion dollars in revenue, but the company stock fell sharply after its last earnings report and has been on a down trend since last November. Meanwhile, CyberArk's recent beat and raise, was encouraging and the stock acted well after its last report. Security remains the number one initiative priority amongst IT organizations and the spending momentum for many high flying cyber names remain strong. So what gives in cyber security? Hello, and welcome to this week's Wikibon CUBE insights powered by ETR. In this breaking analysis, we focus on security and will update you on the latest data from ETR to try to make sense out of the market and read into what this all means in both the near and long term, for some of our favorite names in cyber. First, the news. There's always something happening in security news cycles. The big recent news is new President Rodrigo Chavez declared a national emergency in Costa Rica due to the preponderance of Russian cyber attacks on the country's critical infrastructure. Such measures are normally reserved for natural disasters like earthquakes, but this move speaks to the nature of today's cyber threats. Of no surprise is modern superpower warfare even for a depleted power like Russia almost certainly involves cyber warfare as we continue to see in Ukraine. Privately held Arctic Wolf Networks hired Dustin Williams as its new CFO. Williams has taken three companies to IPO, including Nutanix in 2016, a very successful IPO for that company. Whether AWN chooses to pull the trigger this year or will wait until markets are less choppy or obviously remains to be seen. But it's a pretty clear sign the company is headed to IPO at some point. Now, big point of discussion this week at Red Hat Summit in Boston and the prior week at Dell technologies world was security. In the case of Red Hat, securing the digital supply chain was the main theme. And from Dell building, many security features into its storage arrays and cyber resilience services into its as a service offering called Apex. And we're seeing a trend where buyers want to reduce the number of bespoke tools they use if they, in fact can. Here's IDC's Jim Mercer, sharing data from a recent survey they conducted on the topic. Play the clip. >> Interestingly, we did a survey, I think around last August or something. And one of the questions was around where do you want your security, right? Where do you want to get your DevSecOps security from? Do you want to get it from individual vendors, right? Or do you want to get it from like your platforms that you're using and deploying changes in Kubernetes? >> Great question. What did they say? >> The majority of them, they're hoping they can get it built into the platform. 
That's really what they want-- >> Now, whether that's actually achievable is debatable, because you have so much innovation and investment going on from the likes of startups, for instance Lacework or Snyk, and security companies that you see even trying to build platforms; you've got CrowdStrike, Okta, Zscaler and many others trying to build security platforms and put it all under their umbrella. Now the last point we'll hit here is there was a lot of buzz in the news about Okta. The reaction to what was a relatively benign hack was pretty severe and probably overblown, but Okta's stock is paying the price of what is generally considered a blown communications plan versus a technical failure. Remember, identity is not an easy thing to rip and replace, and Okta remains a best-of-breed player and leader in the space. So we're going to look at some ETR data later in this segment to try and make sense of the recent action in the market and certain names. Speaking of which, let's take a look at how some of the names in cybersecurity have fared relative to some of the indices and relative indicators that we like to look at. Here's a Google Finance comparison for a number of stocks and names. In the bottom there you can see we plot the HACK ETF, which tracks security stocks. This is a year to date view. And so we don't show it here, but the tech heavy NASDAQ is off around 26% year to date, whereas the cyber ETF that we're showing is down 18%, okay. So cyber is holding up a little bit better than broader tech. As we've reported, earlier it was actually much better, and there still seems to be a gap there, but the data are mixed. You can see Okta is way off relative to its peers. That's a combination of the breach that we talked about but also the run up in the stock since COVID. CrowdStrike was actually faring better but broke this month; we'll see how its upcoming earnings announcement is received when it announces on June 2nd after the close. Palo Alto in the light blue has done better than most and until recently was holding up quite well. And of course, Sailpoint is another identity specialist; it is kind of off the charts here because it's going private with the acquisition by Thoma Bravo at nearly seven billion dollars. So you see some mixed signals in cyber these past several months and weeks. And so we're trying to understand what that all means. So let's take a look at the survey data and see how spending momentum is holding up. As we've reported, IT spending forecasts at the macro level have come off their 8% highs from the end of the year, the ETR December survey, but robust tech spending is still there. It's expected at nearly seven percent, and this is amongst 1200 ETR respondents. Here's a picture from the ETR survey of the cybersecurity landscape. That y-axis, that's net score, or a measure of spending momentum, and that horizontal axis is overlap. We used to talk about it as a market share, which is a measure of pervasiveness in the data set. That dotted red line at 40% indicates an elevated spending momentum level on the vertical axis, and we filtered the names, limiting to only those with a hundred or more responses in the ETR survey. Even then, the picture's still pretty crowded as you can see. You got lots of companies above the red dotted line, including Microsoft, which is up into the right; they're so far off the chart, it's just amazing. But also Palo Alto and Okta, Auth0, which of course is now owned by Okta, Zscaler, and CyberArk is making moves. 
Sailpoint and Cloudflare, they're all above that magic 40% line. Now, you look at Cisco, it shows a very large presence in the horizontal axis in the data set. And it's got pretty respectable momentum, and you see Splunk doing okay, KnowBe4 and Tenable just below that 40% line, and a lot of names in the very respectable 20% zone. And we've included some legacy names just for context that fall below the zero percent line with a negative net score. And that negative net score means a larger proportion of their customers in the survey are spending less than those that are spending more. Now, typically for these legacy names you're going to have a huge proportion of customers who have flat spending, that kind of fat middle, and that's why they sort of don't have that highly elevated score, but they're still viable as they get the recurring revenue each year. But the bottom line is that spending remains robust for some of the top names that we've talked about earlier despite their rocky stock performance. Now, let's filter this data a bit more to make it a little bit easier to read. So to do that, we take out Microsoft because they're just so dominant, and we cherry pick some names to make the data more consumable and scannable. The other data point we've added is Okta's net score breakdown, the multicolored rows there, that row in the bottom right. Net score, it measures the percent of customers that are adding the platform new, that's the lime green, at 18% for Okta. The forest green is at 42%. That's the percent of customers in the survey that are spending six percent or more. The gray is flat spending. That's 32% for Okta this past survey. The pink is customers that are spending less, that's three percent. They're spending six percent or worse in the survey, so only three percent for Okta. And the bright red at three percent is decommissioning the platform. You subtract the reds from the greens and you get a net score well into the 50s for Okta, as you can see. We highlight Okta here because it's a name that we've been following for quite some time, and customers have given us really solid feedback on the technology and, up until the hack, their affinity to Okta, but that seems to be continuing. We'll talk more about that. This recent breach at Okta has caused us to take a closer look. And you may recall, we reported with our ETR colleague, Eric Bradley. The breach was announced right in the middle of ETR collecting data in the last survey. And while we did see a noticeable downtick in Okta's net score right after the announcement and the exposure of the hack, just after the breach was disclosed, you can see the combination of Okta and Auth0 remains very strong. I asked Eric Bradley this morning what he thought about Okta, and he pointed out that you can't evaluate this company on its price to earnings ratio. But its forward sales multiple is now below 7X. And while attractive, these high flyers at some point, Eric says, they got to start making a profit. So we're going to hold that thought and come back to that. Now, another cut of the ETR data to look at our four star security names here. A while back we developed a methodology to try and cut through the noise of the crowded security sector, using the ETR data to evaluate two key metrics: net score and shared N. Net score, again, is spending momentum; the latter is an indicator of presence in the data set, which is a proxy for market presence. 
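Before we get to the four star names, here's that net score arithmetic as a tiny Python check, using the Okta percentages just cited; the figures come from the survey breakdown above, and the formula is the one described, greens minus reds.

```python
# Okta's net score from the survey breakdown cited above.
adding_new    = 18  # lime green: adding the platform new
spending_more = 42  # forest green: spending 6% or more
flat          = 32  # gray: plus or minus 5% (doesn't move the score)
spending_less = 3   # pink: spending 6% or worse
decommission  = 3   # bright red: leaving the platform

net_score = (adding_new + spending_more) - (spending_less + decommission)
print(net_score)  # 54, i.e., "well into the 50s"
```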
Okay, we assigned those companies that cracked the top 10 in both net score and shared N, we give them four stars, okay, if they make the top 10. This chart here shows the April survey data for those companies with an N that's greater than, equal to a hundred responses. So again, we're filtering on those with a hundred or more responses. The table on the left that you see there, that's sorted by net score, okay. So we're sorting by spending momentum. And then the one on the right is sorted by shared N, so their presence in the data set. Seven companies hit the top 10 for both categories; Palo Alto Network, Splunk, CrowdStrike Okta, Proofpoint, Fortinet and Zscaler. Now, remember, take a look, Okta excludes Auth0, in this little methodology that we came up with. Auth0 didn't make the cuts but it hits the top 10 for net score. So if you add in Auth0's 112 N there that you see on the right. You add that into Okta, we put Okta in the number two spot in the survey on the right most table with the shared N of 354. Only Cisco has a higher presence in the data set. And you can see Cisco in the left lands just below that red dotted line. That's the top 10 in security. So if we were to combine Okta and Auth0 as one, Cisco would make the cut and earn four stars. Now, some other notables are CyberArk, which is just below the red line on the right most chart with an impressive 177 shared N. Again, if you combine Auth0 and Okta, CyberArk makes the four star grade because it's in the top 10 for net score on the left. And Sailpoint is another notable with a net score above 50% and it's got a shared N of 122, which is respectable. So despite the market's choppy waters, we're seeing some positive signs in the survey data for some of the more prominent names that we've been following for the last couple of years. So what does this mean for the markets going forward? As always, when we see these confusing signs we like to reach out to the network and one of the sharpest traders out there is Chip Simonton. We've quoted him before and we like to share some of his insights. And so we're going to highlight some of that here. So technically, almost every good tech stock is oversold. And as such, he suggested we might see a bounce here. We certainly are seeing that on this Friday, the 13th. But the right call tactically has been to sell into the rally these past several months, so we'll see what happens on Monday. The key issue with the name like Okta and some other momentum names like CrowdStrike and Zscaler is that when money comes back into tech, it's likely going to go to the FAANG stocks, the Facebook, Apple, Amazon, Netflix, Google, and of course, you put Microsoft in there as well. And we'll see about Amazon, by the way, it's kind of out of favor right now, as everyone's focused on the retail side of the business meanwhile it's cloud business is booming and that's where all the profit is. We think that should be the real focus for Amazon. But the point is, for these momentum names in cybersecurity that don't make money, they face real headwinds, as growth is slowing overall and interest rates rise, that makes the net present value of these investments much less attractive. We've talked about that before. But longer term, we agree with Chip Simonton that these are excellent companies and they will weather the storm and we think they're going to lead their respective markets. And in cyber, we would expect continued M&A activity, which could act as a booster shot in the arms of these names. 
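Circling back to the four star screen described above, here's a rough sketch of the logic in Python. The vendor rows are placeholders, not actual survey values, but the mechanics, filter to at least 100 Ns, exclude Microsoft, then intersect the top 10 lists on both metrics, follow the methodology as described.

```python
# Placeholder survey rows: (vendor, net_score, shared_n). Values are illustrative only.
survey = [
    ("Palo Alto Networks", 0.45, 300),
    ("CrowdStrike", 0.50, 280),
    ("Okta", 0.52, 250),
    ("Fortinet", 0.41, 220),
    ("Zscaler", 0.48, 180),
    ("Cisco", 0.25, 360),
    # ...more vendors...
]

MIN_N = 100
qualified = [r for r in survey if r[2] >= MIN_N and r[0] != "Microsoft"]

top10_by_score    = {v for v, _, _ in sorted(qualified, key=lambda r: r[1], reverse=True)[:10]}
top10_by_presence = {v for v, _, _ in sorted(qualified, key=lambda r: r[2], reverse=True)[:10]}

four_star = top10_by_score & top10_by_presence  # top 10 on both lists earns four stars
print(sorted(four_star))
```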
Now in 2019, we saw the ETR data, it pointed to CrowdStrike, Zscaler, Okta and others in the security space. Those were some of the names that really looked to us like they were moving forward, and the pandemic just created a surge in these names, and admittedly they got out over their skis. But the data suggests that these leading companies have continued momentum and the potential for staying power. Unlike the SolarWinds hack, it seems at this point anyway that Okta will recover in the market. For the reasons that we cited, investors, they might stay away for some time, but longer term, there's a shift in CSO security strategies that appears to be permanent. They're really valuing cloud-based modern platforms; these platforms will likely continue to gain share and carry their momentum forward. Okay, that's it for now. Thanks to Stephanie Chan, who helps with the background research and with social. Kristen Martin and Cheryl Knight help get the word out and do some great work as well. Alex Morrison is on production and handles all of our podcasts. Alex, thank you. And Rob Hof is our Editor in Chief at SiliconANGLE. Remember, all these episodes, they're available as podcasts; you can pop in the headphones and listen, just search "Breaking Analysis Podcast." I publish each week on wikibon.com and SiliconANGLE.com. Don't forget to check out etr.ai, best in the business for real customer data. It's an awesome platform. You can reach me at dave.vellante@siliconangle.com or @dvellante. You can comment on our LinkedIn posts. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching. And we'll see you next time. (bright upbeat music)
Power Panel: Does Hardware Still Matter
(upbeat music) >> The ascendancy of cloud and SaaS has shone new light on how organizations think about, pay for, and value hardware. Once sought-after skills for practitioners with expertise in hardware troubleshooting, configuring ports, tuning storage arrays, and maximizing server utilization have been superseded by demand for cloud architects, DevOps pros, and developers with expertise in microservices, containers, application development, and the like. Even a company like Dell, the largest hardware company in enterprise tech, touts that it has more software engineers than those working in hardware. It begs the question, is hardware going the way of COBOL? Well, not likely. Software has to run on something, but the labor needed to deploy, and troubleshoot, and manage hardware infrastructure is shifting. At the same time, we've seen the value flow also shifting in hardware. Once a world dominated by x86 processors, value is flowing to alternatives like Nvidia and Arm-based designs. Moreover, other componentry like NICs, accelerators, and storage controllers are becoming more advanced, integrated, and increasingly important. The question is, does it matter? And if so, why does it matter and to whom? What does it mean to customers, workloads, OEMs, and the broader society? Hello and welcome to this week's Wikibon theCUBE Insights powered by ETR. In this breaking analysis, we've organized a special power panel of industry analysts and experts to address the question, does hardware still matter? Allow me to introduce the panel. Bob O'Donnell is president and chief analyst at TECHnalysis Research. Zeus Kerravala is the founder and principal analyst at ZK Research. David Nicholson is a CTO and tech expert. Keith Townsend is CEO and founder of CTO Advisor. And Marc Staimer is the chief dragon slayer at Dragon Slayer Consulting and oftentimes a Wikibon contributor. Guys, welcome to theCUBE. Thanks so much for spending some time here. >> Good to be here. >> Thanks. >> Thanks for having us. >> Okay, before we get into it, I just want to bring up some data from ETR. This is a survey that ETR does every quarter. It's a survey of about 1200 to 1500 CIOs and IT buyers, and I'm showing a subset of the taxonomy here. This is an XY axis, and the vertical axis is something called net score. That's a measure of spending momentum. It's essentially the percentage of customers that are spending more on a particular area than those spending less. You subtract the lesses from the mores and you get a net score. The horizontal axis is pervasiveness in the data set. Sometimes they call it market share. It's not like IDC market share. It's just the percentage of activity in the data set as a percentage of the total. That red 40% line, anything over that is considered highly elevated. And for the past, I don't know, eight to 12 quarters, the big four have been AI and machine learning, containers, RPA and cloud, and cloud of course is very impressive because not only is it elevated on the vertical axis, but you know it's very highly pervasive on the horizontal. So what I've done is highlighted in red that historical hardware sector. The server, the storage, the networking, and even PCs, despite the work from home, are depressed in relative terms. And of course, data center colocation services. Okay, so you're seeing obviously hardware is not... People don't have the spending momentum today that they used to.
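As a quick illustration of the net score arithmetic just described, here is a minimal sketch. The response mix is made up, and it uses the simplified "mores minus lesses" definition given above rather than ETR's full survey methodology.

```python
# Net score, as described above: the share of respondents spending more on a
# vendor minus the share spending less, expressed in percentage points.
# The response mix below is hypothetical.

def net_score(responses):
    """responses: iterable of 'more', 'flat', or 'less' survey answers."""
    responses = list(responses)
    n = len(responses)
    pct_more = sum(r == "more" for r in responses) / n
    pct_less = sum(r == "less" for r in responses) / n
    return round((pct_more - pct_less) * 100, 1)

sample = ["more"] * 55 + ["flat"] * 30 + ["less"] * 15  # 100 hypothetical respondents
print(net_score(sample))  # 55 - 15 = 40.0, right at the "highly elevated" red line
```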
They've got other priorities, et cetera, but I want to start and go kind of around the horn with each of you, what is the number one trend that each of you sees in hardware and why does it matter? Bob O'Donnell, can you please start us off? >> Sure Dave, so look, I mean, hardware is incredibly important and one comment first I'll make on that slide is let's not forget that hardware, even though it may not be growing, the amount of money spent on hardware continues to be very, very high. It's just a little bit more stable. It's not as subject to big jumps as we see certainly in other software areas. But look, the important thing that's happening in hardware is the diversification of the types of chip architectures we're seeing and how and where they're being deployed, right? You refer to this in your opening. We've moved from a world of x86 CPUs from Intel and AMD to things like obviously GPUs, DPUs. We've got VPU for, you know, computer vision processing. We've got AI-dedicated accelerators, we've got all kinds of other network acceleration tools and AI-powered tools. There's an incredible diversification of these chip architectures and that's been happening for a while but now we're seeing them more widely deployed and it's being done that way because workloads are evolving. The kinds of workloads that we're seeing in some of these software areas require different types of compute engines than traditionally we've had. The other thing is (coughs), excuse me, the power requirements based on where geographically that compute happens is also evolving. This whole notion of the edge, which I'm sure we'll get into a little bit more detail later is driven by the fact that where the compute actually sits closer to in theory the edge and where edge devices are, depending on your definition, changes the power requirements. It changes the kind of connectivity that connects the applications to those edge devices and those applications. So all of those things are being impacted by this growing diversity in chip architectures. And that's a very long-term trend that I think we're going to continue to see play out through this decade and well into the 2030s as well. >> Excellent, great, great points. Thank you, Bob. Zeus up next, please. >> Yeah, and I think the other thing when you look at this chart to remember too is, you know, through the pandemic and the work from home period a lot of companies did put their office modernization projects on hold and you heard that echoed, you know, from really all the network manufacturers anyways. They always had projects underway to upgrade networks. They put 'em on hold. Now that people are starting to come back to the office, they're looking at that now. So we might see some change there, but Bob's right. The size of those market are quite a bit different. I think the other big trend here is the hardware companies, at least in the areas that I look at networking are understanding now that it's a combination of hardware and software and silicon that works together that creates that optimum type of performance and experience, right? So some things are best done in silicon. Some like data forwarding and things like that. Historically when you look at the way network devices were built, you did everything in hardware. You configured in hardware, they did all the data for you, and did all the management. And that's been decoupled now. So more and more of the control element has been placed in software. 
A lot of the high-performance things, encryption, and as I mentioned, data forwarding, packet analysis, stuff like that is still done in hardware, but not everything is done in hardware. And so it's a combination of the two. I think, for the people that work with the equipment as well, there's been more of a shift to understanding how to work with software. And this is a mistake I think the industry made for a while: we had everybody convinced they had to become a programmer. It's really more a software power user. Can you pull things out of software? Can you, through API calls and things like that? But I think the big frame here is, David, it's a combination of hardware and software working together that really makes a difference. And you know how much you invest in hardware versus software kind of depends on the performance requirements you have. And I'll talk about that later, but that's really the big shift that's happened here. It's the vendors that figured out how to optimize performance by leveraging the best of all of those. >> Excellent. You guys both brought up some really good themes that we can tap into. Dave Nicholson, please. >> Yeah, so just kind of picking up where Bob started off. Not only are we seeing the rise of a variety of CPU designs, but I think increasingly the connectivity that's involved from a hardware perspective, from a kind of a server or service design perspective, has become increasingly important. I think we'll get a chance to look at this in more depth a little bit later, but when you look at what happens on the motherboard, you know, we're not in so much a CPU-centric world anymore. Various application environments have various demands and you can meet them by using a variety of components. And it's extremely significant when you start looking down at the component level. It's really important that you optimize around those components. So I guess my summary would be, I think we are moving out of the CPU-centric hardware model into more of a connectivity-centric model. We can talk more about that later. >> Yeah, great. And thank you, David. And Keith Townsend, I'm really interested in your perspectives on this. I mean, for years you worked in a data center surrounded by hardware. Now that we have the software-defined data center, please chime in here. >> Well, you know, I'm going to dig deeper into that software-defined data center nature of what's happening with hardware. Hardware is meeting software; infrastructure as code is a thing. What does that code look like? We're still trying to figure that out, but serving up these capabilities that the previous analysts have brought up, how do I ensure that I can get the level of services needed for the applications that I need? Whether they're legacy, traditional data center workloads, AI/ML workloads, workloads at the edge. How do I codify that and consume that as a service? And hardware vendors are figuring this out. HPE, the big push into GreenLake as a service. Dell now with APEX, taking what we need, these bare-bones components, moving it forward with DDR5, 6, CXL, et cetera, and surfacing that as code or as services. This is a very tough problem as we transition from consuming a hardware-based configuration to this infrastructure-as-code paradigm shift. >> Yeah, programmable infrastructure, really attacking that sort of labor discussion that we were having earlier, okay. Last but not least, Marc Staimer, please. >> Thanks, Dave. My peers raised really good points.
I agree with most of them, but I'm going to disagree with the title of this session, which is, does hardware matter? It absolutely matters. You can't run software on the air. You can't run it in an ephemeral cloud, although there's the technical cloud and that's a different issue. The cloud is kind of changed everything. And from a market perspective in the 40 plus years I've been in this business, I've seen this perception that hardware has to go down in price every year. And part of that was driven by Moore's law. And we're coming to, let's say a lag or an end, depending on who you talk to Moore's law. So we're not doubling our transistors every 18 to 24 months in a chip and as a result of that, there's been a higher emphasis on software. From a market perception, there's no penalty. They don't put the same pressure on software from the market to reduce the cost every year that they do on hardware, which kind of bass ackwards when you think about it. Hardware costs are fixed. Software costs tend to be very low. It's kind of a weird thing that we do in the market. And what's changing is we're now starting to treat hardware like software from an OPEX versus CapEx perspective. So yes, hardware matters. And we'll talk about that more in length. >> You know, I want to follow up on that. And I wonder if you guys have a thought on this, Bob O'Donnell, you and I have talked about this a little bit. Marc, you just pointed out that Moore's laws could have waning. Pat Gelsinger recently at their investor meeting said that he promised that Moore's law is alive and well. And the point I made in breaking analysis was okay, great. You know, Pat said, doubling transistors every 18 to 24 months, let's say that Intel can do that. Even though we know it's waning somewhat. Look at the M1 Ultra from Apple (chuckles). In about 15 months increased transistor density on their package by 6X. So to your earlier point, Bob, we have this sort of these alternative processors that are really changing things. And to Dave Nicholson's point, there's a whole lot of supporting components as well. Do you have a comment on that, Bob? >> Yeah, I mean, it's a great point, Dave. And one thing to bear in mind as well, not only are we seeing a diversity of these different chip architectures and different types of components as a number of us have raised the other big point and I think it was Keith that mentioned it. CXL and interconnect on the chip itself is dramatically changing it. And a lot of the more interesting advances that are going to continue to drive Moore's law forward in terms of the way we think about performance, if perhaps not number of transistors per se, is the interconnects that become available. You're seeing the development of chiplets or tiles, people use different names, but the idea is you can have different components being put together eventually in sort of a Lego block style. And what that's also going to allow, not only is that going to give interesting performance possibilities 'cause of the faster interconnect. So you can share, have shared memory between things which for big workloads like AI, huge data sets can make a huge difference in terms of how you talk to memory over a network connection, for example, but not only that you're going to see more diversity in the types of solutions that can be built. So we're going to see even more choices in hardware from a silicon perspective because you'll be able to piece together different elements. 
And oh, by the way, the other benefit of that is we've reached a point in chip architectures where not everything benefits from being smaller. We've been so focused and so obsessed when it comes to Moore's law, to the size of each individual transistor and yes, for certain architecture types, CPUs and GPUs in particular, that's absolutely true, but we've already hit the point where things like RF for 5g and wifi and other wireless technologies and a whole bunch of other things actually don't get any better with a smaller transistor size. They actually get worse. So the beauty of these chiplet architectures is you could actually combine different chip manufacturing sizes. You know you hear about four nanometer and five nanometer along with 14 nanometer on a single chip, each one optimized for its specific application yet together, they can give you the best of all worlds. And so we're just at the very beginning of that era, which I think is going to drive a ton of innovation. Again, gets back to my comment about different types of devices located geographically different places at the edge, in the data center, you know, in a private cloud versus a public cloud. All of those things are going to be impacted and there'll be a lot more options because of this silicon diversity and this interconnect diversity that we're just starting to see. >> Yeah, David. David Nicholson's got a graphic on that. They're going to show later. Before we do that, I want to introduce some data. I actually want to ask Keith to comment on this before we, you know, go on. This next slide is some data from ETR that shows the percent of customers that cited difficulty procuring hardware. And you can see the red is they had significant issues and it's most pronounced in laptops and networking hardware on the far right-hand side, but virtually all categories, firewalls, peripheral servers, storage are having moderately difficult procurement issues. That's the sort of pinkish or significant challenges. So Keith, I mean, what are you seeing with your customers in the hardware supply chains and bottlenecks? And you know we're seeing it with automobiles and appliances but so it goes beyond IT. The semiconductor, you know, challenges. What's been the impact on the buyer community and society and do you have any sense as to when it will subside? >> You know, I was just asked this question yesterday and I'm feeling the pain. People question, kind of a side project within the CTO advisor, we built a hybrid infrastructure, traditional IT data center that we're walking with the traditional customer and modernizing that data center. So it was, you know, kind of a snapshot of time in 2016, 2017, 10 gigabit, ARISTA switches, some older Dell's 730 XD switches, you know, speeds and feeds. And we said we would modern that with the latest Intel stack and connected to the public cloud and then the pandemic hit and we are experiencing a lot of the same challenges. I thought we'd easily migrate from 10 gig networking to 25 gig networking path that customers are going on. The 10 gig network switches that I bought used are now double the price because you can't get legacy 10 gig network switches because all of the manufacturers are focusing on the more profitable 25 gig for capacity, even the 25 gig switches. And we're focused on networking right now. It's hard to procure. We're talking about nine to 12 months or more lead time. So we're seeing customers adjust by adopting cloud. 
But if you remember early on in the pandemic, Microsoft Azure kind of gated customers that didn't have a capacity agreement. So customers are keeping an eye on that. There's a desire to abstract away from the underlying vendor to be able to control or provision your IT services in a way that we do with VMware VP or some other virtualization technology where it doesn't matter who can get me the hardware, they can just get me the hardware because it's critically impacting projects and timelines. >> So that's a great setup Zeus for you with Keith mentioned the earlier the software-defined data center with software-defined networking and cloud. Do you see a day where networking hardware is monetized and it's all about the software, or are we there already? >> No, we're not there already. And I don't see that really happening any time in the near future. I do think it's changed though. And just to be clear, I mean, when you look at that data, this is saying customers have had problems procuring the equipment, right? And there's not a network vendor out there. I've talked to Norman Rice at Extreme, and I've talked to the folks at Cisco and ARISTA about this. They all said they could have had blowout quarters had they had the inventory to ship. So it's not like customers aren't buying this anymore. Right? I do think though, when it comes to networking network has certainly changed some because there's a lot more controls as I mentioned before that you can do in software. And I think the customers need to start thinking about the types of hardware they buy and you know, where they're going to use it and, you know, what its purpose is. Because I've talked to customers that have tried to run software and commodity hardware and where the performance requirements are very high and it's bogged down, right? It just doesn't have the horsepower to run it. And, you know, even when you do that, you have to start thinking of the components you use. The NICs you buy. And I've talked to customers that have simply just gone through the process replacing a NIC card and a commodity box and had some performance problems and, you know, things like that. So if agility is more important than performance, then by all means try running software on commodity hardware. I think that works in some cases. If performance though is more important, that's when you need that kind of turnkey hardware system. And I've actually seen more and more customers reverting back to that model. In fact, when you talk to even some startups I think today about when they come to market, they're delivering things more on appliances because that's what customers want. And so there's this kind of app pivot this pendulum of agility and performance. And if performance absolutely matters, that's when you do need to buy these kind of turnkey, prebuilt hardware systems. If agility matters more, that's when you can go more to software, but the underlying hardware still does matter. So I think, you know, will we ever have a day where you can just run it on whatever hardware? Maybe but I'll long be retired by that point. So I don't care. >> Well, you bring up a good point Zeus. And I remember the early days of cloud, the narrative was, oh, the cloud vendors. They don't use EMC storage, they just run on commodity storage. And then of course, low and behold, you know, they've trot out James Hamilton to talk about all the custom hardware that they were building. And you saw Google and Microsoft follow suit. 
>> Well, (indistinct) been falling for this forever. Right? And I mean, all the way back to the turn of the century, we were calling for the commodity of hardware. And it's never really happened because you can still drive. As long as you can drive innovation into it, customers will always lean towards the innovation cycles 'cause they get more features faster and things. And so the vendors have done a good job of keeping that cycle up but it'll be a long time before. >> Yeah, and that's why you see companies like Pure Storage. A storage company has 69% gross margins. All right. I want to go jump ahead. We're going to bring up the slide four. I want to go back to something that Bob O'Donnell was talking about, the sort of supporting act. The diversity of silicon and we've marched to the cadence of Moore's law for decades. You know, we asked, you know, is Moore's law dead? We say it's moderating. Dave Nicholson. You want to talk about those supporting components. And you shared with us a slide that shift. You call it a shift from a processor-centric world to a connect-centric world. What do you mean by that? And let's bring up slide four and you can talk to that. >> Yeah, yeah. So first, I want to echo this sentiment that the question does hardware matter is sort of the answer is of course it matters. Maybe the real question should be, should you care about it? And the answer to that is it depends who you are. If you're an end user using an application on your mobile device, maybe you don't care how the architecture is put together. You just care that the service is delivered but as you back away from that and you get closer and closer to the source, someone needs to care about the hardware and it should matter. Why? Because essentially what hardware is doing is it's consuming electricity and dollars and the more efficiently you can configure hardware, the more bang you're going to get for your buck. So it's not only a quantitative question in terms of how much can you deliver? But it also ends up being a qualitative change as capabilities allow for things we couldn't do before, because we just didn't have the aggregate horsepower to do it. So this chart actually comes out of some performance tests that were done. So it happens to be Dell servers with Broadcom components. And the point here was to peel back, you know, peel off the top of the server and look at what's in that server, starting with, you know, the PCI interconnect. So PCIE gen three, gen four, moving forward. What are the effects on from an interconnect versus on performance application performance, translating into new orders per minute, processed per dollar, et cetera, et cetera? If you look at the advances in CPU architecture mapped against the advances in interconnect and storage subsystem performance, you can see that CPU architecture is sort of lagging behind in a way. And Bob mentioned this idea of tiling and all of the different ways to get around that. When we do performance testing, we can actually peg CPUs, just running the performance tests without any actual database environments working. So right now we're at this sort of imbalance point where you have to make sure you design things properly to get the most bang per kilowatt hour of power per dollar input. So the key thing here what this is highlighting is just as a very specific example, you take a card that's designed as a gen three PCIE device, and you plug it into a gen four slot. Now the card is the bottleneck. You plug a gen four card into a gen four slot. 
Now the gen four slot is the bottleneck. So we're constantly chasing these bottlenecks. Someone has to be focused on that from an architectural perspective, it's critically important. So there's no question that it matters. But of course, various people in this food chain won't care where it comes from. I guess a good analogy might be, where does our food come from? If I get a steak, it's a pink thing wrapped in plastic, right? Well, there are a lot of inputs that a lot of people have to care about to get that to me. Do I care about all of those things? No. Are they important? They're critically important. >> So, okay. So all I want to get to the, okay. So what does this all mean to customers? And so what I'm hearing from you is to balance a system it's becoming, you know, more complicated. And I kind of been waiting for this day for a long time, because as we all know the bottleneck was always the spinning disc, the last mechanical. So people who wrote software knew that when they were doing it right, the disc had to go and do stuff. And so they were doing other things in the software. And now with all these new interconnects and flash and things like you could do atomic rights. And so that opens up new software possibilities and combine that with alternative processes. But what's the so what on this to the customer and the application impact? Can anybody address that? >> Yeah, let me address that for a moment. I want to leverage some of the things that Bob said, Keith said, Zeus said, and David said, yeah. So I'm a bit of a contrarian in some of this. For example, on the chip side. As the chips get smaller, 14 nanometer, 10 nanometer, five nanometer, soon three nanometer, we talk about more cores, but the biggest problem on the chip is the interconnect from the chip 'cause the wires get smaller. People don't realize in 2004 the latency on those wires in the chips was 80 picoseconds. Today it's 1300 picoseconds. That's on the chip. This is why they're not getting faster. So we maybe getting a little bit slowing down in Moore's law. But even as we kind of conquer that you still have the interconnect problem and the interconnect problem goes beyond the chip. It goes within the system, composable architectures. It goes to the point where Keith made, ultimately you need a hybrid because what we're seeing, what I'm seeing and I'm talking to customers, the biggest issue they have is moving data. Whether it be in a chip, in a system, in a data center, between data centers, moving data is now the biggest gating item in performance. So if you want to move it from, let's say your transactional database to your machine learning, it's the bottleneck, it's moving the data. And so when you look at it from a distributed environment, now you've got to move the compute to the data. The only way to get around these bottlenecks today is to spend less time in trying to move the data and more time in taking the compute, the software, running on hardware closer to the data. Go ahead. >> So is this what you mean when Nicholson was talking about a shift from a processor centric world to a connectivity centric world? You're talking about moving the bits across all the different components, not having the processor you're saying is essentially becoming the bottleneck or the memory, I guess. >> Well, that's one of them and there's a lot of different bottlenecks, but it's the data movement itself. It's moving away from, wait, why do we need to move the data? 
Can we move the compute, the processing closer to the data? Because if we keep them separate and this has been a trend now where people are moving processing away from it. It's like the edge. I think it was Zeus or David. You were talking about the edge earlier. As you look at the edge, who defines the edge, right? Is the edge a closet or is it a sensor? If it's a sensor, how do you do AI at the edge? When you don't have enough power, you don't have enough computable. People were inventing chips to do that. To do all that at the edge, to do AI within the sensor, instead of moving the data to a data center or a cloud to do the processing. Because the lag in latency is always limited by speed of light. How fast can you move the electrons? And all this interconnecting, all the processing, and all the improvement we're seeing in the PCIE bus from three, to four, to five, to CXL, to a higher bandwidth on the network. And that's all great but none of that deals with the speed of light latency. And that's an-- Go ahead. >> You know Marc, no, I just want to just because what you're referring to could be looked at at a macro level, which I think is what you're describing. You can also look at it at a more micro level from a systems design perspective, right? I'm going to be the resident knuckle dragging hardware guy on the panel today. But it's exactly right. You moving compute closer to data includes concepts like peripheral cards that have built in intelligence, right? So again, in some of this testing that I'm referring to, we saw dramatic improvements when you basically took the horsepower instead of using the CPU horsepower for the like IO. Now you have essentially offload engines in the form of storage controllers, rate controllers, of course, for ethernet NICs, smart NICs. And so when you can have these sort of offload engines and we've gone through these waves over time. People think, well, wait a minute, raid controller and NVMe? You know, flash storage devices. Does that make sense? It turns out it does. Why? Because you're actually at a micro level doing exactly what you're referring to. You're bringing compute closer to the data. Now, closer to the data meaning closer to the data storage subsystem. It doesn't solve the macro issue that you're referring to but it is important. Again, going back to this idea of system design optimization, always chasing the bottleneck, plugging the holes. Someone needs to do that in this value chain in order to get the best value for every kilowatt hour of power and every dollar. >> Yeah. >> Well this whole drive performance has created some really interesting architectural designs, right? Like Nickelson, the rise of the DPU right? Brings more processing power into systems that already had a lot of processing power. There's also been some really interesting, you know, kind of innovation in the area of systems architecture too. If you look at the way Nvidia goes to market, their drive kit is a prebuilt piece of hardware, you know, optimized for self-driving cars, right? They partnered with Pure Storage and ARISTA to build that AI-ready infrastructure. I remember when I talked to Charlie Giancarlo, the CEO of Pure about when the three companies rolled that out. He said, "Look, if you're going to do AI, "you need good store. "You need fast storage, fast processor and fast network." And so for customers to be able to put that together themselves was very, very difficult. There's a lot of software that needs tuning as well. 
So the three companies partner together to create a fully integrated turnkey hardware system with a bunch of optimized software that runs on it. And so in that case, in some ways the hardware was leading the software innovation. And so, the variety of different architectures we have today around hardware has really exploded. And I think it, part of the what Bob brought up at the beginning about the different chip design. >> Yeah, Bob talked about that earlier. Bob, I mean, most AI today is modeling, you know, and a lot of that's done in the cloud and it looks from my standpoint anyway that the future is going to be a lot of AI inferencing at the edge. And that's a radically different architecture, Bob, isn't it? >> It is, it's a completely different architecture. And just to follow up on a couple points, excellent conversation guys. Dave talked about system architecture and really this that's what this boils down to, right? But it's looking at architecture at every level. I was talking about the individual different components the new interconnect methods. There's this new thing called UCIE universal connection. I forget what it stands answer for, but it's a mechanism for doing chiplet architectures, but then again, you have to take it up to the system level, 'cause it's all fine and good. If you have this SOC that's tuned and optimized, but it has to talk to the rest of the system. And that's where you see other issues. And you've seen things like CXL and other interconnect standards, you know, and nobody likes to talk about interconnect 'cause it's really wonky and really technical and not that sexy, but at the end of the day it's incredibly important exactly. To the other points that were being raised like mark raised, for example, about getting that compute closer to where the data is and that's where again, a diversity of chip architectures help and exactly to your last comment there Dave, putting that ability in an edge device is really at the cutting edge of what we're seeing on a semiconductor design and the ability to, for example, maybe it's an FPGA, maybe it's a dedicated AI chip. It's another kind of chip architecture that's being created to do that inferencing on the edge. Because again, it's that the cost and the challenges of moving lots of data, whether it be from say a smartphone to a cloud-based application or whether it be from a private network to a cloud or any other kinds of permutations we can think of really matters. And the other thing is we're tackling bigger problems. So architecturally, not even just architecturally within a system, but when we think about DPUs and the sort of the east west data center movement conversation that we hear Nvidia and others talk about, it's about combining multiple sets of these systems to function together more efficiently again with even bigger sets of data. So really is about tackling where the processing is needed, having the interconnect and the ability to get where the data you need to the right place at the right time. And because those needs are diversifying, we're just going to continue to see an explosion of different choices and options, which is going to make hardware even more essential I would argue than it is today. And so I think what we're going to see not only does hardware matter, it's going to matter even more in the future than it does now. >> Great, yeah. Great discussion, guys. I want to bring Keith back into the conversation here. 
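Going back to Dave Nicholson's gen-three-card-in-a-gen-four-slot example a little earlier, the "chasing bottlenecks" point reduces to taking the minimum over every hop in the data path. Here is a minimal sketch; the GB/s figures are rough, order-of-magnitude illustrations (a gen 3 x16 link tops out around 16 GB/s and a gen 4 x16 link around 32 GB/s), not vendor specifications, and the component list is hypothetical.

```python
# Minimal sketch of the system-balance argument above: end-to-end throughput of
# a data path is capped by its slowest hop, so upgrading one component simply
# moves the constraint somewhere else. All bandwidth numbers are illustrative.

def bottleneck(path):
    """Return the (component, GB/s) pair that limits the whole path."""
    component = min(path, key=path.get)
    return component, path[component]

# A gen 3 storage controller plugged into a gen 4 slot: the card caps the path.
gen3_card_in_gen4_slot = {
    "NVMe drives (aggregate)": 25,
    "storage controller (PCIe gen 3 x16 card)": 16,
    "PCIe gen 4 x16 slot": 32,
    "memory subsystem": 40,
}
print(bottleneck(gen3_card_in_gen4_slot))   # the gen 3 card limits the path to ~16 GB/s

# Swap in a gen 4 card and the constraint moves to the next-slowest hop.
gen4_card_in_gen4_slot = dict(gen3_card_in_gen4_slot)
del gen4_card_in_gen4_slot["storage controller (PCIe gen 3 x16 card)"]
gen4_card_in_gen4_slot["storage controller (PCIe gen 4 x16 card)"] = 32
print(bottleneck(gen4_card_in_gen4_slot))   # now the drives themselves are the limit
```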
Keith, if your main expertise in tech is provisioning LUNs, you probably you want to look for another job. So maybe clearly hardware matters, but with software defined everything, do people with hardware expertise matter outside of for instance, component manufacturers or cloud companies? I mean, VMware certainly changed the dynamic in servers. Dell just spun off its most profitable asset and VMware. So it obviously thinks hardware can stand alone. How does an enterprise architect view the shift to software defined hyperscale cloud and how do you see the shifting demand for skills in enterprise IT? >> So I love the question and I'll take a different view of it. If you're a data analyst and your primary value add is that you do ETL transformation, talk to a CDO, a chief data officer over midsize bank a little bit ago. He said 80% of his data scientists' time is done on ETL. Super not value ad. He wants his data scientists to do data science work. Chances are if your only value is that you do LUN provisioning, then you probably don't have a job now. The technologies have gotten much more intelligent. As infrastructure pros, we want to give infrastructure pros the opportunities to shine and I think the software defined nature and the automation that we're seeing vendors undertake, whether it's Dell, HP, Lenovo take your pick that Pure Storage, NetApp that are doing the automation and the ML needed so that these practitioners don't spend 80% of their time doing LUN provisioning and focusing on their true expertise, which is ensuring that data is stored. Data is retrievable, data's protected, et cetera. I think the shift is to focus on that part of the job that you're ensuring no matter where the data's at, because as my data is spread across the enterprise hybrid different types, you know, Dave, you talk about the super cloud a lot. If my data is in the super cloud, protecting that data and securing that data becomes much more complicated when than when it was me just procuring or provisioning LUNs. So when you say, where should the shift be, or look be, you know, focusing on the real value, which is making sure that customers can access data, can recover data, can get data at performance levels that they need within the price point. They need to get at those datasets and where they need it. We talked a lot about where they need out. One last point about this interconnecting. I have this vision and I think we all do of composable infrastructure. This idea that scaled out does not solve every problem. The cloud can give me infinite scale out. Sometimes I just need a single OS with 64 terabytes of RAM and 204 GPUs or GPU instances that single OS does not exist today. And the opportunity is to create composable infrastructure so that we solve a lot of these problems that just simply don't scale out. >> You know, wow. So many interesting points there. I had just interviewed Zhamak Dehghani, who's the founder of Data Mesh last week. And she made a really interesting point. She said, "Think about, we have separate stacks. "We have an application stack and we have "a data pipeline stack and the transaction systems, "the transaction database, we extract data from that," to your point, "We ETL it in, you know, it takes forever. "And then we have this separate sort of data stack." If we're going to inject more intelligence and data and AI into applications, those two stacks, her contention is they have to come together. 
And when you think about, you know, super cloud bringing compute to data, that was what Haduck was supposed to be. It ended up all sort of going into a central location, but it's almost a rhetorical question. I mean, it seems that that necessitates new thinking around hardware architectures as it kind of everything's the edge. And the other point is to your point, Keith, it's really hard to secure that. So when you can think about offloads, right, you've heard the stats, you know, Nvidia talks about it. Broadcom talks about it that, you know, that 30%, 25 to 30% of the CPU cycles are wasted on doing things like storage offloads, or networking or security. It seems like maybe Zeus you have a comment on this. It seems like new architectures need to come other to support, you know, all of that stuff that Keith and I just dispute. >> Yeah, and by the way, I do want to Keith, the question you just asked. Keith, it's the point I made at the beginning too about engineers do need to be more software-centric, right? They do need to have better software skills. In fact, I remember talking to Cisco about this last year when they surveyed their engineer base, only about a third of 'em had ever made an API call, which you know that that kind of shows this big skillset change, you know, that has to come. But on the point of architectures, I think the big change here is edge because it brings in distributed compute models. Historically, when you think about compute, even with multi-cloud, we never really had multi-cloud. We'd use multiple centralized clouds, but compute was always centralized, right? It was in a branch office, in a data center, in a cloud. With edge what we creates is the rise of distributed computing where we'll have an application that actually accesses different resources and at different edge locations. And I think Marc, you were talking about this, like the edge could be in your IoT device. It could be your campus edge. It could be cellular edge, it could be your car, right? And so we need to start thinkin' about how our applications interact with all those different parts of that edge ecosystem, you know, to create a single experience. The consumer apps, a lot of consumer apps largely works that way. If you think of like app like Uber, right? It pulls in information from all kinds of different edge application, edge services. And, you know, it creates pretty cool experience. We're just starting to get to that point in the business world now. There's a lot of security implications and things like that, but I do think it drives more architectural decisions to be made about how I deploy what data where and where I do my processing, where I do my AI and things like that. It actually makes the world more complicated. In some ways we can do so much more with it, but I think it does drive us more towards turnkey systems, at least initially in order to, you know, ensure performance and security. >> Right. Marc, I wanted to go to you. You had indicated to me that you wanted to chat about this a little bit. You've written quite a bit about the integration of hardware and software. You know, we've watched Oracle's move from, you know, buying Sun and then basically using that in a highly differentiated approach. Engineered systems. What's your take on all that? I know you also have some thoughts on the shift from CapEx to OPEX chime in on that. >> Sure. When you look at it, there are advantages to having one vendor who has the software and hardware. 
They can synergistically make them work together that you can't do in a commodity basis. If you own the software and somebody else has the hardware, I'll give you an example would be Oracle. As you talked about with their exit data platform, they literally are leveraging microcode in the Intel chips. And now in AMD chips and all the way down to Optane, they make basically AMD database servers work with Optane memory PMM in their storage systems, not MVME, SSD PMM. I'm talking about the cards itself. So there are advantages you can take advantage of if you own the stack, as you were putting out earlier, Dave, of both the software and the hardware. Okay, that's great. But on the other side of that, that tends to give you better performance, but it tends to cost a little more. On the commodity side it costs less but you get less performance. What Zeus had said earlier, it depends where you're running your application. How much performance do you need? What kind of performance do you need? One of the things about moving to the edge and I'll get to the OPEX CapEx in a second. One of the issues about moving to the edge is what kind of processing do you need? If you're running in a CCTV camera on top of a traffic light, how much power do you have? How much cooling do you have that you can run this? And more importantly, do you have to take the data you're getting and move it somewhere else and get processed and the information is sent back? I mean, there are companies out there like Brain Chip that have developed AI chips that can run on the sensor without a CPU. Without any additional memory. So, I mean, there's innovation going on to deal with this question of data movement. There's companies out there like Tachyon that are combining GPUs, CPUs, and DPUs in a single chip. Think of it as super composable architecture. They're looking at being able to do more in less. On the OPEX and CapEx issue. >> Hold that thought, hold that thought on the OPEX CapEx, 'cause we're running out of time and maybe you can wrap on that. I just wanted to pick up on something you said about the integrated hardware software. I mean, other than the fact that, you know, Michael Dell unlocked whatever $40 billion for himself and Silverlake, I was always a fan of a spin in with VMware basically become the Oracle of hardware. Now I know it would've been a nightmare for the ecosystem and culturally, they probably would've had a VMware brain drain, but what does anybody have any thoughts on that as a sort of a thought exercise? I was always a fan of that on paper. >> I got to eat a little crow. I did not like the Dale VMware acquisition for the industry in general. And I think it hurt the industry in general, HPE, Cisco walked away a little bit from that VMware relationship. But when I talked to customers, they loved it. You know, I got to be honest. They absolutely loved the integration. The VxRail, VxRack solution exploded. Nutanix became kind of a afterthought when it came to competing. So that spin in, when we talk about the ability to innovate and the ability to create solutions that you just simply can't create because you don't have the full stack. Dell was well positioned to do that with a potential span in of VMware. >> Yeah, we're going to be-- Go ahead please. >> Yeah, in fact, I think you're right, Keith, it was terrible for the industry. Great for Dell. 
And I remember talking to Chad Sakac when he was running, you know, VCE, which became Rack and Rail, their ability to stay in lockstep with what VMware was doing. What was the number one workload running on hyperconverged forever? It was VMware. So their ability to remain in lockstep with VMware gave them a huge competitive advantage. And Dell came out of nowhere in, you know, the hyper-converged market and just started taking share because of that relationship. So, you know, this sort I guess it's, you know, from a Dell perspective I thought it gave them a pretty big advantage that they didn't really exploit across their other properties, right? Networking and service and things like they could have given the dominance that VMware had. From an industry perspective though, I do think it's better to have them be coupled. So. >> I agree. I mean, they could. I think they could have dominated in super cloud and maybe they would become the next Oracle where everybody hates 'em, but they kick ass. But guys. We got to wrap up here. And so what I'm going to ask you is I'm going to go and reverse the order this time, you know, big takeaways from this conversation today, which guys by the way, I can't thank you enough phenomenal insights, but big takeaways, any final thoughts, any research that you're working on that you want highlight or you know, what you look for in the future? Try to keep it brief. We'll go in reverse order. Maybe Marc, you could start us off please. >> Sure, on the research front, I'm working on a total cost of ownership of an integrated database analytics machine learning versus separate services. On the other aspect that I would wanted to chat about real quickly, OPEX versus CapEx, the cloud changed the market perception of hardware in the sense that you can use hardware or buy hardware like you do software. As you use it, pay for what you use in arrears. The good thing about that is you're only paying for what you use, period. You're not for what you don't use. I mean, it's compute time, everything else. The bad side about that is you have no predictability in your bill. It's elastic, but every user I've talked to says every month it's different. And from a budgeting perspective, it's very hard to set up your budget year to year and it's causing a lot of nightmares. So it's just something to be aware of. From a CapEx perspective, you have no more CapEx if you're using that kind of base system but you lose a certain amount of control as well. So ultimately that's some of the issues. But my biggest point, my biggest takeaway from this is the biggest issue right now that everybody I talk to in some shape or form it comes down to data movement whether it be ETLs that you talked about Keith or other aspects moving it between hybrid locations, moving it within a system, moving it within a chip. All those are key issues. >> Great, thank you. Okay, CTO advisor, give us your final thoughts. >> All right. Really, really great commentary. Again, I'm going to point back to us taking the walk that our customers are taking, which is trying to do this conversion of all primary data center to a hybrid of which I have this hard earned philosophy that enterprise IT is additive. When we add a service, we rarely subtract a service. So the landscape and service area what we support has to grow. So our research focuses on taking that walk. 
We are taking a monolithic application, decomposing that to containers, and putting that in a public cloud, and connecting that back private data center and telling that story and walking that walk with our customers. This has been a super enlightening panel. >> Yeah, thank you. Real, real different world coming. David Nicholson, please. >> You know, it really hearkens back to the beginning of the conversation. You talked about momentum in the direction of cloud. I'm sort of spending my time under the hood, getting grease under my fingernails, focusing on where still the lions share of spend will be in coming years, which is OnPrem. And then of course, obviously data center infrastructure for cloud but really diving under the covers and helping folks understand the ramifications of movement between generations of CPU architecture. I know we all know Sapphire Rapids pushed into the future. When's the next Intel release coming? Who knows? We think, you know, in 2023. There have been a lot of people standing by from a practitioner's standpoint asking, well, what do I do between now and then? Does it make sense to upgrade bits and pieces of hardware or go from a last generation to a current generation when we know the next generation is coming? And so I've been very, very focused on looking at how these connectivity components like rate controllers and NICs. I know it's not as sexy as talking about cloud but just how these opponents completely change the game and actually can justify movement from say a 14th-generation architecture to a 15th-generation architecture today, even though gen 16 is coming, let's say 12 months from now. So that's where I am. Keep my phone number in the Rolodex. I literally reference Rolodex intentionally because like I said, I'm in there under the hood and it's not as sexy. But yeah, so that's what I'm focused on Dave. >> Well, you know, to paraphrase it, maybe derivative paraphrase of, you know, Larry Ellison's rant on what is cloud? It's operating systems and databases, et cetera. Rate controllers and NICs live inside of clouds. All right. You know, one of the reasons I love working with you guys is 'cause have such a wide observation space and Zeus Kerravala you, of all people, you know you have your fingers in a lot of pies. So give us your final thoughts. >> Yeah, I'm not a propeller heady as my chip counterparts here. (all laugh) So, you know, I look at the world a little differently and a lot of my research I'm doing now is the impact that distributed computing has on customer employee experiences, right? You talk to every business and how the experiences they deliver to their customers is really differentiating how they go to market. And so they're looking at these different ways of feeding up data and analytics and things like that in different places. And I think this is going to have a really profound impact on enterprise IT architecture. We're putting more data, more compute in more places all the way down to like little micro edges and retailers and things like that. And so we need the variety. Historically, if you think back to when I was in IT you know, pre-Y2K, we didn't have a lot of choice in things, right? We had a server that was rack mount or standup, right? And there wasn't a whole lot of, you know, differences in choice. But today we can deploy, you know, these really high-performance compute systems on little blades inside servers or inside, you know, autonomous vehicles and things. I think the world from here gets... 
You know, just the choice of what we have and the way hardware and software works together is really going to, I think, change the world the way we do things. We're already seeing that, like I said, in the consumer world, right? There's so many things you can do from, you know, smart home perspective, you know, natural language processing, stuff like that. And it's starting to hit businesses now. So just wait and watch the next five years. >> Yeah, totally. The computing power at the edge is just going to be mind blowing. >> It's unbelievable what you can do at the edge. >> Yeah, yeah. Hey Z, I just want to say that we know you're not a propeller head and I for one would like to thank you for having your master's thesis hanging on the wall behind you 'cause we know that you studied basket weaving. >> I was actually a physics math major, so. >> Good man. Another math major. All right, Bob O'Donnell, you're going to bring us home. I mean, we've seen the importance of semiconductors and silicon in our everyday lives, but your last thoughts please. >> Sure and just to clarify, by the way I was a great books major and this was actually for my final paper. And so I was like philosophy and all that kind of stuff and literature but I still somehow got into tech. Look, it's been a great conversation and I want to pick up a little bit on a comment Zeus made, which is this it's the combination of the hardware and the software and coming together and the manner with which that needs to happen, I think is critically important. And the other thing is because of the diversity of the chip architectures and all those different pieces and elements, it's going to be how software tools evolve to adapt to that new world. So I look at things like what Intel's trying to do with oneAPI. You know, what Nvidia has done with CUDA. What other platform companies are trying to create tools that allow them to leverage the hardware, but also embrace the variety of hardware that is there. And so as those software development environments and software development tools evolve to take advantage of these new capabilities, that's going to open up a lot of interesting opportunities that can leverage all these new chip architectures. That can leverage all these new interconnects. That can leverage all these new system architectures and figure out ways to make that all happen, I think is going to be critically important. And then finally, I'll mention the research I'm actually currently working on is on private 5g and how companies are thinking about deploying private 5g and the potential for edge applications for that. So I'm doing a survey of several hundred us companies as we speak and really looking forward to getting that done in the next couple of weeks. >> Yeah, look forward to that. Guys, again, thank you so much. Outstanding conversation. Anybody going to be at Dell tech world in a couple of weeks? Bob's going to be there. Dave Nicholson. Well drinks on me and guys I really can't thank you enough for the insights and your participation today. Really appreciate it. Okay, and thank you for watching this special power panel episode of theCube Insights powered by ETR. Remember we publish each week on Siliconangle.com and wikibon.com. All these episodes they're available as podcasts. DM me or any of these guys. I'm at DVellante. You can email me at David.Vellante@siliconangle.com. Check out etr.ai for all the data. This is Dave Vellante. We'll see you next time. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Marc Staimer | PERSON | 0.99+ |
Keith Townson | PERSON | 0.99+ |
David Nicholson | PERSON | 0.99+ |
Dave Nicholson | PERSON | 0.99+ |
Keith | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Marc | PERSON | 0.99+ |
Bob O'Donnell | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Bob | PERSON | 0.99+ |
HP | ORGANIZATION | 0.99+ |
Lenovo | ORGANIZATION | 0.99+ |
2004 | DATE | 0.99+ |
Charlie Giancarlo | PERSON | 0.99+ |
ZK Research | ORGANIZATION | 0.99+ |
Pat | PERSON | 0.99+ |
10 nanometer | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
Keith Townsend | PERSON | 0.99+ |
10 gig | QUANTITY | 0.99+ |
25 | QUANTITY | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
80% | QUANTITY | 0.99+ |
ARISTA | ORGANIZATION | 0.99+ |
64 terabytes | QUANTITY | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
Zeus Kerravala | PERSON | 0.99+ |
Zhamak Dehghani | PERSON | 0.99+ |
Larry Ellison | PERSON | 0.99+ |
25 gig | QUANTITY | 0.99+ |
14 nanometer | QUANTITY | 0.99+ |
2017 | DATE | 0.99+ |
2016 | DATE | 0.99+ |
Norman Rice | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Michael Dell | PERSON | 0.99+ |
69% | QUANTITY | 0.99+ |
30% | QUANTITY | 0.99+ |
OPEX | ORGANIZATION | 0.99+ |
Pure Storage | ORGANIZATION | 0.99+ |
$40 billion | QUANTITY | 0.99+ |
Dragon Slayer Consulting | ORGANIZATION | 0.99+ |
Breaking Analysis: Snowflake’s Wild Ride
From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. Snowflake. They loved the stock at 400 and hated it at 165. That's the nature of the business, I guess, especially in this crazy cycle over the last two years of lockdowns, free money, exploding demand, and now rising inflation and rates. But with the Fed providing some clarity on its actions, the time has come to really dig into the fundamentals of companies, and there's no tech company that's more fun to analyze than Snowflake. Hello, and welcome to this week's Wikibon CUBE Insights, Powered by ETR. In this Breaking Analysis, we look at the action of Snowflake stock since its IPO, why it's behaved the way it has, how some sharp traders are looking at the stock, and most importantly, what customer demand looks like. The stock has really provided some great theater since its IPO. I know people who got in at 120 before the open, and I know lots of people who kind of held their noses and bought the stock on day one at over 300, a day when it closed at around 240, that first day of trading. Snowflake hit 164 this week, its all-time low as a public company. As my college roommate Chip Symington, a longtime trader, told me, when great companies trade at all-time lows because of panic, it's worth taking a shot. He did. Now, of course, the stock could go lower. There's geopolitical risk, and the stock, with a 64 billion market cap, is expensive for a company that's forecast to do around 2 billion in product revenue this year. And remember, I don't recommend stocks. You shouldn't take my advice and my comments; you've got to do your own research. But I have lots of data and I have opinions, and I'm willing to share that with you. Stocks like Snowflake, CrowdStrike, Zscaler, Okta, and companies like this are highly volatile. When markets are moving up, they're going to move up faster than the mean. When they're declining, they're going to drop more severely, and that's clearly what's happened to Snowflake. So with a company like this, when you see panic selling, you'll also see panic buying sometimes, like we've seen with this name. It went from 220 to 320 in a very short period earlier. Snowflake put in a short-term bottom this week, and many traders feel the issue was oversold, so they bought. Okay, but not everyone felt this way, and you can see this in the headlines. "Snowflake hits low, but cloud stocks rise," and we're going to come back to that. "Is it a buy? Don't buy the dip. Buy the dip." And, "What Snowflake investors can learn from Microsoft." And from TheStreet.com, "SNOW stock is sliding on the back of ill-conceived guidance." To that I would say that conservative guidance these days is anything but ill-conceived. Now, let's unpack all this a bit, and to do so I reached out to Ivana Delevska, who has been on this program before. She's with Spear Invest, a female-led ETF that goes deep into understanding supply chains. She came on Breaking Analysis and laid out her thesis to buy the dip on Snowflake. This was a while ago. She told me currently Spear still likes Snowflake and has doubled its position. Let me share her analysis. She called out two drivers for the downside: interest rates, you know, rising of course, and Snowflake's guidance, which my own publication called weak in that previous chart that I just showed you. So let's dig into that a bit. Snowflake guided for product revenue growth of 67% year on year, which was below buy-side expectations but, I believe, within sell-side consensus. Regardless, the guide was nuanced and driven by
Snowflake's decision to pass along price efficiencies to customers from optimizing processor price performance, predominantly from AWS's Graviton2. This is going to hit Snowflake's revenue by a net of about a hundred million dollars this year, but the timing's not precise, because it's going to hit 165 million, but they're going to make up 65 million in increased demand. Frank Slootman on the earnings call made this very clear. He said, quote, this is not philanthropy, this stimulates demand. Classic Slootman. The point is, Spear and other bulls believe that this will result in a gain for Snowflake over the medium term, and we would agree. Price goes down, ROI gets better, you throw more projects at Snowflake, customers are going to buy more Snowflake, and when that happens, it gives the company an advantage as they continue to build their moat. It's a longer-term bet on cloud and data, which are good bets. Now, some of this could also be competitive pressures. There have been, you know, studies that are out there from competitors attacking Snowflake's pricing and price performance, and they make comparisons. Oracle's been pretty aggressive, as have others. But so far the company's customers continue to consume at a very fast rate. Now, on this front, what can we learn from Microsoft that applies to Snowflake? That's the headline here from Benzinga. The article quoted a wealth manager named Josh Brown talking about what happened to Microsoft after the dot-com bubble burst, and how they quadrupled earnings over the next decade while the stock went sideways, suggesting the same thing could happen to Snowflake. Now, I'd like to make a couple of comments here. First, at the time Microsoft was a 23 billion dollar company, and it had a monopoly and was already highly profitable. Steve Ballmer became the CEO of Microsoft right after the dot-com bubble burst, and he hugged onto Windows for dear life and lived off of Microsoft's PC software monopoly. Microsoft became an extremely profitable and remarkably uninteresting caretaker of a PC and on-prem software estate during Ballmer's tenure. So I just don't see the comparison as relevant. Snowflake, you know, they may struggle for other reasons, but that one didn't really resonate with me. What's interesting is this chart. It poses the question, do cloud and data markets behave differently? It's a chart that shows AWS growth rates over time and superimposes the revenue in red. In Q1 2018, AWS generated 5.4 billion dollars in revenue, and that was growing at the time at nearly a 50% rate. Now, that rate, as you can see, decelerated quite significantly as AWS grew to a 50 billion dollar run rate company. That's down below, where you see it bottoms. Now, it makes sense, right? Law of large numbers. You can't keep growing that fast when you get that big. Well, oops, look what happened in 2021. AWS's growth rate bottoms in the high 20s and then rockets back up to 40% this past quarter as AWS surpasses a 70 billion dollar run rate. So you have to ask, is cloud different? Is data different? Is cloud data different, or data cloud different, to put it in the Snowflake parlance? Can cloud, because of its consumption model and the speed of innovation and ecosystem depth and breadth, enable Snowflake to exhibit lots of variability in its growth rates, versus, say, a progressive and somewhat linear decline as the company grows revenue, which is what you would expect historically? Part of the answer relates to its market size. Here's a chart we've shared before with some additions. It's our version of Snowflake's total
available market, their TAM, with Snowflake's version, that blue data cloud thing, superimposed on the right. It shows the various layers of market opportunity that we came up with, that Snowflake and others, we think, have in front of them, emerging from the disruption of legacy data lakes and data warehouses to what Snowflake refers to as its data cloud. We think about the data mesh concept and decentralized data architectures, with domain ownership and data product and service builders, as consistent with Snowflake's data cloud vision, where Snowflake data stores are nodes. They're just simply discoverable nodes on the mesh. You could have, you know, Databricks data lakes, you know, S3 buckets on that mesh, it doesn't matter. They can be discovered, they can be shared, and of course they're governed in a federated model. Now, in Snowflake's model it's all inside the Snowflake data cloud. That's fine. Then you go to the out years, and it gets a little fuzzy, you know, from edge locations and AI inference. It becomes massive, and decision making occurs in real time, where machines and machine data take over the world instead of, you know, clicks and keystrokes. Sounds out there, but it's real. And how exactly Snowflake plays there at this point is unclear, but one thing's for sure, there'll be a lot of data, and it's going to find its way into Snowflake. You know, Snowflake's not a real-time engine, it's an analytical system. It's moving into the realm of data science, and, you know, we've talked about the need for a semantic layer between those two worlds of analytics and data science. But expanding the scope further out, we think that Snowflake has a big role to play in this future, and the future is massive. Okay, check, you've got the big TAM. Now, as someone that looks at companies through a fundamentals prism, you've got to look obviously at the markets and the TAM, which we just did, but you also want to understand customers, and it's not hard to find Snowflake customers. Capital One, Disney, Micron, Allianz, Sainsbury's, Sonos, and hundreds of other companies. I've talked to Snowflake customers who have also been customers of Oracle, Teradata, IBM Netezza, Vertica, serious database practitioners, and they tell me it's consistent. Snowflake is different, they say. It's simpler, it's more agile, it's less complicated to secure, and it's disruptive to their traditional ways of doing data management. Now, of course there are naysayers. I've spoken to a number of analysts that feel Snowflake is deficient in areas like workload management and, of course, complex joins, and that it's too specialized in a world where we're seeing the convergence of analytics and transactional workloads. Our own David Floyer believes that what Oracle is doing with MySQL HeatWave is radically disruptive to many of the database architectures and blows away anything out there, and he believes that Snowflake and the likes of AWS are going to have to respond. Now, the other criticism here is that Snowflake is not architected for real-time inference, where a lot of that edge activity is going to happen. It's a multi-hundred billion dollar market. And so, look, Snowflake has a ton of competition, that's the other thing. All the major cloud players have very capable and competitive database platforms, even though they all partner with Snowflake, except Oracle, of course. And companies like Databricks have garnered tons of VC money, and other VC-funded companies have raised billions of dollars to do this kind of elastic, consumption-based, separate-compute-from-storage stuff. So you have to always keep
an open mind and be aware of potential blind spots for these companies. But to the criticisms I would say, look, Snowflake got there first, and watch their ecosystem, it's a real key to its continued success. Snowflake's not going to go it alone. It's going to use its ecosystem partners to expand its reach, accelerate the network effects, and fill those gaps, and it will acquire. Its stock is valuable, so it should be doing that, just as it did with Streamlit, a zero-revenue company that it bought for 800 million dollars in stock and cash just recently. Streamlit is an open source Python library that gets Snowflake further, deeper into that data science space, that Databricks space. And look, watch what Snowflake is doing with Snowpark. It's an API library for processing data and building data-intensive applications. We've talked about Snowflake essentially becoming the super cloud and building this sort of PaaS-like layer across clouds. Rather than trying to do it all themselves, it seems Snowflake is really staring at the API economy and building its ecosystem to plug those holes. So let's come back to the customers. Here's a chart that shows Snowflake's customer spending momentum, or Net Score, on the top line, that's the vertical axis, and pervasiveness in the data, or market share, on that bottom brown line. Snowflake has unprecedented Net Scores and has held them up for many, many quarters, as you can see here, going back, you know, a couple of years, all leading to its expanded market penetration, measured as pervasiveness, or so-called market share, within the ETR survey. It's not like IDC market share, it's pervasiveness in the data set. Now, I'll say this. I don't see how this is sustainable. I've been waiting for this to moderate. I wouldn't be surprised to see Snowflake come back to earth a little bit. I think they'll clearly still be highly elevated based on the data that I've seen, but I could see, in one or more of the ETR surveys this year, this starting to moderate as they get big. It just has to happen. But I would again expect them to have a high spending velocity score. I think we're going to see Snowflake, you know, maybe porpoise a bit here, meaning, you know, it moderates, it comes back up. It's just really hard to sustain this pace of momentum and hire, train, retain, and scale without absorbing some friction and some headwinds that are going to slow you down. But back to the AWS growth example. It's entirely possible that we could see a similar dynamic with Snowflake that you saw with AWS, and you kind of see it with Salesforce and ServiceNow, very successful, large, entrenched companies. It's very possible that Snowflake could pull back, moderate, and then accelerate that growth, even though people are concerned about the moderated guidance of 80 percent growth. Yeah, that's the new definition of tepid, I guess. Look, I like to look at some other metrics. The one that really caught my attention was the remaining performance obligations this last quarter, RPO. Snowflake's is up to something like 2.6 billion, and that is a forward-looking indicator of future revenues. So I'd like to see that growing, and it's growing at a fast pace. So you're going to see some ups and downs with Snowflake, I have no doubt, but I think things are still looking pretty solid for the company. Growth companies like Snowflake and Okta and Zscaler, those other ones that I mentioned earlier, have probably been repriced and refactored by investors. While there's always
going to be market and, of course, geopolitical risk, especially in these times, fundamentals matter. You've got a huge market, you're well capitalized, you've got a leadership position, great products, and strong customer adoption. You also have a great team. Team is something else that we look for, we haven't touched on that, but I'll leave you with this thought. Everyone knows about Frank Slootman and Mike Scarpelli and what they've accomplished in their years of working together. That's why the stock, you know, at IPO was so overvalued. They had seen these guys do it before. Slootman just documented all this in his book, Amp It Up, which gives great insight into the history of that pair and the teams that they've built, the companies that they've built, how he thinks about building companies and markets, how, you know, total available markets are super important, and the whole philosophy and culture that he's building, and his management style. But you've got to wonder, right, how long is this guy going to keep going? What keeps him motivated? You know, I asked him that one time. Here's what he said. "Why, I mean, are you in this for the sport? What's the story here?" "Actually, that's not a bad way of characterizing it. I think I am in it, you know, for the sport. You know, the only way to become the best version of yourself is to be under the gun, you know, every single day, and that's certainly what we are. It sort of has its own rewards. Building great products, building great companies, you know, regardless of what the spoils may be, it has its own rewards. And it's hard for people like us to get off the field and, you know, hang it up. So here we are." So there you have it. He's in it for the sport. How great is that? He loves building companies, and in my opinion, that's how Frank Slootman thinks about success. It's not about money. Money's the byproduct of success. As Earl Nightingale would say, success is the progressive realization of a worthy ideal. I love that quote. Building great companies, building products that change the world, changing people's lives with data and insights, creating jobs, creating life-altering wealth opportunities, not for himself, but for thousands of employees and partners. I'd say that's a pretty worthy ideal, and I hope Frank Slootman sticks with it for a while. Okay, that's it for today. Thanks to Stephanie Chan for the background research she does for Breaking Analysis, Alex Myerson on production, Kristen Martin and Cheryl Knight on social, with Rob Hof on SiliconANGLE, and thanks to Ivana Delevska of Spear Invest and my friend Chip Symington for the angles from the money side of things. Remember, all these episodes are available as podcasts. Just search Breaking Analysis podcast. I publish weekly on wikibon.com and siliconangle.com, and don't forget to check out etr.plus for all the survey data. You can reach me @dvellante or david.vellante@siliconangle.com, and this is Dave Vellante for CUBE Insights, powered by ETR. Be safe, stay well, and we'll see you next time. [Music]
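As a back-of-the-envelope illustration of the Graviton2 pricing pass-through discussed above, here is a minimal sketch. The 165 million and 65 million figures are the ones cited in the episode; the "later year" recapture multiplier is purely our own illustrative assumption, not Snowflake guidance or Spear's model.

```python
# Back-of-envelope sketch of the pricing pass-through math cited above.
# Figures are the ones mentioned in the episode; the later-year recapture
# multiplier is an illustrative assumption only.

PRICE_GIVEBACK_M = 165.0      # revenue passed back to customers this year ($M)
DEMAND_RECAPTURE_M = 65.0     # demand stimulated by better price/performance ($M)

def net_revenue_impact(giveback_m: float, recapture_m: float) -> float:
    """Net change to product revenue in $M (negative means a reduction)."""
    return recapture_m - giveback_m

print(f"net impact this year: {net_revenue_impact(PRICE_GIVEBACK_M, DEMAND_RECAPTURE_M):+.0f}M")
# -> -100M, the roughly $100 million net figure referenced above

# The bull case is that recapture grows as better ROI pulls in more projects;
# a hypothetical recapture of 1.2x the giveback flips the sign of the impact.
print(f"hypothetical later year: {net_revenue_impact(PRICE_GIVEBACK_M, PRICE_GIVEBACK_M * 1.2):+.0f}M")
```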
Breaking Analysis: Pat Gelsinger has the Vision Intel Just Needs Time, Cash & a Miracle
>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is "Breaking Analysis" with Dave Vellante. >> If it weren't for Pat Gelsinger, Intel's future would be a disaster. Even with his clear vision, fantastic leadership, deep technical and business acumen, and amazing positivity, the company's future is in serious jeopardy. It's the same story we've been telling for years. Volume is king in the semiconductor industry, and Intel no longer is the volume leader. Despite Intel's efforts to change that dynamic with several recent moves, including making another go at its Foundry business, the company is years away from reversing its lagging position relative to today's leading foundries and design shops. Intel's best chance to survive as a leader, in our view, will come from a combination of a massive market, continued supply constraints, government money, and luck, perhaps in the form of a deal with Apple in the midterm. Hello, and welcome to this week's "Wikibon CUBE Insights, Powered by ETR." In this "Breaking Analysis," we'll update you on our latest assessment of Intel's competitive position and unpack nuggets from the company's February investor conference. Let's go back in history a bit and review what we said in the early 2010s. If you've followed this program, you know that our David Floyer sounded the alarm for Intel as far back as 2012, the year after PC volumes peaked. Yes, they've ticked up a bit in the past couple of years, but they pale in comparison to the volumes that the ARM ecosystem is producing. The world has changed from people entering data into machines, and now it's machines that are driving all the data. Data volumes in Web 1.0 were largely driven by keystrokes and clicks. Web 3.0 is going to be driven by machines entering data through sensors, cameras, and other edge devices, which are going to drive enormous data volumes and processing power to boot. Every windmill, every factory device, every consumer device, every car, will require processing at the edge to run AI, facial recognition, inference, and data intensive workloads. And compared to this space, the volume of PCs and even the iPhone itself is about to be dwarfed by an explosion of devices. Intel is not well positioned for this new world in our view. Intel has to catch up on process, Intel has to catch up on architecture, Intel has to play catch up on security, Intel has to play catch up on volume. The ARM ecosystem has cumulatively shipped 200 billion chips to date, and is shipping 10x Intel's wafer volume. Intel has to have an architecture that accommodates much more diversity. And while it's working on that, it's years behind. All that said, Pat Gelsinger is doing everything he can and more to close the gap. Here's a partial list of the moves that Pat is making. A year ago, he announced IDM 2.0, a new integrated device manufacturing strategy that opened up its world to partners for manufacturing and other innovation. Intel has restructured, reorganized, and many executives have boomeranged back in, many previous Intel execs. They understand the business and have a deep passion to help the company regain its prominence. As part of the IDM 2.0 announcement, Intel created, recreated if you will, a Foundry division and recently acquired Tower Semiconductor, an Israeli firm that is going to help it in that mission. It's opening up partnerships with alternative processor manufacturers and designers.
And the company has announced major investments in CAPEX to build out Foundry capacity. Intel is going to spin out Mobileye, a company it had acquired for 15 billion in 2017. Or does it try and get a $50 billion valuation? Mobileye is about $1.4 billion in revenue, and is likely going to be worth more around 25 to 30 billion, we'll see. But Intel is going to maybe get $10 billion in cash from that spin out, that IPO, and it can use that to fund more fabs and more equipment. Intel is leveraging its 19,000 software engineers to move up the stack and sell more subscriptions and high margin software. He's got to sell what he's got. And finally, Pat is playing politics beautifully. Announcing, for example, fab investments in Ohio, which he dubbed Silicon Heartland. Brilliant! Again, there's no doubt that Pat is moving fast and doing the right things. Here's Pat at his investor event in a T-shirt that says, "torrid," bringing back the torrid pace and discipline that Intel is used to. And on the right is Pat at the State of the Union address, looking sharp in shirt and tie and suit. And he has said, "a bet on Intel is a hedge against geopolitical instability in the world." That's just so good. To that statement, he showed this chart at his investor meeting. Basically it shows that whereas semiconductor manufacturing capacity has gone from 80% of the world's volume to 20%, he wants to get it back to 50% by 2030, and reset supply chains in a market that has become as important as oil. Again, just brilliant positioning and pushing all the right hot buttons. And here's a slide underscoring that commitment, showing manufacturing facilities around the world with new capacity coming online in the next few years in Ohio and the EU. Mentioning the CHIPS Act in his presentation in the US and Europe as part of a public private partnership, no doubt, he's going to need all the help he can get. Now, we couldn't resist this. The chart on the left here shows wafer starts and transistor capacity growth for Intel over time, which speaks to its volume aspirations. But we couldn't help notice that the shape of the curve is somewhat misleading, because it shows a two-year (mumbles) and then widens the aperture to three years to make the curve look steeper. Fun with numbers. Okay, maybe a little nitpick, but these are some of the telling nuggets we pulled from the investor day, and they're important. Another nitpick is, in our view, wafers would be a better measure of volume than transistors. It's like a company saying we shipped 20% more exabytes or MIPS this year than last year. Of course you did, and your revenue shrank. Anyway, Pat went through a detailed analysis of the various Intel businesses and promised mid to high double digit growth by 2026, half of which will come from Intel's traditional PC, data center, and network edge businesses, and the rest from advanced graphics, HPC, Mobileye, and Foundry. Okay, that sounds pretty good. But it has to be taken in context: against the balance of the semiconductor industry, yeah, this would be a pretty competitive growth rate, in our view, especially for a 70 plus billion dollar company. So kudos to Pat for sticking his neck out on this one. But again, the promise is several years away, at least four years away. Now we want to focus on Foundry, because that's the only way Intel is going to get back into the volume game and the volume necessary for the company to compete.
Pat built this slide showing the baby blue for today's Foundry business, just under a billion dollars, and adding in another $1.5 billion for Tower Semiconductor, the Israeli firm that it just acquired. So a few billion dollars in the near term future for the Foundry business. And then by 2026, this really fuzzy blue bar. Now remember, TSM is the new volume leader, and is a $50 billion company growing. So there's definitely a market there that it can go after. And adding in ARM processors to the mix, and, you know, opening up and partnering with the ecosystems out there can only help volume if Intel can win that business, which, you know, it should be able to, given the likelihood of long term supply constraints. But we remain skeptical. This is another chart Pat showed, which makes the case that Foundry and IDM 2.0 will allow expensive assets to have a longer useful life. Okay, that's cool. It will also solve the cumulative output problem highlighted in the bottom right. We've talked at length about Wright's Law. That is, for every cumulative doubling of units manufactured, cost will fall by a constant percentage. You know, let's say around 15% in the semiconductor world, which is vitally important to accommodate next generation chips, which are always more expensive at the start of the cycle. So you need that 15% cost buffer to jump curves and make any money. So let's unpack this a bit. You know, does this chart at the bottom right address our Wright's Law concerns, i.e. that Intel can't take advantage of Wright's Law because it can't double cumulative output fast enough? Now note the decline in wafer starts and then the slight uptick, and then the flattening. It's hard to tell what years we're talking about here. Intel is not going to share the sausage making because it's probably not pretty, but you can see on the bottom left the flattening of the cumulative output curve in IDM 1.0, otherwise known as the death spiral. Okay, back to the power of Wright's Law. Now, assume for a second that wafer density doesn't grow. It does, but just work with us for a second. Let's say you produce 50 million units per year, just making a number up. That gets you cumulative output to, sorry, 100 million units in the second year, so it takes you two years to get to that 100 million. So in other words, it takes two years to lower your manufacturing cost by, let's say, roughly 15%. Now, assuming you can keep wafer volumes flat, which that chart showed, with good yields, you're at 150 now in year three, 200 in year four, 250 in year five, 300 in year six. Now, that's four years before you can take advantage of Wright's Law again. You keep going at that flat wafer start, with that simplifying assumption we made at the start of 50 million units a year, and, well, you get the point. It's now eight years before you can get Wright's Law to kick in, and, you know, by then you're cooked. But now you can grow the density of transistors on a chip, right? Yes, of course. So let's come back to Moore's Law. The graphic on the left says that all the growth is in the new stuff. Totally agree with that. Huge TAM that Pat presented. Now he also said that until we exhaust the periodic table of elements, Moore's Law is alive and well, and Intel is the steward of Moore's Law. Okay, that's cool. The chart on the right shows Intel going from 100 billion transistors today to a trillion by 2030. Hold that thought.
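To make the cumulative-doubling arithmetic above concrete, here is a minimal sketch, assuming flat output of 50 million units a year and a 15% cost decline per cumulative doubling, exactly the simplifying assumptions used in the walkthrough. It is an illustration of Wright's Law, not Intel's actual cost model.

```python
import math

def wrights_law_cost(cumulative_units: float, base_units: float,
                     base_cost: float, learning_rate: float = 0.15) -> float:
    """Cost per unit once `cumulative_units` have been produced, given that cost
    falls by `learning_rate` for every cumulative doubling past `base_units`."""
    doublings = math.log2(cumulative_units / base_units)
    return base_cost * (1 - learning_rate) ** doublings

annual_units = 50e6          # flat output, per the simplifying assumption above
base_cost = 1.0              # normalized starting cost per unit

for year in range(1, 9):
    cumulative = annual_units * year
    cost = wrights_law_cost(cumulative, annual_units, base_cost)
    print(f"year {year}: cumulative {cumulative / 1e6:.0f}M units, cost index {cost:.2f}")

# Doublings land at years 2, 4, and 8: each successive 15% cost step takes
# twice as long to reach when output is flat, which is the "death spiral" point.
```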
So Intel is assuming that we'll keep up with Moore's Law, meaning a doubling of transistors every, let's say, two years, and I believe it. So bring that back to Wright's Law. In the previous chart, it means with IDM 2.0, Intel can get back to enjoying the benefits of Wright's Law every two years, let's say, versus IDM 1.0 where they were failing to keep up. Okay, so Intel is saved, yeah? Well, let's bring into this discussion one of our favorite examples, Apple's M1 ARM-based chip. The M1 Ultra is a new architecture. And you can see the stats here, 114 billion transistors on a five nanometer process and all the other stats. The M1 Ultra has two chips. They're bonded together. And Apple put an interposer between the two chips. An interposer is a pathway that allows electrical signals to pass through it onto another chip. It's a super fast connection. You can see 2.5 terabytes per second. But the brilliance is the two chips act as a single chip. So you don't have to change the software at all. The way Intel's architecture works is it takes two different chips on a substrate, and then each has its own memory. The memory is not shared. Apple shares the memory for the CPU, the NPU, the GPU. All of it is shared, meaning it needs no change in software, unlike Intel. Now Intel is working on a new architecture, but Apple and others are way ahead. Now let's make this really straightforward. The original Apple M1 had 16 billion transistors per chip. And you could see in that diagram, the recently launched M1 Ultra has 114 billion transistors per chip. Now if you take into account the size of the chips, which are increasing, and the increase in the number of transistors per chip, that transistor density, that's a factor of around 6x growth in transistor density per chip in 18 months. Remember Intel, assuming the results in the two previous charts that we showed, assuming they were achievable, is running at 2x every two years, versus 6x for the competition. And AMD and Nvidia are close to that as well, because they can take advantage of TSM's learning curve. So in the previous chart with Moore's Law, alive and well, Intel gets to a trillion transistors by 2030. The Apple ARM and Nvidia ecosystems will arrive at that point years ahead of Intel. That means lower costs and significantly better competitive advantage. Okay, so where does that leave Intel? The story is really not resonating with investors and hasn't for a while. On February 18th, the day after its investor meeting, the stock was off. It's rebounded a little bit, but investors are, you know, probably prudent to wait unless they have a really long term view. And you can see Intel's performance relative to some of the major competitors. You know, Pat talked about five nodes in four years. He made a big deal out of that, and he shared proof points with Alder Lake and Meteor Lake and other nodes, but Intel just delayed Granite Rapids last month, which pushed it out from 2023 to 2024. And it told investors that we're going to have to boost spending to turn this ship around, which is absolutely the case. And with that delay in chips, I feel like the first disappointment won't be the last. But as we've said many times, it's very difficult, actually, it's impossible to quickly catch up in semiconductors, and Intel will never catch up without volume. So we'll leave you by iterating our scenario that could save Intel, and that's if its Foundry business can eventually win back Apple to supercharge its volume story.
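As a rough sketch of the growth-rate gap described above, assuming the roughly 6x-in-18-months figure attributed to the Apple/TSMC ecosystem and the 2x-every-two-years Moore's Law pace attributed to Intel, the annualized rates work out as follows. These are the multiples quoted in the analysis, not vendor-reported numbers.

```python
import math

def annualized(multiple: float, months: float) -> float:
    """Annualized growth factor implied by `multiple` achieved over `months`."""
    return multiple ** (12.0 / months)

apple_rate = annualized(6.0, 18.0)   # ~6x transistor density growth in 18 months
intel_rate = annualized(2.0, 24.0)   # 2x every two years (Moore's Law pace)

print(f"Apple/TSMC ecosystem: ~{apple_rate:.1f}x per year")
print(f"Intel (2x per 2 yrs): ~{intel_rate:.2f}x per year")

# Years to grow transistor counts 10x (e.g. ~100B today toward ~1T) at each pace.
# Intel lands at roughly 6-7 years, consistent with the "trillion by 2030" chart,
# while the faster ecosystem gets there years earlier, as argued above.
for name, rate in [("Apple/TSMC", apple_rate), ("Intel", intel_rate)]:
    years = math.log(10.0) / math.log(rate)
    print(f"{name}: ~{years:.1f} years to a 10x transistor count")
```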
It's going to be tough to wrestle that business away from TSM, especially as TSM is setting up shop in Arizona, with US manufacturing that's going to placate the US government. But look, maybe the government cuts a deal with Apple, says, hey, maybe we'll back off with the DOJ and FTC, and as part of the CHIPS Act, you'll have to throw some business at Intel. Would that be enough when combined with other Foundry opportunities Intel could theoretically produce? Maybe. But from this vantage point, it's very unlikely Intel will gain back its true number one leadership position. If it had been really paranoid back when David Floyer sounded the alarm 10 years ago, yeah, that might have made a pretty big difference. But honestly, the best we can hope for is that Intel's strategy and execution allows it to get competitive volumes by the end of the decade, and this national treasure survives to fight for its leadership position in the 2030s. Because it would take a miracle for that to happen in the 2020s. Okay, that's it for today. Thanks to David Floyer for his contributions to this research. Always a pleasure working with David. Stephanie Chan helps me do much of the background research for "Breaking Analysis" and works with our CUBE editorial team. Kristen Martin and Cheryl Knight help get the word out. And thanks to SiliconANGLE's editor in chief Rob Hof, who comes up with a lot of the great titles that we have for "Breaking Analysis" and gets the word out to the SiliconANGLE audience. Thanks, guys. Great teamwork. Remember, these episodes are all available as podcasts wherever you listen. Just search "Breaking Analysis Podcast." You'll want to check out ETR's website @etr.ai. We also publish a full report every week on wikibon.com and siliconangle.com. You could always get in touch with me on email, david.vellante@siliconangle.com, or DM me @dvellante, and comment on my LinkedIn posts. This is Dave Vellante for "theCUBE Insights, Powered by ETR." Have a great week. Stay safe, be well, and we'll see you next time. (upbeat music)
Mohammed Imam, Cisco
Perfect, all right, we're good. Mohammed, you ready? Yeah, I have watery eyes always, so I always tell my interviewers or the producers that. Sometimes there shouldn't be a problem in the 10-minute window, but, well, yeah. So do that while I'm talking, you'll see it on the return feed, it's a little delayed. And most people have tears when they see Dave Vellante. Yeah, I have that effect on people, thanks for that. Okay, we all set? We good? Leonard, why don't you go? Alex, bye-bye. Yeah, I just got the thumbs up, we're good. Okay, Mohammed, here we go. On Dave in five, four, three. We continue now with the network powering hybrid work. Now, we just heard from Lawrence Wang on the rapid move to Wi-Fi 6E, which is going to increase Wi-Fi efficiency and enable routers and devices to more efficiently use bandwidth, and that additional spectrum that Lawrence talked about means more Wi-Fi channels, which is really going to help reduce overlap between networks and make a noticeable difference, especially in crowded places. We're here now with Mohammed Imam, who's Senior Director of Product Management for Catalyst switching. This is a multi-billion dollar business for Cisco. If you ever listen to Cisco's earnings calls, you'll hear the CFO, Scott Herren, talk about the Catalyst 9000 and double-digit growth in switching. This is the fastest ramping product in Cisco's history. So Mohammed, that's got to make you feel pretty good. Yes, indeed, thank you, David, and thank you for having me here. Yeah, great to have you. So, look, the Catalyst 9000, it's been really successful. What does the 9000X bring to the table for your customers? Yeah, absolutely, and indeed the Catalyst 9000 family of switches has been extremely popular with our customers, as you said, the fastest ramping product in Cisco's history. In the last four or five years we have really evolved the Catalyst 9000 family of switches into a very comprehensive product portfolio addressing the various enterprise use cases that we address. But now we see an increase in demand on the networks, and that really stems from some of the most recent trends that we are seeing, right? Part of it is hybrid workspaces. It's going to be a video-dominant hybrid workspace, right? In a lot of cases it's going to be high-definition 4K, 8K video. We are seeing cloud-based applications everywhere, right? My spreadsheet used to be an Excel sheet, now it's either in Office 365 or Smartsheets. My files used to be on my computer, now they're in Dropbox, right? So these are trends that are really putting pressure on our networks. We are also seeing trends where VR headsets are becoming common, they are being used for training and education use cases, Webex Hologram in certain industries. We are seeing robotics becoming more and more popular, and they come with a lot of applications that are very latency sensitive. And as Lawrence mentioned earlier, Wi-Fi 6E is really making over-the-air multi-gigabit Wi-Fi possible, right? And for all of these different trends and the recent technologies that are evolving, we really need a network that can address and deliver for these applications, and that's where we are bringing the Catalyst 9000X, which addresses the increase in network demand. We are expanding the Catalyst 9000 family with top-of-line premium introductions in the access layer of the network as well as in the aggregation and core layers. So we are bringing 400 gig high-speed core in the enterprise core and edge layers of the network, and we are bringing point-to-point
IPsec security, which will give you 100 gig of IPsec encryption, high density of multi-gigabit, which is becoming very common as we evolve our Wi-Fi networks, because we don't want our wired infrastructure to be the bottleneck when the wireless infrastructure is capable of going more than a gig, and high density of 90 watt power for the smart building use cases, right? These are all different use cases that are being enabled by the Catalyst 9000, and the new Catalyst 9000X family is really addressing some of these new trends and applications. Well, it's good, because the metaverse is coming too and we're going to need some help with that, right? Who knows how much bandwidth we'll need for the metaverse. Absolutely, yeah, guaranteed it will be a lot more. But I want to hear more about the new products that you've just launched, and maybe how these offerings are going to help with this new hybrid work model that we've just been discussing. Absolutely. So let me start with the Catalyst 9300. We are introducing the Catalyst 9300X, which is the highest density full multi-gigabit platform, with 100 gig uplinks and 90 watts of power available on every port, right? That's an industry first that we are bringing to the Catalyst 9300 family. It is also capable of one terabit per second of stacking, which is also unheard of in the industry. This will serve our customers with all the new trends that we talked about, including the hybrid world, and some of the new trends that are going to come in the next decade. But the 9300X is not just a high-end campus switch. It can also be a lean branch-in-a-box solution, where you don't really need an SD-WAN, but you do need encryption point to point from your branch with the Catalyst 9300X to the data center or to the cloud. So for the first time we are introducing IPsec-based encryption natively in the hardware, and that means no compromise on performance, and you can get up to 100 gig of encrypted traffic with the Catalyst 9300X. Second is the Catalyst 9400. We are introducing Sup 2 and Sup 2XL with 100 gig uplinks, enhancing the scale and performance and giving our customers options for fully loaded, line-rate multi-gigabit ports on a 10-slot chassis, right? It will give you a two to three times bandwidth boost with your existing line cards, since it completely removes the oversubscription. And, you know, the Sup 2 on the Catalyst 9400 is coming with the version of the ASIC that we used in the past on the Catalyst 9600. That means it's also bringing the core capabilities that we have today on the 9600 to the Catalyst 9400, and that brings high density 10 gig ports on the Catalyst 9400 without oversubscription, right, with the core capabilities. Then we have the Catalyst 9600, where we are introducing Supervisor 2, which really triples the bandwidth per slot on the Catalyst 9600. It introduces 400 gig uplinks and truly drives the transition to 200 gig in the core. Cat 6K customers with XL scale requirements can now transition to the Cat 9K with Sup 2.
And by the way, we are also introducing a combo line card on the Catalyst 9600, which means now you don't have to burn a whole slot for your uplink ports. In fact, you can get up to 400 gig of uplink with this new line card. So that's a bunch of things that we are bringing on the Catalyst 9600. In line with the Catalyst 9600, we are also introducing the Catalyst 9500X, a 100 gig box with 400 gig uplinks in a fixed form factor, and all the benefits that I just talked about on the Supervisor 2 and 9600 are also available in a fixed form factor on the Catalyst 9500X. Got it. So that's, in summary, kind of the multiple product lines that we are introducing. Yeah, it's a lot to unpack there. I mean, the big theme there, of course, is optionality, you've got a lot of choices for customers. I love the encrypt everything without a trade-off, you know, no performance impact, and anytime you can reduce my oversubscription, it's going to make me happy. You know, Mohammed, we've reported in our Breaking Analysis segments on the importance of custom silicon, and not every company has the resources or the expertise to develop their own silicon. Cisco, of course, does. Catalyst 9K is bringing Silicon One based products with this launch. Tell us more about that. Why is this important? Yeah, that's a really exciting development that we have on the Cat 9K family, because, you know, Silicon One is a powerful ASIC that enables high performance and high scale with a modern silicon architecture, bringing a converged architecture for switching as well as routing. The Cat 9K, as we know, has been running on the UADP ASIC, which has been a programmable ASIC. It has served us really well so far on the Cat 9K family, but with Silicon One we are taking it to another level. Silicon One brings the capabilities of the UADP ASIC and unlocks XL scale and high performance in the enterprise switches. This is a critical and foundational element to meet the core requirements for the next decade. Silicon One is a 12.8 terabits per second chip, it supports up to 10 million routes, it supports much deeper buffers, and it brings a multi-slice VOQ architecture. With this new architecture, the Silicon One ASIC has paved the way to transition the Cat 6K XL deployments to the Cat 9K, right? So that's kind of the importance of Silicon One in the Cat 9K family that we are bringing now. Yeah, and it brings differentiation. A lot of people sometimes don't appreciate that, but when you have control like that, you can do things that you might not be able to do with off-the-shelf silicon. But I want to ask you, what about customers that previously purchased from you? As you evolve the portfolio to the 9000X, how do you protect their investment? Yeah, thank you for asking that question, because when we started building the Cat 9K we always thought about investment protection for our customers, so if you buy today, you will have a very long life for that product, and you will be able to unlock new powers on that platform that you may have purchased five years back, right? That's exactly what we are doing with the Catalyst 9000X. Talking about modular, right, on the modular side, the supervisors that we are introducing now are backward compatible with the line cards that you already have. In some cases the line card throughput is doubling and tripling, because now you have a new machine that is going to power these line cards, right? So you don't have to change your line card, you just change your supervisor, and you have much higher performance and scale with this new supervisor.
Similarly, on the stackables, you can stack with the existing Catalyst 9300s, for example, so you don't have to rip and replace everything. It's not a forklift upgrade for our customers. You can continue benefiting from your existing Catalyst 9000 deployments and add to that power with the Catalyst 9000X components as well as the new platforms that we are introducing. Nice, that's key. This just speaks to the software content that you guys have, I know you have a lot of software engineers running around, and this is, welcome to the 2020s, folks, new world. You know, Mohammed, zero trust was kind of a buzzword before the pandemic, but it's really become a mainstream topic today. We talked about the infrastructure, we know security has to be built in from the start, it can't be bolted on, and zero trust is really top of mind for customers. How are their security requirements changing as a result of hybrid work, and how do you make sure that as we shift to hybrid, these new security requirements are addressed? What are you doing there? Absolutely, and we know, as you said, security is top of mind for our customers. In fact, security has been highlighted as the number one reason why a lot of customers pick Cisco and the Cat 9K. We have a comprehensive zero trust architecture with Software-Defined Access, where we started with segmentation and expanded into endpoint classification and visibility. Now we are taking that to the next level, and we are introducing Talos-powered trust assessment for unmanaged endpoints to further strengthen the workplace with zero trust and Software-Defined Access. Trust Analytics detects traffic from endpoints that are exhibiting unusual behavior by pretending to be something they're not, using MAC spoofing or probe spoofing or man-in-the-middle techniques. When Trust Analytics detects such anomalies, it signals endpoint analytics to lower the trust score. So we have a trust score system. When the trust score goes down, it shows up on the dashboard, and the network admin can completely deny or limit access to the network from these endpoints. The other security aspect that we are introducing, and I touched on that briefly earlier, is for non-SD-WAN, internet-only branches, where security services might be in the cloud, right? That's a trend that we are seeing. To secure that connectivity from a lean branch to the cloud, we are introducing the IPsec capability with the Catalyst 9300X, and that's built in, as we just talked about. And as far as the automation is concerned for these use cases, we are bringing that automation with our command center, the Cisco DNA Center, and we are bringing the full lifecycle of automation as well as assurance for the secure connectivity that is being provided with the Cisco DNA Center. Well, a couple of takeaways there for me. I mean, endpoint security has really become much more important, for obvious reasons, when you have remote workers. The built-in IPsec just really emphasizes that you've got to have it, you know, built in from the ground up, you can't just bolt it on. And the automation is key. The number one problem that CISOs face is, you know, lack of talent, so automation, you know, definitely helps with that. So, okay, Mohammed, thank you so much, really appreciate you coming on. In a moment we'll look at private 5G and what's been happening at Mobile World Congress. You're watching theCUBE's coverage of the network powering hybrid work, made possible by Cisco.
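To illustrate the trust-score flow Mohammed describes above, here is a minimal, purely illustrative sketch. The class names, anomaly penalty weights, and policy thresholds are our own assumptions made up for explanation; they are not Cisco's DNA Center, Trust Analytics, or endpoint analytics APIs.

```python
from dataclasses import dataclass

@dataclass
class Endpoint:
    mac: str
    trust_score: int = 100   # 0-100; starts fully trusted after classification

# Hypothetical penalty weights for detected anomalies (not Cisco's scoring).
ANOMALY_PENALTY = {
    "mac_spoofing": 60,
    "probe_spoofing": 40,
    "man_in_the_middle": 80,
}

def report_anomaly(ep: Endpoint, kind: str) -> None:
    """Analytics signals an anomaly; lower the endpoint's trust score."""
    ep.trust_score = max(0, ep.trust_score - ANOMALY_PENALTY.get(kind, 20))

def access_policy(ep: Endpoint) -> str:
    """Admin-style policy: full access, limited access, or deny."""
    if ep.trust_score >= 70:
        return "allow"
    if ep.trust_score >= 40:
        return "limit"   # e.g. move to a restricted segment
    return "deny"

ep = Endpoint(mac="aa:bb:cc:dd:ee:ff")
report_anomaly(ep, "mac_spoofing")
print(ep.trust_score, access_policy(ep))   # 40 limit
```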
Dave Brown, AWS | AWS re:Invent 2021
(bright music) >> Welcome back everyone to theCUBE's coverage of AWS re:Invent 2021 in person. So a live event, physical in-person, also virtual hybrid. So a lot of great action online, check out the website. All the videos are there on theCUBE, as well as what's going on with all of the action on site, and theCUBE's here. I'm John Furrier, your host with Dave Vellante, my cohost. Finally, we've got David Brown, VP of Elastic Compute Cloud. EC2, the bread and butter. Our favorite part of Amazon. David, great to have you back on theCUBE in person. >> John, it's great to be back. It's the first time I've been on theCUBE in person as well. A lot of virtual events with you guys, but it's amazing to be back at re:Invent. >> We're so excited for you. I know, Matt Garman and I've talked in the past. We've talked in the past. EC2 is just an amazing product. It's always been the core block of AWS. More and more action happening, and developers are now getting more action, and there's, well, we wrote a big piece about it. What's going on? The silicon's really paying off. You've got the general purpose Intel and AMD, and you've got the custom silicon, all working together. What's the new update? Give us a scoop. >> Well, John, it's actually 15 years of EC2 this year, and I've been lucky to be on that team for 14 years, and it's been incredible to see the growth. It's been an amazing journey. The thing that's really driven us is two things. One is supporting new workloads. And so, what are the workloads that customers out there are trying to do on the cloud that we don't support, and launching new instance types. That's the first thing. The second one is price performance. How do we give customers more performance at a continuously decreasing price year-over-year? And that's just driven innovation across EC2 over the years with things like Graviton and all of our Inferentia chips, our custom silicon, but also instance types with the latest Intel Ice Lake CPUs, the latest Milan. We just announced the AMD Milan instance. It's just constant innovation across the ever-increasing list of instances. So super exciting.
And if you remember back then, everybody was like, well, virtualization and hypervisors would never really get you the same performance as what they were calling bare metal back then. Everybody's looking at the cloud. And so we took a look at that. And I mean, network latencies, in some cases with hypervisors, were as high as 200 or 300 milliseconds. And there were a number of real challenges. And so we knew that we would have to change the way that virtualization works and get into hardware. And so in 2010, 2011, we started to look at how could I offload my network processing, my IO processing, to additional hardware. And that's when we delivered our first Nitro card, in 2012 and 2013. We actually offloaded all of the network processing to a Nitro card. And that Nitro card actually had an Annapurna Arm chip on it, our Nitro 1 chip. >> For the offload? >> The offload card, yeah. And so that's when my team started to code for Arm. We started to work on getting our Linux working for Arm. We actually had to write our own operating system initially 'cause there weren't any operating systems available we could use. And so that's how we started this journey. And over the years, when we saw how well it worked for networking, we said, let's do it for storage as well. And then we said, hey, we could actually improve security significantly. And by 2017, we'd actually offloaded 100% of everything we did on that server to our offload cards, leaving 100% of the server available for customers. And we're still actually the only cloud provider that does that today. >> Just to interject, in the data center today, probably 30% of the general purpose cores are used for offloads. You're saying 0% in the cloud. >> On our Nitro instances, so every instance we've launched since 2017, our C5 onwards, we use 0% of that central core. And you can actually see that in our instance types. If you look at our largest instance type, you can see that we're giving you 96 cores, and we're giving you, in our largest instance, 24 terabytes of memory. We're not giving you 23.6 terabytes 'cause we need some. It's all given to you as the customer. >> So much more efficient. >> Much, much more efficient, much better, better price performance as well. But then ultimately with those Nitro chips, we went through Nitro 1, Nitro 2, Nitro 3, Nitro 4. We said, hey, could we build a general purpose server chip? Could we actually bring Arm into the cloud? And in 2018, we launched the A1 instance, which was our Graviton1 instance. And what we didn't tell people at the time is that it was actually the same chip we were using on our network card. So essentially, it was a network card that we were giving to you as a server. But what it did is it sparked the ecosystem. That's why we put it out there. And I remember before launch, some were saying, is this just going to be a university project? Are we going to see people from big universities using Arm in the cloud? Was it really going to take off? And the response was amazing. The ecosystem just grew. We had customers move to it and immediately begin to see improvements. And we knew that a year later, Graviton2 was going to come out. And Graviton2 was just an amazing chip. It continues to see incredible adoption, 40% price performance improvement over other instances.
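To make that ever-growing list of Graviton instance types concrete, here is a minimal sketch, not from the interview, that uses boto3 to count the current-generation arm64 instance types visible in a region. It assumes AWS credentials and a default region are already configured; the filter names used are the standard EC2 DescribeInstanceTypes filters.

```python
# Minimal sketch: count current-generation arm64 (Graviton) instance types.
# Assumes boto3 is installed and AWS credentials/region are configured.
import boto3

ec2 = boto3.client("ec2")
paginator = ec2.get_paginator("describe_instance_types")

arm_types = []
for page in paginator.paginate(
    Filters=[
        {"Name": "processor-info.supported-architecture", "Values": ["arm64"]},
        {"Name": "current-generation", "Values": ["true"]},
    ]
):
    arm_types.extend(t["InstanceType"] for t in page["InstanceTypes"])

print(f"{len(arm_types)} current-generation arm64 instance types, e.g. {sorted(arm_types)[:5]}")
```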
You're focused on a mission. Let's get that processing at the lowest cost, pick up some workloads. So you're constantly tinkering with tuning the engine. New discovery comes in. Nitro is born. The chip comes in. But I think the fundamental thing, and I want to get your reaction to this 'cause we've put this out there on our post on Sunday. And I said, in every inflection point, I'm old enough, my birthday was yesterday. I'm old enough to know that. >> David: I saw that. >> I'm old enough to know that in the eighties, the client-server shift. Every inflection point where development changed, the methodology, the mindset, or platforms changed, all the apps went to the better platform. Who wants to run their application on a slower platform? And those inflection points, that's happening now, I believe. So you've got better performance, and I'm imagining that the app developers are coding for it. Take us through how you see that, because, okay, you're offering up great performance for workloads. Now it's cloud workloads. That's almost all apps. Can you comment on that? >> Well, it has been really interesting to see. I mean, as I said, we were unsure who was going to use it when we initially launched, and the adoption has been amazing. Initially, obviously, it's always a lot of the startups, a lot of the more agile companies that can move a lot faster, typically a little bit smaller. They started experimenting, but the data got out there. That 40% price performance was a reality. And not only for specific workloads, it was broadly successful across a number of workloads. And so we actually just had SAP, who obviously is an enormous enterprise supporting enterprises all over the world, announce that they are going to be moving S/4HANA Cloud to run on Graviton2. It's just phenomenal. And we've seen enterprises of that scale, and game developers, every single vertical looking to move to Graviton2 and get that 40% price performance. >> Now we have to, as analysts, we have to say, okay, how did you get to that 40%? And you have to make some assumptions, obviously. And it feels like you still have some dry powder when you look at Graviton2. I think you were running, I don't know, it's speculated anyway, I don't know if you guys, it's your data, two and a half, 2.5 gigahertz. >> David: Yeah. >> I don't know if we can share what's going on with Graviton3, but my point is you had some dry powder, and now with Graviton3, quite a range of performance, 'cause it really depends on the workload. >> David: That's right. >> Maybe you could give some insight as to that. What can you share about how you tuned Graviton3? >> When we look at benchmarking, we don't want to be trying to find that benchmark that's highly tuned and then put out something that is, hey, this is the absolute best we can get it to, and that's 40%. So that 40% is actually just an average. So we just went and ran real world workloads. And we saw some that were 55%. We saw some that were 25. It depends on what it was, but on average, it was around the 35, 45%, and we said 40%. And the great thing about that is customers come back and say, hey, we saw 40% in this workload. It wasn't that I had to tune it. And so with Graviton3, launching this week, available in our C7g instance, we said 25%. And that is just a very standard benchmark in what we're seeing. And as we start to see more customer workloads, I think it's going to be incredible to see what that range looks like.
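The interview doesn't say how AWS aggregates those per-workload results into the headline 40%, so as an illustration only, here is a tiny sketch with made-up numbers comparing two common ways of averaging speedup ratios; a geometric mean is one conventional choice for ratios.

```python
# Minimal sketch: averaging per-workload price-performance gains.
# The gains below are made up for illustration; they are not AWS data,
# and the aggregation method is an assumption, not AWS's stated method.
from math import prod

# Hypothetical per-workload gains vs. a comparable x86 instance (1.25 = +25%)
gains = [1.55, 1.25, 1.40, 1.35, 1.45]

arithmetic_mean = sum(gains) / len(gains)
geometric_mean = prod(gains) ** (1 / len(gains))

print(f"arithmetic mean: +{arithmetic_mean - 1:.0%}")
print(f"geometric mean:  +{geometric_mean - 1:.0%}")
```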
Graviton2, for single-threaded applications, didn't give you that much of a performance gain. That's what we meant by cloud applications: generally, multi-threaded. In Graviton3, that's no longer the case. So we've had some customers report up to 80% performance improvements from Graviton2 to Graviton3 when the application was more of a single-threaded application. So we started to see. (group chattering) >> You have to keep going, the time to market is compressing. So you have that, go ahead, sorry. >> No, no, I also want to add one thing on the difference between single and multi-threaded applications. A lot of legacy is single-threaded. So this is kind of an interesting thing. So the mainframe migration stuff, you start to see that. Is that where that comes into the whole picture? >> Well, a lot of the legacy apps, but also even some of the new apps. Single-threaded work, like video transcoding, for example, is all done on a single core. It's very difficult, I mean, almost impossible, to do that in a multi-threaded way. A lot of the crypto algorithms as well; encryption and cryptography is often single core. So with Graviton3, we've seen a significant performance boost for video encoding, cryptographic algorithms, that sort of thing, which really impacts even the most modern applications. >> So that's an interesting point, because now single-threaded is where the vertical use cases come in. It's not like more general purpose OS kind of things. >> Yeah, and Graviton has already been very broad. I think we're just knocking down the last few verticals where maybe it didn't support them, and now it absolutely does. >> And if an ISV then ports, like SAP ports to Graviton, then the customer doesn't see any, I mean, they're going to see the performance difference, but they don't have to think about it. >> David: Yeah. >> They just say, I choose that instance and I'm going to get better price performance. >> Exactly, so we've seen that from our ISVs. We've also been doing that with our AWS services. So services like EMR, RDS, ElastiCache will be moving and making Graviton2 available for customers, which means the customer doesn't have to do the migration at all. It's all done for them. They just pick the instance and get the price performance benefits, and so yeah. >> I think, oh, no, that was serverless. Sorry. >> Well, Lambda actually just did launch on Graviton2. And I think they were talking about a 35% price performance improvement. >> Who was that? >> Lambda, a couple of months ago. >> So what does an ISV have to do to port to Graviton? >> It's relatively straightforward, and this is actually one of the things that has slowed customers down: the, wow, that must be a big migration. And that ecosystem that I spoke about is the important part. And today, with all the Linux operating systems being available for Arm running on Graviton2, with all of the container runtimes being available, and then slowly open source applications and ISVs being available, it's actually really, really easy. And we just ran the Graviton2 four-day challenge. And we did that because we actually had an enterprise migrate one of their largest production applications in just four days. Now, I probably wouldn't recommend that to most enterprises, that is a little too fast for what we see, but they could actually do that. >> But just from a numbers standpoint, that's insanely amazing. I mean, when you think about four days. >> Yeah. >> And when we talked virtually last year, this year, I can't remember now.
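The "give one engineer a week" experiment described a moment later in the conversation mostly amounts to spinning up a Graviton instance and rebuilding the application for arm64. A hypothetical boto3 sketch of that first step is below; the AMI ID and key pair name are placeholders, and C7g availability varies by region.

```python
# Hypothetical sketch: launch a Graviton (arm64) instance for a port test.
# The AMI ID and key pair name are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: any arm64 Linux AMI
    InstanceType="c7g.xlarge",        # Graviton3; c6g/m6g are alternatives
    KeyName="my-keypair",             # placeholder key pair name
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "purpose", "Value": "graviton-port-test"}],
    }],
)
instance_id = response["Instances"][0]["InstanceId"]
# Once it's up: SSH in, confirm `uname -m` reports aarch64,
# rebuild the application, and compare benchmarks against the x86 baseline.
print("Launched", instance_id)
```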
You said, we'll just try it. >> David: That's right. >> And see what happens, so I presume a lot of people have tried it. >> Well, that's my advice. It's the unknown, it's the "what will it take?" So take a single engineer, tell them, and give them a time. Say you have one week, get this running on Graviton2, and I think the results are pretty amazing, very surprising. >> We were one of the first, if not the first, to say that Arm is going to be dominant in the enterprise. We know it's dominant in the Edge. And when you look at the performance curves and the time to tapeout, it's just astounding. And I don't know if people appreciate that relative to the traditional Moore's Law curve. I mean, it's a different style of curve. And then when you combine the power of the CPU, the GPU, the NPU, kind of what Apple does in the iPhone, it blows away the historical performance curves. And you're on that curve. >> That's right. >> I wonder if you could sort of explain that. >> So with Graviton, we're optimizing just across every single part of AWS. So one of the nice things is we actually own that end-to-end. So it starts with the early design of Graviton2 and Graviton3, and we're obviously working on other chips right now. We're actually using the cloud to do all of the electronic design automation. So we're able to test with AWS how that Graviton3 chip is going to work long before we've even started taping it out. And so those workloads are running on high-frequency CPUs on Graviton. Actually, we're using Graviton to build Graviton now, in the cloud. The other thing we're doing is we're making sure that the Annapurna team that's building those CPUs is deeply engaged with my team, and we're going to ultimately go and build those instances, so that when that chip arrives from tapeout, I'm not waiting nine months or two years like would normally be the case, but I actually have an instance up and running within a week or two on somebody's desk, starting to do the integration. And that's something we've optimized significantly to get done. And so it allows us to get that iteration time. It also allows us to be very, very accurate with our tapeouts. We're not having to go back with Graviton; they're all A1 chips. We're not having to go back and do multiple runs of these things, because we can do so much validation and performance testing in the cloud ahead of time. >> This is the epitome of the Arm model. >> It really is. >> It's a standard. When you send it to the fab, they know what's going to work. You hit volume and it just works. >> Well, this is a great thread. We'll stay on this 'cause Adam told us when we met with them for re:Invent that they're seeing a lot more visibility into use cases at scale. So the scale gives you an advantage on what instances might work. >> And makes the economics work. >> Makes the economics work, hence the timing, the shrinking time to market, not just there, but also for the apps. Talk about the scale advantage you guys have. >> Absolutely. I mean, the scale advantage of AWS plays out in a number of ways for our customers. The first thing is being able to deliver highly optimized hardware. So we don't just look at the Graviton3 CPU, you were speaking about the core count and the frequency, and Peter spoke about a lot of that in his keynote yesterday. But we look at how the Graviton3 CPU works with the rest of the instance. What is the right balance between the CPU and memory? The CPU and the Nitro. What's the performance of the drive?
We just launched the Nitro SSD, which is, we're now actually building our own custom SSDs for Nitro: getting better performance, being able to do updates, better security, making it more cloudy. We're no longer settling for the off-the-shelf SSD parts we'd been challenged with in the past. The other place that scale is really helping is in capacity. Being able to make sure that we can absorb things like the COVID spike, or the stuff you see in the financial industry with just enormous demand for compute. We can do that because of our scale. We are able to scale. And the final area is actually in quality, because I have such an enormous fleet, I'm actually able to drive down AFR, the annual failure rate, to well below what the mathematical, theoretical entitlement or possibility is. So if you look at what's put on that actual sticker on the box that says you should be able to get a certain percent AFR, at scale and with focus, we're actually able to get that down to significantly below what the mathematical entitlement would actually be. >> Yeah, it's incredible. And this is the advantage, and that's why I believe anyone who's writing applications that include a database, data transfer, any kind of execution of code will use the stack. >> Why would they? Really, why? We've seen this, like you said before, whether it was the PC, then the fastest Pentium, or whatever. >> Why would you want your app to run slower? >> Unix box, right? ISVs want it to run as fast and as cheaply as possible. Now power plays into it as well. >> Yeah, well, we do have, I agree with what you're saying. We do have a number of customers that are still looking to run on x86, but obviously customers that want Windows, Windows isn't available for Arm, and so that's a challenge. They'll continue to do that. And you know, the way we do look at it is Moore's Law kind of died out on us in 2002, 2003. And what I'm hoping is, not necessarily bringing Moore's Law back, but that we say, let's not accept the 10%, 15% improvement year-over-year. There's absolutely more we can all be doing. And so I'm excited to see where the x86 world's going, and they're doing a lot of great stuff. Intel Ice Lake is looking amazing. Milan is really great to have on AWS as well. >> Well, I think that's a fair point, 'cause we certainly look at what Pat's doing at Intel and how he's remaking the company. I've said he's going to follow the Arm playbook in my mind a little bit, which is the right thing to do. So competition is a good thing. >> David: Absolutely. >> We're excited for you, and it's great to see Graviton and you guys have this kind of inflection point. We've been tracking it for a while, but now the world's starting to see it. So congratulations to your team. >> David: Thank you. >> Just a couple of things. You guys have some news on instances. Talk about the deprecation issue and how you guys are keeping instances alive, real quick. >> Yeah, we're super customer obsessed at Amazon. And so that really drives us. And one of the worst things for us to do is to have to tell a customer that we're no longer supporting a service. We recently actually just deprecated the EC2-Classic network. I'm not sure if you saw that, and that's actually after 10-plus years of continuing to support it. And the only reason we did it is we have a tiny percentage of customers still using that from back in 2012. But one of the challenges is obviously instance hardware eventually will ultimately time out and fail and have hardware issues as it gets older and older.
And so we didn't want to be in a place, in EC2, where we would have to constantly go to customers and say that M1 Small, that C3, whatever you were running, it's no longer supported, please move. That's just a tax that customers shouldn't have to pay. And if they're still getting value out of an older instance, let them keep using it. So we actually just announced at re:Invent, in my keynote on Tuesday, longevity support for EC2 instances, which means we will never come back to you again and ask you to please get off an instance, because we can actually emulate all those instances on our Nitro system. And so all of these instances are starting to migrate to Nitro. You're getting all the benefits of Nitro for what are now some of our older Xen instances, but you also don't have to worry about that work of getting off an older instance. That's just not something you need to do. >> That's great. That's a great service. Stay on as long as you want. When you're ready to move, move. Okay, final question for you. I know we've got time, I want to get this in. The global network. You guys are known for it: AWS Cloud WAN, the service. Give us an update on what's going on with that. >> So Werner just announced that in his keynote, and over the last two to three years or so, we've seen a lot of customers starting to use the AWS backbone, which is extensive. I mean, you've seen the slides in Werner's keynote. It really does span the world. I think it's probably one of the largest networks out there. Customers are starting to use that for their branch office communication. So instead of going and provisioning their own international MPLS networks and that sort of thing, they say, let me onboard to AWS with VPN or Direct Connect, and I can actually run on the AWS backbone around the world. Now doing that actually has some complexity. You've got to think about transit gateways. You've got to think about inter-region peering. And AWS Cloud WAN takes all of that complexity away. You essentially create a cloud WAN, connect to it with VPN or Direct Connect, and you can even go and actually set up network segments, so essentially VLANs for different parts of the organization. So we're super excited to get that out there. >> So the ease of use is the key there. >> Massively easy to use. And we have 26 SD-WAN partners. We're even partnering with folks like Verizon and Swisscom, the telco in Switzerland, to actually allow them to use it for their customers as well. >> We'll probably use your service someday when we have a global rollout date. >> Let's do that, CUBE Global. And then the other was the M1 EC2 instance, which got a lot of applause. >> David: Absolutely. >> M1, I think it was based on A15. >> Yeah, that's for Mac. We've got to be careful 'cause M1 is our first instance as well. >> Yeah right, it's a little confusing there. >> So it's a Mac. The EC2 Mac is with M1 silicon from Apple, which we're super excited to put out there. >> Awesome. >> David Brown, great to see you in person. Congratulations to you and the team and all the work you guys have done over the years. And now people are starting to realize that with the cloud platform, the compute just gets better and better. It's a key part of the system. >> Thanks John, it's great to be here. >> Thanks for sharing. >> SiliconANGLE is here. We're talking about custom silicon here on AWS. I'm John Furrier with Dave Vellante. You're watching theCUBE, the global leader in tech coverage. We'll be right back with more coverage from re:Invent after this break. (bright music)
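Circling back to the failure-rate point from a little earlier in the conversation: the gap between a spec-sheet AFR and what a disciplined operator sees at fleet scale is easy to put in numbers. A back-of-the-envelope sketch with made-up figures follows; none of these are AWS numbers.

```python
# Back-of-the-envelope: expected annual hardware failures at fleet scale.
# All numbers are illustrative, not AWS data.
fleet_size = 1_000_000      # hypothetical number of devices in the fleet
sticker_afr = 0.010         # 1.0% annual failure rate from the spec sheet
observed_afr = 0.004        # hypothetical at-scale rate below "entitlement"

print(f"Spec-sheet expectation: ~{fleet_size * sticker_afr:,.0f} failures/year")
print(f"At-scale observation:   ~{fleet_size * observed_afr:,.0f} failures/year "
      f"({1 - observed_afr / sticker_afr:.0%} fewer)")
```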
Breaking Analysis: The Future of the Semiconductor Industry
from the cube studios in palo alto in boston bringing you data driven insights from the cube and etr this is breaking analysis with dave vellante semiconductors are the heart of technology innovation for decades technology improvements have marched the cadence of silicon advancements in performance cost power and packaging in the past 10 years the dynamics of the semiconductor industry have changed dramatically soaring factory costs device volume explosions fabulous chip companies greater programmability compressed time to tape out a lot more software content the looming presence of china these and other factors have changed the power structure of the semiconductor business chips today power every aspect of our lives and have led to a global semiconductor shortage that's been well covered but we've never seen anything like it before we believe silicon's success in the next 20 years will be determined by volume manufacturing capabilities design innovation public policy geopolitical dynamics visionary leadership and innovative business models that can survive the intense competition in one of the most challenging businesses in the world hello and welcome to this week's wikibon cube insights powered by etr in this breaking analysis it's our pleasure to welcome daniel newman in one of the leading analysts in the technology business and founder of futurum research daniel welcome to the program thanks so much dave great to see you thanks for having me big topic yeah i'll say i'm really looking forward to this and so here's some of the topics that we want to cover today if we have time changes in the semiconductor industry i've said they've been dramatic the shift to nofap companies we're going to talk about volume manufacturing those shifts that have occurred largely due to the arm model we want to cover intel and dig into that and what it has to do to to survive and thrive these changes and then we want to take a look at how alternative processors are impacting the world people talk about is moore's law dead is it alive and well daniel you have strong perspectives on all of this including nvidia love to get your thoughts on on that plus talk about the looming china threat as i mentioned in in the intro but daniel before we get into it do these topics they sound okay how do you see the state of the semiconductor industry today where have we come from where are we and where are we going at the macro level there are a lot of different narratives that are streaming alongside and they're not running in parallel so much as they're running and converging towards one another but it gradually different uh you know degrees so the last two years has welcomed a semiconductor conversation that we really hadn't had and that was supply chain driven the covid19 pandemic brought pretty much unprecedented desire demand thirst or products that are powered by semiconductors and it wasn't until we started running out of laptops of vehicles of servers that the whole world kind of put the semiconductor in focus again like it was just one of those things dave that we as a society it's sort of taken for granted like if you need a laptop you go buy a laptop if you needed a vehicle there'd always be one on the lot um but as we've seen kind of this exponentialism that's taken place throughout the pandemic what we ended up realizing is that semiconductors are eating the world and in fact the next industrial the entire industrial itself the complex is powered by semiconductor technology so everything we we do and we want to 
do right you went from a vehicle that might have had 50 or 100 worth of semiconductors on a few different parts to one that might have 700 800 different chips in it thousands of dollars worth of semi of semiconductors so you know across the board though yes you're dealing with the dynamics of the shortage you're dealing with the dynamics of innovation you're dealing with moore's law and sort of coming to the end which is leading to new process we're dealing with the foundry versus fab versus invention and product development uh situation so there's so many different concurrent semiconductor narratives that are going on dave and we can talk about any of them and all of them and i'm sure as we do we'll overlap all these different themes you know maybe you can solve this mystery for me there's this this this chip shortage and you can't invent vehicle inventory is so tight but yet when you listen to uh the the ads if the the auto manufacturers are pounding the advertising maybe they're afraid of tesla they don't want to lose their brand awareness but anyway so listen it's by the way a background i want to get a little bit academic here but but bear with me i want to introduce actually reintroduce the concept of wright's law to our audience we know we all know about moore's law but the earlier instantiation actually comes from theodore wright t.p wright he was this engineer in the airplane industry and the math is a little bit abstract to apply but roughly translated says as the cumulative number of units produced doubles your cost per unit declines by a fixed percentage now in airplanes that was around 15 percent in semiconductors we think that numbers more like 20 25 when you add the performance improvements you get from silicon advancements it translates into something like 33 percent cost cost declines when you can double your cumulative volume so that's very important because it confers strategic advantage to the company with the largest volume so it's a learning curve dynamic and it's like andy jassy says daniel there's no compression algorithm for experience and it definitely applies here so if you apply wright's law to what's happening in the industry today we think we can get a better understanding of for instance why tsmc is dominating and why intel is struggling any quick thoughts on that well you have to take every formula like that in any sort of standard mathematics and kind of throw it out the window when you're dealing with the economic situation we are right now i'm not i'm not actually throwing it out the window but what i'm saying is that when supply and demand get out of whack some of those laws become a little bit um more difficult to sustain over the long term what i will say about that is we have certainly seen this found um this fabulous model explode over the last few years you're seeing companies that can focus on software frameworks and innovation that aren't necessarily getting caught up in dealing with the large capital expenditures and overhead the ability to as you suggested in the topics here partner with a company like arm that's developing innovation and then and then um you know offering it uh to everybody right and for a licensee and then they can quickly build we're seeing what that's doing with companies like aws that are saying we're going to just build it alibaba we're just going to build it these aren't chip makers these aren't companies that were even considered chip makers they are now today competing as chip makers so there's a lot of different 
dynamics going back to your comment about wright's law like i said as we normalize and we figure out this situation on a global scale um i do believe that the who can manufacture the most will certainly continue to have significant competitive advantages yeah no so that's a really interesting point that you're bringing up because one of the things that it leads me to think is that the chip shortage could actually benefit intel i think will benefit intel so i want to introduce this some other data and then get your thoughts on this very simply the chart on the left shows pc shipments which peaked in in 2011 and then began at steady decline until covid and they've the pcs as we know have popped up in terms of volume in the past year and looks like they'll be up again this year the chart on the right is cumulative arm shipments and so as we've reported we think arm wafer volumes are 10x those of x86 volumes and and as such the arm ecosystem has far better cost structure than intel and that's why pat gelsinger was called in to sort of save the day so so daniel i just kind of again opened up this this can of worms but i think you're saying long term volume is going to be critical that's going to confer low cost advantages but in the in in the near to mid-term intel could actually benefit from uh from this chip shortage well intel is the opportunity to position itself as a leader in solving the repatriation crisis uh this will kind of carry over when we talk more about china and taiwan and that relationship and what's going on there we've really identified a massive gap in our uh in america supply chain in the global supply chain because we went from i don't have the stat off hand but i have a rough number dave and we can validate this later but i think it was in like the 30-ish high 30ish percentile of manufacturing of chips were done here in the united states around 1990 and now we're sub 10 as of 2020. 
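To pin down the Wright's law formulation from a little earlier: if unit cost falls by a fixed fraction r every time cumulative volume doubles (the discussion quotes roughly 15 percent for airplanes and 20 to 25 percent, or about 33 percent with silicon performance gains folded in, for semiconductors), the learning curve can be written as below. The algebra is standard; the specific rates are the transcript's estimates, not mine.

```latex
% Wright's law: unit cost as a function of cumulative volume n
C(n) = C_1 \, n^{-\alpha}, \qquad \alpha = \log_2\!\left(\frac{1}{1-r}\right)
% Doubling cumulative volume multiplies unit cost by (1 - r):
\frac{C(2n)}{C(n)} = 2^{-\alpha} = 1 - r
% e.g. r = 0.25 gives \alpha \approx 0.415, and r = 0.33 (the quoted
% "33 percent cost decline" for silicon) gives \alpha \approx 0.58
```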
so we we offshored almost all of our production and so when we hit this crisis and we needed more manufacturing volume we didn't have it ready part of the problem is you get people like elon musk that come out and make comments to the media like oh it'll be fixed later this year well you can't build a fab in a year you can't build a fab and start producing volume and the other problem is not all chips are the same so not every fab can produce every chip and when you do have fabs that are capable of producing multiple chips it costs millions of dollars to change the hardware and to actually change the process so it's not like oh we're going to build 28 today because that's what ford needs to get all those f-150s out of the lot and tomorrow we're going to pump out more sevens for you know a bunch of hp pcs it's a major overhaul every time you want to retool so there's a lot of complexity here but intel is the one domestic company us-based that has basically raised its hand and said we're going to put major dollars into this and by the way dave the arm chart you showed me could have a very big implication as to why intel wants to do that yeah so right because that's that's a big part of of foundry right is is get those volumes up so i want to hold that thought because i just want to introduce one more data point because one of the things we often talk about is the way in which alternative processors have exploded onto the scene and this chart here if you could bring that up patrick thank you shows the way in which i think you're pointing out intel is responding uh by leveraging alternative fat but once again you know kind of getting getting serious about manufacturing chips what the chart shows is the performance curve it's on a log scale for in the blue line is x86 and the orange line is apple's a series and we're using that as a proxy for sort of the curve that arm is on and it's in its performance over time culminating in the a15 and it measures trillions of operations per second so if you take the traditional x86 curve of doubling every 18 to 24 months that comes out roughly to about 40 percent improvement per year in performance and that's diminishing as we all know to around 30 percent a year because the moore's law is waning the orange line is powered by arm and it's growing at over a hundred percent really 110 per year when you do the math and that's when you combine the cpu the the the neural processing unit the the the xpu the dsps the accelerators et cetera so we're seeing apple use arm aws to you to your point is building chips on on graviton and and and tesla's using our list is long and this is one reason why so daniel this curve is it feels like it's the new performance curve in the industry yeah we are certainly in an era where companies are able to take control of the innovation curve using the development using the open ecosystem of arm having more direct control and price control and of course part of that massive arm number has to do with you know mobile devices and iot and devices that have huge scale but at the same time a lot of companies have made the decision either to move some portion of their product development on arm or to move entirely on arm part of why it was so attractive to nvidia part of the reason that it's under so much scrutiny that that deal um whether that deal will end up getting completed dave but we are seeing an era where we want we i said lust for power i talked about lust for semiconductors our lust for our technology to do more uh whether that's 
software-defined vehicles whether that's the smartphones we keep in our pocket or the desktop computer we use we want these machines to be as powerful and fast and responsive and scalable as possible if you can get 100 where you can get 30 improvement with each year and generation what is the consumer going to want so i think companies are as normal following the demand of consumers and what's available and at the same time there's some economic benefits they're they're able to realize as well i i don't want to i don't want to go too deep into nvidia arm but what do you handicap that that the chances that that acquisition actually happens oh boy um right now there's a lot of reasons it should happen but there are some reasons that it shouldn't i still kind of consider it a coin toss at this point because fundamentally speaking um you know it should create more competition but there are some people out there that believe it could cause less and so i think this is going to be hung up with regulators a little bit longer than we thought we've already sort of had some previews into that dave with the extensions and some of the timelines that have already been given um i know that was a safe answer and i will take credit for being safe this one's going to be a hard one to call but it certainly makes nvidia an amazing uh it gives amazing prospects to nvidia if they're able to get this deal done yeah i i agree with you i think it's 50 50. okay my i want to pose the question is intel too strategic to fail in march of this year we published this article where we posed that question uh you and i both know pat pretty well we talked about at the time the multi-front war intel is waging in a war with amd the arm ecosystem tsmc the design firms china and we looked at the company's moves which seemed to be right from a strategy standpoint the looking at the potential impact of the u.s government intel's partnership with ibm and what that might portend the us government has a huge incentive to make sure intel wins with onshore manufacturing and that looming threat from china but daniel is intel too strategic to fail and is pat gelsinger making the right moves well first of all i do believe at this current juncture where the semiconductor and supply chain shortage and crisis still looms that intel is too strategic to fail i also believe that intel's demise is somewhat overstated not to say intel doesn't have a slate of challenges that it's going to need to address long term just with the technology adoption curve that you showed being one of them dave but you have to remember the company still has nearly 90 of the server cpu market it still has a significant market share in client and pc it is seeing market share erosion but it's not happened nearly as fast as some people had suggested it would happen with right now with the demand in place and as high as it is intel is selling chips just about as quickly as it can make them and so we right now are sort of seeing the tam as a whole the demand as a whole continue to expand and so intel is fulfilling that need but where are they really too strategic to fail i mean we've seen in certain markets in certain uh process in um you know client for instance where amd has gained of course that's still x86 we've seen uh where the m1 was kind of initially thought to be potentially a pro product that would take some time it didn't take nearly as long for them to get that product in good shape um but the foundry and fab side is where i think intel really has a chance to 
flourish right now one it can play in the arm space it can build these facilities to be able to produce and help support the production of volumes of chips using arm designs so that actually gives intel and inroads two is it's the company that has made the most outspoken commitment to invest in the manufacturing needs of the united states both here in the united states and in other places across the world where we have friendly ally relationships and need more production capabilities if not in intel b and there is no other logical company that's us-based that's going to meet the regulator and policymakers requirements right now that is also raising their hand and saying we have the know-how we've been doing this we can do more of this and so i think pat is leaning into the right area and i think what will happen is very likely intel will support manufacturing of chips by companies like qualcomm companies like nvidia and if they're able to do that some of the market share losses that they're potentially facing with innovation challenges um and engineering challenges could be offset with growth in their fab and foundry businesses and i think i think pat identified it i think he's going to market with it and you know convincing the street that's going to be a whole nother thing that this is exciting um but i think as the street sees the opportunity here this is an area that intel can really lean into so i think i i think people generally would recognize at least the folks i talk to and it'll be interested in your thoughts who really know this business that intel you know had the best manufacturing process in in the world obviously that's coming to question but but but but for instance people say well intel's 10 nanometer you know is comparable to tsm seven nanometer and that's sort of overstated their their nanometer you know loss but but so so they they were able to point as they were able to sort of hide some of the issues maybe in design with great process and and i i believe that comes down to volume so the question i have then is and i think so i think patrick's pat is doing the right thing because he's going after volume and that's what foundry brings but can he get enough volume or does he need for inst for instance i mean one of the theories i've put out there is that apple could could save the day for intel if the if the us government gets apple in a headlock and says hey we'll back off on break up big tech but you got to give pat some of your foundry volume that puts him on a steeper learning curve do you do you worry sometimes though daniel that intel just even with like qualcomm and broadcom who by the way are competitors of theirs and don't necessarily love them but even even so if they could get that those wins that they still won't have the volume to compete on a cost basis or do you feel like even if they're numbered a number three even behind samsung it's good enough what are your thoughts on that well i don't believe a company like intel goes into a business full steam and they're not new to this business but the obvious volume and expansion that they're looking at with the intention of being number two or three these great companies and you know that's same thing i always say with google cloud google's not out to be the third cloud they're out to be one well that's intel will want to to be stronger if the us government and these investments that it's looking at making this 50 plus billion dollars is looking to pour into this particular space which i don't think is actually 
enough but if if the government makes these commitments and intel being likely one of the recipients of at least some of these dollars to help expedite this process move forward with building these facilities to make increased manufacturing very likely there's going to be some precedent of law a policy that is going to be put in place to make sure that a certain amount of the volume is done here stateside with companies this is a strategic imperative this is a government strategic imperative this is a putting the country at risk of losing its technology leadership if we cannot manufacture and control this process of innovation so i think intel is going to have that as a benefit that the government is going to most likely require some of this manufacturing to take place here um especially if this investment is made the last thing they're going to want to do is build a bunch of foundries and build a bunch of fabs and end up having them not at capacity especially when the world has seen how much of the manufacturing is now being done in taiwan so i think we're concluding and i i i correctly if i'm wrong but intel is too strategic to fail and and i i sometimes worry they can go bankrupt you know trying to compete with the likes of tsmc and that's why the the the public policy and the in the in the partnership with the u.s government and the eu is i think so important yeah i don't think bankruptcy is an immediate issue i think um but while i follow your train of thought dave i think what you're really looking at more is can the company grow and continue to get support where i worry about is shareholders getting exhausted with intel's the merry-go-round of not growing fast enough not gaining market share not being clearly identified as a leader in any particular process or technology and sort of just playing the role of the incumbent and they the company needs to whether it's in ai whether it's at the edge whether it's in the communications and service provider space intel is doing well you look at their quarterly numbers they're making money but if you had to say where are they leading right now what what which thing is intel really winning uh consistently at you know you look at like ai and ml and people will point to nvidia you look at you know innovation for um client you know and even amd has been super disruptive and difficult for intel uh of course you we've already talked about in like mobile um how impactful arm has been and arm is also playing a pretty big role in servers so like i said the market share and the technology leadership are a little out of skew right now and i think that's where pat's really working hard is identifying the opportunities for for intel to play market leader and technology leader again and for the market to clearly say yes um fab and foundry you know could this be an area where intel becomes the clear leader domestically and i think that the answer is definitely yes because none of the big chipmakers in the us are are doing fabrication you know they're they're all outsourcing it to overseas so if intel can really lead that here grow that large here then it takes some of the pressure off of the process and the innovation side and that's not to say that intel won't have to keep moving there but it does augment the revenue creates a new profit center and makes the company even more strategic here domestically yeah and global foundry tapped out of of sub 10 nanometer and that's why ibm's pseudonym hey wait a minute you had a commitment there the concern i have 
and this is where again your point is i think really important with the chip shortage you know to go from you know initial design to tape out took tesla and apple you know sub sub 24 months you know probably 18 months with intel we're on a three-year design to tape out cycle maybe even four years so they've got to compress that but that as you well know that's a really hard thing to do but the chip shortage is buying them time and i think that's a really important point that you brought out early in this segment so but the other big question daniel i want to test with you is well you mentioned this about seeing arm in the enterprise not a lot of people talk about that or have visibility on that but i think you're right on so will arm and nvidia be able to seriously penetrate the enterprise the server business in particular clearly jensen wants to be there now this data from etr lays out many of the enterprise players and we've superimposed the semiconductor giants in logos the data is an xy chart it shows net score that's etr's measure of spending momentum on the vertical axis and market share on the horizontal axis market share is not like idc market share its presence in the data set and as we reported before aws is leading the charge in enterprise architecture as daniel mentioned they're they're designing their own chips nitro and graviton microsoft is following suit as is google vmware has project monterey cisco is on the chart dell hp ibm with red hat are also shown and we've superimposed intel nvidia china and arm and now we can debate the position of the logos but we know that one intel has a dominant position in the data center it's got to protect that business it cannot lose ground as it has in pcs because the margin pressure it would face two we know aws with its annapurna acquisition is trying to control its own destiny three we know vmware has project monterey and is following aws's lead to support these new workloads beyond x86 general purpose they got partnerships with pansando and arm and others and four we know cisco they've got chip design chops as does hpe maybe to a lesser extent and of course we know ibm has excellent semiconductor design expertise especially when it comes to things like memory disaggregation as i said jensen's going hard after the data center you know him well daniel we know china wants to control its own destiny and then there's arm it dominates mobile as you pointed out in iot can it make a play for the data center daniel how do you see this picture and what are your thoughts on the future of enterprise in the context of semiconductor competition it's going to take some time i believe but some of the investments and products that have been brought to market and you mentioned that shorter tape out period that shorter period for innovation whether it's you know the graviton uh you know on aws or the aiml chips that uh with trainium and inferentia how quickly aws was able to you know develop build deploy to market an arm-based solution that is being well received and becoming an increasing component of the services and and uh products that are being offered from aws at this point it's still pretty small and i would i would suggest that nvidia and arm in the spirit of trying to get this deal done probably don't necess don't want the enterprise opportunity to be overly inflated as to how quickly the company's going to be able to play in that space because that would somewhat maybe slow or bring up some caution flags that of the regulators that are that 
are monitoring this. At the same time, you could argue that Arm, offering additional options and competition, much like it's doing in client, will offer new form factors, new designs, new SKUs. The OEMs will be able to create more customized hardware offerings that might be unique for certain enterprises or industries; they can put more focus there. We're seeing the disaggregation with DPUs and how that technology uses Arm, with what AWS is doing with Nitro, and what these different companies are doing to use semiconductor technology to split out security, networking and storage. So you start to see that design innovation could become very interesting on the foundation of Arm. So in time, I certainly see momentum. Right now, the thing is, most companies in the enterprise are looking for something that's fairly well baked, off the shelf, that can meet their needs, whether it's SAP or whether it's running different custom applications that the business is built on top of, commerce solutions and so on, and Intel meets most of those needs. So Arm has made a lot of sense, for instance, with these cloud-scale providers, but not necessarily as much sense for enterprises, especially those that don't want to look at refactoring all their workloads. But as software becomes simpler, as refactoring becomes easier to do between different technologies and processors, you start to say, well, Arm could be compelling. Because the bottom line is, and we know this from mobile devices, most of us don't care what the processor is; the average person looks at many of these companies the same. In the enterprise it has always mattered, kind of like in the PC world it used to really matter; that's where "Intel Inside" was born. But as we continue to grow up, you see these different processors, these different companies, Nvidia, AMD, Intel, all seen as very worthy companies with very capable technologies in the data center. If they can offer economics, if they can offer performance, if they can offer faster time to value, people will look at them. So I'd say, in time, Dave, the answer is Arm will certainly become more and more competitive in the data center, like it was able to do at the edge and in mobile.
>> Yeah, one of the things we've talked about is that the software-defined data center is awesome, but it also created a lot of wasted overhead in terms of offloading storage, networking and security, and much of that is being done with general-purpose x86 processors, which are more expensive than, for instance, what you gave a great summary of: what AWS is doing with Graviton and Trainium and other tooling, and what Ampere is doing in Oracle. And you're seeing both of those companies, particularly AWS, get ISVs to write for Arm so they can run general-purpose applications on Arm-based processors as well. It sets up well for AI inferencing at the edge, which we know Arm is dominating, and we see all these new types of workloads coming into the data center. If you look at what companies like Nebulon and Pensando and others are doing, you're seeing a lot of their offloads going to Arm. They're putting Arm in, even though they're still using x86 in a lot of cases, but they're offloading to Arm, so it seems like they're coming in through the back door. I understand your point about them not wanting to overplay their hand, especially during these negotiations, but we think that long term it bears watching. But Intel, they have such a strong presence, they've got a super strong ecosystem, and they really have great relationships with a lot of the enterprise players and influence over them, so they're going to use that. The chip shortage benefits them, and as for the relationship with the U.S. government, Pat is spending a lot of time working that. So it's really going to be interesting to see how this plays out. Daniel, I want to give you the last word: your final thoughts on what we talked about today and where you see this all headed.
>> I think the world benefits as a whole from more competition and more innovation pressure. I like to see more players coming into the fray. I think we've seen Intel react over the last year under Pat Gelsinger's leadership; we've seen the technology innovation, the Angstrom era, the 20A, and we're starting to see what that roadmap is going to look like. We've certainly seen how companies like Nvidia can disrupt, coming into markets and playing a major role using not just hardware but software. But as a whole, as innovation continues to take form at scale, we all benefit. It means more intelligent software-defined vehicles, it puts phones in our hands that are more powerful, it gives power to cities, governments and enterprises that can build applications and tools that give us social networks and data-driven experiences. So I'm very bullish and optimistic on the space as a whole. I've said this before and I'll say it again: I believe semiconductors will eat the world. And then, we didn't even really talk about the companies in AI, like Groq or Graphcore; there are some very cool companies building things. You've got Qualcomm, which bought Nuvia, another company that could come out of the blue and offer new innovations in mobile and personal computing. There are so many cool companies, Dave. With the scale of data and the growth in demand and desire for connectivity in the world, it's never been a more interesting time to be a fan of technology. The only thing I will say, as a whole, as a society, is that I hope we can fix this manufacturing problem, because it does create risks. The supply chain, inflation, the economics, all that stuff ties together, and a lot of people don't see that. But if we can't get this manufacturing issue under control... and we didn't really talk about China, Dave. I'll just say Taiwan and China are physically very close together, and the way China sees Taiwan and the way we see Taiwan are completely different. We have very little control over what can happen; we've all seen what's happened with Hong Kong. So, as I said when I started this conversation, we've got all these trains on the track. They're all moving, but they're not in parallel; these tracks are all converging, but the convergence isn't perpendicular, so sometimes we don't see how all these things interrelate. But as a whole, it's a very exciting time. I love being in technology, and I love having the chance to come out here and talk with you.
>> I love the optimism, and you're right, that competition is going to come from China as well. Xi has made it part of his legacy, I think, to reincorporate Taiwan, and that's going to be interesting to see. I mean, Taiwan ebbs and flows with regard to its leadership; sometimes they're more pro, or I guess I should say less anti-China, maybe that's the better way to say it. And China's putting in big fab capacity for NAND. Maybe people look at that and say some of it is the low end of the market, but Clay Christensen would say go take a look at the steel industry and see what happened there. So we didn't talk much about China, and that was my oversight, but they're after self-sufficiency. It's not like they haven't tried before, kind of like Intel has tried foundry before, but I think they're really going for it this time. Now, do you believe that China will be able to get to self-sufficiency, let's say within the next 10 to 15 years, with semiconductors?
>> Yes. I would never count China out of anything if they put their mind to it, if it's something they want to put absolute focus on. Right now China vacillates between wanting to be a good player and a good steward to the world, and wanting to completely run its own show. The politicization of what's going on over there: we all saw what happened in the real estate market this past week, we saw what happened with ed tech over the last few months, and we've seen what's happened with innovation and entrepreneurship. It is not entirely clear whether China wants to give the more capitalistic innovation ecosystem a full try, but it has certainly shown that it wants to be seen as a world leader, and over the last few decades it has accomplished that in almost any area it wants to compete in. Dave, I would say if this is one of Xi Jinping's primary focuses, if he wants to do this, it would be very irresponsible to rule it out as a possibility.
>> Daniel, I've got to tell you, I love collaborating with you. We met face to face just recently, and I hope we can do this again; I'd love to have you back on the program. Thanks so much for your time and insights today.
>> Thanks for having me, Dave.
>> So Daniel's website is Futurum Research, that's three u's in Futurum; check it out at futurumresearch.com. This individual is really plugged in, he's forward thinking and a great resource. @danielnewmanUV is his Twitter, so go follow him for some great stuff. And remember, these episodes are all available as podcasts wherever you listen; all you do is search for "Breaking Analysis podcast." We publish each week on wikibon.com and siliconangle.com, and by the way, Daniel, thank you for contributing your quotes to SiliconANGLE; the writers there love you. You can always connect on Twitter, I'm @dvellante, and you can email me at david.vellante@siliconangle.com. I appreciate the comments on LinkedIn, and don't forget to check out etr.plus for all the survey data. This is Dave Vellante for theCUBE Insights powered by ETR. Be well, and we'll see you next time.
Breaking Analysis: How Nvidia Wins the Enterprise With AI
>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante.
>> Nvidia wants to completely transform enterprise computing by making data centers run 10x faster at one tenth the cost, and Nvidia's CEO, Jensen Huang, is crafting a strategy to re-architect today's on-prem data centers, public clouds and edge computing installations with a vision that leverages the company's strong position in AI architectures. The keys to this end-to-end strategy include a clarity of vision, massive chip design skills, a new Arm-based architecture approach that integrates memory, processors, I/O and networking, and a compelling software consumption model. Even if Nvidia is unsuccessful at acquiring Arm, we believe it will still be able to execute on this strategy by actively participating in the Arm ecosystem. However, if its attempts to acquire Arm are successful, we believe it will transform Nvidia from the world's most valuable chip company into the world's most valuable supplier of integrated computing architectures. Hello everyone, and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis we'll explain why we believe Nvidia is in the right position to power the world's computing centers, and how it plans to disrupt the grip that x86 architectures have had on the data center for decades.
The data center market is in transition. Like the universe, the cloud is expanding at an accelerated pace. No longer is the cloud an opaque set of remote services sitting, as I always say, somewhere out there in a mega data center. Rather, the cloud is extending to on-premises data centers, data centers are moving into the cloud, and they're connecting through adjacent locations that create hybrid interactions. Clouds are being meshed together across regions and eventually will stretch to the far edge. This new definition, or view, of cloud will be hyper-distributed and run by software. Kubernetes is changing the world of software development and enabling workloads to run anywhere. Open APIs and external applications are expanding digital supply chains, and this expanding cloud increases the threat surface and the vulnerability of the most sensitive information that resides within the data center and around the world; zero trust has become a mandate. We're also seeing AI being injected into every application, and it's the technology area that we see with the most momentum coming out of the pandemic. This new world will not be powered by general-purpose x86 processors. Rather, it will be supported by an ecosystem of Arm-based providers, in our opinion, that are effecting an unprecedented increase in processor performance, as we have been reporting. And Nvidia, in our view, is sitting in the pole position and is currently the favorite to dominate the next era of computing architecture for global data centers, public clouds, and the near and far edge.
Let's talk about Jensen Huang's clarity of vision for this new world. Here's a chart that underscores some of the fundamental assumptions he's leveraging to expand his market. The first is that there's a lot of waste in the data center. He claims that only half of the CPU cores deployed in the data center today actually support applications; the other half are processing the infrastructure all around the applications that run the software-defined data center, and they're terribly underutilized. Nvidia's BlueField-3 DPU, the data processing unit, was described in a blog post on SiliconANGLE by analyst Zeus Kerravala as a complete mini server on a card (I like that), with software-defined networking, storage and security acceleration built in. This product has the bandwidth and, according to Nvidia, can replace 300 general-purpose x86 cores. Jensen believes that every network chip will be intelligent, programmable and capable of this type of acceleration to offload conventional CPUs. He believes that every server node will have this capability, enabling every packet and every application to be monitored in real time, all the time, for intrusion. And as servers move to the edge, BlueField will be included as a core component, in his view. And this last statement by Jensen is critical, in our opinion: he says AI is the most powerful force of our time. Whether you agree with that or not, it's relevant, because AI is everywhere, and Nvidia's position in AI and the architectures the company is building are the fundamental linchpin of its data center enterprise strategy.
So let's take a look at some ETR spending data to see where AI fits on the priority list. Here's a set of data in a view that we often like to share. The horizontal axis is market share, or pervasiveness in the ETR data, but we want to call your attention to the vertical axis; that's what we really want to pay attention to today. That's Net Score, or spending momentum. Exiting the pandemic, we've seen AI capture the number one position in the last two surveys, and we think this dynamic will continue for quite some time as AI becomes the staple of digital transformations and automations. AI will be infused in every single dot you see on this chart. Nvidia's architectures, it just so happens, are tailor-made for AI workloads, and that is how it will enter these markets.
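For readers who want to see the arithmetic behind the Net Score metric that recurs in these charts, here is a minimal sketch. It assumes the simplified definition ETR has described publicly, the share of respondents spending more (adoptions plus increases) minus the share spending less (decreases plus replacements), and the sample survey responses below are made up purely for illustration.

```python
# Minimal sketch of an ETR-style Net Score calculation.
# Assumes the simplified definition: percent of respondents increasing spend
# (adoption + increase) minus percent decreasing spend (decrease + replacement).
# The survey responses below are illustrative, not real ETR data.

from collections import Counter

def net_score(responses):
    """responses: one string per respondent, e.g. 'adopt', 'increase',
    'flat', 'decrease', or 'replace'."""
    counts = Counter(responses)
    n = len(responses)
    positive = counts["adopt"] + counts["increase"]
    negative = counts["decrease"] + counts["replace"]
    return 100.0 * (positive - negative) / n

sample = (["adopt"] * 12 + ["increase"] * 45 + ["flat"] * 30
          + ["decrease"] * 8 + ["replace"] * 5)
print(f"Net Score: {net_score(sample):.1f}")  # 44.0, above the 40 line treated as elevated
```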
Let's quantify what that means and lay out our view of how Nvidia, with the help of Arm, will go after the enterprise market. Here's some data from Wikibon research that depicts the percent of worldwide spending on server infrastructure by workload type. Here are the key points. First, the market last year was around 78 billion dollars worldwide and is expected to approach 115 billion by the end of the decade, and that might even be a conservative figure. We've split the market into three broad workload categories. The blue is AI and other related applications, what David Floyer calls matrix workloads. The orange is general purpose: think ERP, supply chain, HCM, collaboration, basically the Oracle, SAP and Microsoft work that's being supported today, and of course many other software providers. And the gray is the area Jensen was referring to as being wasted: the offload work for networking and storage and all the software-defined management in the data centers around the world. Okay, you can see the squeeze that we think is going to occur around that orange area. General-purpose workloads, we think, are going to get squeezed over the next several years on a percentage basis, and on an absolute basis the orange is really not growing nearly as fast as the other two. Nvidia with Arm, in our view, is well positioned to attack the blue area and the gray area, the workload offloads and the new, emerging AI applications. But even the orange, as we've reported, is under pressure, because companies like AWS and Oracle use Arm-based designs to service general-purpose workloads. Why are they doing that? Cost is the reason, because x86 generally, and Intel specifically, are not delivering the price/performance and efficiency required to keep up with the demands to reduce data center costs. And if Intel doesn't respond, which we believe it will, but if it doesn't act, Arm, we think, will get 50 percent of the general-purpose workloads by the end of the decade, and with Nvidia it will dominate the blue, the AI, and the gray, the offload work. When we say dominate, we're talking capture 90 percent of the available market if Intel doesn't respond. Now, Intel is not just going to sit back and let that happen; Pat Gelsinger is well aware of this and is moving Intel to a new strategy. But Nvidia and Arm are way ahead in the game, in our view, and as we've reported, this is going to be a real challenge for Intel to catch up.
Now let's take a quick look at what Nvidia is doing with relevant parts of its pretty massive portfolio. Here's a slide that shows Nvidia's three-chip strategy. The company is shifting to Arm-based architectures, which we'll describe in more detail in a moment. The slide shows, at the top line, Nvidia's Ampere architecture, not to be confused with the company Ampere Computing. Nvidia is taking a GPU-centric approach, no surprise, for obvious reasons; that's their stronghold. But we think over time it may rethink this a little bit and lean more into NPUs, the neural processing units. We look at what Apple is doing and what Tesla is doing, and we see opportunities for companies like Nvidia to go after that, but we'll save that for another day. Nvidia has announced its Grace CPU, a nod to the famous computer scientist Grace Hopper. Grace is a new architecture that doesn't rely on x86 and uses memory resources much more efficiently; again, we'll describe this in more detail later. And the bottom roadmap line shows the BlueField DPU, which, as we described, is essentially a complete server on a card. In this approach, using Arm will reduce the elapsed time to go from chip design to production by 50 percent; we're talking about shaving years down to 18 months or less. We don't have time to do a deep dive into Nvidia's portfolio, it's large, but we want to share some things that we think are important, and this next graphic is one of them. It shows some of the details of Nvidia's Jetson architecture, which is designed to accelerate those AI-plus workloads that we showed earlier. The reason this is important, in our view, is that the same software supports everything from small systems to very large ones, including edge systems, and we think this type of architecture is very well suited for AI inference at the edge as well as core data center applications that use AI. As we've said before, a lot of the action in AI is going to happen at the edge, so this is a good example of leveraging an architecture across a wide spectrum of performance and cost.
Now we want to take a moment to explain why the move to Arm-based architectures is so critical to Nvidia. One of the biggest cost challenges for Nvidia today is keeping the GPU utilized; typical GPU utilization is well below 20 percent. Here's why. The left-hand side of this chart shows, essentially, racks of traditional compute and the bottlenecks that Nvidia faces. The processor and DRAM are tied together in separate blocks. Imagine there are thousands of cores in a rack, and every time you need data that lives in another processor, you have to send a request and go retrieve it; it's very overhead-intensive. Technologies like RoCE are designed to help, but they don't solve the fundamental architectural bottleneck. Every GPU shown here also has its own DRAM, and it has to communicate with the processors to get the data; that is, they can't communicate with each other efficiently. Now, the right-hand side shows where Nvidia is headed. Start in the middle with the system on chip, the SoC. CPUs are packaged in with NPUs, IPUs (that's the image processing unit) and the other "xPUs," the alternative processors. They're all connected with SRAM, which you can think of as a high-speed layer, like an L1 cache. The OS for the system on chip lives inside of this, and that's where Nvidia has a killer software model: it's licensing the consumption of the operating system that runs this system on chip, across the entire system, and effecting a new and really compelling subscription model. Maybe they should just give away the chips and charge for the software, like a razor-and-blades model; talk about disruptive. Now, the outer layer is the DPU and the shared DRAM and other resources, like Ampere Computing (the company this time) CPUs, SSDs and so on. These are the processors that will manage the SoCs together. This design is based on Nvidia's three-chip approach, using the BlueField DPU and leveraging Mellanox, the networking component. The network enables shared DRAM across the CPUs, which will eventually all be Arm-based. Grace lives inside the system on chip and also in the outside layers, and of course the GPU lives inside the SoC in a scaled-down version, for instance a rendering GPU, and we show some GPUs on the outer layer as well for AI workloads, at least in the near term. Eventually we think they may reside solely in the system on chip, but only time will tell.
Okay, so as you can see, Nvidia is making some serious moves, and by teaming up with Arm and leaning into the Arm ecosystem, it plans to take the company to its next level. So let's talk about how we think competition for the next era of compute stacks up. Here's that same XY graph that we love to show: market share, or pervasiveness, on the horizontal axis, tracking against Net Score on the vertical; Net Score, again, is spending velocity. We've cut the ETR data to capture players that are big in compute, storage and networking, and we've plugged in a couple of the cloud players. These are the companies that we feel are vying for data center leadership around compute. AWS is in a very strong position. We believe that more than half of its revenue comes from compute; with EC2 we're talking about more than 25 billion dollars on a run-rate basis, which is huge. The company designs its own silicon, Graviton2 and so on, and is working with ISVs to run general-purpose workloads on Arm-based Graviton chips. Microsoft and Google are going to follow suit. They're big consumers of compute, and they sell a lot of it, but Microsoft in particular is likely to continue working with OEM partners to attack that on-prem data center opportunity. But it's really Intel that's the provider of compute to the likes of HPE, Dell, Cisco and the ODMs, which are not shown here. Now, HPE, let's talk about them for a second. They have architectures, and I hate to bring it up, but remember The Machine? I know it's the butt of many jokes, especially from competitors, and frankly HPE and HP deserve some of that heat for all the fanfare they put out there before quietly pulling The Machine, or putting it out to pasture. But HPE has a strong position in high-performance computing, and the work it did on new computing architectures and shared memories with The Machine might still be kicking around somewhere inside of HP and could come in handy some day. So HPE has some chops there; plus HP historically has been known to design its own custom silicon, so I would not count them out as an innovator in this race. Cisco is interesting because it not only has custom silicon designs, but its entry into the compute business with UCS a decade ago was notable. They created a new way to think about integrating resources, particularly compute and networking, with partnerships to add in the storage piece. Initially that was with EMC, prior to the Dell acquisition, and it continues with NetApp, Pure and others. Cisco spends money investing in architectures, and we expect the next generation of UCS, UCS 2.0, will mark another notable milestone in the company's data center business. Dell just had an amazing quarterly earnings report. The company grew top-line revenue by around 12 percent, and it wasn't because of an easy compare to last year; Dell is simply executing, despite continued softness in the legacy EMC storage business. Laptop demand continued to soar, and Dell's server business is growing again. But we don't see Dell as an architectural innovator per se in compute; rather, we think the company will be content to partner with suppliers, whether it's Intel, Nvidia, Arm-based partners, or all of the above. Dell, we think, will rely on its massive portfolio, its excellent supply chain and its execution ethos to compete. Now, IBM is notable for historical reasons with its mainframe. IBM created the first great compute monopoly before it unwittingly handed it to Intel, along with Microsoft. We don't see IBM aspiring to retake the compute platform mantle it once held with mainframes; rather, Red Hat and the march to hybrid cloud is, in our view, IBM's approach.
Now let's get down to the elephants in the room: Intel, Nvidia and China Inc. China is of course relevant because of companies like Alibaba and Huawei and the Chinese government's desire to be self-sufficient in semiconductor technology, and in technology generally. But our premise here is that the trends favor Nvidia over Intel in this picture, because Nvidia is making moves to further position itself for new workloads in the data center and to compete for Intel's stronghold. Intel is going to attempt to remake itself, but it should have been doing what Pat Gelsinger is doing today seven years ago. Intel is simply far behind, and it's going to take at least a couple of years for them to really start to make inroads in this new model. Let's stay on the Nvidia-versus-Intel comparison for a moment and take a snapshot of the two companies. Here's a quick chart that we put together with some basic KPIs. Some of these figures are approximations or rounded, so don't stress over them too much, but you can see Intel is an 80-billion-dollar company, 4x the size of Nvidia, yet Nvidia's market cap far exceeds that of Intel. Why is that? Growth, of course; in our view it's justified by that growth and by Nvidia's strategic positioning. Intel used to be the gross margin king, but Nvidia now has much higher gross margins. Interesting. When it comes down to free cash flow, Intel is still dominant, and as it pertains to the balance sheet, Intel is way more capital-intensive than Nvidia; as it starts to build out its foundries, that's going to eat into Intel's cash position. Now, what we did is put together a little pro forma, in the third column, of Nvidia plus Arm, circa, let's say, the end of 2022. We think they could get to a run rate that is about half the size of Intel, and that could propel the company's market cap to well over half a trillion dollars if they get any credit for Arm. They're paying 40 billion dollars for Arm, a company that's sub 2 billion in revenue. The risk is that, because the Arm deal is based on cash plus tons of stock, it could put pressure on the market capitalization for some time. Arm has 90 percent gross margins, because it pretty much has a pure license model, so it helps the gross margin line a little bit in this pro forma. And the balance sheet is a swag; Arm has said that it's not going to take on debt to do the transaction, but we haven't had time to really dig into that and figure out how they're going to structure it, so we took a swag given this low-interest-rate environment. Take that with a grain of salt; we'll do more research there.
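To make that pro forma reasoning concrete, here is a back-of-the-envelope sketch. The revenue and margin inputs are the rough figures cited in this episode (Intel at roughly 80 billion dollars, Nvidia at about a quarter of that, Arm under 2 billion with roughly 90 percent gross margins); the growth rates and the Nvidia margin figure are our own illustrative placeholders, not guidance from any of these companies.

```python
# Back-of-the-envelope pro forma for a combined Nvidia + Arm, circa end of 2022.
# Revenue and Arm-margin inputs are the rough figures cited above; the growth
# rates and Nvidia's ~64% gross margin are illustrative assumptions only.

def grow(revenue, annual_rate, years):
    return revenue * (1 + annual_rate) ** years

nvidia_rev_now = 20.0            # $B, roughly one quarter of Intel's ~$80B
arm_rev_now = 1.9                # $B, "sub 2 billion"
nvidia_gm, arm_gm = 0.64, 0.90   # gross margins; Arm ~90% per its license model

# Assumed growth to the end of 2022 (~45%/yr for Nvidia, ~15%/yr for Arm).
nvidia_rev = grow(nvidia_rev_now, 0.45, 1.5)
arm_rev = grow(arm_rev_now, 0.15, 1.5)
combined_rev = nvidia_rev + arm_rev

# Arm's license model nudges the blended gross margin up a bit.
blended_gm = (nvidia_rev * nvidia_gm + arm_rev * arm_gm) / combined_rev

print(f"Combined run rate: ~${combined_rev:.0f}B (Intel today: ~$80B)")
print(f"Blended gross margin: {blended_gm:.0%}")
```

Under these assumptions the combined run rate lands in the neighborhood of half of Intel's, which is the scenario described above.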
The point is, given the momentum and growth of Nvidia, its strategic position in AI, its deep engineering aimed at all the right places, and its potential to unlock huge value with Arm, on paper it looks like the horse to beat, if it can execute. All right, let's wrap up; here's a summary. Look, the architectures on which Nvidia is building its dominant AI business are evolving, and Nvidia is well positioned to drive a truck right into the enterprise, in our view. The power has shifted from Intel to the Arm ecosystem, and Nvidia is leaning in big time, whereas Intel has to preserve its current business while recreating itself at the same time. This is going to take a couple of years, but Intel potentially has the powerful backing of the U.S. government: too strategic to fail. The wild card is whether Nvidia will be successful in acquiring Arm. Certain factions in the UK and EU are fighting the deal because they don't want the U.S. dictating to whom Arm can sell its technology; look, for example, at the restrictions placed on Huawei for many suppliers of Arm-based chips, based on U.S. sanctions. Nvidia's competitors, like Broadcom, Qualcomm, et al., are nervous that if Nvidia gets Arm, they, Nvidia's competitors, will be at a competitive disadvantage. And for sure, China doesn't want Nvidia controlling Arm, for obvious reasons, and it will do what it can to block the deal and/or put handcuffs on how business can be done in China. We can see a scenario where the U.S. government pressures the UK and EU regulators to let this deal go through; AI and semiconductors, you can't get much more strategic than that for the U.S. military and for U.S. long-term competitiveness. In exchange for facilitating the deal, the government pressures Nvidia to guarantee that it feeds some business to Intel's foundry, while at the same time imposing conditions that secure access to Arm-based technology for Nvidia's competitors, and maybe, as we've talked about before, having them funnel business to Intel's foundry. Actually, we've talked about the U.S. government enticing Apple to do so, but it could also entice Nvidia's competitors to do so, propping up Intel's foundry business, which is clearly starting from ground zero and is going to need help beyond Intel's own internal semiconductor manufacturing. Look, we don't have any inside information as to what's happening behind the scenes with the U.S. government and so forth, but on its earnings call Nvidia said they're working with regulators and are on track to complete the deal in early 2022. We'll see.
Okay, that's it for today. Thank you to David Floyer, who co-created this episode with me. Remember, I publish each week on wikibon.com and siliconangle.com, and these episodes are all available as podcasts; all you've got to do is search for "Breaking Analysis podcast." You can always connect with me on Twitter at @dvellante, or email me at david.vellante@siliconangle.com. I always appreciate the comments on LinkedIn, and on Clubhouse, please follow me so you can be notified when we start a room and riff on these topics. And don't forget to check out etr.plus for all the survey data. This is Dave Vellante for theCUBE Insights powered by ETR. Be well, and we'll see you next time.
Shiv Gupta, U of Digital | The Cookie Conundrum: A Recipe for Success
>> Welcome back to the Quantcast industry summit on the demise of third-party cookies: The Cookie Conundrum, A Recipe for Success. I'm John Furrier, host of theCUBE. The changing landscape of advertising is here, and Shiv Gupta, founder of U of Digital, is joining us. Shiv, thanks for coming on this segment. Really appreciate it; I know you're busy. You've got two young kids, as well as providing education to the digital industry; you've got some kids to take care of and train, too. So welcome to theCUBE conversation here as part of the program.
>> Yeah, thanks for having me. Excited to be here.
>> So the changing landscape of advertising really centers around the open web versus the walled-garden mindset and the big power players. We know the big three or four tech players dominate the marketplace, so we're clearly at a major inflection point, and we've seen this movie before. The web and mobile revolutions were basically a re-platforming of capabilities, but now we're in an era of refactoring the industry, not re-platforming: a complete changing over of the value proposition. So a lot is at stake here as this open web, this open, global internet, evolves. What's your take on the industry proposals out there that are talking to this specific cookie issue? What do they mean, and what proposals are out there?
>> Yeah, so I really view the identity proposals in two groups. On one side you have what the walled gardens are doing, and that's really being led by Google. Google introduced something called the Privacy Sandbox when they announced that they would be deprecating third-party cookies. As part of the Privacy Sandbox they've had a number of proposals; unfortunately, or however you want to say it, they're all bird-themed for some reason, I don't know why. But the bird-themed proposal that they've chosen to move forward with is called FLoC, which stands for Federated Learning of Cohorts. Essentially, what it all boils down to is that Google is moving forward with cohort-level learning and understanding of users in the future, after third-party cookies, unlike what we've been accustomed to in this space, which is a user-level understanding of people and what they're doing online for targeting and tracking purposes. So that's one side of the equation: what Google is doing with FLoC and the Privacy Sandbox. On the other side are things like Unified ID 2.0, or the work that ID5 is doing, building new identity frameworks for the entire space that can still get down to the user level. Unified ID 2.0 comes to mind because it's the one that has probably got the most adoption in the space. It's an open-source framework, so the idea is that it's free and pretty much publicly available to anybody that wants to use it, and Unified ID 2.0, again, is user-level. It's basically taking authenticated data from users across various websites that are logging people in, and using those authenticated users to create some kind of identity map. So if you think about those two work streams, you've got the walled gardens, Google with FLoC, on one side, and Unified ID 2.0 and other ID frameworks for the open internet on the other side. You've got two very different approaches to identity in the future. Again, on the Google side it's cohort-level and it's going to be built into Chrome. The idea is that you can pretty much do a lot of the things we do with advertising today, but now you're doing them at a group level, so that you're protecting privacy, whereas on the other side, on the open internet, you're still getting down to the user level. And that's pretty powerful, but the issue there is scale. We know that a lot of people are not logged in on lots of websites; I think the stat I saw is that under five percent of all website traffic is authenticated. So really, if you simplify things and boil it all down, you have these two very different approaches.
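To make the two approaches a bit more concrete, here is a simplified sketch. It is not the actual FLoC or Unified ID 2.0 code; both specs involve much more (salting, encryption, rotation, consent signals, federated cohort assignment in the browser). It just illustrates the basic difference between a user-level identifier derived from authenticated logins and a cohort-level grouping.

```python
# Simplified illustration of the two identity approaches discussed above.
# Not the real Unified ID 2.0 or FLoC implementations; just the core idea of
# user-level tokens from authenticated logins vs. cohort-level grouping.

import hashlib

def user_level_id(email: str) -> str:
    """UID2-style idea: normalize an authenticated email and hash it into a
    pseudonymous, user-level token (the real spec adds salts, encryption and
    rotation)."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode()).hexdigest()[:16]

def cohort_id(browsing_topics: set, num_cohorts: int = 1000) -> int:
    """FLoC-style idea: map a user's browsing interests into one of a few
    thousand cohorts, so targeting happens only at the group level."""
    digest = hashlib.sha256(",".join(sorted(browsing_topics)).encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_cohorts

print(user_level_id("Jane.Doe@example.com"))        # same token wherever she logs in
print(cohort_id({"cycling", "cameras", "travel"}))  # one of ~1,000 interest groups
```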
>> I guess the question really comes down to: what alternatives are out there for cookies, and which ones do you think will be more successful? Because I think the consensus, at least from my reporting, in my view, is that the world agrees: let's make it open. Which one is going to be better?
>> Yeah, that's a great question, John. So as I mentioned, we have two work streams here: the walled-garden work stream being led by Google and their work around FLoC, and then the open internet; let's say Unified ID 2.0 kind of represents that. I personally don't believe there is a right answer or an endgame here. I don't think one of them wins over the other, frankly. First of all, you have those two frameworks; neither of them is perfect, they're both flawed in their own ways, and there are pros and cons to both of them. So what we're starting to see now is other companies coming in and building on top of both of them as a kind of hybrid solution. They're saying, hey, we use an open ID framework in this way to get down to the user level and use that authenticated data, and that's important, but we don't have all the scale. So now we go to Google, and we go to FLoC, to fill in the scale. Oh, and by the way, we have some of our own special sauce: we have some of our own data, we have some of our own partnerships, and we're going to bring that in and layer it on top. So really, where I think things are headed is that the right answer, frankly, is not one or the other; it's a little mishmash of both, with a little extra something on top. That's what we're starting to see out of a lot of companies in the space, and I think that's frankly where we're headed.
>> What do you think the industry will evolve to, in your opinion? Because you can't ignore the big guys on this; they have the programmatic scale, and, as you mentioned, the data is there. What do you think the market will evolve to with this conundrum?
>> So I think, John, where are we headed? Right now we're having this existential crisis about identity in this industry, because our world is being turned upside down. All the mechanisms that we've used for years and years are being thrown out the window, and we're being told that we're going to have new mechanisms. Cookies are going away, device IDs are going away, and now we've got to come up with new things. So the world is being turned upside down, it's everything you read about in the trades, and we're here talking about it; everyone's always talking about identity right now.
Where do I think this is going, if I look into my crystal ball? This is how I would play it out. If you think about identity today, forget about all the changes, just think about it now and maybe a few years before today: identity for marketers, in my opinion, has been a little bit of a checkbox activity. It's been: hey, ad tech company or media company, do you have an identity solution? Okay, tell me a little bit more about it. Okay, sounds good. Now can we move on and talk about my business and how you're going to drive meaningful outcomes for my business? And I believe the reason for that is that identity is a little abstract. It's not something you can actually get meaningful validation against; it's just something where, yes, you have it, okay, great, let's move on, type of thing. So that's kind of where we've been. Now, all of a sudden, the cookies are going away and the device IDs are going away, so the world is turning upside down. We're in this crisis of how we're going to keep doing, in the future, what we were doing for the last 10 years, so everyone's talking about it and we're trying to re-engineer the mechanisms. Now, if I look into the crystal ball, two or three years from now, where I think we're headed is: not much is going to change. And what I mean by that, John, is that I think marketers will still go to companies and say, do you have an ID solution? Okay, tell me more about it. Okay, let me understand a little bit better. Okay, you do it this way. Sounds good. The ways in which companies do it will be different; it's FLoC and Unified ID 2.0 and this and that, so the mechanisms will be a little bit different. But the end state, the actual way in which we operate as an industry and the view of the landscape, in my opinion, will be very similar. Marketers will still view it as: tell me you have an ID solution, make me feel good about it, help me check the box, and let's move on and talk about my business and how you're going to solve for my needs. So I think that's where we're going. That is not, by any means, to discount this existential moment that we're in. This is a really important moment, where we do have to talk about and figure out what we're going to do in the future. My viewpoint is just that the future will actually not look all that different from the present.
>> And I'll say the user base is the audience, and their data behind it helps create new experiences. Machine learning and AI are going to create those, and we have the data, whether you're sharing it or using it, as we're finding. Shiv Gupta, great insight, dropping some nice gems here: founder of U of Digital and also adjunct professor of programmatic advertising at the Leavey School of Business at Santa Clara University. Professor, thank you for coming on and dropping the gems and insight here. Thank you.
>> Thanks a lot for having me, John. Really appreciate it.
>> Thanks for watching The Cookie Conundrum, here on theCUBE. I'm John Furrier, your host. Thanks for watching.
Breaking Analysis: Moore's Law is Accelerating and AI is Ready to Explode
>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is breaking analysis with Dave Vellante. >> Moore's Law is dead, right? Think again. Massive improvements in processing power combined with data and AI will completely change the way we think about designing hardware, writing software and applying technology to businesses. Every industry will be disrupted. You hear that all the time. Well, it's absolutely true and we're going to explain why and what it all means. Hello everyone, and welcome to this week's Wikibon Cube Insights powered by ETR. In this breaking analysis, we're going to unveil some new data that suggests we're entering a new era of innovation that will be powered by cheap processing capabilities that AI will exploit. We'll also tell you where the new bottlenecks will emerge and what this means for system architectures and industry transformations in the coming decade. Moore's Law is dead, you say? We must have heard that hundreds, if not, thousands of times in the past decade. EE Times has written about it, MIT Technology Review, CNET, and even industry associations that have lived by Moore's Law. But our friend Patrick Moorhead got it right when he said, "Moore's Law, by the strictest definition of doubling chip densities every two years, isn't happening anymore." And you know what, that's true. He's absolutely correct. And he couched that statement by saying by the strict definition. And he did that for a reason, because he's smart enough to know that the chip industry are masters at doing work arounds. Here's proof that the death of Moore's Law by its strictest definition is largely irrelevant. My colleague, David Foyer and I were hard at work this week and here's the result. The fact is that the historical outcome of Moore's Law is actually accelerating and in quite dramatically. This graphic digs into the progression of Apple's SoC, system on chip developments from the A9 and culminating with the A14, 15 nanometer bionic system on a chip. The vertical axis shows operations per second and the horizontal axis shows time for three processor types. The CPU which we measure here in terahertz, that's the blue line which you can't even hardly see, the GPU which is the orange that's measured in trillions of floating point operations per second and then the NPU, the neural processing unit and that's measured in trillions of operations per second which is that exploding gray area. Now, historically, we always rushed out to buy the latest and greatest PC, because the newer models had faster cycles or more gigahertz. Moore's Law would double that performance every 24 months. Now that equates to about 40% annually. CPU performance is now moderated. That growth is now down to roughly 30% annual improvements. So technically speaking, Moore's Law as we know it was dead. But combined, if you look at the improvements in Apple's SoC since 2015, they've been on a pace that's higher than 118% annually. And it's even higher than that, because the actual figure for these three processor types we're not even counting the impact of DSPs and accelerator components of Apple system on a chip. It would push this even higher. Apple's A14 which is shown in the right hand side here is quite amazing. It's got a 64 bit architecture, it's got many, many cores. It's got a number of alternative processor types. But the important thing is what you can do with all this processing power. 
In an iPhone, the types of AI that we show here that continue to evolve, facial recognition, speech, natural language processing, rendering videos, helping the hearing impaired and eventually bringing augmented reality to the palm of your hand. It's quite incredible. So what does this mean for other parts of the IT stack? Well, we recently reported Satya Nadella's epic quote that "We've now reached peak centralization." So this graphic paints a picture that was quite telling. We just shared the processing powers exploding. The costs consequently are dropping like a rock. Apple's A14 cost the company approximately 50 bucks per chip. Arm at its v9 announcement said that it will have chips that can go into refrigerators. These chips are going to optimize energy usage and save 10% annually on your power consumption. They said, this chip will cost a buck, a dollar to shave 10% of your refrigerator electricity bill. It's just astounding. But look at where the expensive bottlenecks are, it's networks and it's storage. So what does this mean? Well, it means the processing is going to get pushed to the edge, i.e., wherever the data is born. Storage and networking are going to become increasingly distributed and decentralized. Now with custom silicon and all that processing power placed throughout the system, an AI is going to be embedded into software, into hardware and it's going to optimize a workloads for latency, performance, bandwidth, and security. And remember, most of that data, 99% is going to stay at the edge. And we love to use Tesla as an example. The vast majority of data that a Tesla car creates is never going to go back to the cloud. Most of it doesn't even get persisted. I think Tesla saves like five minutes of data. But some data will connect occasionally back to the cloud to train AI models and we're going to come back to that. But this picture says if you're a hardware company, you'd better start thinking about how to take advantage of that blue line that's exploding, Cisco. Cisco is already designing its own chips. But Dell, HPE, who kind of does maybe used to do a lot of its own custom silicon, but Pure Storage, NetApp, I mean, the list goes on and on and on either you're going to get start designing custom silicon or you're going to get disrupted in our view. AWS, Google and Microsoft are all doing it for a reason as is IBM and to Sarbjeet Johal said recently this is not your grandfather's semiconductor business. And if you're a software engineer, you're going to be writing applications that take advantage of all the data being collected and bringing to bear this processing power that we're talking about to create new capabilities like we've never seen it before. So let's get into that a little bit and dig into AI. You can think of AI as the superset. Just as an aside, interestingly in his book, "Seeing Digital", author David Moschella says, there's nothing artificial about this. He uses the term machine intelligence, instead of artificial intelligence and says that there's nothing artificial about machine intelligence just like there's nothing artificial about the strength of a tractor. It's a nuance, but it's kind of interesting, nonetheless, words matter. We hear a lot about machine learning and deep learning and think of them as subsets of AI. Machine learning applies algorithms and code to data to get "smarter", make better models, for example, that can lead to augmented intelligence and help humans make better decisions. 
These models improve as they get more data and are iterated over time. Now, deep learning is a more advanced type of machine learning; it uses more complex math. But the point we want to make here is that today much of the activity in AI is around building and training models, and this is mostly happening in the cloud. We think AI inference will bring the most exciting innovations in the coming years. Inference is the deployment of that model we were just talking about: taking real-time data from sensors, processing that data locally, applying the training that was developed in the cloud, and making micro-adjustments in real time. So let's take an example. Again, we love Tesla examples. Think about an algorithm that optimizes the performance and safety of a car on a turn. The model takes data on friction, road conditions, angles of the tires, tire wear, tire pressure, all this data, and it keeps testing and iterating, testing and iterating, testing and iterating that model until it's ready to be deployed. And then all this intelligence goes into an inference engine, which is a chip that goes into a car, gets data from sensors, and makes those micro-adjustments in real time on steering and braking and the like. Now, as we said before, Tesla persists the data for a very short time, because there's so much of it; it just can't push it all back to the cloud. But it can selectively store certain data if it needs to, and then send that data back to the cloud to further train the model. Let's say, for instance, an animal runs into the road during slick conditions. Tesla wants to grab that data, because they notice that there are a lot of accidents in New England in certain months. Maybe Tesla takes that snapshot and sends it back to the cloud, combines it with other data from other parts of the country or other regions of New England, and perfects that model further to improve safety. This is just one example of the thousands and thousands that are going to further develop in the coming decade.
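As a way to picture what that inference loop looks like in practice, here is a highly simplified sketch in the spirit of the example above. The sensor names, thresholds and model are all hypothetical placeholders, not anything Tesla has published; the point is the pattern: run a cloud-trained model locally in real time, keep only a short rolling window of data, and selectively queue interesting events for upload and retraining.

```python
# Highly simplified sketch of edge AI inference. The model, sensors and
# thresholds are hypothetical; the pattern is: infer locally in real time,
# keep a short rolling buffer, and upload only rare "interesting" events.

from collections import deque
import random

BUFFER_TICKS = 5 * 60            # keep roughly five minutes of history
recent = deque(maxlen=BUFFER_TICKS)
upload_queue = []

def read_sensors():
    # Placeholder for real telemetry: friction, tire pressure, steering angle...
    return {"friction": random.uniform(0.2, 1.0), "steering": random.uniform(-5, 5)}

def run_model(sample):
    # Placeholder for the cloud-trained model deployed to the inference engine.
    return {"brake_adjust": 0.1 if sample["friction"] < 0.35 else 0.0}

def is_interesting(sample, action):
    # e.g. slick-road events worth sending back to improve the model.
    return action["brake_adjust"] > 0

for _ in range(1000):            # one iteration per sensor tick
    sample = read_sensors()
    action = run_model(sample)   # micro-adjustment applied locally, in real time
    recent.append((sample, action))
    if is_interesting(sample, action):
        upload_queue.append(sample)   # queued for the cloud training pipeline

print(f"kept {len(recent)} recent ticks, queued {len(upload_queue)} events for upload")
```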
I want to talk about how we see this evolving over time. Inference is where we think the value is; that's where the rubber meets the road, so to speak, based on the previous example. Now, this conceptual chart shows the percent of spend over time on modeling versus inference, and you can see some of the applications that get attention today and how these applications will mature over time as inference becomes more and more mainstream. The opportunities for AI inference at the edge and in IoT are enormous, and we think that over time 95% of that spending is going to go to inference, where it's probably only 5% today. Today's modeling workloads are pretty prevalent in things like fraud, adtech, weather, pricing, recommendation engines, and those kinds of things, and those will keep getting better and better over time. Now, in the middle here, we show the industries which are all going to be transformed by these trends. One of the points that Moschella made in his book is why, historically, vertical industries are pretty stovepiped: they have their own stacks, sales and marketing and engineering and supply chains, et cetera, and experts within those industries tend to stay within those industries, so they're largely insulated from disruption from other industries, maybe unless they were part of a supply chain. But today, you see all kinds of cross-industry activity. Amazon entering grocery, entering media. Apple in finance and potentially getting into EVs. Tesla eyeing insurance. There are many, many examples of tech giants crossing traditional industry boundaries, and the reason is data. They have the data, and they're applying machine intelligence to that data and improving. Auto manufacturers, for example, over time are going to have better data than insurance companies. DeFi, decentralized finance platforms, are going to use the blockchain and continue to improve. Blockchain today doesn't offer great performance; it's very overhead-intensive with all that encryption. But as these platforms take advantage of this new processing power and better software and AI, they could very well disrupt traditional payment systems. And again, there are so many examples here. But what I want to do now is dig into enterprise AI a bit. Just a quick reminder: we showed this last week in our Armv9 post. This is data from ETR. The vertical axis is net score; that's a measure of spending momentum. The horizontal axis is market share, or pervasiveness in the dataset. The red line at 40% is a subjective anchor that we use; anything above 40% we think is really good. Machine learning and AI is the number one area of spending velocity and has been for a while. RPA is right there; frankly, it's an adjacency to AI, and you could even argue it's part of it. And it's the cloud where all the ML action is taking place today. But that will change, we think, as we just described, because data is going to get pushed to the edge. And this chart will show you some of the vendors in that space. These are the companies that CIOs and IT buyers associate with their AI and machine learning spend. So it's the same XY graph: spending velocity by market share on the horizontal axis. Microsoft, AWS, Google, of course; the big cloud guys dominate AI and machine learning. Facebook's not on here; Facebook's got great AI as well, but it's not enterprise tech spending. These cloud companies have the tooling, they have the data, they have the scale, and as we said, lots of modeling is going on today, but this is going to increasingly be pushed into remote AI inference engines that will have massive processing capabilities collectively. So we're moving away from that peak centralization, as Satya Nadella described. You see Databricks on here; they're seen as an AI leader. SparkCognition is off the charts, literally, in the upper left; they have an extremely high net score, albeit with a small sample. They apply machine learning to massive data sets. DataRobot does automated AI; they're super high on the y-axis. Dataiku helps create machine-learning-based apps. C3.ai, you're hearing a lot more about them; Tom Siebel's involved in that company. It's an enterprise AI firm, and you hear a lot of ads now about doing AI in a responsible way, really the kind of enterprise AI that's sort of always been IBM Watson's calling card. There's SAP with Leonardo, Salesforce with Einstein. Again, IBM Watson is right there, just at the 40% line. You see Oracle there as well; they're embedding automated, or machine, intelligence with their self-driving database, as they call it, that sort of machine intelligence in the database. You see Adobe there. So a lot of typical enterprise company names. And the point is that these software companies are all embedding AI into their offerings. So if you're an incumbent company and you're trying not to get disrupted, the good news is you can buy AI from these software companies. You don't have to build it.
You don't have to be an expert at AI. The hard part is going to be how and where to apply AI, and the simplest answer there is: follow the data. There's so much more to the story, but we have to leave it there for now, and I want to summarize. We have been pounding the table that the post-x86 era is here. It's a function of volume: Arm volumes, wafer volumes, are 10X those of x86. Pat Gelsinger understands this; that's why he made that big announcement. He's trying to transform the company. The importance of volume in terms of lowering the cost of semiconductors can't be overstated. And today, we've quantified something that we haven't really seen before, which is that the actual performance improvements we're seeing in processing today are far outstripping anything we've seen before. Forget Moore's Law being dead; that's irrelevant. The original finding is being blown away this decade, and who knows, with quantum computing, what the future holds. This is a fundamental enabler of AI applications. And as is most often the case, the innovation is coming from the consumer use cases first. Apple continues to lead the way, and Apple's integrated hardware and software model, we think, is increasingly going to move into the enterprise mindset. Clearly the cloud vendors are moving in this direction, building their own custom silicon and doing that deep integration. You see this with Oracle, which is really a good example of the iPhone for the enterprise, if you will. It just makes sense that optimizing hardware and software together is going to gain momentum, because there's so much opportunity for customization in chips, as we discussed last week with Arm's announcement, especially with the diversity of edge use cases. And it's the direction that Pat Gelsinger is taking Intel, trying to provide more flexibility. One aside: Pat Gelsinger may face the massive challenges that we laid out a couple of posts ago in our Intel breaking analysis, but he is right on, in our view, that semiconductor demand is increasing. There's no end in sight. We don't think we're going to see the ebbs and flows, those boom and bust cycles, that we've seen in the past for semiconductors. We just think that prices are coming down, the market's elastic, and the market is absolutely exploding with huge demand for fab capacity. Now, if you're an enterprise, you should not stress about trying to invent AI; rather, you should put your focus on understanding what data gives you competitive advantage and how to apply machine intelligence and AI to win. You're going to be buying, not building, AI, and you're going to be applying it. Now, data, as John Furrier has said in the past, is becoming the new development kit. He said that 10 years ago, and he seems right. Finally, if you're an enterprise hardware player, you're going to be designing your own chips and writing more software to exploit AI. You'll be embedding custom silicon and AI throughout your product portfolio, in storage and networking, and you'll be increasingly bringing compute to the data. And that data will mostly stay where it's created. Again, systems and storage and networking stacks are all being completely re-imagined. If you're a software developer, you now have processing capabilities in the palm of your hand that are incredible, and you're going to be writing new applications to take advantage of this and use AI to change the world, literally.
You'll have to figure out how to get access to the most relevant data. You have to figure out how to secure your platforms and innovate. And if you're a services company, your opportunities to help customers that are trying not to get disrupted are many. You have the deep industry expertise and horizontal technology chops to help customers survive and thrive. Privacy? AI for good? Yeah well, that's a whole other topic. I think for now, we have to get a better understanding of how far AI can go before we determine how far it should go. Look, protecting our personal data and privacy should definitely be something that we're concerned about and we should protect. But generally, I'd rather not stifle innovation at this point. I'd be interested in what you think about that. Okay. That's it for today. Thanks to David Floyer, who helped me with this segment again and did a lot of the charts and the data behind this. He's done some great work there. Remember these episodes are all available as podcasts wherever you listen, just search Breaking Analysis podcast and please subscribe to the series. We'd appreciate that. Check out ETR's website at etr.plus. We also publish a full report with more detail every week on wikibon.com and siliconangle.com, so check that out. You can get in touch with me. I'm dave.vellante@siliconangle.com. You can DM me on Twitter @dvellante or comment on our LinkedIn posts. I always appreciate that. This is Dave Vellante for theCUBE Insights powered by ETR. Stay safe, be well. And we'll see you next time. (bright music)
Breaking Analysis with Dave Vellante: Intel, Too Strategic to Fail
>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. >> Intel's big announcement this week underscores the threat that the United States faces from China. The US needs to lead in semiconductor design and manufacturing, and that lead is slipping because Intel has been fumbling the ball over the past several years. A mere two months into the job, new CEO Pat Gelsinger wasted no time in setting a new course for perhaps the most strategically important American technology company. We believe that Gelsinger has only shown us part of his plan. This is the beginning of a long and highly complex journey. Despite Gelsinger's clear vision, his deep understanding of technology and his execution ethos, in order to regain its number one position Intel, we believe, will need help from partners, competitors and, very importantly, the US government. Hello everyone and welcome to this week's Wikibon CUBE Insights powered by ETR. In this breaking analysis we'll peel the onion on Intel's announcement this week and explain why we're perhaps not as sanguine as Wall Street was on Intel's prospects. And we'll lay out what we think needs to take place for Intel to once again become top gun and for us to gain more confidence. By the way, this is the first time we're broadcasting Breaking Analysis live. We're broadcasting on the CUBE handles on Twitch, Periscope and YouTube, and going forward we'll do this regularly as a live program and we'll bring the community perspective into the conversation through chat. Now you may recall that in January we kind of dismissed analysis that said Intel didn't have to make any major strategic changes to its business when they brought on Pat Gelsinger. Rather, we said the exact opposite. Our view at the time was that the root of Intel's problems could be traced to the fact that it was no longer the volume leader, because mobile volumes dwarf those of x86. As such, we said that Intel couldn't go up the learning curve for next gen technologies as fast as its competitors, and it needed to shed its dogma of being highly vertically integrated. We said Intel needed to more heavily leverage outsourced foundries. But more specifically, we suggested that in order for Intel to regain its volume lead, it needed to, we said at the time, spin out its manufacturing and create a joint venture with a volume leader, leveraging Intel's US manufacturing presence. This we still believe, with some slight refreshes to our thinking based on what Gelsinger has announced. And we'll talk about that today. Now specifically, there were three main pieces and a lot of details to Intel's announcement. Gelsinger made it clear that Intel is not giving up its IDM, or integrated device manufacturing, ethos. He called this IDM 2.0, which comprises Intel's internal manufacturing, leveraging external foundries, and creating a new business unit called Intel Foundry Services. It's okay, Gelsinger said, "We are not giving up on integrated manufacturing." However, we think this is somewhat nuanced. Clearly Intel can't, won't and shouldn't give up on IDM. However, we believe Intel is entering a new era where it's giving designers more choice. This was not explicitly stated. However, we feel like Intel's internal manufacturing arm will have increased pressure to serve its designers in a more competitive manner. We've already seen this with Intel finally embracing EUV, or extreme ultraviolet lithography.
Gelsinger basically said that Intel didn't lean into EUV early on and that it created more complexity in its 10 nanometer process, which dominoed into seven nanometer and, as you know, the rest of the story and Intel's delays. But since mid last year, it's embraced the technology. Now as a point of reference, Samsung started applying EUV for its seven nanometer technology in 2018, and it began shipping in early 2020. So as you can see, it takes years to get this technology into volume production. The point is that Intel realizes it needs to be more competitive. And we suspect it will give more freedom to designers to leverage outsourced manufacturing. But Gelsinger clearly signaled that IDM is not going away. But the really big news is that Intel is setting up a new division with a separate P&L that's going to report directly to Pat. Essentially it's hanging out a shingle and saying, we're open for business to make your chips. Intel is building two new fabs in Arizona and investing $20 billion as part of this initiative. Now, Intel has tried this before, earlier last decade. Gelsinger says that this time they're serious and they're going to do it right. We'll come back to that. This organizational move, while not a spin out or a joint venture, is part of the recipe that we saw as necessary for Intel to be more competitive. Let's talk about why Intel is doing this. Look, lots has changed in the world of semiconductors. When you think about it, back when Pat was at Intel in the '90s, Intel was the volume leader. It crushed the competition with x86, and the competition at the time was coming from RISC chips. And when Apple changed the game with iPod and iPhone and iPad, the volume equation flipped to mobile. And that led to big changes in the industry. Specifically, the world started to separate design from manufacturing. We now see firms going from design to tape out in 12 months versus taking three years. A good example is Tesla and its deal with ARM and Samsung. And what's happened is Intel has gone from number one in foundry, in terms of clock speed, wafer density, volume, lowest cost, highest margin, to falling behind TSMC, Samsung and alternative processor competitors like NVIDIA. Volume is still the maker of kings in this business. That hasn't changed, and it confers advantage in terms of cost, speed and efficiency. But ARM wafer volumes, we estimate, are 10x those of x86. That's a big change since Pat left Intel more than a decade ago. There's also a major chip shortage today. But you know, this time it feels a little different than the typical semiconductor boom and bust cycles. Semiconductor consumption is entering a new era, with new use cases emerging from automobiles to factories to every imaginable device, piece of equipment and infrastructure. Silicon is everywhere. But the biggest threat of all is China. China wants to be self-sufficient in semiconductors by 2025. It's putting approximately $60 billion into new chip fabs, and there's more to come. China wants to be the new economic leader of the world, and semiconductors are critical to that goal. Now there are those who pooh-pooh the China threat. This recent article from Scott Foster lays out some really good information. But the one thing that caught our attention is a statement that China's semiconductor industry is nowhere near being a major competitor in the global market, let alone an existential threat to the international order and the American way of life.
I think Scotty is stuck in the engine room and can't see the forest for the trees. Wake up. Sure, you can say China is way behind. Let's take an example: NAND. Today China is at about 64 3D layers whereas Micron is at 172. By 2022 China's going to be at 128. Micron is going to be well over 200. So what's the big deal? We say talk to us in 2025, because we think China will be at parity. That's just one example. Now, the type of thinking that says don't worry about China and semis reminds me of the epic lecture series that Clay Christensen gave as a visiting professor at Oxford University on the history and the economics of the steel industry. Now if you haven't watched this series, you should. Basically Christensen took the audience through the dynamics of steel production. And he asked the question, "Who told the steel manufacturers that gross margin was the number one measure of profitability? Was it God?" he joked. His point was, when new entrants came into the market in the '70s, they were bottom feeders going after the low margin, low quality, easiest to make rebar sector. And the incumbents merely pulled back, and their mix shifted to higher margin products and their gross margins went up and life was good. Until they lost the next layer. And then the next, and then the next, until it was game over. Now, one of the things that got lost in Pat's big announcement on the 23rd of March was that Intel guided the street below consensus on revenue and earnings. But the stock went up the next day. Now, gross margin came up in the Q&A segment of the announcement, and yes, gross margin is a, if not the, key metric in semis in terms of measuring profitability. When asked, Intel CFO George Davis explained that with the uptick in PCs last year there was a product shift to the lower margin PC sector and that put pressure on gross margins. It was a product mix thing. And revenue, because PC chips are less expensive than server chips, was affected, as were margins. Now we shared this chart in our last Intel update showing spending momentum over time for Dell's laptop business from ETR. And you can see in the inset the unit growth and the market data from IDC. Yes, Dell's laptop business is growing, everybody's laptop business is growing. Thank you COVID. But you see the numbers from IDC, Gartner, et cetera. Now, as we pointed out last time, PC volumes had peaked in 2011, and that's when the long arm of Wright's Law began to eat into Intel's dominance. Today ARM wafer production, as we said, is far greater than Intel's, and well, you know the story. Here's the irony: the very bucket that conferred volume advantages to Intel, PCs, yes, it had a slight uptick last year, which was great news for Dell. But according to Intel it pulled down its margins. The point is Intel is loving the high end of the market because it's higher margin and more profitable. I wonder what Clay Christensen would say to that. Now there's more to this story. Intel's CFO blamed supply constraints for Intel's revenue and profit pressures, yet AMD's revenue and profits are booming. So are TSMC's. Only Intel can't seem to thrive when there's this massive chip shortage. Now let's get back to Pat's announcement. Intel is, for sure, going forward investing $20 billion in two new US-based fabrication facilities. This chart shows Intel's investments in US R&D, US CapEx and the job growth that's created as a result, as well as R&D and CapEx investments in Ireland and Israel.
Now we added the bar on the right hand side from a Wall Street Journal article that compares TSMC CapEx in the dark green to that of Intel in the light green. You can see TSMC surpassed the CapEx investment of Intel in 2015, and then Intel took the lead back again in 2017 and held it in 2018. But last year TSMC took the lead again, and appears to be widening that lead quite substantially, leading us to our conclusion that this will not be enough. These moves by Intel will not be enough. They need to do more. And a big part of this announcement was partnerships and packaging. Okay, so here's where it gets interesting. Intel, as you may know, was late to the party with SoC, system on a chip. And it's going to use its packaging prowess to try and leapfrog the competition. SoC bundles things like GPUs, NPUs, DSUs, accelerators and caches on a single chip, to better use the real estate, if you will. Now Intel wants to build system on package, which will disaggregate memory from compute. Now remember, today memory is very poorly utilized. What Intel is going to do is create a package with literally thousands of nodes comprising small processors, big processors, alternative processors, ARM processors, custom silicon, all sharing a pool of memory. This is a huge innovation and we'll come back to this in a moment. Now as part of the announcement, Intel trotted out some big name customers, prospects and even competitors that it wants to turn into prospects and customers: Amazon, Google, Microsoft (Satya Nadella gave a quick talk), Cisco. All those guys are designing their own chips, as does Ericsson, and look, even Qualcomm is on the list, a competitor. Intel wants to earn the right to make chips for these firms. Now many on the list, like Microsoft and Google, they'd be happy to do so because they want more competition. And Qualcomm, well look, if Intel can do a good job and be a strong second source, why not? Well, one reason is they compete aggressively with Intel and maybe don't like Intel so much, but it's very possible. But the two most important partners on this slide are, one, IBM and two, the US government. Now many people are going to gloss over IBM in this announcement, but we think it's one of the most important pieces of the puzzle. Yes, IBM and semiconductors. IBM actually has some of the best semiconductor technology in the world. It's got great architecture and is two to three years ahead of Intel with POWER10. Yes, POWER. IBM is the world's leader in terms of disaggregating compute from memory, with the ability to scale to thousands of nodes. Sound familiar? IBM leads in power density and efficiency, and it can put more stuff closer together. And it's looking now at a 20x increase in AI inference performance. We think Pat has been thinking about this for a while and he said, how can I leapfrog system on chip? And we think he thought and said, I'll use our outstanding process manufacturing and I'll tap IBM as a partner for R&D and architecture to build the next generation of systems that are more flexible and performant than anything that's out there. Now look, this is super high end stuff. And guess who needs really high end, massive supercomputing capabilities? Well, the US military. Pat said straight up, "We've talked to the government and we're honored to be competing for the government/military chips foundry." I mean, look, Intel in my view would have to fall flat on its face to not win this business.
And by making the commitment to Foundry Services we think they will get a huge contract from the government, as large perhaps as $10 billion or more, to build a secure government foundry and serve the military for decades to come. Now Pat was specifically asked in the Q&A section, is this foundry strategy that you're embarking on viable without the help of the US government? Kind of implying that it was a handout or a bailout. And Pat of course said all the right things. He said, "This is the right thing for Intel. Independent of the government, we haven't received any commitment or subsidies or anything like that from the US government." Okay, cool. But they have had conversations, and I have no doubt, and Pat confirmed this, that those conversations were very, very positive that Intel should head in this direction. Well, we know what's happening here. The US government wants Intel to win. It needs Intel to win, and its participation greatly increases the probability of success. But unfortunately, we still don't think it's enough for Intel to regain its number one position. Let's look at that in a little bit more detail. The headwinds for Intel are many. Look, it can't just flick a switch and catch up on manufacturing leadership. It's going to take four years, and lots can change in that time. Intel's market momentum, as we pointed out earlier, is headed in the wrong direction from a financial perspective. Moreover, where is the volume going to come from? It's going to take years for Intel to catch up to Arm, if it ever can. And it's going to have to fight to win that business from its current competitors. Now I have no doubt it will fight hard under Pat's excellent leadership. But the foundry business is different. Consider this: Intel's annual CapEx expenditures, if you divide that by their yearly revenue, come out to about 20% of revenue. TSMC spends 50% of its revenue each year on CapEx. This is a different animal, very service oriented. So look, we're not pounding the table saying Intel's worst days are over. We don't think they are. Now, there are some positives; I'm showing those on the right-hand side. Pat Gelsinger was born for this job. He proved that the other day, even though we already knew it. I have never seen him more excited and more clearheaded. And we agree that the chip demand dynamic is going to have legs in this decade and beyond, with digital, edge, AI and new use cases that are going to power that demand. And Intel is too strategic to fail. And the US government has huge incentives to make sure that it succeeds. But it's still not enough in our opinion, because like the steel manufacturers, Intel's real advantage today is increasingly in the high end, high margin business. And without volume, China is going to win this battle. So we continue to believe that a new joint venture is going to emerge. Here's our prediction. We see a triumvirate emerging in a new joint venture that is led by Intel. It brings x86 and the volume associated with that. It brings cash, manufacturing prowess, R&D. It brings global resources, so much more than we show in this chart. IBM, as we laid out, brings architecture, its R&D, its longstanding relationships, its deal flow; it can funnel its business to the joint venture, as can, of course, parts of Intel. We see IBM getting a nice license deal from Intel and/or the JV. It has to get paid for its contribution, and we think it'll also get a sweet deal on the manufacturing fees from this Intel foundry.
But it's still not enough to beat China. Intel needs volume. And that's where Samsung comes in. It has the volume with ARM, has the experience and a complete offering across products. We also think that South Korea is a more geographically appealing spot on the globe than Taiwan, with its proximity to China. Not to mention that TSMC doesn't need Intel. It's already number one. Intel can get a better deal from number two, Samsung. And together these three, we think, in this unique structure, could have a chance to become number one by the end of the decade or early in the 2030s. Our take on what's happening is that Intel is going to fight hard to win that government business, put itself in a stronger negotiating position and then cut a deal with a supplier. We think Samsung makes more sense than anybody else. Now finally, we want to leave you with some comments and some thoughts from the community. First, I want to thank David Floyer. His decade plus of work and knowledge of this industry, along with his collaboration, made this work possible. His fingerprints are all over this research, in case you didn't notice. And next I want to share comments from two of my colleagues. The first is Sarbjeet Johal. He sent this to me last night. He said, "We are not in our grandfather's compute era anymore. Compute is getting spread into every aspect of our economy and lives. The use of processors is getting more and more specialized and will intensify with the rise in edge computing, AI inference and new workloads." Yes, I totally agree with Sarbjeet. And that's the dynamic on which Pat is betting, and betting big. But the bottom line is summed up by my friend and former IDC mentor, Dave Moschella. He says, "This is all about China. History suggests that there are very few second acts, you know, other than Microsoft and Apple. History also will say that the antitrust pressures that enabled AMD to thrive are the ones, the very ones, that starved Intel of cash. Microsoft made the shift; its PC software cash cows proved impervious to competition. The irony is the same government that attacked Intel's monopoly now wants to be Intel's protector because of China. Perhaps it's a cautionary tale to those who want to break up big tech." Wow. What more can I add to that? Okay. That's it for now. Remember I publish each week on wikibon.com and siliconangle.com. These episodes are all available as podcasts. All you've got to do is search for Breaking Analysis podcasts, and you can always connect with me on Twitter @dvellante or email me at david.vellante@siliconangle.com. As always I appreciate the comments on LinkedIn, and on Clubhouse please follow me so that you're notified when we start a room and start riffing on these topics. And don't forget to check out etr.plus for all the survey data. This is Dave Vellante for theCUBE Insights powered by ETR. Be well, and we'll see you next time. (upbeat music)
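As a rough illustration of the Wright's Law and volume argument running through this segment, here is a small back-of-the-envelope sketch. The 20% learning rate and the volume figures are illustrative assumptions, not measured industry numbers; only the roughly 10x Arm-to-x86 wafer volume ratio comes from the discussion above.

```python
# Back-of-the-envelope sketch of the Wright's Law / learning curve argument:
# unit cost falls by a fixed fraction each time cumulative volume doubles.
# The 20% learning rate and the volume figures are illustrative assumptions.
import math

def wrights_law_cost(first_unit_cost, cumulative_volume, learning_rate=0.20):
    """Unit cost after producing `cumulative_volume` units, with cost falling
    by `learning_rate` per doubling of cumulative volume."""
    doublings = math.log2(cumulative_volume)
    return first_unit_cost * (1 - learning_rate) ** doublings

first_unit_cost = 100.0          # arbitrary starting cost
x86_volume = 1_000_000           # illustrative cumulative volume
arm_volume = 10 * x86_volume     # the ~10x wafer volume gap discussed above

x86_cost = wrights_law_cost(first_unit_cost, x86_volume)
arm_cost = wrights_law_cost(first_unit_cost, arm_volume)

print(f"x86-scale unit cost: {x86_cost:.2f}")
print(f"Arm-scale unit cost: {arm_cost:.2f}")
print(f"Cost advantage from 10x volume: {(1 - arm_cost / x86_cost) * 100:.0f}%")
```

Under these assumptions, the producer with 10x the cumulative volume ends up with roughly half the unit cost, which is the heart of the argument that volume, not just process leadership, decides who wins.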
Brian Bohan and Chris Wegmann | AWS Executive Summit 2020
>> Announcer: From around the globe, it's theCUBE. With digital coverage of AWS reInvent Executive Summit 2020, sponsored by Accenture and AWS. >> Hello and welcome back to theCUBE's coverage of AWS reInvent 2020. This is special programming for the Accenture Executive Summit, where all the thought leaders are going to extract the signal from the noise and share with you their perspective of this year's reInvent conference as it relates to customers' digital transformation. Brian Bohan is the director and head of the Accenture AWS Business Group at Amazon Web Services. Brian, great to see you. And Chris Wegmann is the Accenture Amazon Business Group technology lead at Accenture. Guys, this conversation is about technology vision. Chris, I want to start with you because you were at Andy Jassy's keynote. You heard about the strategy of digital transformation, how you've got to lean into it. You've got to have the guts to go for it and you've got to decompose. He went everywhere. (chuckles) So what did you hear? What was striking about the keynote? Because he covered a lot of topics. >> Yeah. It was epic as always from Andy. Lots of topics, a lot to cover in the three hours. There were a couple of things that stood out for me. First of all, hybrid. The concept, the new concept of hybrid and how Andy talked about it, bringing the compute and the power to all parts of an enterprise, whether it be at the edge or in the big public cloud, whether it be in an Outpost or wherever it'd be, right, with containerization now. Being able to do Amazon containerization in my data center, that's awesome. I think that's going to make a big difference. All that being underneath the Amazon console and billing and things like that, which is great. I'll also say the chips, right? I know compute is always something that we kind of take for granted, but I think again, this year, Amazon and Andy really focused on what they're doing with the chips and compute, and the compute is still at the heart of everything in cloud. And that continued advancement is making an impact and will continue to make a big impact. >> Yeah, I would agree. I think one of the things that really... I mean the container thing was I think really kind of a nuanced point. When you've got Deepak Singh on the opening day with Andy Jassy, and he runs the container group over there, what was once a small little team, he's on the front stage. That really is the key to the hybrid. I think this showcases this new layer. We're taking advantage of the Graviton2 chips, which I thought was huge. Brian, this is really a key part of the platform change, not change, but the continuation of AWS. Higher level services, >> Yep. building blocks that provide more capabilities, heavy lifting as they say, but the new services that are coming on top really speak to hybrid and speak to the edge. >> It does. Yeah. I think like Andy talks about, and we talked about, we really want to provide choice to our customers, first and foremost. And you can see that in the array of services we have, you can see it in the hybrid options that Chris talked about, being able to run your containers through ECS or EKS Anywhere. It just gets to the customer's choice. And one of the things that I'm excited about, as you talk about going up the stack and on the edge, are things most certainly like Outpost, right? So now Outpost was launched last year, but then with the new form factors, and then you look at services like Panorama, right?
Being able to take computer vision, embed machine learning, and do that as a managed capability at the edge for customers. And so we see this across a number of industries. And so what we're really thinking about is customers no longer have to make trade-offs and have to think about those choices; they can really deploy natively in the cloud and then they can take those capabilities, train those models, and then deploy them where they need to, whether that's on premises or at the edge, whether it be in a factory or retail environment. I think we're really well positioned when hopefully next year we start seeing the travel industry rebound, and the need more than ever to really rethink how we monitor and make those environments safe. Having this kind of capability at the edge is really going to help our customers as we come out of this year and hopefully rebound next year. >> Chris, I want to go back to you for a second. It's hard to pick your favorite innovation from the keynote because Brian just reminded me of some things I forgot happened. It was like a buffet of innovation. Some keynotes have one or two, there was like 20. You got the industrial piece, that was huge. Computer vision, machine learning, that's just a game changer. The Connect thing came out of nowhere in my opinion. I mean, it's a call center technology, so it's boring as hell, what are you going to do with that? (Brian and Chris chuckle) It turns out it's a game changer. It's not about the calls but the contact, and that's disintermediating the stack as well. So again, a feature that looks old is actually new and relevant. What was your favorite innovation announcement? >> It's hard to say. I will say my personal favorite was the Mac OS announcement. I think that is just a phenomenal addition, right? And the fact that AWS has worked with Apple to integrate the Nitro chip into the iMac and offer that out. A lot of people are doing development for iOS and that stuff, and that's just been a huge benefit for the development teams. But I will say, I'll come back to Connect. You mentioned it, but you're right. It's a boring area, but it's an area that we've seen huge success with since Connect was launched, and the additional features that Amazon continues to bring, obviously with the pandemic, now that customer engagement through the phone, through omni-channel, has just been critical for companies, right? And to be able to have those agents at home, working from home versus being in the office, was a huge advantage for several customers that are using Connect. We did some great stuff with some different customers, but the continued technology, like you said, the call translation, and during a call being able to pop up those keywords and have a supervisor listen in, is awesome. And some of that was already being done, but we were stitching multiple services together. Now that's right out of the box. And that consolidation is only going to make that go faster and let us innovate faster for that piece of the business. >> It's interesting, not to get all nerdy and business school like, but you've got systems of record, systems of engagement.
If you look at the call center and the Connect thing, what got my attention was not only the model of disintermediating that part of the engagement in the stack, but what cloud actually does to something that's a feature or something that could be an element, like, say, a call center, the old days of calling the 800 number and getting some support. You've got infrastructure and chips, you have machine learning, you actually have stuff in the stack that makes that different now. The thing that impressed me was Andy saying you could have machine learning detect pauses, voice inflections. So now you have technology making that more relevant and better and different. So a lot going on. This is just one example of many things that are happening from a disruption innovation standpoint. What do you guys think about that? Am I getting it right? Can you share other examples? >> I think you are right, and I think what's implied there and what you're saying, even in the other Mac OS example, is the ability... We're talking about features, right? Which by themselves you're saying, oh, wow, what's so unique about that? But because it's on AWS, now, whether you're a developer working with macOS or iOS, you have access to the 175 plus services that you can then weave into your new application. Take the Connect scenario. Now we're embedding that kind of inference and machine learning to do what you say, but then your data lake is also most likely running in AWS, right? And then the other channels, whether they be mobile channels or web channels or in-store physical channels, that data can be captured and that same machine learning could be applied there to get that full picture across the spectrum, right? So that's the power of bringing it together on AWS: the access to all those different capabilities and services, and then also where the data is, and pulling all that together for that end to end view. >> Can you guys give some examples of work you've done together? I know there's stuff we've reported on, and in the last session we talked about some of the Connect stuff, but that kind of encapsulates where this is all going with respect to the tech. >> Yeah. I think one of them, it was called out on Doug's Partner Summit, is the SAP Data Lake Accelerator, right? Almost every enterprise has SAP, right? And getting data out of SAP has always been a challenge, right? Whether it be through data warehouses in AWS or, sorry, SAP BW. What we've focused on is, when you have SAP on AWS, getting that data into the data lake, right? Getting it into a model that you can pull the value out of, and the customers can pull the value out, use those AI models. So that's one thing we worked on in the last 12 months. Super excited about seeing great success with customers. A lot of customers had ideas. They wanted to do this, they had different models. What we've done is made it very simplified, a framework which allows customers to do it very quickly, get the data out there and start getting value out of it and iterating on that data. We saw customers spending way too much time trying to stitch it all together and trying to get it to work technically. And we've now cut all of that out, and they can immediately start getting down to the data and taking advantage of those different services that are out there from AWS. >> Brian, do you want to weigh in on things you see as relevant builds that you guys have done together that kind of tease out the future and connect the dots to what's coming?
>> I'm going to use a customer example. We worked with Unilever, it just came out, around their Blueair connected smart air purifier. And what I think is interesting about that, I think it touches on some of the themes we're talking about, as well as some of the themes we talked about in the last session, which is we started that program before the pandemic, but Unilever recognized that they needed to differentiate their product in the marketplace, move to more of a services oriented business, which we're seeing as a trend. We enabled this capability. So now it's a smart air purifier that can be remote managed. And when the pandemic hit, they were in a really good position, obviously, with a very relevant product and capability to be used. And so that data, then, as we were talking about, is going to reside in the cloud. And so the learning that can now happen about usage and about filter changes, et cetera, can find its way back into future iterations of that product. And I think that's in keeping with what Chris is talking about, where we might have systems of record like SAP, how do we bring those in and then start learning from that data so that we can get better on our future iterations? >> Hey, Chris, on the last segment we did, on the business mission session, Andy Tay from your team talked about partnerships within Accenture and working with other folks. I want to take that now to the technical side, because one of the things that we heard from Doug's keynote and during the partner day was that integrations and data were two big themes. When you're in the cloud technically, the integrations are different. You're going to get unique things in the public cloud that you're just not going to get on-premises: access to other cloud native technologies and companies. How do you see the partnering of Accenture with people within your ecosystem, and how do the data and the integration play together? What's your vision? >> Yeah. I think there's two parts of it. One, there's the commercial standpoint, right? The marketplace; you heard Dave talk about that in the partner summit, right? That marketplace is now bringing together this ecosystem in a way that's very easy to consume by the customers and by the users, and bringing multiple partners together. And we're working with our ecosystem to put more products out in the marketplace that are integrated together already. And then two, from a technical perspective, if you look at Salesforce, I talked a little earlier about Connect. Another good example, technically, underneath the covers, is how we've integrated Connect and Salesforce, some of it being pre-built by AWS and Salesforce, other things that we've added on top of it; I think those are good examples. And I think as these ecosystems, these ISVs, put their products out there and start exposing more and more APIs on the Amazon platform, opening it up, having those pre-built network connections there between the different VPCs of the different areas within a customer's network, and having them all opened up and connected, and having all that networking done underneath the covers. It's one thing to call the APIs, it's one thing to have access to those, and that's not a big focus of a lot of ISVs and customers who build those APIs and expose them, but having that network infrastructure underneath and being able to stay within the cloud, within AWS, to make those connections that pass that data. We always talk about scale, right?
It's one thing if I just need to pass, like, a simple user ID back and forth, right? That's fine. But when we're talking massive data sets, whether it be seismic data or whatever it be, passing those large data sets between customers across the Amazon network is going to open up the world. >> Yeah, I see huge possibilities there and would love to keep on this story. I think it's going to be important and something to keep track of. I'm sure you guys will be on top of it. One of the things I want to dig into with you guys now is, Andy had kind of this philosophical thing in his keynote, talking about societal change and how tough the pandemic is. Everything's on full display, and this kind of brings out where we are, the truth. If you look at the truth, it's a virtual event. I mean, it's a website and you've got some sessions out there, we're doing remote the best we can, and you've got software and you've got technology, and then the concept of a mechanism: it's software, it does something, it serves a purpose. Accenture, you guys have a concept called Living Systems, a growth strategy powered by technology. How do you take the concept of a living organism or a system and replace the mechanistic staleness of computing and software? And this is kind of interesting because we're on the cusp of a major inflection point post COVID. I get the digital transformation piece, yes, that's happening. There are other things going on in society. What do you guys think about this Living Systems concept? >> Yeah, I'll start. I think the Living Systems concept started out very much thinking about how do you rapidly change your system, right? And because of cloud, because of DevOps, because of all these software technologies and processes that we've created, that's where it started, making it much easier and much faster to change rapidly. But you're right. I think you now bring in more technologies, the AI technology, self-healing technologies. Again, you heard Andy in his keynote talk about the systems and services they're building to detect problems and resolve those problems, right? Obviously automation is a big part of that. Living Systems is being able to bring that all together and to be able to react in real time when a customer asks, through the AI models that have been generated, turning those AI models around much faster and being able to use all the information that came in the last 20 minutes, right? Society is moving fast and changing fast, and in one part of the world something can change in 10 minutes. Being able to have systems that react to that, learn from that and pass that on to the next country, especially in this world of COVID with things changing very quickly, and diagnosis and medical response all happening so quickly, to be able to react to that and have systems pass that information and learn from that information is going to be critical. >> That's awesome. Brian, one of the things that comes up every year is, oh, the cloud's scalable. This year, I think we've talked on theCUBE before, years ago certainly with Accenture and Amazon, I think it was like three or four years ago, yeah, the cloud's horizontally scalable but vertically specialized at the application layer. But if you look at the Data Lake stuff that you guys have been doing, where you have machine learning, the data is horizontally scalable, and then you've got the specialization in the app, which changes the whole vertical thing.
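Tying together the SAP data lake accelerator discussion above and the point about moving data sets within AWS, here is a rough, illustrative sketch of the general landing-and-cataloging pattern: extracted SAP table data gets written to S3 and a Glue crawler updates the catalog so downstream analytics and ML can use it. The bucket, prefix and crawler names are hypothetical, this is not the Accenture accelerator itself, and the code assumes AWS credentials, a pre-created crawler, and the pyarrow library are in place.

```python
# Illustrative sketch only: lands an extracted SAP table as Parquet in S3 and
# kicks off a Glue crawler so the data lake catalog picks it up. Bucket, prefix
# and crawler names are hypothetical; this shows the general pattern, not the
# Accenture accelerator itself.
import boto3
import pandas as pd

# Pretend this came from an SAP extraction job (e.g., sales order headers).
orders = pd.DataFrame(
    [
        {"order_id": "0001", "sold_to": "ACME", "net_value": 1250.00},
        {"order_id": "0002", "sold_to": "Globex", "net_value": 340.50},
    ]
)

local_path = "/tmp/vbak_extract.parquet"
orders.to_parquet(local_path, index=False)  # requires pyarrow or fastparquet

s3 = boto3.client("s3")
s3.upload_file(local_path, "example-data-lake-bucket", "raw/sap/vbak/vbak_extract.parquet")

# A pre-created Glue crawler catalogs the new data for Athena/SageMaker to use.
glue = boto3.client("glue")
glue.start_crawler(Name="sap-raw-zone-crawler")
```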
You don't need to have a whole vertical solution, or do you? So, how has this year's cloud news impacted vertical industries? Because it used to be, oh, oil and gas, financial services, they've got a team for that, we've got a stack for that. Not anymore. Is it going away? What's changing? >> Well, it's a really good question. I think what we're seeing, and I was just on a call this morning talking about banking and capital markets, is that the challenges are still pretty sector specific. But what we do see is a kind of commonality when we start looking at, and we talked about this, the industry solutions that we're building as a partnership: most of them follow the pattern of ingesting data, analyzing that data and then being able to provide insights and then actions, right? So if you think about creating that kind of common chassis of the ingest, the data lake and then the machine learning, and you talk about the nuances around SageMaker and being able to manage these models, what changes then really are the very specific industry algorithms that you're writing, right, within that framework. And so we're doing a lot, and Connect is a good example of this too, where you look at it and, yeah, customer service is a horizontal capability that we're building out, but then when you stamp it into insurance or retail banking or utilities, there are nuances that we then extend and build so that we meet the unique needs of those industries, and that's usually around those models. >> Yeah. I think this year was the first reInvent where I saw real products coming out that actually solved that problem. I mean, it was there last year, SageMaker was kind of moving up the stack, but now you have apps embedding machine learning directly and users don't even know it's in there. I mean, 'cause this is kind of where it's going, right? I mean-- >> You saw that in the announcements, right? How many announcements where machine learning is just embedded in? I mean, CodeGuru, DevOps Guru, the Panorama we talked about, it's just there. >> Yeah. I mean, having that knowledge about the linguistics and the metadata, knowing the business logic, those are important specific use cases for the vertical, and you can get to it faster. Chris, how is this changing on the tech side, from your perspective? >> Yeah. I keep coming back to AWS and cloud makes it easier, right? All this stuff can be done, and some of it has been done, but what Amazon continues to do is make it easier to consume by the developer, by the customer, and to actually embed it into applications much more easily than if I had to go set up the stack and build it all on my own and embed it, right? So it's shortcutting that process, and again, as these products continue to mature, right, and some of this stuff is embedded, it makes that process so much faster. It reduces the amount of work required by the developers and the engineers to get there. So I'm expecting you're going to see more of this, right? I think you're going to see more and more of these multi-connected services from AWS that have a lot of the AI, ML, pre-configured data lakes, all that kind of stuff embedded in those services, so you don't have to do it yourself, and you continue to go up the stack. And we always talk about Amazon's built for builders, right?
But builders have been super specialized, and as engineers we're being asked to do bigger and bigger things and to be able to do more stuff, and I think these kinds of integrated services are going to help us do that. >> And it's certainly needed more now, when you have hybrid edge, where they're going to be operating with microservices on a cloud model and with all those advantages that are going to come around the corner from being in the cloud. I mean, I think there's going to be a whole clarity around benefits in the cloud with all these capabilities and benefits. Cloud Guru, I think, is my favorite this year because it just points to why that could happen. I mean, that happens because of the cloud data. (laughs) If you're on-premise, you may not have a little Cloud Guru; you are going to get more data, but it's all different. Edge certainly will come in too. Your vision on the edge, Chris, how do you see that evolving for customers? Because that could be complex, new stuff. How is it going to get easier? >> Yeah. It's super complex now, right? I mean, you've got to design for all the different edge and 5G protocols and solutions that are out there, right? Amazon's simplifying that. Again, I come back to simplification, right? I can build an app that works on any 5G network that's been integrated with AWS, right? I don't have to set up all the different layers to get back to my cloud or back to my bigger data set. And it's kind of a joke now, I don't even know what to call the cloud anymore. I've got the big cloud, which is central, and then you've got a cloud at the edge, right? So what do I call that? >> Brian: It's just really computing. (laughing) Exactly. So again, I think as this next generation of technology with the edge comes, right, we put more and more data at the edge. We're asking for more and more compute at the edge, right? Whether it be industrial or for personal use or consumer use, that processing is going to get more and more intense. So to be able to maintain that under a single console, under a single platform, and be able to move the code that I developed across that entire platform, whether I have to go all the way down to the very edge at the 5G level, right, or all the way back into the bigger cloud, and have that processing run there, being able to do that seamlessly is going to allow the speed of development that's needed. >> Wow. You guys have done a great job, and there's no better time to be a techie or interested in technology or computer science, or social science for that matter. This is a really perfect storm. A lot of problems to solve, a lot of change happening, positive change opportunities, a lot of great stuff. Final question, guys. Five years working together now on this partnership with AWS and Accenture. Congratulations, you guys are in pole position for the next wave coming. What's exciting you guys? Chris, what's on your mind? Brian, what's getting you guys pumped up? >> Well, again, I come back to what Andy mentioned in his keynote, right? We're seeing customers move now, right? Five years ago we knew customers were going to do this. We built a partnership to enable these enterprise customers to make that journey, right? But now, even more, we're seeing them move at such great speed, right? Which super excites me, right? Because I can see... Being in this for a long time now, I can see the value on the other end. We've been wanting to push our customers as fast as they can through the journey, and now they're moving. Now they're getting the religion, they're getting there.
They see they need to do it to change their business, so that's what excites me. It just excites me, the speed at which we're going to see the movement. >> Yeah. >> Yeah, I'd agree with that. I mean, I just think getting customers to the cloud is super important work, and we're obviously doing that and helping accelerate that. It's what we've been talking about: once we're there, all the possibilities become available, right? Through the common data capabilities, the access to the 175 or so AWS services. I also think, and this has kind of permeated through this week at reInvent, there's the opportunity, especially in those industries that do have an industrial aspect, a manufacturing aspect, or a really strong physical aspect, of bringing together IT and operational technology and the business with all these capabilities, and I think edge, pushing machine learning down to the edge, and analytics at the edge are really going to help us do that. And so I'm super excited by all that possibility, because I feel like we're just scratching the surface there. >> It's a great time to be building out. And this is the time for reconstruction, reinvention. Big theme, so many storylines in the keynote and the events. It's going to keep us busy here at SiliconANGLE and theCUBE for the next year. Gentlemen, thank you for coming on. I really appreciate it. Thanks. >> Thank you. All right. Great conversation. We're getting technical. We're going to go another 30 minutes. A lot to talk about. A lot of storylines here at AWS reInvent 2020 at the Accenture Executive Summit. I'm John Furrier. Thanks for watching. (upbeat music)
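As a small illustration of the kind of embedded machine learning discussed in this conversation, surfacing keywords and sentiment from a contact center interaction, here is a rough sketch that applies Amazon Comprehend to an already transcribed call snippet via boto3. This is not the built-in Connect capability itself, just an illustrative stand-in; the sample text is made up and the code assumes AWS credentials and a region are configured.

```python
# Illustrative sketch only: applies Amazon Comprehend to an already-transcribed
# call snippet to surface key phrases and sentiment, similar in spirit to the
# embedded ML features discussed above. Assumes AWS credentials/region are set.
import boto3

comprehend = boto3.client("comprehend")

call_snippet = (
    "I've been waiting two weeks for my replacement air purifier filter "
    "and nobody has called me back. I'm really frustrated."
)

key_phrases = comprehend.detect_key_phrases(Text=call_snippet, LanguageCode="en")
sentiment = comprehend.detect_sentiment(Text=call_snippet, LanguageCode="en")

# A supervisor dashboard could flag calls like this one in near real time.
print("Key phrases:", [p["Text"] for p in key_phrases["KeyPhrases"]])
print("Overall sentiment:", sentiment["Sentiment"])
```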