Image Title

Search Results for Today:

Renen Hallak & David Floyer | CUBE Conversation 2021


 

(upbeat music) >> In 2010 Wikibon predicted that the all flash data center was coming. The forecast at the time was that flash memory consumer volumes, would drive prices of enterprise flash down faster than those of high spin speed, hard disks. And by mid decade, buyers would opt for flash over 15K HDD for virtually all active data. That call was pretty much dead on and the percentage of flash in the data center continues to accelerate faster than that, of spinning disk. Now, the analyst that made this forecast was David FLoyer and he's with me today, along with Renen Hallak who is the founder and CEO of Vast Data. And they're going to discuss these trends and what it means for the future of data and the data center. Gentlemen, welcome to the program. Thanks for coming on. >> Great to be here. >> Thank you for having me. >> You're very welcome. Now David, let's start with you. You've been looking at this for over a decade and you know, frankly, your predictions have caused some friction, in the marketplace but where do you see things today? >> Well, what I was forecasting was based on the fact that the key driver in any technology is volume, volume reduces the cost over time and the volume comes from the consumers. So flash has been driven over the years by initially by the iPod in 2006 the Nano where Steve Jobs did a great job with Samsung and introducing large volumes of flash. And then the iPhone in 2008. And since then, all of mobile has been flash and mobile has been taking in a greater and greater percentage share. To begin with the PC dropped. But now the PCs are over 90% are using flash when there delivered. So flash has taken over the consumer market, very aggressively and that has driven down the cost of flash much much faster than the declining market of HDD. >> Okay and now, so Renen I wonder if we could come to you, we've got I want you to talk about the innovations that you're doing, but before we get there, talk about why you started Vast. >> Sure, so it was five years ago and it was basically the kill of the hard drive. I think what David is saying resonates very, very well. In fact, if you look at our original presentation for Vast Data. It showed flash and tape. There was no hard drive in the middle. And we said 10 years from now, and this was five years ago. So even the dates match up pretty well. We're not going to have hard drives anymore. Any piece of information that needs to be accessible at all will be on flash and anything that is dormant and never gets read will be on tape. >> So, okay. So we're entering this kind of new phase now, with which is being driven by QLC. David maybe you could give us a quick what is QLC? Just give us a bumper sticker there. >> There's 3D NAND, which is the thing that's growing, very very fast and it's growing on several dimensions. One dimension is the number of layers. Another dimension is the size of each of those pieces. And the third dimension is the number of bits which a QLC is five bits per cell. So those three dimensions have all been improving. And the result of that is that on a wafer of, that you create, more and more data can be stored on the whole wafer on the chip that comes from that wafer. And so QLC is the latest, set of 3D NAND flash NAND flash. That's coming off the lines at the moment. >> Okay, so my understanding is that there's new architectures that are entering the data center space, that could take advantage of QLC enter Vast. Someone said they've rented this, a nice set up for you and maybe before we get into the architecture, can you talk a little bit more about the company? I mean, maybe not everybody's familiar with with Vast, you share why you started it but what can you tell us about the business performance and any metrics you can share would be great? >> Sure, so the company as I said is five years old, about 170, 180 people today. We started selling product just around two years ago and have just hit $150 million in run rate. That's with eight sales people. And so, as you can imagine, there's a lot of demand for flash all the way down the stack in the way that David predicted. >> Wow, okay. So you got pretty comfortable. I think you've got product market fit, right? And now you're going to scale. I would imagine you're going to go after escape velocity and you're going to build your moat. Now part of that, I mean a lot of that is product, right? Product is sales. Those are the cool two golden pillars, but, and David when you think back to your early forecast last decade it was really about block storage. That was really what was under attack. You know, kind of fusion IO got it started with Facebook. They were trying to solve their SQL database performance problems. And then we saw pure storage. They hit escape velocity. They drove a truck through EMC sym metrics HDD based install base which precipitated the acquisition of XtremeIO by EMC. Something Renan knows a little bit about having led development, of the product but flash was late to the NAS party guys, Renan let me start with you. Why is that? And what is the relevance of QLC in that regard? >> The way storage has been always, it looks like a pyramid and you have your block devices up at the top and then your NAS underneath. And today you have object down at the bottom of that pyramid. And the pyramid basically represents capacity and the Y axis is price performance. And so if you could only serve a small subset of the capacity, you would go for block. And that is the subset that needed high performance. But as you go to QLC and PLC will soon follow the price of all flash systems goes down to a point where it can compete on the lower ends of that pyramid. And the capacity grows to a point where there's enough flash to support those workloads. And so now with QLC and a lot of innovation that goes with it it makes sense to build an all flash, NAS and object store. >> Yeah, okay. And David, you and I have talked about the volumes and Renan sort of just alluded to that, the higher volumes of NAS, not to mention the fact that NAS is hard, you know files difficult, but that's another piece of the equation here, isn't it? >> Absolutely, NAS is difficult. It's a large, very large scale. We're talking about petabytes of data. You're talking about very important data. And you're talking about data, which is at the moment very difficult to manage. It takes a lot of people to manage it, takes a lot of resources and it takes up a lot, a lot of space as well. So all of those issues with NAS and complexity is probably the biggest single problem. >> So maybe we could geek out a little bit here. You guys go at it, but Renan talk about the Vast architecture. I presume it was built from the ground up for flash since you were trying to kill HTD. What else do we need to know? >> It was built for flash. It was also built for Crosspoint which is a new technology that came out from Intel and micron about three years ago. Cross point is basically another level of persistent media above flash and below Ram. But what we really set out to do is, as I said to kill the hard drive, and for that what you need is to get the price parity. And of course, flash and hard drives are not at price parity today. As David said, they probably will be in a few years from now. And so we wanted to, jumpstart that, to accelerate that. And so we spent a lot of time in building a new type of architecture with a lot of new metadata structures and algorithms on top to bring that effective price down to a point where it's competitive today. And in fact, two years ago the way we did it was by going out to talk to these vendors Intel with 3D Crosspoint and QLC flash Mellanox with NVMe over fabrics, and very fast ethernet networks. And we took those building blocks and we thought how can we use this to build a completely different type of architecture, that doesn't just take flash one level down the stack but actually allows us to break that pyramid, to collapse it down and to build a single system that is as fast as your fastest all flash block device or faster but as affordable as your hard drive based archives. And once that happens you don't need to think about storage anymore. You have a single system that's big enough and cheap enough to throw everything at it. And it's fast enough such that everything is accessible as sub-millisecond latencies. The way the architecture is built is pretty much the opposite of the way scale-out storage has been done. It's not based on shared nothing. The way XtremIO was the way Isilon is the way Hadoop and the Google file system are. We're basing it on a concept called Dis-aggregated Shared Everything. And what that means is that we have the media on one set of devices, the logic running in containers, just software and you can scale each of those independently. So you can scale capacity independently from performance and you have this shared metadata space, that all of the containers can see. So the containers don't actually have to talk to each other in the synchronous path. That means that it's much more scalable. You can go up to hundreds of thousands of nodes rather than just a few dozen. It's much more resilient. You can have all of them fail and you still didn't lose any data. And it's much more easy to use to David's point about complexity. >> Thank you for that. And then you, you mentioned up front that you not only built for flash, but built for Crosspoint. So you're using Crosspoint today. It's interesting. There was always been this sort of debate about Crosspoint It's less expensive than Ram, or maybe I got that wrong but it's persistent, >> It is. >> Okay, but it's more expensive than flash. And it was sort of thought it was a fence sitter cause it didn't have the volume but you're using it today successfully. That's interesting. >> We're using it both to offset the deficiencies of the low cost flash. And the nice thing about QLC and PLC is that you get the same levels of read performance as you would from high-end flash. The only difference between high cost and low cost flash today is in right cycles and in right performance. And so Crosspoint helps us offset both of those. We use it as a large right buffer and we use it as a large metadata store. And that allows us not just to arrange the information in a very large persistent right buffer before we need to place it on the low cost flash. But it also allows us to develop new types of metadata structures and algorithms that allow us to make better use of the low cost flash and reduce the effective price down even lower than the rock capacity. >> Very cool. David, what are your thoughts on the architecture? give us kind of the independent perspective >> I think it's brilliant architecture. I'd like to just go one step down on the network side of things. The whole use of NBME over fabric allows the users all of the servers to get any data across this whole network directly to it. So you've got great performance right away across the stack. And then the other thing is that by using RDMA for NASS, you're able, if you need to, to get down in microseconds to the data. So overall that's a thousand times faster than any HDD system could manage. So this architecture really allows an any to any simple, single level of storage which is so much easier to think about, architect use or manage is just so much simpler. >> If you had I mean, I said I don't know if there's an answer to this question but if you had to pick one thing Renan that you really were dogmatic about and you bet on from an architectural standpoint, what would that be? >> I think what we bet on in the early days is the fact that the pyramid doesn't work anymore and that tiering doesn't work anymore. In fact, we stole Johnson and Johnson's tagline No More Tears. Only, It's not spelled the same way. The reason for that is not because of storage. It's because of the applications as we move to applications more and more that are machine-based and machines are now not just generating the data. They're also reading the data and analyzing it and providing insights for humans to consume. Then the workloads changed dramatically. And the one thing that we saw is that you can't choose which pieces of information need to be accessible anymore. These new algorithms, especially around AI and machine learning and deep learning they need fast access to the entirety of the dataset and they want to read it over and over and over again in order to generate those insights. And so that was the driving force behind us building this new type of architecture. And we're seeing every single day when we talk to customers how the old architecture is simply break down in the face of these new applications. >> Very cool speaking of customers. I wonder if you could talk about use cases, customers you know, and this NASS arena maybe you could add some color there. >> Sure, our customers are large in data. We started half a petabyte and we grow into the exabyte range. The system likes to be big as, as it grows it grows super linearly. If you have a 100 nodes or a 1000 nodes you get more than 10X in performance, in capacity efficiency and resilience, et cetera. And so that's where we thrive. And those workloads are today. Mainly analytics workloads, although not entirely. If you look at it geographically we have a lot of life science in Boston research institutes medical imaging, genomics universities pharmaceutical companies here in New York. We have a lot of financials, hedge funds, Analyzing everything from satellite imagery to trade data to Twitter feeds out in California. A lot of AI, autonomous driving vehicles as well as media and entertainment both generation of films like animation, as well as content distribution are being done on top of best. >> Great thank you and David, when you look at the forecast that you've made over the years and when I imagine that they match nicely with your assumptions. And so, okay, I get that, but that doesn't, not everybody agrees, David. I mean, certainly the HDD guys don't agree but they, they're obviously fighting to hang on to their awesome run for 50 years, but as well there's others to do in hybrids and the like, and they kind of challenge your assumptions and you don't have a dog in this fight. We just want the truth and try to do our best to report it. But let me start with this. One of the things I've seen is that you're comparing deduped and compressed flash with raw HDD. Is that true or false? >> It's in terms of the fundamentals of the forecast, et cetera, it's false. What I'm taking is the new egg price. And I did it this morning and I looked up a two terabyte disc drive, NAS disc drive. I think it was $54. And if you look at the cost of a a NAND for two terabytes, it's about $200. So it's a four to one ratio. >> So, >> So and that's coming down from what people saw last year, which was five or six and every year has been, that ratio has been coming down. >> The ratio between the cost Delta, between HDD is still cheaper. So Renan I wonder one of the other things that Floyer has said is that because of the advantages of flash, not only performance but also data sharing, et cetera, which really drives other factors like TCO. That it doesn't have to be at parody in order for customers to consume that. I certainly saw that on my laptop, I could have got more storage and it could have been cheaper for per bit for my laptop. I took the flash. I mean, no problem. That that was an intelligence test but what are you seeing from customers? And by the way Floyer I think is forecasting by what, 2026 there will be actually a raw to raw crossover. So then it's game over. But what are you seeing in terms of what customers are telling you or any evidence you have that it doesn't have to be, even that customers actually get more value even if it's more expensive from flash, what are you seeing? >> Yeah in the enterprise space customers aren't buying raw flash they're buying storage systems. And so even if the raw numbers flash versus hard drive are still not there there is a lot of things that can be done at the system level to equalize those two. In fact, a lot of our IP is based on that we are taking flash today is, as David said more expensive than hard drives, but at the system level it doesn't remain more expensive. And the reason for that is storage systems waste space. They waste it on metadata, they waste it on redundancy. We built our new metadata structures, such that they everything lives in Crosspoint and is so much smaller because of the way Crosspoint is accessible at byte level granularity, we built our erasure codes in a way where you can sustain 10, 20, 30 drive failures but you only pay two or 1% in overhead. We built our data reduction mechanisms such that they can reduce down data even if the application has already compressed it and already de-duplicated it. And so there's a lot of innovation that can happen at the software level as part of this new direct dis-aggregated shared everything architecture that allows us to bridge that cost gap today without having customers do fancy TCO calculations. And of course, as prices of flash over the next few years continue declining, all of those advantages remain and it will just widen the gap between hard drives and flash. And there really is no advantage to hard drives once the price thing is solved. >> So thank you. So David, the other thing I've seen around these forecasts is that the comments that you can't really data reduce effectively hard disk. And I understand why the overhead and of course you can in flash you can use all kinds of data reduction techniques and not affect performance, or it's not even noticeable like put the cloud guys, do it upstream. Others do it upstream. What's your comment on that? >> Yes, if you take sequential data and you do a lot of work upfront you can write out in very lot big blocks and that's a perfect sequentially, good way of doing it. The challenge for the HDD people is if they go for that for that sort of sequential type of application that the cheapest way of doing that is to use tape which comes back to the discussion that the two things that are going to remain are tape and flash. So that part of the HDD market in my assertion will go towards tape and tape libraries. And those are serving very well at the moment. >> Yeah I mean, It's just the economics of tape are really attractive. I just feel like I've said this many times that the marketing of tape is lacking. Like I'd like to see, better thinking around how it could play. Cause I think customers have this perception tape, but there's actually a lot of value there. I want to carry on, >> Small point there. Yeah, I mean, there's an opportunity in the same way that Vast have created an architecture for flash. There's an opportunity out there for the tech people with flash to make an architecture that allows you to take that workload and really lower the price, enormously. >> You've called it Flape >> Flape yes. >> There's some interesting metadata opportunities there but we won't go into that. And then David, I want to ask you about NAND shortages. We saw this in 2016 and 2017. A lot of people saying there's an NAND shortage again. So that's a flaw in your forecast prices of you're assuming prices of flash continue to come down faster than those of HDD but the shortages of NAND could be problematic. What do you say to that? >> Well, I've looked at that in some detail and one of the big, important things is what's happening in the flash market and the Chinese, YMTC Chinese company has introduced a lot more volume into the market. They're making 100,000 wafers a month for this year. That's around six to 8% of market of NAND at this year, as a result, Samsung, micron, Intel, Hynix they're all increasing their volumes of NAND so that they're all investing. So I don't see that NAND itself is going to be a problem. There is certainly a shortage of processor chips which drive the intelligence in the NAND itself. But that's a problem for everybody. That's a problem for cars. It's a problem for disk drives. >> You could argue that's going to create an oversupply, potentially. Let's not go there, but you know what at the end of the day it comes back to the customer and all this stuff. It's interesting. I love talking about the architecture but it's really all about customer value. And so, so Renan, I want you to sort of close there. What should customers be paying attention to? And what should observers of Vast Data really watch as indicators for progress for you guys milestones and things in the market that we should be paying attention to but start with the customers. What's your advice to them? >> Sure, for any customer that I talked to I always ask the same thing. Imagine where you'll be five years from now because you're making an investment now that is at least five years long. In our case, we guaranteed the lifespan of the devices for a decade, such that you know that it's going to be there for you and imagine what is going to happen over those next five years. What we're seeing in most customers is that they have a lot of doormen data and with the advances in analytics and AI they want to make use of that data. They want to turn it from a cost center to a profit center and to gain insight from that data and to improve their business based on that information that they have the same way the hyperscalers are doing in order to do that, you need one thing you need fast access to all of that information. Once you have that, you have the foundation to step into this next generation type world where you can actually make money off of your information. And the best way to get very, very fast access to all of your information is to put it on Vast media like flash and Crosspoint. If I can give one example, Hedge Funds. Hedge funds do a lot of back-testing on Vast. And what makes sense for them is to test as much information back as they possibly can but because of storage limitations, they can't do that. And the other thing that's important to them is to have a real-time experience to be able to run those simulations in a few minutes and not as a batch process overnight, but because of storage limitations, they can't do that either. The third thing is if you have many different applications and many different users on the same system they usually step on each other's toes. And so the Vast architecture is solves those three problems. It allows you a lot of information very fast access and fast processing an amazing quality of service where different users of the system don't even notice that somebody else is accessing the same piece of information. And so Hedge Funds is one example. Any one of these verticals that make use of a lot of information will benefit from this architecture in this system. And if it doesn't cost any more, there's really no real reason delay this transition into all flash. >> Excellent very clear thinking. Thanks for laying that out. And what about, you know, things that we should how should we judge you? What are the things that we should watch? >> I think the most important way to judge us is to look at customer adoption and what we're seeing and what we're showing investors is a very high net dollar retention number. What that means is basically a customer buys a piece of kit today, how much more will they buy over the next year, over the next two years? And we're seeing them buy more than three times more, within a year of the initial purchase. And we see more than 90% of them buying more within that first year. And that to me indicates that we're solving a real problem and that they're making strategic decisions to stop buying any other type of storage system. And to just put everything on Vast over the next few years we're going to expand beyond just storage services and provide a full stack for these AI applications. We'll expand into other areas of infrastructure and develop the best possible vertically integrated system to allow those new applications to thrive. >> Nice, yeah. Think investors love that lifetime value story. If you can get above 3X of the customer acquisition cost is to IPO in the way. Guys hey, thanks so much for coming to the Cube. We had a great conversation and really appreciate your time. >> Thank you. >> Thank you. >> All right, Thanks for watching everybody. This is Dave Volante for the Cube. We'll see you next time. (gentle music)

Published Date : Apr 5 2021

SUMMARY :

that the all flash data center was coming. in the marketplace but where and the volume comes from the consumers. the innovations that you're doing, kill of the hard drive. David maybe you could give And so QLC is the latest, and any metrics you can in the way that David predicted. having led development, of the product And the capacity grows to a point where And David, you and I have talked about the biggest single problem. the ground up for flash that all of the containers can see. that you not only built for cause it didn't have the volume and PLC is that you get the same levels David, what are your all of the servers to get any data And the one thing that we saw I wonder if you could talk And so that's where we thrive. One of the things I've seen is that of the forecast, et cetera, it's false. So and that's coming down And by the way Floyer I at the system level to equalize those two. the comments that you can't really So that part of the HDD market that the marketing of tape is lacking. and really lower the price, enormously. but the shortages of NAND and one of the big, important I love talking about the architecture that it's going to be there for you What are the things that we should watch? And that to me indicates that of the customer acquisition This is Dave Volante for the Cube.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavidPERSON

0.99+

Renen HallakPERSON

0.99+

2008DATE

0.99+

SamsungORGANIZATION

0.99+

RenanPERSON

0.99+

2016DATE

0.99+

10QUANTITY

0.99+

David FLoyerPERSON

0.99+

David FloyerPERSON

0.99+

fiveQUANTITY

0.99+

New YorkLOCATION

0.99+

$54QUANTITY

0.99+

2006DATE

0.99+

Dave VolantePERSON

0.99+

HynixORGANIZATION

0.99+

$150 millionQUANTITY

0.99+

iPhoneCOMMERCIAL_ITEM

0.99+

CaliforniaLOCATION

0.99+

EMCORGANIZATION

0.99+

2010DATE

0.99+

50 yearsQUANTITY

0.99+

Steve JobsPERSON

0.99+

twoQUANTITY

0.99+

2017DATE

0.99+

fourQUANTITY

0.99+

IntelORGANIZATION

0.99+

last yearDATE

0.99+

Vast DataORGANIZATION

0.99+

20QUANTITY

0.99+

sixQUANTITY

0.99+

three dimensionsQUANTITY

0.99+

three problemsQUANTITY

0.99+

YMTCORGANIZATION

0.99+

FloyerORGANIZATION

0.99+

BostonLOCATION

0.99+

DeltaORGANIZATION

0.99+

RenenPERSON

0.99+

30QUANTITY

0.99+

100 nodesQUANTITY

0.99+

FacebookORGANIZATION

0.99+

two terabytesQUANTITY

0.99+

1%QUANTITY

0.99+

next yearDATE

0.99+

more than 90%QUANTITY

0.99+

bothQUANTITY

0.99+

2026DATE

0.99+

two thingsQUANTITY

0.99+

five years agoDATE

0.99+

third dimensionQUANTITY

0.99+

one exampleQUANTITY

0.99+

third thingQUANTITY

0.99+

two terabyteQUANTITY

0.99+

iPodCOMMERCIAL_ITEM

0.99+

more than three timesQUANTITY

0.98+

1000 nodesQUANTITY

0.98+

todayDATE

0.98+

last decadeDATE

0.98+

single problemQUANTITY

0.98+

eachQUANTITY

0.98+

One dimensionQUANTITY

0.98+

oneQUANTITY

0.98+

five yearsQUANTITY

0.98+

one setQUANTITY

0.98+

TwitterORGANIZATION

0.98+

about $200QUANTITY

0.97+

this yearDATE

0.97+

two years agoDATE

0.97+

single systemQUANTITY

0.97+

first yearQUANTITY

0.97+

half a petabyteQUANTITY

0.97+

one thingQUANTITY

0.97+

micronORGANIZATION

0.97+

OneQUANTITY

0.97+

Marc Staimer, Dragon Slayer Consulting & David Floyer, Wikibon | December 2020


 

>> Announcer: From theCUBE studios in Palo Alto, in Boston, connecting with thought leaders all around the world. This is theCUBE conversation. >> Hi everyone, this is Dave Vellante and welcome to this CUBE conversation where we're going to dig in to this, the area of cloud databases. And Gartner just published a series of research in this space. And it's really a growing market, rapidly growing, a lot of new players, obviously the big three cloud players. And with me are three experts in the field, two long time industry analysts. Marc Staimer is the founder, president, and key principal at Dragon Slayer Consulting. And he's joined by David Floyer, the CTO of Wikibon. Gentlemen great to see you. Thanks for coming on theCUBE. >> Good to be here. >> Great to see you too Dave. >> Marc, coming from the great Northwest, I think first time on theCUBE, and so it's really great to have you. So let me set this up, as I said, you know, Gartner published these, you know, three giant tomes. These are, you know, publicly available documents on the web. I know you guys have been through them, you know, several hours of reading. And so, night... (Dave chuckles) Good night time reading. The three documents where they identify critical capabilities for cloud database management systems. And the first one we're going to talk about is, operational use cases. So we're talking about, you know, transaction oriented workloads, ERP financials. The second one was analytical use cases, sort of an emerging space to really try to, you know, the data warehouse space and the like. And, of course, the third is the famous Gartner Magic Quadrant, which we're going to talk about. So, Marc, let me start with you, you've dug into this research just at a high level, you know, what did you take away from it? >> Generally, if you look at all the players in the space they all have some basic good capabilities. What I mean by that is ultimately when you have, a transactional or an analytical database in the cloud, the goal is not to have to manage the database. Now they have different levels of where that goes to as how much you have to manage or what you have to manage. But ultimately, they all manage the basic administrative, or the pedantic tasks that DBAs have to do, the patching, the tuning, the upgrading, all of that is done by the service provider. So that's the number one thing they all aim at, from that point on every database has different capabilities and some will automate a whole bunch more than others, and will have different primary focuses. So it comes down to what you're looking for or what you need. And ultimately what I've learned from end users is what they think they need upfront, is not what they end up needing as they implement. >> David, anything you'd add to that, based on your reading of the Gartner work. >> Yes. It's a thorough piece of work. It's taking on a huge number of different types of uses and size of companies. And I think those are two parameters which really change how companies would look at it. If you're a Fortune 500 or Fortune 2000 type company, you're going to need a broader range of features, and you will need to deal with size and complexity in a much greater sense, and a lot of probably higher levels of availability, and reliability, and recoverability. Again, on the workload side, there are different types of workload and there're... There is as well as having the two transactional and analytic workloads, I think there's an emerging type of workload which is going to be very important for future applications where you want to combine transactional with analytic in real time, in order to automate business processes at a higher level, to make the business processes synchronous as opposed to asynchronous. And that degree of granularity, I think is missed, in a broader view of these companies and what they offer. It's in my view trying in some ways to not compare like with like from a customer point of view. So the very nuance, what you talked about, let's get into it, maybe that'll become clear to the audience. So like I said, these are very detailed research notes. There were several, I'll say analysts cooks in the kitchen, including Henry Cook, whom I don't know, but four other contributing analysts, two of whom are CUBE alum, Don Feinberg, and Merv Adrian, both really, you know, awesome researchers. And Rick Greenwald, along with Adam Ronthal. And these are public documents, you can go on the web and search for these. So I wonder if we could just look at some of the data and bring up... Guys, bring up the slide one here. And so we'll first look at the operational side and they broke it into four use cases. The traditional transaction use cases, the augmented transaction processing, stream/event processing and operational intelligence. And so we're going to show you there's a lot of data here. So what Gartner did is they essentially evaluated critical capabilities, or think of features and functions, and gave them a weighting, or a weighting, and then a rating. It was a weighting and rating methodology. On a s... The rating was on a scale of one to five, and then they weighted the importance of the features based on their assessment, and talking to the many customers they talk to. So you can see here on the first chart, we're showing both the traditional transactions and the augmented transactions and, you know, the thing... The first thing that jumps out at you guys is that, you know, Oracle with Autonomous is off the charts, far ahead of anybody else on this. And actually guys, if you just bring up slide number two, we'll take a look at the stream/event processing and operational intelligence use cases. And you can see, again, you know, Oracle has a big lead. And I don't want to necessarily go through every vendor here, but guys, if you don't mind going back to the first slide 'cause I think this is really, you know, the core of transaction processing. So let's look at this, you've got Oracle, you've got SAP HANA. You know, right there interestingly Amazon Web Services with the Aurora, you know, IBM Db2, which, you know, it goes back to the good old days, you know, down the list. But so, let me again start with Marc. So why is that? I mean, I guess this is no surprise, Oracle still owns the Mission-Critical for the database space. They earned that years ago. One that, you know, over the likes of Db2 and, you know, Informix and Sybase, and, you know, they emerged as number one there. But what do you make of this data Marc? >> If you look at this data in a vacuum, you're looking at specific functionality, I think you need to look at all the slides in total. And the reason I bring that up is because I agree with what David said earlier, in that the use case that's becoming more prevalent is the integration of transaction and analytics. And more importantly, it's not just your traditional data warehouse, but it's AI analytics. It's big data analytics. It's users are finding that they need more than just simple reporting. They need more in-depth analytics so that they can get more actionable insights into their data where they can react in real time. And so if you look at it just as a transaction, that's great. If you're going to just as a data warehouse, that's great, or analytics, that's fine. If you have a very narrow use case, yes. But I think today what we're looking at is... It's not so narrow. It's sort of like, if you bought a streaming device and it only streams Netflix and then you need to get another streaming device 'cause you want to watch Amazon Prime. You're not going to do that, you want one, that does all of it, and that's kind of what's missing from this data. So I agree that the data is good, but I don't think it's looking at it in a total encompassing manner. >> Well, so before we get off the horses on the track 'cause I love to do that. (Dave chuckles) I just kind of let's talk about that. So Marc, you're putting forth the... You guys seem to agree on that premise that the database that can do more than just one thing is of appeal to customers. I suppose that makes, certainly makes sense from a cost standpoint. But, you know, guys feel free to flip back and forth between slides one and two. But you can see SAP HANA, and I'm not sure what cloud that's running on, it's probably running on a combination of clouds, but, you know, scoring very strongly. I thought, you know, Aurora, you know, given AWS says it's one of the fastest growing services in history and they've got it ahead of Db2 just on functionality, which is pretty impressive. I love Google Spanner, you know, love the... What they're trying to accomplish there. You know, you go down to Microsoft is, they're kind of the... They're always good enough a database and that's how they succeed and et cetera, et cetera. But David, it sounds like you agree with Marc. I would say, I would think though, Amazon kind of doesn't agree 'cause they're like a horses for courses. >> I agree. >> Yeah, yeah. >> So I wonder if you could comment on that. >> Well, I want to comment on two vectors. The first vector is that the size of customer and, you know, a mid-sized customer versus a global $2,000 or global 500 customer. For the smaller customer that's the heart of AWS, and they are taking their applications and putting pretty well everything into their cloud, the one cloud, and Aurora is a good choice. But when you start to get to a requirements, as you do in larger companies have very high levels of availability, the functionality is not there. You're not comparing apples and... Apples with apples, it's two very different things. So from a tier one functionality point of view, IBM Db2 and Oracle have far greater capability for recovery and all the features that they've built in over there. >> Because of their... You mean 'cause of the maturity, right? maturity and... >> Because of their... Because of their focus on transaction and recovery, et cetera. >> So SAP though HANA, I mean, that's, you know... (David talks indistinctly) And then... >> Yeah, yeah. >> And then I wanted your comments on that, either of you or both of you. I mean, SAP, I think has a stated goal of basically getting its customers off Oracle that's, you know, there's always this urinary limping >> Yes, yes. >> between the two companies by 2024. Larry has said that ain't going to happen. You know, Amazon, we know still runs on Oracle. It's very hard to migrate Mission-Critical, David, you and I know this well, Marc you as well. So, you know, people often say, well, everybody wants to get off Oracle, it's too expensive, blah, blah, blah. But we talked to a lot of Oracle customers there, they're very happy with the reliability, availability, recoverability feature set. I mean, the core of Oracle seems pretty stable. >> Yes. >> But I wonder if you guys could comment on that, maybe Marc you go first. >> Sure. I've recently done some in-depth comparisons of Oracle and Aurora, and all their other RDS services and Snowflake and Google and a variety of them. And ultimately what surprised me is you made a statement it costs too much. It actually comes in half of Aurora for in most cases. And it comes in less than half of Snowflake in most cases, which surprised me. But no matter how you configure it, ultimately based on a couple of things, each vendor is focused on different aspects of what they do. Let's say Snowflake, for example, they're on the analytical side, they don't do any transaction processing. But... >> Yeah, so if I can... Sorry to interrupt. Guys if you could bring up the next slide that would be great. So that would be slide three, because now we get into the analytical piece Marc that you're talking about that's what Snowflake specialty is. So please carry on. >> Yeah, and what they're focused on is sharing data among customers. So if, for example, you're an automobile manufacturer and you've got a huge supply chain, you can supply... You can share the data without copying the data with any of your suppliers that are on Snowflake. Now, can you do that with the other data warehouses? Yes, you can. But the focal point is for Snowflake, that's where they're aiming it. And whereas let's say the focal point for Oracle is going to be performance. So their performance affects cost 'cause the higher the performance, the less you're paying for the performing part of the payment scale. Because you're paying per second for the CPUs that you're using. Same thing on Snowflake, but the performance is higher, therefore you use less. I mean, there's a whole bunch of things to come into this but at the end of the day what I've found is Oracle tends to be a lot less expensive than the prevailing wisdom. So let's talk value for a second because you said something, that yeah the other databases can do that, what Snowflake is doing there. But my understanding of what Snowflake is doing is they built this global data mesh across multiple clouds. So not only are they compatible with Google or AWS or Azure, but essentially you sign up for Snowflake and then you can share data with anybody else in the Snowflake cloud, that I think is unique. And I know, >> Marc: Yes. >> Redshift, for instance just announced, you know, Redshift data sharing, and I believe it's just within, you know, clusters within a customer, as opposed to across an ecosystem. And I think that's where the network effect is pretty compelling for Snowflake. So independent of costs, you and I can debate about costs and, you know, the tra... The lack of transparency of, because AWS you don't know what the bill is going to be at the end of the month. And that's the same thing with Snowflake, but I find that... And by the way guys, you can flip through slides three and four, because we've got... Let me just take a quick break and you have data warehouse, logical data warehouse. And then the next slide four you got data science, deep learning and operational intelligent use cases. And you can see, you know, Teradata, you know, law... Teradata came up in the mid 1980s and dominated in that space. Oracle does very well there. You can see Snowflake pop-up, SAP with the Data Warehouse, Amazon with Redshift. You know, Google with BigQuery gets a lot of high marks from people. You know, Cloud Data is in there, you know, so you see some of those names. But so Marc and David, to me, that's a different strategy. They're not trying to be just a better data warehouse, easier data warehouse. They're trying to create, Snowflake that is, an incremental opportunity as opposed to necessarily going after, for example, Oracle. David, your thoughts. >> Yeah, I absolutely agree. I mean, ease of use is a primary benefit for Snowflake. It enables you to do stuff very easily. It enables you to take data without ETL, without any of the complexity. It enables you to share a number of resources across many different users and know... And be able to bring in what that particular user wants or part of the company wants. So in terms of where they're focusing, they've got a tremendous ease of use, tremendous focus on what the customer wants. And you pointed out yourself the restrictions there are of doing that both within Oracle and AWS. So yes, they have really focused very, very hard on that. Again, for the future, they are bringing in a lot of additional functions. They're bringing in Python into it, not Python, JSON into the database. They can extend the database itself, whether they go the whole hog and put in transaction as well, that's probably something they may be thinking about but not at the moment. >> Well, but they, you know, they obviously have to have TAM expansion designs because Marc, I mean, you know, if they just get a 100% of the data warehouse market, they're probably at a third of their stock market valuation. So they had better have, you know, a roadmap and plans to extend there. But I want to come back Marc to this notion of, you know, the right tool for the right job, or, you know, best of breed for a specific, the right specific, you know horse for course, versus this kind of notion of all in one, I mean, they're two different ends of the spectrum. You're seeing, you know, Oracle obviously very successful based on these ratings and based on, you know their track record. And Amazon, I think I lost count of the number of data stores (Dave chuckles) with Redshift and Aurora and Dynamo, and, you know, on and on and on. (Marc talks indistinctly) So they clearly want to have that, you know, primitive, you know, different APIs for each access, completely different philosophies it's like Democrats or Republicans. Marc your thoughts as to who ultimately wins in the marketplace. >> Well, it's hard to say who is ultimately going to win, but if I look at Amazon, Amazon is an all-cart type of system. If you need time series, you go with their time series database. If you need a data warehouse, you go with Redshift. If you need transaction, you go with one of the RDS databases. If you need JSON, you go with a different database. Everything is a different, unique database. Moving data between these databases is far from simple. If you need to do a analytics on one database from another, you're going to use other services that cost money. So yeah, each one will do what they say it's going to do but it's going to end up costing you a lot of money when you do any kind of integration. And you're going to add complexity and you're going to have errors. There's all sorts of issues there. So if you need more than one, probably not your best route to go, but if you need just one, it's fine. And if, and on Snowflake, you raise the issue that they're going to have to add transactions, they're going to have to rewrite their database. They have no indexes whatsoever in Snowflake. I mean, part of the simplicity that David talked about is because they had to cut corners, which makes sense. If you're focused on the data warehouse you cut out the indexes, great. You don't need them. But if you're going to do transactions, you kind of need them. So you're going to have to do some more work there. So... >> Well... So, you know, I don't know. I have a different take on that guys. I think that, I'm not sure if Snowflake will add transactions. I think maybe, you know, their hope is that the market that they're creating is big enough. I mean, I have a different view of this in that, I think the data architecture is going to change over the next 10 years. As opposed to having a monolithic system where everything goes through that big data platform, the data warehouse and the data lake. I actually see what Snowflake is trying to do and, you know, I'm sure others will join them, is to put data in the hands of product builders, data product builders or data service builders. I think they're betting that that market is incremental and maybe they don't try to take on... I think it would maybe be a mistake to try to take on Oracle. Oracle is just too strong. I wonder David, if you could comment. So it's interesting to see how strong Gartner rated Oracle in cloud database, 'cause you don't... I mean, okay, Oracle has got OCI, but you know, you think a cloud, you think Google, or Amazon, Microsoft and Google. But if I have a transaction database running on Oracle, very risky to move that, right? And so we've seen that, it's interesting. Amazon's a big customer of Oracle, Salesforce is a big customer of Oracle. You know, Larry is very outspoken about those companies. SAP customers are many, most are using Oracle. I don't, you know, it's not likely that they're going anywhere. My question to you, David, is first of all, why do they want to go to the cloud? And if they do go to the cloud, is it logical that the least risky approach is to stay with Oracle, if you're an Oracle customer, or Db2, if you're an IBM customer, and then move those other workloads that can move whether it's more data warehouse oriented or incremental transaction work that could be done in a Aurora? >> I think the first point, why should Oracle go to the cloud? Why has it gone to the cloud? And if there is a... >> Moreso... Moreso why would customers of Oracle... >> Why would customers want to... >> That's really the question. >> Well, Oracle have got Oracle Cloud@Customer and that is a very powerful way of doing it. Where exactly the same Oracle system is running on premise or in the cloud. You can have it where you want, you can have them joined together. That's unique. That's unique in the marketplace. So that gives them a very special place in large customers that have data in many different places. The second point is that moving data is very expensive. Marc was making that point earlier on. Moving data from one place to another place between two different databases is a very expensive architecture. Having the data in one place where you don't have to move it where you can go directly to it, gives you enormous capabilities for a single database, single database type. And I'm sure that from a transact... From an analytic point of view, that's where Snowflake is going, to a large single database. But where Oracle is going to is where, you combine both the transactional and the other one. And as you say, the cost of migration of databases is incredibly high, especially transaction databases, especially large complex transaction databases. >> So... >> And it takes a long time. So at least a two year... And it took five years for Amazon to actually succeed in getting a lot of their stuff over. And five years they could have been doing an awful lot more with the people that they used to bring it over. So it was a marketing decision as opposed to a rational business decision. >> It's the holy grail of the vendors, they all want your data in their database. That's why Amazon puts so much effort into it. Oracle is, you know, in obviously a very strong position. It's got growth and it's new stuff, it's old stuff. It's, you know... The problem with Oracle it has like many of the legacy vendors, it's the size of the install base is so large and it's shrinking. And the new stuff is.... The legacy stuff is shrinking. The new stuff is growing very, very fast but it's not large enough yet to offset that, you see that in all the learnings. So very positive news on, you know, the cloud database, and they just got to work through that transition. Let's bring up slide number five, because Marc, this is to me the most interesting. So we've just shown all these detailed analysis from Gartner. And then you look at the Magic Quadrant for cloud databases. And, you know, despite Amazon being behind, you know, Oracle, or Teradata, or whomever in every one of these ratings, they're up to the right. Now, of course, Gartner will caveat this and say, it doesn't necessarily mean you're the best, but of course, everybody wants to be in the upper, right. We all know that, but it doesn't necessarily mean that you should go by that database, I agree with what Gartner is saying. But look at Amazon, Microsoft and Google are like one, two and three. And then of course, you've got Oracle up there and then, you know, the others. So that I found that very curious, it is like there was a dissonance between the hardcore ratings and then the positions in the Magic Quadrant. Why do you think that is Marc? >> It, you know, it didn't surprise me in the least because of the way that Gartner does its Magic Quadrants. The higher up you go in the vertical is very much tied to the amount of revenue you get in that specific category which they're doing the Magic Quadrant. It doesn't have to do with any of the revenue from anywhere else. Just that specific quadrant is with that specific type of market. So when I look at it, Oracle's revenue still a big chunk of the revenue comes from on-prem, not in the cloud. So you're looking just at the cloud revenue. Now on the right side, moving to the right of the quadrant that's based on functionality, capabilities, the resilience, other things other than revenue. So visionary says, hey how far are you on the visionary side? Now, how they weight that again comes down to Gartner's experts and how they want to weight it and what makes more sense to them. But from my point of view, the right side is as important as the vertical side, 'cause the vertical side doesn't measure the growth rate either. And if we look at these, some of these are growing much faster than the others. For example, Snowflake is growing incredibly fast, and that doesn't reflect in these numbers from my perspective. >> Dave: I agree. >> Oracle is growing incredibly fast in the cloud. As David pointed out earlier, it's not just in their cloud where they're growing, but it's Cloud@Customer, which is basically an extension of their cloud. I don't know if that's included these numbers or not in the revenue side. So there's... There're a number of factors... >> Should it be in your opinion, Marc, would you include that in your definition of cloud? >> Yeah. >> The things that are hybrid and on-prem would that cloud... >> Yes. >> Well especially... Well, again, it depends on the hybrid. For example, if you have your own license, in your own hardware, but it connects to the cloud, no, I wouldn't include that. If you have a subscription license and subscription hardware that you don't own, but it's owned by the cloud provider, but it connects with the cloud as well, that I would. >> Interesting. Well, you know, to your point about growth, you're right. I mean, it's probably looking at, you know, revenues looking, you know, backwards from guys like Snowflake, it will be double, you know, the next one of these. It's also interesting to me on the horizontal axis to see Cloud Data and Databricks further to the right, than Snowflake, because that's kind of the data lake cloud. >> It is. >> And then of course, you've got, you know, the other... I mean, database used to be boring, so... (David laughs) It's such a hot market space here. (Marc talks indistinctly) David, your final thoughts on all this stuff. What does the customer take away here? What should I... What should my cloud database management strategy be? >> Well, I was positive about Oracle, let's take some of the negatives of Oracle. First of all, they don't make it very easy to rum on other platforms. So they have put in terms and conditions which make it very difficult to run on AWS, for example, you get double counts on the licenses, et cetera. So they haven't played well... >> Those are negotiable by the way. Those... You bring it up on the customer. You can negotiate that one. >> Can be, yes, They can be. Yes. If you're big enough they are negotiable. But Aurora certainly hasn't made it easy to work with other plat... Other clouds. What they did very... >> How about Microsoft? >> Well, no, that is exactly what I was going to say. Oracle with adjacent workloads have been working very well with Microsoft and you can then use Microsoft Azure and use a database adjacent in the same data center, working with integrated very nicely indeed. And I think Oracle has got to do that with AWS, it's got to do that with Google as well. It's got to provide a service for people to run where they want to run things not just on the Oracle cloud. If they did that, that would in my term, and my my opinion be a very strong move and would make make the capabilities available in many more places. >> Right. Awesome. Hey Marc, thanks so much for coming to theCUBE. Thank you, David, as well, and thanks to Gartner for doing all this great research and making it public on the web. You can... If you just search critical capabilities for cloud database management systems for operational use cases, that's a mouthful, and then do the same for analytical use cases, and the Magic Quadrant. There's the third doc for cloud database management systems. You'll get about two hours of reading and I learned a lot and I learned a lot here too. I appreciate the context guys. Thanks so much. >> My pleasure. All right, thank you for watching everybody. This is Dave Vellante for theCUBE. We'll see you next time. (upbeat music)

Published Date : Dec 18 2020

SUMMARY :

leaders all around the world. Marc Staimer is the founder, to really try to, you know, or what you have to manage. based on your reading of the Gartner work. So the very nuance, what you talked about, You're not going to do that, you I thought, you know, Aurora, you know, So I wonder if you and, you know, a mid-sized customer You mean 'cause of the maturity, right? Because of their focus you know... either of you or both of you. So, you know, people often say, But I wonder if you But no matter how you configure it, Guys if you could bring up the next slide and then you can share And by the way guys, you can And you pointed out yourself to have that, you know, So if you need more than one, I think maybe, you know, Why has it gone to the cloud? Moreso why would customers of Oracle... on premise or in the cloud. And as you say, the cost in getting a lot of their stuff over. and then, you know, the others. to the amount of revenue you in the revenue side. The things that are hybrid and on-prem that you don't own, but it's Well, you know, to your point got, you know, the other... you get double counts Those are negotiable by the way. hasn't made it easy to work and you can then use Microsoft Azure and the Magic Quadrant. We'll see you next time.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavidPERSON

0.99+

David FloyerPERSON

0.99+

Rick GreenwaldPERSON

0.99+

DavePERSON

0.99+

Marc StaimerPERSON

0.99+

MarcPERSON

0.99+

Dave VellantePERSON

0.99+

MicrosoftORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

Adam RonthalPERSON

0.99+

Don FeinbergPERSON

0.99+

GoogleORGANIZATION

0.99+

LarryPERSON

0.99+

AWSORGANIZATION

0.99+

OracleORGANIZATION

0.99+

December 2020DATE

0.99+

IBMORGANIZATION

0.99+

Henry CookPERSON

0.99+

Palo AltoLOCATION

0.99+

twoQUANTITY

0.99+

five yearsQUANTITY

0.99+

GartnerORGANIZATION

0.99+

Merv AdrianPERSON

0.99+

100%QUANTITY

0.99+

second pointQUANTITY

0.99+

Old Version: James Kobielus & David Floyer, Wikibon | VMworld 2018


 

from Las Vegas it's the queue covering VMworld 2018 brought to you by VMware and its ecosystem partners and we're back here at the Mandalay Bay in somewhat beautiful Las Vegas where we're doing third day of VMworld on the cube and on Peterborough and I'm joined by my two lead analysts here at Ricky bond with me Jim Camilo's who's looking at a lot of the software stuff David floor who's helping to drive a lot of our hardware's research guys you've spent an enormous amount of time talking to an enormous number of customers a lot of partners and we all participated in the Analyst Day on Monday let me give you my first impressions and I want to ask you guys some questions here you thought so I have it this is you know my third I guess VMworld in or in a row and and my impression is that this has been the most coherent of the VM worlds I've seen you can tell when a company's going through a transition because they're reaching to try to bring a story together and that sets the tone but this one hot calendar did a phenomenal job of setting up the story it makes sense it's coherent possibly because it aligns so well with what we think is going to happen in the industry so I want to ask you guys based on three days of one around and talking to customers David foyer what's been the high point what have you found is the most interesting thing well I think the most interesting thing is the excitement that there is over VMware if you if you contrast that with a two three years ago the degree of commitment of customers to viennois the degree of integration they're wanting to make the degree rate of change and ideas that have come out of VMware it's like two different companies totally different companies some of the highlights for me were the RDS the bringing from AWS to on site as well as on the AWS cloud RDS capabilities I think that's a very very interesting thing that's the relational database is services the Maria DB and all the other services that's a very exciting thing to me and a hint to me that AWS is going to have to get serious about well Moore's gone out I think it's a really interesting point that after a lot of conversations with a lot of folks saying all AWS it's all going to go up to the cloud and wondering whether that also is a one-way street for VMware Casta Moore's right but now we're seeing it's much more of a bilateral relationship it's a moving it to the right place and that's the second thing the embracing of multi-cloud by everybody one cloud is not going to do everything they're going to be SAS clouds they're going to be multiple places where people are gonna put certain workloads because that's the best strategic fit for it and the acceptance in the marketplace that that is where it's going to go I think that again is a major change so hybrid cloud and multi cloud environments and then the third thing is I think the richness of the ecosystem is amazing the the going on the floor and the number of people that have come to talk to us with new ideas really fascinating ideas is something I haven't seen at all for the last last three four years and so I'm gonna come back to you on that but it goes back to the first point that you make that yeah there is a palpable excitement here about VMware that two-three years ago the conversation was how much longer is the franchise gonna be around Jim but now it's clear yeah it's gonna be around Jim how about you yeah actually I'm like you guys I'm a newbie to VM world this is my very first remember I'm a big data analyst I'm a data science an AI guy but obviously I've been aware of VMware and I've had many contacts with them over the years my take away my prime and I like Pat Gail singers I agree with you Peter they're really coherent take and I like that phrase even though it sounds clucking impact kind of apologize they are the dial tone to the multi-cloud if the surgery really gives you a strong sense or who else can you character is in this whole market space cloud computing has essentially a multi cloud provider who provide the unifying virtualization glue to help their custom to help customers who are investing in an AWS and maybe in a bit of you know you're adopting Google and Microsoft Azure and so forth providing a virtualization layer that's the above server virtualization network virtualization VDI all the way to the edge nobody can put it all is putting it all together and quite the way that VMware is one of the my chief takeaways is similar to David's which is that in terms of the notion of a hybrid cloud VMware with its whole what's it's doing with RDS but also projects like this project dimension which is in project in progress taking essentially the entire VMware virtualization stack and putting it onto an appliance for deployment on the edges and then for them to manage it VMware of this their plans as an end-to-end managed edge cloud service and so forth Wow the blurring of public and private cloud I don't even think the term hybrid cloud applies it's just a blurry the common cloud yeah it's moving to the workload the clouds moving to the data which is exactly what we say they are halfway there in terms of that vision halfway in a sense that RDS has been announced the you know on the VMware and this project dimension they're well along with that if there was a briefings for the analyst space I'm really impressed for how they're architecting this I think they've got a shot to really dominate well I'll tell you so I would agree with you just to maybe provide a slightly different version of one of the things you said I definitely agree I think what's VMware hopes to do and I think they're not alone is to have AWS look like an appliance to their console to have as you look like an appliance of their Khan so through free em where you can get access to whatever services you need including your VMware machines your VMs inside those clouds but that increasingly their their goal is to be that control point that management point for all of these different resources that are building and it is very compelling I think that there's one area that I still think we need more from as analysts and we always got to look through no and what's yeah what was more required and I hear what you say about project dimension but I think that the edge story still requires a fair amount of work oh yeah it's a project in place but that's going to be an increasingly important locus of how architectures get laid out how people think about applications in the future how design happens how methodologies for building software work David what do you think what when you look out what what is what what is more is needed for you so really I think there are two things that give me a small concern the the edge that's a long term view so they got time to get that right but the edge view is very much an IT view top-down and they are looking to put in place everything that they think the OT people should fit in with I think that is personally not going to be a winning strategy you you have to take it from the bottom up the world is going to go towards devices very rich devices and sensors lots of software right on that device the inference work on those devices and the job of IT will be to integrate those devices it won't be those devices taking on the standards of IT it'll be IT that has to shape itself to look after all those devices there so that's a that's the main viewpoint I think that needs adjustment and it will come I'm sure over time but as you said there's a lot of computer science it's going to be an enormous amount of new partnerships are gonna be fabricate exactly to make this happen Jim what do you think yeah I agree terms of partnerships one big gap from both VMware and Dell technologies partnerships and romance and technology proposes AI now they have a project VMware call from another project called project Magna which is really AI ops in fact I published a wiki about reports this week on AI ops AI to drive IT Service Management and to and they're doing some stuff they're working on that project it's just you know the beginning stages I think what's going to happen is that vmware dell technologies they're gonna have to make strategic acquisitions of AI solution providers to build up that capability because that's going to be fundamental to their ability to manage this complex multi called fabric from end to end continuously they need that competency internally that can't be simply a partner providing that that's got to be their core competencies so you know I'm gonna push it I'll give you the contrarian point of view okay we actually had Khamsin VMware we've had a lot of conversations about this does that is that a reflection of David's point about top-down buying things and pushing it down as opposed to other conversations we've had about how the edge is going to evolve where a lot of OT guys are going to combine with business expertise and technology expertise to create specialized solutions and is and then VMware is gonna have to reach out to them and make VMware relevant to them do you think it's going to be VMware buying a bunch of stuff or an a-grade no solution or is it going to be the solutions coming from elsewhere and VM at VMware I just becoming more relevant to them now you can still be buying a bunch of stuff to get that horizontal in place but which way you think it's going to go I think it's gonna be the top-down they're gonna buy stuff because if I talk to the channel one of the channel people this morning about well you know but they've got an IOT connected bundle and so forth they announced this show you know I think they agree with me that the core AI technology needs to be built into the fundamentals like the IOT stack bundle that they then provide to the channel partners for with you know with channel specific content that they can then tweak and customize to their specific needs but you know the core requirements for a I are horizontal you know it's the ability to run neural networks to do predictive analysis anomaly detection and so forth this is all cross-cutting across all domains it has to be in the core application stack they can't be simply something they source for particular channel opportunities it has to be leveraged across you know the same core tensorflow models for anomaly detection for manufacturing for logistics for you know customer relationship management whatever it's or are you saying essentially that then VMware becomes that horizontal play even though even if the solution providers are increasingly close to the actual action where the edges III I'm gonna disagree we can gently on that but we'd still be friends [Music] no it's you know I'm I'm an OT guy of hearth I suppose and I think that that is going to be a stronger force in terms of VMware but there will be some places where you it will be top-down but other places that where it's going to be need needed to adjust but I think there's one other there very interesting area I'd like to bring up in terms of of this question of acquisition what what we heard about beforehand was excellent results and VMware has been adding a you know a billion dollars a year in terms of free cash there and they have thirteen billion in short term cash there and the the refinancing from Dell is gonna take eleven of that thirteen and put it towards the towards the the company now you can work towards deltek yes well just Dell Dell as a hold and and silver later towards those partners I I personally believe that there is such a lot of opportunity that's going to be out there if you take NSX for example it has the potential to do things in new areas they're gonna need to provide solutions in those new areas and aggressively go after those new areas and that's going to mean big investments and many other areas where I think they are going to need acquisitions to strengthen the whole story they have the whole multi-cloud story about this real-time operating system in a sexy has a network routing virtualization backplane I mean it needs to go real-time so sensitive guaranteed ladies if they need that big investments guarantee yeah they need to go there yeah so what we're agreeing on that and I get concerned that it's not going to be given the right resources you know to be able to actually go after the opportunities that they have genuinely created it's gonna mean from you see how that plays out so I think all drugs in the future I think saying though is that there is going to be a solution a set of solution players that VMware is going to have to make significant moves to make them relevant and then the question is where it's the values story what's the value proposition it's probably gonna be like all partnerships yeah some are gonna claim that they are doing it also some are gonna DM where it's gonna claim that they do more of it but at the end of the day VMware has to make themself relevant to the edge however that happens I want to pick up on NSX because I'm a pretty big believer that NSX may be the very special crown jewel and a lot of the stuff this notion of hybrid cloud whatever we call it let's just call it extended cloud let me talk of a better word like it is predicated on the idea that I also have a network that can naturally and easily not just bridge but truly multi network interoperate internet work with a lot of different cloud sources but also all different cloud locations and there's not a lot of technologies out there that are great candidates to do that and it's and I look at NSX and I'm wondering is that gonna be kind of a I want to take the metaphor too far but is that gonna be kind of a new tcp/ip for the cloud in the sense that you're still gonna run over tcp/ip and you're still gonna run over the Internet but now we're gonna get greater visibility into jobs into workloads into management infrastructures into data locations and data placement predictive movement and NSX is going to be the at the vanguard of showing how that's gonna work and the security side of that especially to be able to know what is connected to what and what shouldn't be connected to what and to be able to have that yeah they need stateful structured streaming others Kafka flink whatever they need that to be baked into the whole nsx virtualization layer that much more programmable and that provides that much better a target for applications all right last question then we got a wrap guys David as you walk out the door get in the plane what are you taking away what's your last impression my last impression is one of genuine excitement wanting to work wanting to follow up with so many of the smaller organizations the partners that have been here and who are genuinely providing in this ecosystem a very rich tapestry of of capability that's great Jim my takeaway is I want to see their roadmap for kubernetes and serverless there wasn't a hole last year they made an announcement of a serverless project I forgot what the code name is didn't hear a whole lot about it this year but they're going up the app stack they got a coop you know distribution you know they're if they need a developer story I mean developers are building functional apps and so forth you know you can and they're also containerized they need they need a developer story and they need a server list story and they need to you need to bring us up to speed on where they're going in that regard because AWS their predominant partner I mean they got lambda functions and all that stuff you know that's that's the development platform of the present and future and I'm not hearing an intersection of that story with VMware's a story yeah my last thing that I'll say is that I think that for the next five years VMware is gonna be one of the companies that shapes the future of the cloud and I don't think we would have said that a couple of names no they wouldn't I agree with you so you said yes all right so this has been the wiki bond research leadership team talking about what we've heard at VMware this year VMworld this year a lot of great conversation feel free to reach out to us and if you want to spend more time with rookie bond love to have you once again Peter burrows for David floor and Jim Kabila's thank you very much for watching the cube we'll talk to you again [Music]

Published Date : Aug 29 2018

**Summary and Sentiment Analysis are not been shown because of improper transcript**

ENTITIES

EntityCategoryConfidence
DavidPERSON

0.99+

James KobielusPERSON

0.99+

Jim KabilaPERSON

0.99+

thirteen billionQUANTITY

0.99+

David FloyerPERSON

0.99+

AWSORGANIZATION

0.99+

Jim CamiloPERSON

0.99+

VMwareORGANIZATION

0.99+

DellORGANIZATION

0.99+

Las VegasLOCATION

0.99+

JimPERSON

0.99+

first impressionsQUANTITY

0.99+

three daysQUANTITY

0.99+

two thingsQUANTITY

0.99+

thirteenQUANTITY

0.99+

PeterPERSON

0.99+

last yearDATE

0.99+

Pat GailPERSON

0.99+

MoorePERSON

0.99+

Mandalay BayLOCATION

0.99+

first pointQUANTITY

0.99+

second thingQUANTITY

0.98+

firstQUANTITY

0.98+

GoogleORGANIZATION

0.97+

third thingQUANTITY

0.97+

this yearDATE

0.97+

thirdQUANTITY

0.97+

this yearDATE

0.97+

NSXORGANIZATION

0.97+

two-three years agoDATE

0.97+

David floorPERSON

0.96+

VMworldORGANIZATION

0.96+

two different companiesQUANTITY

0.95+

bothQUANTITY

0.95+

VMworld 2018EVENT

0.95+

Maria DBTITLE

0.95+

wikiORGANIZATION

0.95+

MicrosoftORGANIZATION

0.95+

this weekDATE

0.94+

two lead analystsQUANTITY

0.94+

David foyerPERSON

0.93+

deltekORGANIZATION

0.93+

MondayDATE

0.93+

third dayQUANTITY

0.93+

two three years agoDATE

0.92+

one areaQUANTITY

0.92+

this morningDATE

0.91+

oneQUANTITY

0.91+

KafkaTITLE

0.9+

Analyst DayEVENT

0.89+

VMworldEVENT

0.89+

KhamsinORGANIZATION

0.88+

VMwareTITLE

0.84+

Ricky bondORGANIZATION

0.84+

WikibonORGANIZATION

0.83+

one cloudQUANTITY

0.82+

lot of partnersQUANTITY

0.82+

elevenQUANTITY

0.81+

a billion dollars a yearQUANTITY

0.81+

David Floyer, Wikibon | Pure Storage Accelerate 2018


 

>> Narrator: Live from the Bill Graham Auditorium in San Francisco, it's theCUBE, covering Pure Storage Accelerate, 2018, brought to you by Pure Storage. >> Welcome back to theCUBE's coverage of Pure Storage Accelerate 2018. I'm Lisa Martin. Been here all day with Dave Vellante. We're joined by David Floyer now. Guys, really interesting, very informative day. We got to talk to a lot of puritans, but also a breadth of customers, from Mercedes Formula One, to Simpson Strong-Tie to UCLA's School of Medicine. Lot of impact that data is making in a diverse set of industries. Dave, you've been sitting here, with me, all day. What are some of the key takeaways that you have from today? >> Well, Pure's winning in the marketplace. I mean, Pure said, "We're not going to bump along. "We're going to go for it. "We're going to drive growth. "We don't care if we lose money, early on." They bet that the street would reward that model, it has. Kind of a little mini Amazon, version of Amazon model. Grow, grow, grow, worry about profits down the road. They're eking out a slight, little positive free cashflow, on a non-gap basis, so that's good. And they were first with All-Flash, really kind of early on. They kind of won that game. You heard David, today. The NVMe, the first with NVMe. No uplifts on pricing for NVMe. So everybody's going to follow that. They can do the Evergreen model. The can do these things and claim these things as we were first. Of course, we know, David Floyer, you were first to make the call, back in 2008, (laughs) on Flash and the All-Flash data center, but Pure was right there with you. So they're winning in that respect. Their ecosystem is growing. But, you know, storage companies never really have this massive ecosystem that follow them. They really have to do integration. So that's, that's a good thing. So, you know, we're watching growth, we're watching continued execution. It seems like they are betting that their product portfolio, their platform, can serve a lot of different workloads. And it's going to be interesting to see if they can get to two billion, the kind of, the next milestone. They hit a billion. Can they get to two billion with the existing sort of product portfolio and roadmap, or do they have to do M&A? >> David: You're right. >> That's one thing to watch. The other is, can Pure remain independent? David, you know well, we used to have this conversation, all the time, with the likes of David Scott, at 3PAR, and the guys at Compellent, Phil Soran and company. They weren't able, Frank Slootman at Data Domain, they weren't able to stay independent. They got taken out. They weren't pricey enough for the market not to buy them. They got bought out. You know, Pure, five billion dollar market cap, that's kind of rich for somebody to absorb. So it was kind of like NetApp. NetApp got too expensive to get acquired. So, can they achieve that next milestone, two billion. Can they get to five billion. The big difference-- >> Or is there any hiccup, on the way, which will-- >> Yeah, right, exactly. Well the other thing, too, is that, you know, NetApp's market was growing, pretty substantially, at the time, even though they got hit in the dot-com boom. The overall market for Pure isn't really growing. So they have to gain share in order to get to that two billion, three billion, five billion dollar mark. >> If you break the market into the flash and non flash, then they're in the much better half of the market. That one is still growing, from that perspective. >> Well, I kind of like to look at the service end piece of it. I mean, they use this term, by Gartner, today, the something, accelerated, it's a new Gartner term, in 2018-- >> Shared Accelerated Storage >> Shared Accelerated Storage. Gartner finally came up with a category that we called service end. I've been joking all day. Gartner has a better V.P. of naming than we do. (chuckles) We're looking' at service end. I mean, I started, first talking about it, in 2009, thanks to your guidance. But that chart that you have that shows the sort of service end, which is essentially Pure, right? It's the, it's not-- >> Yes. It's a little more software than Pure is. But Pure is an awful lot of software, yes. And showing it growing, at the expense of the other segments, you know. >> David: Particularly sad. >> Particularly sad. Very particularly sad. >> So they're really well positioned, from that standpoint. And, you know, the other thing, Lisa, that was really interesting, we heard from customers today, that they switched for simplicity. Okay, not a surprise. But they were relatively unhappy with some of their existing suppliers. >> Right. >> They got kind of crummy service from some of their existing suppliers. >> Right. >> Now these are, maybe, smaller companies. One customer called out SimpliVity, specifically. He said, "I loved 'em when they were an independent company, "now they're part of HPE, meh, "I don't get service like the way I used to." So, that's a sort of a warning sign and a concern. Maybe their, you know, HPE's prioritizing the bigger customers, maybe the more profitable customers, but that can come back to bite you. >> Lisa: Right. >> So Pure, the point is, Pure has the luxury of being able to lose money, service, like crazy, those customers that might not be as profitable, and grow from it's position of a smaller company, on up. >> Yeah, besides the Evergreen model and the simplicity being, resoundingly, drivers and benefits, that customers across, you know, from Formula One to medical schools, are having, you're right. The independence that Pure has currently is a selling factor for them. And it's also probably a big factor in retention. I mean, they've got a Net Promoter Score of over 83, which is extremely high. >> It's fantastic, isn't it? I think there would be VMI, that I know of, has even higher one, but it's a very, very high score. >> It's very high. They added 300 new customers, last quarter alone, bringing their global customer count to over 4800. And that was a resounding benefit that we were hearing. They, no matter how small, if it's Mercedes Formula One or the Department of Revenue in Mississippi, they all feel important. They feel like they're supported. And that's really key for driving something like a Net Promoter Score. >> Pure had definitely benefited from, it's taken share from EMC. It did early on with VMAX and Symmetrix and VNX. We've seen Dell EMC storage business, you know, decline. It probably has hit bottom, maybe it starts to grow again. When it starts to grow again, I think, even last quarter, it's growth, in dollars, was probably the size of Pure. (chuckles) You know, so, but Pure has definitely benefited from stealing share. The flip side of all this, is when you talk to you know, the CxOs, the big customers, they're doing these big digital transformations. They're not buying products, you know, they're buying transformations. They're buying sets of services. They're buying relationships, and big companies like Dell and IBM and HPE, who have large services arms, can vie for certain business that Pure, necessarily, can't. So, they've got the advantage of being smaller, nimbler, best of breed product, but they don't have this huge portfolio of capabilities that gives them a seat at the CxO table. And you saw that, today. Charlie Giancarlo, his talk, he's a techie. The guys here, Kicks, Hat, they're techies. They're hardcore storage guys. They love storage. It reminds me of the early days of EMC, you know, it's-- >> David: Or NetApp. Yeah. Yeah, or NetApp, right. They're really focused on that. So there's plenty of market for them, right now. But I wonder, David, if you could talk about, sort of architecturally, people used to criticize the two controller, you know, approach. It obviously seems to be doing very well. People take shots at their, the Evergreen model, saying "Oh, we can do that too." But, again, Pure was first. Architecturally, what's your assessment of Pure? >> So, the Evergreen, I think, is excellent. They've gone about that, well. I think, from a straighforward architecture, they kept it very simple. They made a couple of slightly, odd decisions. They went with their own NAND chips, putting them into their own stuff, which made them much smaller, much more compact, completely in charge of the storage stack. And that was a very important choice they made, and it's come out well for them. I have a feeling. My own view is that M.2 is actually going to be the form factor of the future, not the SSD. The Ssd just fitted into a hard disk slot. That was it's only benefit. So, when that comes along, and the NAND vendors want to increase the value that they get from these stacks, etc., I'm a little bit nervous about that. But, having said that, they can convert back. >> Yeah, I mean, that seems like something they could respond to, right? >> Yeah, absolutely. >> I was at the Micron financial analysts' meeting, this week. And a lot of people were expecting that, you know, the memory business has always been very cyclical, it's like the disk drive business. But, it looks like, because of the huge capital expenses required, it looks like supply, looks like they've got a good handle on supply. Micron made a good strong case to the street that, you know, the pricing is probably going to stay pretty favorable for them. So, I don't know what your thoughts are on that, but that could be a little bit of a head wind for some of the systems suppliers. >> I take that with a pinch of salt. They always want to have the market saying it's not going to go down. >> Of course, yeah. And then it crashes. (chuckles) >> The normal market place is, for any of that, is go through this series of S-curves, as you reach a certain point of volume, and 3D NAND has reached that point, that it will go down, inevitably, and then cue comes in,and then that there will go down, again, through that curve. So, I don't see the marketplace changes. I also think that there's plenty of room in the marketplace for enterprise, because the biggest majority of NAND production is for consumer, 80% goes to consumer. So there's plenty of space, in the marketplace, for enterprise to grow. >> But clearly, the prices have not come down as fast as expected because of supply constraints And the way in which companies like Pure have competed with spinning disks, go through excellent data reduction algorithms, right? >> Yes. >> So, at one point, you had predicted there would be a crossover between the cost per bit of flash and spinning disk. Has that crossover occurred, or-- >> Well, I added in the concept of sharing. >> Raw. >> Yeah, raw. But, added in the cost of sharing, the cost-benefit of sharing, and one of the things that really impresses me is their focus on sharing, which is to be able to share that data, for multiple workloads, in one place. And that's excellent technology, they have. And they're extending that from snapshots to cloud snaps, as well. >> Right. >> And I understand that benefit, but from a pure cost per bit standpoint, the crossover hasn't occurred? >> Oh no. No, they're never going to. I don't think they'll ever get to that. The second that happens, disks will just disappear, completely. >> Gosh, guys, I wish we had more time to wrap things up, but thanks, so much, Dave, for joining me all day-- >> Pleasure, Lisa. >> And sporting The Who to my Prince symbol. >> Awesome. >> David, thanks for joining us in the wrap. We appreciate you watching theCUBE, from Pure Storage Accelerate, 2018. I'm Lisa Martin, for Dave and David, thanks for watching.

Published Date : May 24 2018

SUMMARY :

brought to you by Pure Storage. that you have from today? They bet that the street would reward that model, it has. Can they get to five billion. Well the other thing, too, is that, you know, If you break the market into the flash and non flash, Well, I kind of like to look at But that chart that you have that shows the at the expense of the other segments, Particularly sad. And, you know, the other thing, Lisa, They got kind of crummy service but that can come back to bite you. So Pure, the point is, Pure has the luxury that customers across, you know, from I think there would be VMI, that I know of, And that was a resounding benefit that we were hearing. It reminds me of the early days of EMC, you know, it's-- the two controller, you know, approach. completely in charge of the storage stack. And a lot of people were expecting that, you know, I take that with a pinch of salt. And then it crashes. So, I don't see the marketplace changes. So, at one point, you had predicted But, added in the cost of sharing, I don't think they'll ever get to that. We appreciate you watching theCUBE,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
LisaPERSON

0.99+

DavidPERSON

0.99+

IBMORGANIZATION

0.99+

David FloyerPERSON

0.99+

Lisa MartinPERSON

0.99+

Dave VellantePERSON

0.99+

Frank SlootmanPERSON

0.99+

2018DATE

0.99+

2008DATE

0.99+

EMCORGANIZATION

0.99+

DavePERSON

0.99+

DellORGANIZATION

0.99+

VMAXORGANIZATION

0.99+

Charlie GiancarloPERSON

0.99+

2009DATE

0.99+

GartnerORGANIZATION

0.99+

two billionQUANTITY

0.99+

80%QUANTITY

0.99+

David ScottPERSON

0.99+

VNXORGANIZATION

0.99+

five billionQUANTITY

0.99+

HPEORGANIZATION

0.99+

three billionQUANTITY

0.99+

AmazonORGANIZATION

0.99+

SymmetrixORGANIZATION

0.99+

Department of RevenueORGANIZATION

0.99+

300 new customersQUANTITY

0.99+

Data DomainORGANIZATION

0.99+

3PARORGANIZATION

0.99+

PureORGANIZATION

0.99+

last quarterDATE

0.99+

Pure StorageORGANIZATION

0.99+

Phil SoranPERSON

0.99+

MississippiLOCATION

0.99+

UCLAORGANIZATION

0.99+

firstQUANTITY

0.99+

MicronORGANIZATION

0.98+

CompellentORGANIZATION

0.98+

EvergreenORGANIZATION

0.98+

todayDATE

0.98+

One customerQUANTITY

0.98+

oneQUANTITY

0.98+

a billionQUANTITY

0.98+

over 4800QUANTITY

0.98+

San FranciscoLOCATION

0.97+

theCUBEORGANIZATION

0.97+

two controllerQUANTITY

0.97+

over 83QUANTITY

0.96+

Dell EMCORGANIZATION

0.96+

five billion dollarQUANTITY

0.96+

one placeQUANTITY

0.95+

NVMeORGANIZATION

0.95+

PurePERSON

0.95+

Simpson Strong-TieORGANIZATION

0.94+

WikibonORGANIZATION

0.92+

NetAppTITLE

0.92+

Action Item Quick Take | David Floyer | Flash and SSD, April 2018


 

>> Hi, I'm Peter Burris with another Wikibon Action Item Quick Take. David Floyer, you've been at the vanguard of talking about the role that Flash, SSD's, and others, other technologies are going to have in the technology industry, predicting early on that it was going to eclipse HDD, even though you got a lot of blow back about the "We're going to remain expensive and small". That's changed. What's going on? >> Well, I've got a prediction that we'll have petabyte drives, SSD drives, within five years. Let me tell you a little bit why. So there's this new type of SSD that's coming into town. It's the mega SSD, and Nimbus Data has just announced this mega SSD. It's a hundred terabyte drive. It's very high density, obviously. It has much fewer, uh, much fewer? It has fewer IOPS and bandwidth than SSD. The access density is much better than HDD, but still obviously lower than high-performance SSD. Much, much lower space power than either SSD or HDD in terms of environmentals. It's three and a half inch. That's compatible with HDD. It's obviously looking to go into the same slots. A hundred terabytes today, two hundred terabytes, 10x, that 10x of the Hammer drives that are coming in from HDD's in 2019, 2020, and the delta will increase over time. It's still more expensive than HDD per bit, but it's, and it's not a direct replacement, but much greater ability to integrate with data services and other things like that. So the prediction, then, is get ready for mega SSD's. It's going to carve out a space at the low end of SSD's and into the HDD's, and we're going to have one petabyte, or more, drives within five years. >> Big stuff from small things. David Floyer, thank you very much. And, once again, this has been a Wikibon Action Item Quick Take. (chill techno music)

Published Date : Apr 6 2018

SUMMARY :

about the "We're going to remain expensive and small". It's the mega SSD, and Nimbus Data has just announced Wikibon Action Item Quick Take.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
David FloyerPERSON

0.99+

Peter BurrisPERSON

0.99+

April 2018DATE

0.99+

2019DATE

0.99+

2020DATE

0.99+

10xQUANTITY

0.99+

two hundred terabytesQUANTITY

0.99+

three and a half inchQUANTITY

0.99+

Nimbus DataORGANIZATION

0.98+

A hundred terabytesQUANTITY

0.98+

one petabyteQUANTITY

0.97+

todayDATE

0.95+

five yearsQUANTITY

0.94+

WikibonORGANIZATION

0.92+

a hundred terabyteQUANTITY

0.83+

petabyteQUANTITY

0.56+

David Floyer | Action Item Quick Take - March 30, 2018


 

>> Hi, this is Peter Burris with another Wikibon Action Item Quick Take. David Floyer, big news from Redmond, what's going on? >> Well, big Microsoft announcement. If we go back a few years before Nadella took over, Ballmer was a great believer in one Microsoft. They bought Nokia, they were looking at putting Windows into everything, it was a Windows led, one Microsoft organization. And a lot of ambitious ideas were cut off because they didn't get the sign off by, for example, the Windows group. Nadella's first action, and I actually was there, was to announce Office on the iPhone. A major, major thing that had been proposed for a long time was being held up internally. And now he's gone even further. The focus, clear focus of Microsoft is on the cloud, you know 50% plus CAGR on the cloud, Office 365 CAGR 41% and AI, focusing on AI and obviously the intelligent age as well. So Windows 10, Myerson, the leader there, is out, 2% CAGR, he missed his one billion Windows target, by a long way, something like 50%. Windows functionality is being distributed, essentially, across the whole of Microsoft. So hardware is taking the Xbox and the Surface. Windows server itself is going to the cloud. So, big change from the historical look of Microsoft, but, a trimming down of the organization and a much clearer focus on the key things driving Microsoft's fantastic increase in net worth. >> So Microsoft retooling to take advantage and be more relevant, sustain it's relevance in the new era of computing. Once again, this has been a Wikibon Action Item Quick Take. (soft electronic music)

Published Date : Mar 30 2018

SUMMARY :

David Floyer, big news from Redmond, what's going on? So Windows 10, Myerson, the leader there, is out, in the new era of computing.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
David FloyerPERSON

0.99+

Peter BurrisPERSON

0.99+

March 30, 2018DATE

0.99+

MicrosoftORGANIZATION

0.99+

NadellaPERSON

0.99+

NokiaORGANIZATION

0.99+

50%QUANTITY

0.99+

iPhoneCOMMERCIAL_ITEM

0.99+

BallmerPERSON

0.99+

2%QUANTITY

0.99+

one billionQUANTITY

0.99+

Windows 10TITLE

0.99+

Office 365TITLE

0.99+

WindowsTITLE

0.98+

XboxCOMMERCIAL_ITEM

0.98+

SurfaceCOMMERCIAL_ITEM

0.96+

first actionQUANTITY

0.95+

OfficeTITLE

0.94+

MyersonPERSON

0.93+

oneQUANTITY

0.92+

WikibonORGANIZATION

0.87+

RedmondLOCATION

0.86+

41%QUANTITY

0.84+

WindowsORGANIZATION

0.8+

few yearsDATE

0.43+

Wikibon Action Item Quick Take | David Floyer | OCP Summit, March 2018


 

>> Hi I'm Peter Burris, and welcome once again to another Wikibon Action Item Quick Take. David Floyer you were at OCP, the Open Compute Platform show, or summit this week, wandered the floor, talked to a lot of people, one company in particular stood out, Nimbus Data, what'd you hear? >> Well they had a very interesting announcement of their 100 terrabyte three and a half inch SSD, called the ExaData. That's a lot of storage in a very small space. It's high capacity SSDs, in my opinion, are going to be very important. They are denser, much less power, much less space, not as much performance, but fit in very nicely between the lowest level of disc, hard disc storage and the upper level. So they are going to be very useful in lower tier two applications. Very low friction for adoption there. They're going to be useful in tier three, but they're not direct replacement for disc. They work in a slightly different way. So the friction is going to be a little bit higher there. And then in tier four, there's again very interesting of putting all of the metadata about large amounts of data and put the metadata on high capacity SSD to enable much faster access at a tier four level. So action item for me is have a look at my research, and have a look at the general pricing: it's about half of what a standard SSD is. >> Excellent so this is once again a Wikibon Action Item Quick Take. David Floyer talking about Nimbus Data and their new high capacity, slightly lower performance, cost effective SSD. (upbeat music)

Published Date : Mar 23 2018

SUMMARY :

to another Wikibon Action Item Quick Take. So they are going to be very useful and their new high capacity, slightly lower performance,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
David FloyerPERSON

0.99+

Peter BurrisPERSON

0.99+

Steve MulaneyPERSON

0.99+

GeorgePERSON

0.99+

John CurrierPERSON

0.99+

Derek MonahanPERSON

0.99+

Justin SmithPERSON

0.99+

StevePERSON

0.99+

MexicoLOCATION

0.99+

George BuckmanPERSON

0.99+

AmazonORGANIZATION

0.99+

VMwareORGANIZATION

0.99+

StephenPERSON

0.99+

CiscoORGANIZATION

0.99+

Steve EleniPERSON

0.99+

Bobby WilloughbyPERSON

0.99+

millionsQUANTITY

0.99+

John FordPERSON

0.99+

Santa ClaraLOCATION

0.99+

20%QUANTITY

0.99+

MissouriLOCATION

0.99+

twenty-yearQUANTITY

0.99+

Luis CastilloPERSON

0.99+

SeattleLOCATION

0.99+

Ellie MaePERSON

0.99+

80 percentQUANTITY

0.99+

EuropeLOCATION

0.99+

10%QUANTITY

0.99+

25 yearsQUANTITY

0.99+

USLOCATION

0.99+

twenty yearsQUANTITY

0.99+

three monthsQUANTITY

0.99+

JeffPERSON

0.99+

80%QUANTITY

0.99+

John fritzPERSON

0.99+

JustinPERSON

0.99+

GoogleORGANIZATION

0.99+

North AmericaLOCATION

0.99+

JenniferPERSON

0.99+

AWSORGANIZATION

0.99+

Michael KeatonPERSON

0.99+

Santa Clara, CALOCATION

0.99+

Amazon Web ServicesORGANIZATION

0.99+

National InstrumentsORGANIZATION

0.99+

Jon FourierPERSON

0.99+

50%QUANTITY

0.99+

20 mileQUANTITY

0.99+

DavidPERSON

0.99+

Toby FosterPERSON

0.99+

hundred-percentQUANTITY

0.99+

fiveQUANTITY

0.99+

Silicon ValleyLOCATION

0.99+

PythonTITLE

0.99+

GartnerORGANIZATION

0.99+

11 yearsQUANTITY

0.99+

StaceyPERSON

0.99+

Palo AltoLOCATION

0.99+

next yearDATE

0.99+

two sidesQUANTITY

0.99+

18 months agoDATE

0.99+

two typesQUANTITY

0.99+

Andy JessePERSON

0.99+

Action Item Quick Take | David Floyer - Feb 2018


 

(groovy music) >> Hi, I'm Peter Burris, welcome to a Wikibon action item quick take. David Floyer, you and I visited Half Moon Bay this week for announcements, what happened? >> Well, there were a number of IBM, Spectrum, and NVMe over Fabric announcements, and they were, I thought, good. The first one was a broad range of Spectrum software announcments working on any hardware, not just IBM, and it's a good step towards the hyperconverged software-led service and environments that we've been talking about. The second, they filled in the IBM NAS gap with Spectrum NAS, so that's always a good thing to fill in. There's a lot of practical reasons for using that. The third is they announced an IBM 900 storage product with fantastic IO performance. 95 microsecond, including inline compression. And for the hardware people, that's really, really good. And the last one is, I thought the most interesting of all, which is a good IBM announcement on the commitment to NVME over fabrics. They announced a very fast solution with the POWER9 with gen four PCIE and the 900 storage, that's best of breed in terms of speed, and they guarantee that all of their current products would support NVMe over fabrics as it comes over in 2018 and some of 2019. So, a very good overall announcement, and puts IBM back into storage. >> Great, so a very aggressive announcement from IBM. Good to see them back in the storage world. This has been Peter Burris talking with David Floyer, and a Wikibon action item quick take. (groovy music)

Published Date : Feb 23 2018

SUMMARY :

and the 900 storage, and a Wikibon action item quick take.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
David FloyerPERSON

0.99+

Peter BurrisPERSON

0.99+

2018DATE

0.99+

Feb 2018DATE

0.99+

IBMORGANIZATION

0.99+

2019DATE

0.99+

thirdQUANTITY

0.98+

Half Moon BayLOCATION

0.98+

95 microsecondQUANTITY

0.97+

secondQUANTITY

0.94+

this weekDATE

0.93+

POWER9COMMERCIAL_ITEM

0.91+

first oneQUANTITY

0.91+

WikibonORGANIZATION

0.77+

NVMeORGANIZATION

0.71+

900COMMERCIAL_ITEM

0.65+

SpectrumORGANIZATION

0.63+

SpectrumTITLE

0.55+

PCIECOMMERCIAL_ITEM

0.49+

gen fourOTHER

0.38+

SpectrumCOMMERCIAL_ITEM

0.32+

David Floyer, Wikibon | Action Item Quick Take: Storage Networks, Feb 2018


 

>> Hi, I'm Peter Burris, and this is a Wikibon Action Item Quick Take. (techno music) David Floyer, lot of new opportunities for thinking about how we can spread data. That puts new types of pressure on networks. What's going on? >> So, what's interesting is the future of networks and in particular one type of network. So, if we generalize about networks you can have simplicity, which is N-F-V, for example, Network Function Virtualization is incredibly important for. You can have scale, reach, the number of different places that you place data and how you can have the same admin for that. And you can have performance. Those are three things and there's usually a trade-off between those. You can't ... very, very difficult to have all three. What's interesting is that Mellanox have defined one piece of that network, the storage network, as a place where performance is absolutely critical. And they've defined the storage network with an emphasis on this performance using ethernet. Why? Because now ethernet can offer the same point-to-point capabilities, no lost capabilities. The fastest switches are in ethernet now. They go up to 400 has been announced, which is much ... >> David: 400 ... >> Gigabits per second, which is much faster than anybody else for any other protocol. So, and the reason for, one of the major reasons for this is that volume is coming from the Cloud providers. So they are providing a statement that storage networks are different from other networks. They need to have very low latency, they need to have high bandwidth, they need to have no loss, they need this point-to-point capability so that things can be done very, very fast indeed. I think their vision of where storage networks go is very sound and that is what all storage vendors need to take heed of, and C-I-Os, C-T-Os need to take heed of, is that type of network is going to be what is in the Cloud and is going to come to the Enterprise Data Center very quickly. >> David Floyer, thank you very much. Bottom line, ethernet, storage area networks, segmentation, still going to happen. >> Yup. >> I'm Peter Burris, this has been a Wikibon Action Item Quick Take. (techno music)

Published Date : Feb 16 2018

SUMMARY :

and this is a Wikibon Action Item Quick Take. and how you can have the same admin for that. So, and the reason for, one of the major reasons for this David Floyer, thank you very much. this has been a Wikibon Action Item Quick Take.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Peter BurrisPERSON

0.99+

David FloyerPERSON

0.99+

Feb 2018DATE

0.99+

oneQUANTITY

0.98+

one pieceQUANTITY

0.98+

three thingsQUANTITY

0.97+

MellanoxORGANIZATION

0.97+

WikibonORGANIZATION

0.95+

one typeQUANTITY

0.95+

threeQUANTITY

0.93+

DavidPERSON

0.92+

secondQUANTITY

0.89+

up to 400QUANTITY

0.73+

I-OsCOMMERCIAL_ITEM

0.54+

-T-OsTITLE

0.52+

C-TITLE

0.52+

400QUANTITY

0.43+

CORGANIZATION

0.38+

Breaking Analysis: Databricks faces critical strategic decisions…here’s why


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Spark became a top level Apache project in 2014, and then shortly thereafter, burst onto the big data scene. Spark, along with the cloud, transformed and in many ways, disrupted the big data market. Databricks optimized its tech stack for Spark and took advantage of the cloud to really cleverly deliver a managed service that has become a leading AI and data platform among data scientists and data engineers. However, emerging customer data requirements are shifting into a direction that will cause modern data platform players generally and Databricks, specifically, we think, to make some key directional decisions and perhaps even reinvent themselves. Hello and welcome to this week's wikibon theCUBE Insights, powered by ETR. In this Breaking Analysis, we're going to do a deep dive into Databricks. We'll explore its current impressive market momentum. We're going to use some ETR survey data to show that, and then we'll lay out how customer data requirements are changing and what the ideal data platform will look like in the midterm future. We'll then evaluate core elements of the Databricks portfolio against that vision, and then we'll close with some strategic decisions that we think the company faces. And to do so, we welcome in our good friend, George Gilbert, former equities analyst, market analyst, and current Principal at TechAlpha Partners. George, good to see you. Thanks for coming on. >> Good to see you, Dave. >> All right, let me set this up. We're going to start by taking a look at where Databricks sits in the market in terms of how customers perceive the company and what it's momentum looks like. And this chart that we're showing here is data from ETS, the emerging technology survey of private companies. The N is 1,421. What we did is we cut the data on three sectors, analytics, database-data warehouse, and AI/ML. The vertical axis is a measure of customer sentiment, which evaluates an IT decision maker's awareness of the firm and the likelihood of engaging and/or purchase intent. The horizontal axis shows mindshare in the dataset, and we've highlighted Databricks, which has been a consistent high performer in this survey over the last several quarters. And as we, by the way, just as aside as we previously reported, OpenAI, which burst onto the scene this past quarter, leads all names, but Databricks is still prominent. You can see that the ETR shows some open source tools for reference, but as far as firms go, Databricks is very impressively positioned. Now, let's see how they stack up to some mainstream cohorts in the data space, against some bigger companies and sometimes public companies. This chart shows net score on the vertical axis, which is a measure of spending momentum and pervasiveness in the data set is on the horizontal axis. You can see that chart insert in the upper right, that informs how the dots are plotted, and net score against shared N. And that red dotted line at 40% indicates a highly elevated net score, anything above that we think is really, really impressive. And here we're just comparing Databricks with Snowflake, Cloudera, and Oracle. And that squiggly line leading to Databricks shows their path since 2021 by quarter. And you can see it's performing extremely well, maintaining an elevated net score and net range. Now it's comparable in the vertical axis to Snowflake, and it consistently is moving to the right and gaining share. Now, why did we choose to show Cloudera and Oracle? The reason is that Cloudera got the whole big data era started and was disrupted by Spark. And of course the cloud, Spark and Databricks and Oracle in many ways, was the target of early big data players like Cloudera. Take a listen to Cloudera CEO at the time, Mike Olson. This is back in 2010, first year of theCUBE, play the clip. >> Look, back in the day, if you had a data problem, if you needed to run business analytics, you wrote the biggest check you could to Sun Microsystems, and you bought a great big, single box, central server, and any money that was left over, you handed to Oracle for a database licenses and you installed that database on that box, and that was where you went for data. That was your temple of information. >> Okay? So Mike Olson implied that monolithic model was too expensive and inflexible, and Cloudera set out to fix that. But the best laid plans, as they say, George, what do you make of the data that we just shared? >> So where Databricks has really come up out of sort of Cloudera's tailpipe was they took big data processing, made it coherent, made it a managed service so it could run in the cloud. So it relieved customers of the operational burden. Where they're really strong and where their traditional meat and potatoes or bread and butter is the predictive and prescriptive analytics that building and training and serving machine learning models. They've tried to move into traditional business intelligence, the more traditional descriptive and diagnostic analytics, but they're less mature there. So what that means is, the reason you see Databricks and Snowflake kind of side by side is there are many, many accounts that have both Snowflake for business intelligence, Databricks for AI machine learning, where Snowflake, I'm sorry, where Databricks also did really well was in core data engineering, refining the data, the old ETL process, which kind of turned into ELT, where you loaded into the analytic repository in raw form and refine it. And so people have really used both, and each is trying to get into the other. >> Yeah, absolutely. We've reported on this quite a bit. Snowflake, kind of moving into the domain of Databricks and vice versa. And the last bit of ETR evidence that we want to share in terms of the company's momentum comes from ETR's Round Tables. They're run by Erik Bradley, and now former Gartner analyst and George, your colleague back at Gartner, Daren Brabham. And what we're going to show here is some direct quotes of IT pros in those Round Tables. There's a data science head and a CIO as well. Just make a few call outs here, we won't spend too much time on it, but starting at the top, like all of us, we can't talk about Databricks without mentioning Snowflake. Those two get us excited. Second comment zeros in on the flexibility and the robustness of Databricks from a data warehouse perspective. And then the last point is, despite competition from cloud players, Databricks has reinvented itself a couple of times over the year. And George, we're going to lay out today a scenario that perhaps calls for Databricks to do that once again. >> Their big opportunity and their big challenge for every tech company, it's managing a technology transition. The transition that we're talking about is something that's been bubbling up, but it's really epical. First time in 60 years, we're moving from an application-centric view of the world to a data-centric view, because decisions are becoming more important than automating processes. So let me let you sort of develop. >> Yeah, so let's talk about that here. We going to put up some bullets on precisely that point and the changing sort of customer environment. So you got IT stacks are shifting is George just said, from application centric silos to data centric stacks where the priority is shifting from automating processes to automating decision. You know how look at RPA and there's still a lot of automation going on, but from the focus of that application centricity and the data locked into those apps, that's changing. Data has historically been on the outskirts in silos, but organizations, you think of Amazon, think Uber, Airbnb, they're putting data at the core, and logic is increasingly being embedded in the data instead of the reverse. In other words, today, the data's locked inside the app, which is why you need to extract that data is sticking it to a data warehouse. The point, George, is we're putting forth this new vision for how data is going to be used. And you've used this Uber example to underscore the future state. Please explain? >> Okay, so this is hopefully an example everyone can relate to. The idea is first, you're automating things that are happening in the real world and decisions that make those things happen autonomously without humans in the loop all the time. So to use the Uber example on your phone, you call a car, you call a driver. Automatically, the Uber app then looks at what drivers are in the vicinity, what drivers are free, matches one, calculates an ETA to you, calculates a price, calculates an ETA to your destination, and then directs the driver once they're there. The point of this is that that cannot happen in an application-centric world very easily because all these little apps, the drivers, the riders, the routes, the fares, those call on data locked up in many different apps, but they have to sit on a layer that makes it all coherent. >> But George, so if Uber's doing this, doesn't this tech already exist? Isn't there a tech platform that does this already? >> Yes, and the mission of the entire tech industry is to build services that make it possible to compose and operate similar platforms and tools, but with the skills of mainstream developers in mainstream corporations, not the rocket scientists at Uber and Amazon. >> Okay, so we're talking about horizontally scaling across the industry, and actually giving a lot more organizations access to this technology. So by way of review, let's summarize the trend that's going on today in terms of the modern data stack that is propelling the likes of Databricks and Snowflake, which we just showed you in the ETR data and is really is a tailwind form. So the trend is toward this common repository for analytic data, that could be multiple virtual data warehouses inside of Snowflake, but you're in that Snowflake environment or Lakehouses from Databricks or multiple data lakes. And we've talked about what JP Morgan Chase is doing with the data mesh and gluing data lakes together, you've got various public clouds playing in this game, and then the data is annotated to have a common meaning. In other words, there's a semantic layer that enables applications to talk to the data elements and know that they have common and coherent meaning. So George, the good news is this approach is more effective than the legacy monolithic models that Mike Olson was talking about, so what's the problem with this in your view? >> So today's data platforms added immense value 'cause they connected the data that was previously locked up in these monolithic apps or on all these different microservices, and that supported traditional BI and AI/ML use cases. But now if we want to build apps like Uber or Amazon.com, where they've got essentially an autonomously running supply chain and e-commerce app where humans only care and feed it. But the thing is figuring out what to buy, when to buy, where to deploy it, when to ship it. We needed a semantic layer on top of the data. So that, as you were saying, the data that's coming from all those apps, the different apps that's integrated, not just connected, but it means the same. And the issue is whenever you add a new layer to a stack to support new applications, there are implications for the already existing layers, like can they support the new layer and its use cases? So for instance, if you add a semantic layer that embeds app logic with the data rather than vice versa, which we been talking about and that's been the case for 60 years, then the new data layer faces challenges that the way you manage that data, the way you analyze that data, is not supported by today's tools. >> Okay, so actually Alex, bring me up that last slide if you would, I mean, you're basically saying at the bottom here, today's repositories don't really do joins at scale. The future is you're talking about hundreds or thousands or millions of data connections, and today's systems, we're talking about, I don't know, 6, 8, 10 joins and that is the fundamental problem you're saying, is a new data error coming and existing systems won't be able to handle it? >> Yeah, one way of thinking about it is that even though we call them relational databases, when we actually want to do lots of joins or when we want to analyze data from lots of different tables, we created a whole new industry for analytic databases where you sort of mung the data together into fewer tables. So you didn't have to do as many joins because the joins are difficult and slow. And when you're going to arbitrarily join thousands, hundreds of thousands or across millions of elements, you need a new type of database. We have them, they're called graph databases, but to query them, you go back to the prerelational era in terms of their usability. >> Okay, so we're going to come back to that and talk about how you get around that problem. But let's first lay out what the ideal data platform of the future we think looks like. And again, we're going to come back to use this Uber example. In this graphic that George put together, awesome. We got three layers. The application layer is where the data products reside. The example here is drivers, rides, maps, routes, ETA, et cetera. The digital version of what we were talking about in the previous slide, people, places and things. The next layer is the data layer, that breaks down the silos and connects the data elements through semantics and everything is coherent. And then the bottom layers, the legacy operational systems feed that data layer. George, explain what's different here, the graph database element, you talk about the relational query capabilities, and why can't I just throw memory at solving this problem? >> Some of the graph databases do throw memory at the problem and maybe without naming names, some of them live entirely in memory. And what you're dealing with is a prerelational in-memory database system where you navigate between elements, and the issue with that is we've had SQL for 50 years, so we don't have to navigate, we can say what we want without how to get it. That's the core of the problem. >> Okay. So if I may, I just want to drill into this a little bit. So you're talking about the expressiveness of a graph. Alex, if you'd bring that back out, the fourth bullet, expressiveness of a graph database with the relational ease of query. Can you explain what you mean by that? >> Yeah, so graphs are great because when you can describe anything with a graph, that's why they're becoming so popular. Expressive means you can represent anything easily. They're conducive to, you might say, in a world where we now want like the metaverse, like with a 3D world, and I don't mean the Facebook metaverse, I mean like the business metaverse when we want to capture data about everything, but we want it in context, we want to build a set of digital twins that represent everything going on in the world. And Uber is a tiny example of that. Uber built a graph to represent all the drivers and riders and maps and routes. But what you need out of a database isn't just a way to store stuff and update stuff. You need to be able to ask questions of it, you need to be able to query it. And if you go back to prerelational days, you had to know how to find your way to the data. It's sort of like when you give directions to someone and they didn't have a GPS system and a mapping system, you had to give them turn by turn directions. Whereas when you have a GPS and a mapping system, which is like the relational thing, you just say where you want to go, and it spits out the turn by turn directions, which let's say, the car might follow or whoever you're directing would follow. But the point is, it's much easier in a relational database to say, "I just want to get these results. You figure out how to get it." The graph database, they have not taken over the world because in some ways, it's taking a 50 year leap backwards. >> Alright, got it. Okay. Let's take a look at how the current Databricks offerings map to that ideal state that we just laid out. So to do that, we put together this chart that looks at the key elements of the Databricks portfolio, the core capability, the weakness, and the threat that may loom. Start with the Delta Lake, that's the storage layer, which is great for files and tables. It's got true separation of compute and storage, I want you to double click on that George, as independent elements, but it's weaker for the type of low latency ingest that we see coming in the future. And some of the threats highlighted here. AWS could add transactional tables to S3, Iceberg adoption is picking up and could accelerate, that could disrupt Databricks. George, add some color here please? >> Okay, so this is the sort of a classic competitive forces where you want to look at, so what are customers demanding? What's competitive pressure? What are substitutes? Even what your suppliers might be pushing. Here, Delta Lake is at its core, a set of transactional tables that sit on an object store. So think of it in a database system, this is the storage engine. So since S3 has been getting stronger for 15 years, you could see a scenario where they add transactional tables. We have an open source alternative in Iceberg, which Snowflake and others support. But at the same time, Databricks has built an ecosystem out of tools, their own and others, that read and write to Delta tables, that's what makes the Delta Lake and ecosystem. So they have a catalog, the whole machine learning tool chain talks directly to the data here. That was their great advantage because in the past with Snowflake, you had to pull all the data out of the database before the machine learning tools could work with it, that was a major shortcoming. They fixed that. But the point here is that even before we get to the semantic layer, the core foundation is under threat. >> Yep. Got it. Okay. We got a lot of ground to cover. So we're going to take a look at the Spark Execution Engine next. Think of that as the refinery that runs really efficient batch processing. That's kind of what disrupted the DOOp in a large way, but it's not Python friendly and that's an issue because the data science and the data engineering crowd are moving in that direction, and/or they're using DBT. George, we had Tristan Handy on at Supercloud, really interesting discussion that you and I did. Explain why this is an issue for Databricks? >> So once the data lake was in place, what people did was they refined their data batch, and Spark has always had streaming support and it's gotten better. The underlying storage as we've talked about is an issue. But basically they took raw data, then they refined it into tables that were like customers and products and partners. And then they refined that again into what was like gold artifacts, which might be business intelligence metrics or dashboards, which were collections of metrics. But they were running it on the Spark Execution Engine, which it's a Java-based engine or it's running on a Java-based virtual machine, which means all the data scientists and the data engineers who want to work with Python are really working in sort of oil and water. Like if you get an error in Python, you can't tell whether the problems in Python or where it's in Spark. There's just an impedance mismatch between the two. And then at the same time, the whole world is now gravitating towards DBT because it's a very nice and simple way to compose these data processing pipelines, and people are using either SQL in DBT or Python in DBT, and that kind of is a substitute for doing it all in Spark. So it's under threat even before we get to that semantic layer, it so happens that DBT itself is becoming the authoring environment for the semantic layer with business intelligent metrics. But that's again, this is the second element that's under direct substitution and competitive threat. >> Okay, let's now move down to the third element, which is the Photon. Photon is Databricks' BI Lakehouse, which has integration with the Databricks tooling, which is very rich, it's newer. And it's also not well suited for high concurrency and low latency use cases, which we think are going to increasingly become the norm over time. George, the call out threat here is customers want to connect everything to a semantic layer. Explain your thinking here and why this is a potential threat to Databricks? >> Okay, so two issues here. What you were touching on, which is the high concurrency, low latency, when people are running like thousands of dashboards and data is streaming in, that's a problem because SQL data warehouse, the query engine, something like that matures over five to 10 years. It's one of these things, the joke that Andy Jassy makes just in general, he's really talking about Azure, but there's no compression algorithm for experience. The Snowflake guy started more than five years earlier, and for a bunch of reasons, that lead is not something that Databricks can shrink. They'll always be behind. So that's why Snowflake has transactional tables now and we can get into that in another show. But the key point is, so near term, it's struggling to keep up with the use cases that are core to business intelligence, which is highly concurrent, lots of users doing interactive query. But then when you get to a semantic layer, that's when you need to be able to query data that might have thousands or tens of thousands or hundreds of thousands of joins. And that's a SQL query engine, traditional SQL query engine is just not built for that. That's the core problem of traditional relational databases. >> Now this is a quick aside. We always talk about Snowflake and Databricks in sort of the same context. We're not necessarily saying that Snowflake is in a position to tackle all these problems. We'll deal with that separately. So we don't mean to imply that, but we're just sort of laying out some of the things that Snowflake or rather Databricks customers we think, need to be thinking about and having conversations with Databricks about and we hope to have them as well. We'll come back to that in terms of sort of strategic options. But finally, when come back to the table, we have Databricks' AI/ML Tool Chain, which has been an awesome capability for the data science crowd. It's comprehensive, it's a one-stop shop solution, but the kicker here is that it's optimized for supervised model building. And the concern is that foundational models like GPT could cannibalize the current Databricks tooling, but George, can't Databricks, like other software companies, integrate foundation model capabilities into its platform? >> Okay, so the sound bite answer to that is sure, IBM 3270 terminals could call out to a graphical user interface when they're running on the XT terminal, but they're not exactly good citizens in that world. The core issue is Databricks has this wonderful end-to-end tool chain for training, deploying, monitoring, running inference on supervised models. But the paradigm there is the customer builds and trains and deploys each model for each feature or application. In a world of foundation models which are pre-trained and unsupervised, the entire tool chain is different. So it's not like Databricks can junk everything they've done and start over with all their engineers. They have to keep maintaining what they've done in the old world, but they have to build something new that's optimized for the new world. It's a classic technology transition and their mentality appears to be, "Oh, we'll support the new stuff from our old stuff." Which is suboptimal, and as we'll talk about, their biggest patron and the company that put them on the map, Microsoft, really stopped working on their old stuff three years ago so that they could build a new tool chain optimized for this new world. >> Yeah, and so let's sort of close with what we think the options are and decisions that Databricks has for its future architecture. They're smart people. I mean we've had Ali Ghodsi on many times, super impressive. I think they've got to be keenly aware of the limitations, what's going on with foundation models. But at any rate, here in this chart, we lay out sort of three scenarios. One is re-architect the platform by incrementally adopting new technologies. And example might be to layer a graph query engine on top of its stack. They could license key technologies like graph database, they could get aggressive on M&A and buy-in, relational knowledge graphs, semantic technologies, vector database technologies. George, as David Floyer always says, "A lot of ways to skin a cat." We've seen companies like, even think about EMC maintained its relevance through M&A for many, many years. George, give us your thought on each of these strategic options? >> Okay, I find this question the most challenging 'cause remember, I used to be an equity research analyst. I worked for Frank Quattrone, we were one of the top tech shops in the banking industry, although this is 20 years ago. But the M&A team was the top team in the industry and everyone wanted them on their side. And I remember going to meetings with these CEOs, where Frank and the bankers would say, "You want us for your M&A work because we can do better." And they really could do better. But in software, it's not like with EMC in hardware because with hardware, it's easier to connect different boxes. With software, the whole point of a software company is to integrate and architect the components so they fit together and reinforce each other, and that makes M&A harder. You can do it, but it takes a long time to fit the pieces together. Let me give you examples. If they put a graph query engine, let's say something like TinkerPop, on top of, I don't even know if it's possible, but let's say they put it on top of Delta Lake, then you have this graph query engine talking to their storage layer, Delta Lake. But if you want to do analysis, you got to put the data in Photon, which is not really ideal for highly connected data. If you license a graph database, then most of your data is in the Delta Lake and how do you sync it with the graph database? If you do sync it, you've got data in two places, which kind of defeats the purpose of having a unified repository. I find this semantic layer option in number three actually more promising, because that's something that you can layer on top of the storage layer that you have already. You just have to figure out then how to have your query engines talk to that. What I'm trying to highlight is, it's easy as an analyst to say, "You can buy this company or license that technology." But the really hard work is making it all work together and that is where the challenge is. >> Yeah, and well look, I thank you for laying that out. We've seen it, certainly Microsoft and Oracle. I guess you might argue that well, Microsoft had a monopoly in its desktop software and was able to throw off cash for a decade plus while it's stock was going sideways. Oracle had won the database wars and had amazing margins and cash flow to be able to do that. Databricks isn't even gone public yet, but I want to close with some of the players to watch. Alex, if you'd bring that back up, number four here. AWS, we talked about some of their options with S3 and it's not just AWS, it's blob storage, object storage. Microsoft, as you sort of alluded to, was an early go-to market channel for Databricks. We didn't address that really. So maybe in the closing comments we can. Google obviously, Snowflake of course, we're going to dissect their options in future Breaking Analysis. Dbt labs, where do they fit? Bob Muglia's company, Relational.ai, why are these players to watch George, in your opinion? >> So everyone is trying to assemble and integrate the pieces that would make building data applications, data products easy. And the critical part isn't just assembling a bunch of pieces, which is traditionally what AWS did. It's a Unix ethos, which is we give you the tools, you put 'em together, 'cause you then have the maximum choice and maximum power. So what the hyperscalers are doing is they're taking their key value stores, in the case of ASW it's DynamoDB, in the case of Azure it's Cosmos DB, and each are putting a graph query engine on top of those. So they have a unified storage and graph database engine, like all the data would be collected in the key value store. Then you have a graph database, that's how they're going to be presenting a foundation for building these data apps. Dbt labs is putting a semantic layer on top of data lakes and data warehouses and as we'll talk about, I'm sure in the future, that makes it easier to swap out the underlying data platform or swap in new ones for specialized use cases. Snowflake, what they're doing, they're so strong in data management and with their transactional tables, what they're trying to do is take in the operational data that used to be in the province of many state stores like MongoDB and say, "If you manage that data with us, it'll be connected to your analytic data without having to send it through a pipeline." And that's hugely valuable. Relational.ai is the wildcard, 'cause what they're trying to do, it's almost like a holy grail where you're trying to take the expressiveness of connecting all your data in a graph but making it as easy to query as you've always had it in a SQL database or I should say, in a relational database. And if they do that, it's sort of like, it'll be as easy to program these data apps as a spreadsheet was compared to procedural languages, like BASIC or Pascal. That's the implications of Relational.ai. >> Yeah, and again, we talked before, why can't you just throw this all in memory? We're talking in that example of really getting down to differences in how you lay the data out on disk in really, new database architecture, correct? >> Yes. And that's why it's not clear that you could take a data lake or even a Snowflake and why you can't put a relational knowledge graph on those. You could potentially put a graph database, but it'll be compromised because to really do what Relational.ai has done, which is the ease of Relational on top of the power of graph, you actually need to change how you're storing your data on disk or even in memory. So you can't, in other words, it's not like, oh we can add graph support to Snowflake, 'cause if you did that, you'd have to change, or in your data lake, you'd have to change how the data is physically laid out. And then that would break all the tools that talk to that currently. >> What in your estimation, is the timeframe where this becomes critical for a Databricks and potentially Snowflake and others? I mentioned earlier midterm, are we talking three to five years here? Are we talking end of decade? What's your radar say? >> I think something surprising is going on that's going to sort of come up the tailpipe and take everyone by storm. All the hype around business intelligence metrics, which is what we used to put in our dashboards where bookings, billings, revenue, customer, those things, those were the key artifacts that used to live in definitions in your BI tools, and DBT has basically created a standard for defining those so they live in your data pipeline or they're defined in their data pipeline and executed in the data warehouse or data lake in a shared way, so that all tools can use them. This sounds like a digression, it's not. All this stuff about data mesh, data fabric, all that's going on is we need a semantic layer and the business intelligence metrics are defining common semantics for your data. And I think we're going to find by the end of this year, that metrics are how we annotate all our analytic data to start adding common semantics to it. And we're going to find this semantic layer, it's not three to five years off, it's going to be staring us in the face by the end of this year. >> Interesting. And of course SVB today was shut down. We're seeing serious tech headwinds, and oftentimes in these sort of downturns or flat turns, which feels like this could be going on for a while, we emerge with a lot of new players and a lot of new technology. George, we got to leave it there. Thank you to George Gilbert for excellent insights and input for today's episode. I want to thank Alex Myerson who's on production and manages the podcast, of course Ken Schiffman as well. Kristin Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our EIC over at Siliconangle.com, he does some great editing. Remember all these episodes, they're available as podcasts. Wherever you listen, all you got to do is search Breaking Analysis Podcast, we publish each week on wikibon.com and siliconangle.com, or you can email me at David.Vellante@siliconangle.com, or DM me @DVellante. Comment on our LinkedIn post, and please do check out ETR.ai, great survey data, enterprise tech focus, phenomenal. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, and we'll see you next time on Breaking Analysis.

Published Date : Mar 10 2023

SUMMARY :

bringing you data-driven core elements of the Databricks portfolio and pervasiveness in the data and that was where you went for data. and Cloudera set out to fix that. the reason you see and the robustness of Databricks and their big challenge and the data locked into in the real world and decisions Yes, and the mission of that is propelling the likes that the way you manage that data, is the fundamental problem because the joins are difficult and slow. and connects the data and the issue with that is the fourth bullet, expressiveness and it spits out the and the threat that may loom. because in the past with Snowflake, Think of that as the refinery So once the data lake was in place, George, the call out threat here But the key point is, in sort of the same context. and the company that put One is re-architect the platform and architect the components some of the players to watch. in the case of ASW it's DynamoDB, and why you can't put a relational and executed in the data and manages the podcast, of

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Alex MyersonPERSON

0.99+

David FloyerPERSON

0.99+

Mike OlsonPERSON

0.99+

2014DATE

0.99+

George GilbertPERSON

0.99+

Dave VellantePERSON

0.99+

GeorgePERSON

0.99+

Cheryl KnightPERSON

0.99+

Ken SchiffmanPERSON

0.99+

Andy JassyPERSON

0.99+

OracleORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

Erik BradleyPERSON

0.99+

DavePERSON

0.99+

UberORGANIZATION

0.99+

thousandsQUANTITY

0.99+

Sun MicrosystemsORGANIZATION

0.99+

50 yearsQUANTITY

0.99+

AWSORGANIZATION

0.99+

Bob MugliaPERSON

0.99+

GartnerORGANIZATION

0.99+

AirbnbORGANIZATION

0.99+

60 yearsQUANTITY

0.99+

MicrosoftORGANIZATION

0.99+

Ali GhodsiPERSON

0.99+

2010DATE

0.99+

DatabricksORGANIZATION

0.99+

Kristin MartinPERSON

0.99+

Rob HofPERSON

0.99+

threeQUANTITY

0.99+

15 yearsQUANTITY

0.99+

Databricks'ORGANIZATION

0.99+

two placesQUANTITY

0.99+

BostonLOCATION

0.99+

Tristan HandyPERSON

0.99+

M&AORGANIZATION

0.99+

Frank QuattronePERSON

0.99+

second elementQUANTITY

0.99+

Daren BrabhamPERSON

0.99+

TechAlpha PartnersORGANIZATION

0.99+

third elementQUANTITY

0.99+

SnowflakeORGANIZATION

0.99+

50 yearQUANTITY

0.99+

40%QUANTITY

0.99+

ClouderaORGANIZATION

0.99+

Palo AltoLOCATION

0.99+

five yearsQUANTITY

0.99+

Breaking Analysis: re:Invent 2022 marks the next chapter in data & cloud


 

from the cube studios in Palo Alto in Boston bringing you data-driven insights from the cube and ETR this is breaking analysis with Dave vellante the ascendancy of AWS under the leadership of Andy jassy was marked by a tsunami of data and corresponding cloud services to leverage that data now those Services they mainly came in the form of Primitives I.E basic building blocks that were used by developers to create more sophisticated capabilities AWS in the 2020s being led by CEO Adam solipski will be marked by four high-level Trends in our opinion one A Rush of data that will dwarf anything we've previously seen two a doubling or even tripling down on the basic elements of cloud compute storage database security Etc three a greater emphasis on end-to-end integration of AWS services to simplify and accelerate customer adoption of cloud and four significantly deeper business integration of cloud Beyond it as an underlying element of organizational operations hello and welcome to this week's wikibon Cube insights powered by ETR in this breaking analysis we extract and analyze nuggets from John furrier's annual sit-down with the CEO of AWS we'll share data from ETR and other sources to set the context for the market and competition in cloud and we'll give you our glimpse of what to expect at re invent in 2022. now before we get into the core of our analysis Alibaba has announced earnings they always announced after the big three you know a month later and we've updated our Q3 slash November hyperscale Computing forecast for the year as seen here and we're going to spend a lot of time on this as most of you have seen the bulk of it already but suffice to say alibaba's cloud business is hitting that same macro Trend that we're seeing across the board but a more substantial slowdown than we expected and more substantial than its peers they're facing China headwinds they've been restructuring its Cloud business and it's led to significantly slower growth uh in in the you know low double digits as opposed to where we had it at 15 this puts our year-end estimates for 2022 Revenue at 161 billion still a healthy 34 growth with AWS surpassing 80 billion in 2022 Revenue now on a related note one of the big themes in Cloud that we've been reporting on is how customers are optimizing their Cloud spend it's a technique that they use and when the economy looks a little shaky and here's a graphic that we pulled from aws's website which shows the various pricing plans at a high level as you know they're much more granular than that and more sophisticated but Simplicity we'll just keep it here basically there are four levels first one here is on demand I.E pay by the drink now we're going to jump down to what we've labeled as number two spot instances that's like the right place at the right time I can use that extra capacity in the moment the third is reserved instances or RIS where I pay up front to get a discount and the fourth is sort of optimized savings plans where customers commit to a one or three year term and for a better price now you'll notice we labeled the choices in a different order than AWS presented them on its website and that's because we believe that the order that we chose is the natural progression for customers this started on demand they maybe experiment with spot instances they move to reserve instances when the cloud bill becomes too onerous and if you're large enough you lock in for one or three years okay the interesting thing is the order in which AWS presents them we believe that on-demand accounts for the majority of AWS customer spending now if you think about it those on-demand customers they're also at risk customers yeah sure there's some switching costs like egress and learning curve but many customers they have multiple clouds and they've got experience and so they're kind of already up to a learning curve and if you're not married to AWS with a longer term commitment there's less friction to switch now AWS here presents the most attractive plan from a financial perspective second after on demand and it's also the plan that makes the greatest commitment from a lock-in standpoint now In fairness to AWS it's also true that there is a trend towards subscription-based pricing and we have some data on that this chart is from an ETR drill down survey the end is 300. pay attention to the bars on the right the left side is sort of busy but the pink is subscription and you can see the trend upward the light blue is consumption based or on demand based pricing and you can see there's a steady Trend toward subscription now we'll dig into this in a later episode of Breaking analysis but we'll share with you a little some tidbits with the data that ETR provides you can select which segment is and pass or you can go up the stack Etc but so when you choose is and paths 44 of customers either prefer or are required to use on-demand pricing whereas around 40 percent of customers say they either prefer or are required to use subscription pricing again that's for is so now the further mu you move up the stack the more prominent subscription pricing becomes often with sixty percent or more for the software-based offerings that require or prefer subscription and interestingly cyber security tracks along with software at around 60 percent that that prefer subscription it's likely because as with software you're not shutting down your cyber protection on demand all right let's get into the expectations for reinvent and we're going to start with an observation in data in this 2018 book seeing digital author David michella made the point that whereas most companies apply data on the periphery of their business kind of as an add-on function successful data companies like Google and Amazon and Facebook have placed data at the core of their operations they've operationalized data and they apply machine intelligence to that foundational element why is this the fact is it's not easy to do what the internet Giants have done very very sophisticated engineering and and and cultural discipline and this brings us to reinvent 2022 in the future of cloud machine learning and AI will increasingly be infused into applications we believe the data stack and the application stack are coming together as organizations build data apps and data products data expertise is moving from the domain of Highly specialized individuals to Everyday business people and we are just at the cusp of this trend this will in our view be a massive theme of not only re invent 22 but of cloud in the 2020s the vision of data mesh We Believe jamachtagani's principles will be realized in this decade now what we'd like to do now is share with you a glimpse of the thinking of Adam solipsky from his sit down with John Furrier each year John has a one-on-one conversation with the CEO of AWS AWS he's been doing this for years and the outcome is a better understanding of the directional thinking of the leader of the number one Cloud platform so we're now going to share some direct quotes I'm going to run through them with some commentary and then bring in some ETR data to analyze the market implications here we go this is from solipsky quote I.T in general and data are moving from departments into becoming intrinsic parts of how businesses function okay we're talking here about deeper business integration let's go on to the next one quote in time we'll stop talking about people who have the word analyst we inserted data he meant data data analyst in their title rather will have hundreds of millions of people who analyze data as part of their day-to-day job most of whom will not have the word analyst anywhere in their title we're talking about graphic designers and pizza shop owners and product managers and data scientists as well he threw that in I'm going to come back to that very interesting so he's talking about here about democratizing data operationalizing data next quote customers need to be able to take an end-to-end integrated view of their entire data Journey from ingestion to storage to harmonizing the data to being able to query it doing business Intelligence and human-based Analysis and being able to collaborate and share data and we've been putting together we being Amazon together a broad Suite of tools from database to analytics to business intelligence to help customers with that and this last statement it's true Amazon has a lot of tools and you know they're beginning to become more and more integrated but again under jassy there was not a lot of emphasis on that end-to-end integrated view we believe it's clear from these statements that solipsky's customer interactions are leading him to underscore that the time has come for this capability okay continuing quote if you have data in one place you shouldn't have to move it every time you want to analyze that data couldn't agree more it would be much better if you could leave that data in place avoid all the ETL which has become a nasty three-letter word more and more we're building capabilities where you can query that data in place end quote okay this we see a lot in the marketplace Oracle with mySQL Heatwave the entire Trend toward converge database snowflake [ __ ] extending their platforms into transaction and analytics respectively and so forth a lot of the partners are are doing things as well in that vein let's go into the next quote the other phenomenon is infusing machine learning into all those capabilities yes the comments from the michelleographic come into play here infusing Ai and machine intelligence everywhere next one quote it's not a data Cloud it's not a separate Cloud it's a series of broad but integrated capabilities to help you manage the end-to-end life cycle of your data there you go we AWS are the cloud we're going to come back to that in a moment as well next set of comments around data very interesting here quote data governance is a huge issue really what customers need is to find the right balance of their organization between access to data and control and if you provide too much access then you're nervous that your data is going to end up in places that it shouldn't shouldn't be viewed by people who shouldn't be viewing it and you feel like you lack security around that data and by the way what happens then is people overreact and they lock it down so that almost nobody can see it it's those handcuffs there's data and asset are reliability we've talked about that for years okay very well put by solipsky but this is a gap in our in our view within AWS today and we're we're hoping that they close it at reinvent it's not easy to share data in a safe way within AWS today outside of your organization so we're going to look for that at re invent 2022. now all this leads to the following statement by solipsky quote data clean room is a really interesting area and I think there's a lot of different Industries in which clean rooms are applicable I think that clean rooms are an interesting way of enabling multiple parties to share and collaborate on the data while completely respecting each party's rights and their privacy mandate okay again this is a gap currently within AWS today in our view and we know snowflake is well down this path and databricks with Delta sharing is also on this curve so AWS has to address this and demonstrate this end-to-end data integration and the ability to safely share data in our view now let's bring in some ETR spending data to put some context around these comments with reference points in the form of AWS itself and its competitors and partners here's a chart from ETR that shows Net score or spending momentum on the x-axis an overlap or pervasiveness in the survey um sorry let me go back up the net scores on the y-axis and overlap or pervasiveness in the survey is on the x-axis so spending momentum by pervasiveness okay or should have share within the data set the table that's inserted there with the Reds and the greens that informs us to how the dots are positioned so it's Net score and then the shared ends are how the plots are determined now we've filtered the data on the three big data segments analytics database and machine learning slash Ai and we've only selected one company with fewer than 100 ends in the survey and that's databricks you'll see why in a moment the red dotted line indicates highly elevated customer spend at 40 percent now as usual snowflake outperforms all players on the y-axis with a Net score of 63 percent off the charts all three big U.S cloud players are above that line with Microsoft and AWS dominating the x-axis so very impressive that they have such spending momentum and they're so large and you see a number of other emerging data players like rafana and datadog mongodbs there in the mix and then more established players data players like Splunk and Tableau now you got Cisco who's gonna you know it's a it's a it's a adjacent to their core networking business but they're definitely into you know the analytics business then the really established players in data like Informatica IBM and Oracle all with strong presence but you'll notice in the red from the momentum standpoint now what you're going to see in a moment is we put red highlights around databricks Snowflake and AWS why let's bring that back up and we'll explain so there's no way let's bring that back up Alex if you would there's no way AWS is going to hit the brakes on innovating at the base service level what we call Primitives earlier solipsky told Furrier as much in their sit down that AWS will serve the technical user and data science Community the traditional domain of data bricks and at the same time address the end-to-end integration data sharing and business line requirements that snowflake is positioned to serve now people often ask Snowflake and databricks how will you compete with the likes of AWS and we know the answer focus on data exclusively they have their multi-cloud plays perhaps the more interesting question is how will AWS compete with the likes of Specialists like Snowflake and data bricks and the answer is depicted here in this chart AWS is going to serve both the technical and developer communities and the data science audience and through end-to-end Integrations and future services that simplify the data Journey they're going to serve the business lines as well but the Nuance is in all the other dots in the hundreds or hundreds of thousands that are not shown here and that's the AWS ecosystem you can see AWS has earned the status of the number one Cloud platform that everyone wants to partner with as they say it has over a hundred thousand partners and that ecosystem combined with these capabilities that we're discussing well perhaps behind in areas like data sharing and integrated governance can wildly succeed by offering the capabilities and leveraging its ecosystem now for their part the snowflakes of the world have to stay focused on the mission build the best products possible and develop their own ecosystems to compete and attract the Mind share of both developers and business users and that's why it's so interesting to hear solipski basically say it's not a separate Cloud it's a set of integrated Services well snowflake is in our view building a super cloud on top of AWS Azure and Google when great products meet great sales and marketing good things can happen so this will be really fun to watch what AWS announces in this area at re invent all right one other topic that solipsky talked about was the correlation between serverless and container adoption and you know I don't know if this gets into there certainly their hybrid place maybe it starts to get into their multi-cloud we'll see but we have some data on this so again we're talking about the correlation between serverless and container adoption but before we get into that let's go back to 2017 and listen to what Andy jassy said on the cube about serverless play the clip very very earliest days of AWS Jeff used to say a lot if I were starting Amazon today I'd have built it on top of AWS we didn't have all the capability and all the functionality at that very moment but he knew what was coming and he saw what people were still able to accomplish even with where the services were at that point I think the same thing is true here with Lambda which is I think if Amazon were starting today it's a given they would build it on the cloud and I think we with a lot of the applications that comprise Amazon's consumer business we would build those on on our serverless capabilities now we still have plenty of capabilities and features and functionality we need to add to to Lambda and our various serverless services so that may not be true from the get-go right now but I think if you look at the hundreds of thousands of customers who are building on top of Lambda and lots of real applications you know finra has built a good chunk of their market watch application on top of Lambda and Thompson Reuters has built you know one of their key analytics apps like people are building real serious things on top of Lambda and the pace of iteration you'll see there will increase as well and I really believe that to be true over the next year or two so years ago when Jesse gave a road map that serverless was going to be a key developer platform going forward and so lipsky referenced the correlation between serverless and containers in the Furrier sit down so we wanted to test that within the ETR data set now here's a screen grab of The View across 1300 respondents from the October ETR survey and what we've done here is we've isolated on the cloud computing segment okay so you can see right there cloud computing segment now we've taken the functions from Google AWS Lambda and Microsoft Azure functions all the serverless offerings and we've got Net score on the vertical axis we've got presence in the data set oh by the way 440 by the way is highly elevated remember that and then we've got on the horizontal axis we have the presence in the data center overlap okay that's relative to each other so remember 40 all these guys are above that 40 mark okay so you see that now what we're going to do this is just for serverless and what we're going to do is we're going to turn on containers to see the correlation and see what happens so watch what happens when we click on container boom everything moves to the right you can see all three move to the right Google drops a little bit but all the others now the the filtered end drops as well so you don't have as many people that are aggressively leaning into both but all three move to the right so watch again containers off and then containers on containers off containers on so you can see a really major correlation between containers and serverless okay so to get a better understanding of what that means I call my friend and former Cube co-host Stu miniman what he said was people generally used to think of VMS containers and serverless as distinctly different architectures but the lines are beginning to blur serverless makes things simpler for developers who don't want to worry about underlying infrastructure as solipsky and the data from ETR indicate serverless and containers are coming together but as Stu and I discussed there's a spectrum where on the left you have kind of native Cloud VMS in the middle you got AWS fargate and in the rightmost anchor is Lambda AWS Lambda now traditionally in the cloud if you wanted to use containers developers would have to build a container image they have to select and deploy the ec2 images that they or instances that they wanted to use they have to allocate a certain amount of memory and then fence off the apps in a virtual machine and then run the ec2 instances against the apps and then pay for all those ec2 resources now with AWS fargate you can run containerized apps with less infrastructure management but you still have some you know things that you can you can you can do with the with the infrastructure so with fargate what you do is you'd build the container images then you'd allocate your memory and compute resources then run the app and pay for the resources only when they're used so fargate lets you control the runtime environment while at the same time simplifying the infrastructure management you gotta you don't have to worry about isolating the app and other stuff like choosing server types and patching AWS does all that for you then there's Lambda with Lambda you don't have to worry about any of the underlying server infrastructure you're just running code AS functions so the developer spends their time worrying about the applications and the functions that you're calling the point is there's a movement and we saw in the data towards simplifying the development environment and allowing the cloud vendor AWS in this case to do more of the underlying management now some folks will still want to turn knobs and dials but increasingly we're going to see more higher level service adoption now re invent is always a fire hose of content so let's do a rapid rundown of what to expect we talked about operate optimizing data and the organization we talked about Cloud optimization there'll be a lot of talk on the show floor about best practices and customer sharing data solipsky is leading AWS into the next phase of growth and that means moving beyond I.T transformation into deeper business integration and organizational transformation not just digital transformation organizational transformation so he's leading a multi-vector strategy serving the traditional peeps who want fine-grained access to core services so we'll see continued Innovation compute storage AI Etc and simplification through integration and horizontal apps further up to stack Amazon connect is an example that's often cited now as we've reported many times databricks is moving from its stronghold realm of data science into business intelligence and analytics where snowflake is coming from its data analytics stronghold and moving into the world of data science AWS is going down a path of snowflake meet data bricks with an underlying cloud is and pass layer that puts these three companies on a very interesting trajectory and you can expect AWS to go right after the data sharing opportunity and in doing so it will have to address data governance they go hand in hand okay price performance that is a topic that will never go away and it's something that we haven't mentioned today silicon it's a it's an area we've covered extensively on breaking analysis from Nitro to graviton to the AWS acquisition of Annapurna its secret weapon new special specialized capabilities like inferential and trainium we'd expect something more at re invent maybe new graviton instances David floyer our colleague said he's expecting at some point a complete system on a chip SOC from AWS and maybe an arm-based server to eventually include high-speed cxl connections to devices and memories all to address next-gen applications data intensive applications with low power requirements and lower cost overall now of course every year Swami gives his usual update on machine learning and AI building on Amazon's years of sagemaker innovation perhaps a focus on conversational AI or a better support for vision and maybe better integration across Amazon's portfolio of you know large language models uh neural networks generative AI really infusing AI everywhere of course security always high on the list that reinvent and and Amazon even has reinforce a conference dedicated to it uh to security now here we'd like to see more on supply chain security and perhaps how AWS can help there as well as tooling to make the cio's life easier but the key so far is AWS is much more partner friendly in the security space than say for instance Microsoft traditionally so firms like OCTA and crowdstrike in Palo Alto have plenty of room to play in the AWS ecosystem we'd expect of course to hear something about ESG it's an important topic and hopefully how not only AWS is helping the environment that's important but also how they help customers save money and drive inclusion and diversity again very important topics and finally come back to it reinvent is an ecosystem event it's the Super Bowl of tech events and the ecosystem will be out in full force every tech company on the planet will have a presence and the cube will be featuring many of the partners from the serial floor as well as AWS execs and of course our own independent analysis so you'll definitely want to tune into thecube.net and check out our re invent coverage we start Monday evening and then we go wall to wall through Thursday hopefully my voice will come back we have three sets at the show and our entire team will be there so please reach out or stop by and say hello all right we're going to leave it there for today many thanks to Stu miniman and David floyer for the input to today's episode of course John Furrier for extracting the signal from the noise and a sit down with Adam solipski thanks to Alex Meyerson who was on production and manages the podcast Ken schiffman as well Kristen Martin and Cheryl Knight helped get the word out on social and of course in our newsletters Rob hoef is our editor-in-chief over at siliconangle does some great editing thank thanks to all of you remember all these episodes are available as podcasts wherever you listen you can pop in the headphones go for a walk just search breaking analysis podcast I published each week on wikibon.com at siliconangle.com or you can email me at david.valante at siliconangle.com or DM me at di vallante or please comment on our LinkedIn posts and do check out etr.ai for the best survey data in the Enterprise Tech business this is Dave vellante for the cube insights powered by ETR thanks for watching we'll see it reinvent or we'll see you next time on breaking analysis [Music]

Published Date : Nov 26 2022

SUMMARY :

so now the further mu you move up the

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
David michellaPERSON

0.99+

Alex MeyersonPERSON

0.99+

Cheryl KnightPERSON

0.99+

AWSORGANIZATION

0.99+

AlibabaORGANIZATION

0.99+

oneQUANTITY

0.99+

Dave vellantePERSON

0.99+

David floyerPERSON

0.99+

Kristen MartinPERSON

0.99+

JohnPERSON

0.99+

sixty percentQUANTITY

0.99+

AmazonORGANIZATION

0.99+

Adam solipskiPERSON

0.99+

John FurrierPERSON

0.99+

MicrosoftORGANIZATION

0.99+

2022DATE

0.99+

Andy jassyPERSON

0.99+

GoogleORGANIZATION

0.99+

OracleORGANIZATION

0.99+

FacebookORGANIZATION

0.99+

hundredsQUANTITY

0.99+

2017DATE

0.99+

Palo AltoLOCATION

0.99+

40 percentQUANTITY

0.99+

alibabaORGANIZATION

0.99+

LambdaTITLE

0.99+

63 percentQUANTITY

0.99+

1300 respondentsQUANTITY

0.99+

Super BowlEVENT

0.99+

80 billionQUANTITY

0.99+

John furrierPERSON

0.99+

ThursdayDATE

0.99+

CiscoORGANIZATION

0.99+

three yearsQUANTITY

0.99+

Monday eveningDATE

0.99+

JessePERSON

0.99+

Stu minimanPERSON

0.99+

siliconangle.comOTHER

0.99+

OctoberDATE

0.99+

thecube.netOTHER

0.99+

fourthQUANTITY

0.99+

a month laterDATE

0.99+

thirdQUANTITY

0.99+

hundreds of thousandsQUANTITY

0.99+

fargateORGANIZATION

0.99+

Breaking Analysis: VMware Explore 2022 will mark the start of a Supercloud journey


 

>> From the Cube studios in Palo Alto and Boston, bringing you data driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. >> While the precise direction of VMware's future is unknown, given the plan Broadcom acquisition, one thing is clear. The topic of what Broadcom plans will not be the main focus of the agenda at the upcoming VMware Explore event next week in San Francisco. We believe that despite any uncertainty, VMware will lay out for its customers what it sees as its future. And that future is multi-cloud or cross-cloud services, what we call Supercloud. Hello, and welcome to this week's Wikibon Cube Insights powered by ETR. In this breaking analysis, we drill into the latest survey data on VMware from ETR. And we'll share with you the next iteration of the Supercloud definition based on feedback from dozens of contributors. And we'll give you our take on what to expect next week at VMware Explorer 2022. Well, VMware is maturing. You can see it in the numbers. VMware had a solid quarter just this week, which was announced beating earnings and growing the top line by 6%. But it's clear from its financials and the ETR data that we're showing here that VMware's Halcion glory days are behind it. This chart shows the spending profile from ETR's July survey of nearly 1500 IT buyers and CIOs. The survey included 722 VMware customers with the green bars showing elevated spending momentum, ie: growth, either new or growing at more than 6%. And the red bars show lower spending, either down 6% or worse or defections. The gray bars, that's the flat spending crowd, and it really tells a story. Look, nobody's throwing away their VMware platforms. They're just not investing as rapidly as in previous years. The blue line shows net score or spending momentum and subtracts the reds from the greens. The yellow line shows market penetration or pervasiveness in the survey. So the data is pretty clear. It's steady, but it's not remarkable. Now, the timing of the acquisition, quite rightly, is quite good, I would say. Now, this next chart shows the net score and pervasiveness juxtaposed on an XY graph and breaks down the VMware portfolio in those dimensions, the product portfolio. And you can see the dominance of respondents citing VMware as the platform. They might not know exactly which services they use, but they just respond VMware. That's on the X axis. You can see it way to the right. And the spending momentum or the net score is on the Y axis. That red dotted line at 4%, that indicates elevated levels and only VMware cloud on AWS is above that line. Notably, Tanzu has jumped up significantly from previous quarters, with the rest of the portfolio showing steady, as you would expect from a maturing platform. Only carbon black is hovering in the red zone, kind of ironic given the name. We believe that VMware is going to be a major player in cross cloud services, what we refer to as Supercloud. For months, we've been refining the concept and the definition. At Supercloud '22, we had discussions with more than 30 technology and business experts, and we've gathered input from many more. Based on that feedback, here's the definition we've landed on. It's somewhat refined from our earlier definition that we published a couple weeks ago. Supercloud is an emerging computing architecture that comprises a set of services abstracted from the underlying primitives of hyperscale clouds, e.g. compute, storage, networking, security, and other native resources, to create a global system spanning more than one cloud. Supercloud is three essential properties, three deployment models, and three service models. So what are those essential elements, those properties? We've simplified the picture from our last report. We show them here. I'll review them briefly. We're not going to go super in depth here because we've covered this topic a lot. But supercloud, it runs on more than one cloud. It creates that common or identical experience across clouds. It contains a necessary capability that we call a superPaaS that acts as a cloud interpreter, and it has metadata intelligence to optimize for a specific purpose. We'll publish this definition in detail. So again, we're not going to spend a ton of time here today. Now, we've identified three deployment models for Supercloud. The first is a single instantiation, where a control plane runs on one cloud but supports interactions with multiple other clouds. An example we use is Kubernetes cluster management service that runs on one cloud but can deploy and manage clusters on other clouds. The second model is a multi-cloud, multi-region instantiation where a full stack of services is instantiated on multiple clouds and multiple cloud regions with a common interface across them. We've used cohesity as one example of this. And then a single global instance that spans multiple cloud providers. That's our snowflake example. Again, we'll publish this in detail. So we're not going to spend a ton of time here today. Finally, the service models. The feedback we've had is IaaS, PaaS, and SaaS work fine to describe the service models for Supercloud. NetApp's Cloud Volume is a good example in IaaS. VMware cloud foundation and what we expect at VMware Explore is a good PaaS example. And SAP HANA Cloud is a good example of SaaS running as a Supercloud service. That's the SAP HANA multi-cloud. So what is it that we expect from VMware Explore 2022? Well, along with what will be an exciting and speculation filled gathering of the VMware community at the Moscone Center, we believe VMware will lay out its future architectural direction. And we expect it will fit the Supercloud definition that we just described. We think VMware will show its hand on a set of cross-cloud services and will promise a common experience for users and developers alike. As we talked about at Supercloud '22, VMware kind of wants to have its cake, eat it too, and lose weight. And by that, we mean that it will not only abstract the underlying primitives of each of the individual clouds, but if developers want access to them, they will allow that and actually facilitate that. Now, we don't expect VMware to use the term Supercloud, but it will be a cross-cloud multi-cloud services model that they put forth, we think, at VMworld Explore. With IaaS comprising compute, storage, and networking, a very strong emphasis, we believe, on security, of course, a governance and a comprehensive set of data protection services. Now, very importantly, we believe Tanzu will play a leading role in any announcements this coming week, as a purpose-built PaaS layer, specifically designed to create a common experience for cross clouds for data and application services. This, we believe, will be VMware's most significant offering to date in cross-cloud services. And it will position VMware to be a leader in what we call Supercloud. Now, while it remains to be seen what Broadcom exactly intends to do with VMware, we've speculated, others have speculated. We think this Supercloud is a substantial market opportunity generally and for VMware specifically. Look, if you don't own a public cloud, and very few companies do, in the tech business, we believe you better be supporting the build out of superclouds or building a supercloud yourself on top of hyperscale infrastructure. And we believe that as cloud matures, hyperscalers will increasingly I cross cloud services as an opportunity. We asked David Floyer to take a stab at a market model for super cloud. He's really good at these types of things. What he did is he took the known players in cloud and estimated their IaaS and PaaS cloud services, their total revenue, and then took a percentage. So this is super set of just the public cloud and the hyperscalers. And then what he did is he took a percentage to fit the Supercloud definition, as we just shared above. He then added another 20% on top to cover the long tail of Other. Other over time is most likely going to grow to let's say 30%. That's kind of how these markets work. Okay, so this is obviously an estimate, but it's an informed estimate by an individual who has done this many, many times and is pretty well respected in these types of forecasts, these long term forecasts. Now, by the definition we just shared, Supercloud revenue was estimated at about $3 billion in 2022 worldwide, growing to nearly $80 billion by 2030. Now remember, there's not one Supercloud market. It comprises a bunch of purpose-built superclouds that solve a specific problem. But the common attribute is it's built on top of hyperscale infrastructure. So overall, cloud services, including Supercloud, peak by the end of the decade. But Supercloud continues to grow and will take a higher percentage of the cloud market. The reasoning here is that the market will change and compute, will increasingly become distributed and embedded into edge devices, such as automobiles and robots and factory equipment, et cetera, and not necessarily be a discreet... I mean, it still will be, of course, but it's not going to be as much of a discrete component that is consumed via services like EZ2, that will mature. And this will be a key shift to watch in spending dynamics and really importantly, computing economics, the things we've talked about around arm and edge and AI inferencing and new low cost computing architectures at the edge. We're talking not the near edge, like, Lowes and Home Depot, we're talking far edge and embedded devices. Now, whether this becomes a seamless part of Supercloud remains to be seen. Look, if that's how we see it, the current and the future state of Supercloud, and we're committed to keeping the discussion going with an inclusive model that gathers input from all parts of the industry. Okay, that's it for today. Thanks to Alex Morrison, who's on production, and he also manages the podcast. Ken Schiffman, as well, is on production in our Boston office. Kristin Martin and Cheryl Knight, they help us get the word out on social media and in our newsletters. And Rob Hoffe is our editor in chief over at Silicon Angle and does some helpful editing. Thank you, all. Remember these episodes, they're all available as podcasts, wherever you listen. All you got to do is search Breaking Analysis Podcast. I publish each week on wikibon.com and siliconangle.com. You can email me directly at david.vellante@siliconangle.com or DM me @Dvellante or comment on our LinkedIn posts. Please do check out etr.ai. They've got some great enterprise survey research. So please go there and poke around, And if you need any assistance, let them know. This is Dave Vellante for the Cube Insights powered by ETR. Thanks for watching, and we'll see you next time on Breaking Analysis. (lively music)

Published Date : Aug 27 2022

SUMMARY :

From the Cube studios and subtracts the reds from the greens.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Alex MorrisonPERSON

0.99+

Cheryl KnightPERSON

0.99+

Dave VellantePERSON

0.99+

Rob HoffePERSON

0.99+

VMwareORGANIZATION

0.99+

Ken SchiffmanPERSON

0.99+

David FloyerPERSON

0.99+

Kristin MartinPERSON

0.99+

30%QUANTITY

0.99+

BostonLOCATION

0.99+

2022DATE

0.99+

LowesORGANIZATION

0.99+

20%QUANTITY

0.99+

Palo AltoLOCATION

0.99+

722QUANTITY

0.99+

4%QUANTITY

0.99+

San FranciscoLOCATION

0.99+

david.vellante@siliconangle.comOTHER

0.99+

2030DATE

0.99+

Silicon AngleORGANIZATION

0.99+

JulyDATE

0.99+

BroadcomORGANIZATION

0.99+

Home DepotORGANIZATION

0.99+

6%QUANTITY

0.99+

next weekDATE

0.99+

AWSORGANIZATION

0.99+

second modelQUANTITY

0.99+

more than 6%QUANTITY

0.99+

ETRORGANIZATION

0.99+

more than one cloudQUANTITY

0.99+

siliconangle.comOTHER

0.99+

nearly $80 billionQUANTITY

0.99+

about $3 billionQUANTITY

0.99+

more than 30 technologyQUANTITY

0.99+

firstQUANTITY

0.99+

this weekDATE

0.98+

SupercloudORGANIZATION

0.98+

each weekQUANTITY

0.98+

one exampleQUANTITY

0.98+

three service modelsQUANTITY

0.98+

VMware ExploreEVENT

0.98+

dozens of contributorsQUANTITY

0.97+

todayDATE

0.97+

NetAppTITLE

0.97+

this weekDATE

0.97+

SupercloudTITLE

0.97+

SAP HANATITLE

0.97+

VMworld ExploreORGANIZATION

0.97+

three essential propertiesQUANTITY

0.97+

three deployment modelsQUANTITY

0.97+

one cloudQUANTITY

0.96+

TanzuORGANIZATION

0.96+

eachQUANTITY

0.96+

Moscone CenterLOCATION

0.96+

wikibon.comOTHER

0.95+

SAP HANA CloudTITLE

0.95+

Cube InsightsORGANIZATION

0.92+

single instantiationQUANTITY

0.9+

Closing Remarks | Supercloud22


 

(gentle upbeat music) >> Welcome back everyone, to "theCUBE"'s live stage performance here in Palo Alto, California at "theCUBE" Studios. I'm John Furrier with Dave Vellante, kicking off our first inaugural Supercloud event. It's an editorial event, we wanted to bring together the best in the business, the smartest, the biggest, the up-and-coming startups, venture capitalists, everybody, to weigh in on this new Supercloud trend, this structural change in the cloud computing business. We're about to run the Ecosystem Speaks, which is a bunch of pre-recorded companies that wanted to get their voices on the record, so stay tuned for the rest of the day. We'll be replaying all that content and they're going to be having some really good commentary and hear what they have to say. I had a chance to interview and so did Dave. Dave, this is our closing segment where we kind of unpack everything or kind of digest and report. So much to kind of digest from the conversations today, a wide range of commentary from Supercloud operating system to developers who are in charge to maybe it's an ops problem or maybe Oracle's a Supercloud. I mean, that was debated. So so much discussion, lot to unpack. What was your favorite moments? >> Well, before I get to that, I think, I go back to something that happened at re:Invent last year. Nick Sturiale came up, Steve Mullaney from Aviatrix; we're going to hear from him shortly in the Ecosystem Speaks. Nick Sturiale's VC said "it's happening"! And what he was talking about is this ecosystem is exploding. They're building infrastructure or capabilities on top of the CapEx infrastructure. So, I think it is happening. I think we confirmed today that Supercloud is a thing. It's a very immature thing. And I think the other thing, John is that, it seems to me that the further you go up the stack, the weaker the business case gets for doing Supercloud. We heard from Marianna Tessel, it's like, "Eh, you know, we can- it was easier to just do it all on one cloud." This is a point that, Adrian Cockcroft just made on the panel and so I think that when you break out the pieces of the stack, I think very clearly the infrastructure layer, what we heard from Confluent and HashiCorp, and certainly VMware, there's a real problem there. There's a real need at the infrastructure layer and then even at the data layer, I think Benoit Dageville did a great job of- You know, I was peppering him with all my questions, which I basically was going through, the Supercloud definition and they ticked the box on pretty much every one of 'em as did, by the way Ali Ghodsi you know, the big difference there is the philosophy of Republicans and Democrats- got open versus closed, not to apply that to either one side, but you know what I mean! >> And the similarities are probably greater than differences. >> Berkely, I would probably put them on the- >> Yeah, we'll put them on the Democrat side we'll make Snowflake the Republicans. But so- but as we say there's a lot of similarities as well in terms of what their objectives are. So, I mean, I thought it was a great program and a really good start to, you know, an industry- You brought up the point about the industry consortium, asked Kit Colbert- >> Yep. >> If he thought that was something that was viable and what'd they say? That hyperscale should lead it? >> Yeah, they said hyperscale should lead it and there also should be an industry consortium to get the voices out there. And I think VMware is very humble in how they're putting out their white paper because I think they know that they can't do it all and that they do not have a great track record relative to cloud. And I think, but they have a great track record of loyal installed base ops people using VMware vSphere all the time. >> Yeah. >> So I think they need a catapult moment where they can catapult to the cloud native which they've been working on for years under Raghu and the team. So the question on VMware is in the light of Broadcom, okay, acquisition of VMware, this is an opportunity or it might not be an opportunity or it might be a spin-out or something, I just think VMware's got way too much engineering culture to be ignored, Dave. And I think- well, I'm going to watch this very closely because they can pull off some sort of rallying moment. I think they could. And then you hear the upstarts like Platform9, Rafay Systems and others they're all like, "Yes, we need to unify behind something. There needs to be some sort of standard". You know, we heard the argument of you know, more standards bodies type thing. So, it's interesting, maybe "theCUBE" could be that but we're going to certainly keep the conversation going. >> I thought one of the most memorable statements was Vittorio who said we- for VMware, we want our cake, we want to eat it too and we want to lose weight. So they have a lot of that aspirations there! (John laughs) >> And then I thought, Adrian Cockcroft said you know, the devs, they want to get married. They were marrying everybody, and then the ops team, they have to deal with the divorce. >> Yeah. >> And I thought that was poignant. It's like, they want consistency, they want standards, they got to be able to scale And Lori MacVittie, I'm not sure you agree with this, I'd have to think about it, but she was basically saying, all we've talked about is devs devs devs for the last 10 years, going forward we're going to be talking about ops. >> Yeah, and I think one of the things I learned from this day and looking back, and some kind of- I've been sauteing through all the interviews. If you zoom out, for me it was the epiphany of developers are still in charge. And I've said, you know, the developers are doing great, it's an ops security thing. Not sure I see that the way I was seeing before. I think what I learned was the refactoring pattern that's emerging, In Sik Rhee brought this up from Vertex Ventures with Marianna Tessel, it's a nuanced point but I think he's right on which is the pattern that's emerging is developers want ease-of-use tooling, they're driving the change and I think the developers in the devs ops ethos- it's never going to be separate. It's going to be DevOps. That means developers are driving operations and then security. So what I learned was it's not ops teams leveling up, it's devs redefining what ops is. >> Mm. And I think that to me is where Supercloud's going to be interesting- >> Forcing that. >> Yeah. >> Forcing the change because the structural change is open sources thriving, devs are still in charge and they still want more developers, Vittorio "we need more developers", right? So the developers are in charge and that's clear. Now, if that happens- if you believe that to be true the domino effect of that is going to be amazing because then everyone who gets on the wrong side of history, on the ops and security side, is going to be fighting a trend that may not be fight-able, you know, it might be inevitable. And so the winners are the ones that are refactoring their business like Snowflake. Snowflake is a data warehouse that had nothing to do with Amazon at first. It was the developers who said "I'm going to refactor data warehouse on AWS". That is a developer-driven refactorization and a business model. So I think that's the pattern I'm seeing is that this concept refactoring, patterns and the developer trajectory is critical. >> I thought there was another great comment. Maribel Lopez, her Lord of the Rings comment: "there will be no one ring to rule them all". Now at the same time, Kit Colbert, you know what we asked him straight out, "are you the- do you want to be the, the Supercloud OS?" and he basically said, "yeah, we do". Now, of course they're confined to their world, which is a pretty substantial world. I think, John, the reason why Maribel is so correct is security. I think security's a really hard problem to solve. You've got cloud as the first layer of defense and now you've got multiple clouds, multiple layers of defense, multiple shared responsibility models. You've got different tools for XDR, for identity, for governance, for privacy all within those different clouds. I mean, that really is a confusing picture. And I think the hardest- one of the hardest parts of Supercloud to solve. >> Yeah, and I thought the security founder Gee Rittenhouse, Piyush Sharrma from Accurics, which sold to Tenable, and Tony Kueh, former head of product at VMware. >> Right. >> Who's now an investor kind of looking for his next gig or what he is going to do next. He's obviously been extremely successful. They brought up the, the OS factor. Another point that they made I thought was interesting is that a lot of the things to do to solve the complexity is not doable. >> Yeah. >> It's too much work. So managed services might field the bit. So, and Chris Hoff mentioned on the Clouderati segment that the higher level services being a managed service and differentiating around the service could be the key competitive advantage for whoever does it. >> I think the other thing is Chris Hoff said "yeah, well, Web 3, metaverse, you know, DAO, Superclouds" you know, "Stupercloud" he called it and this bring up- It resonates because one of the criticisms that Charles Fitzgerald laid on us was, well, it doesn't help to throw out another term. I actually think it does help. And I think the reason it does help is because it's getting people to think. When you ask people about Supercloud, they automatically- it resonates with them. They play back what they think is the future of cloud. So Supercloud really talks to the future of cloud. There's a lot of aspects to it that need to be further defined, further thought out and we're getting to the point now where we- we can start- begin to say, okay that is Supercloud or that isn't Supercloud. >> I think that's really right on. I think Supercloud at the end of the day, for me from the simplest way to describe it is making sure that the developer experience is so good that the operations just happen. And Marianna Tessel said, she's investing in making their developer experience high velocity, very easy. So if you do that, you have to run on premise and on the cloud. So hybrid really is where Supercloud is going right now. It's not multi-cloud. Multi-cloud was- that was debunked on this session today. I thought that was clear. >> Yeah. Yeah, I mean I think- >> It's not about multi-cloud. It's about operationally seamless operations across environments, public cloud to on-premise, basically. >> I think we got consensus across the board that multi-cloud, you know, is a symptom Chuck Whitten's thing of multi-cloud by default versus multi- multi-cloud has not been a strategy, Kit Colbert said, up until the last couple of years. Yeah, because people said, "oh we got all these multiple clouds, what do we do with it?" and we got this mess that we have to solve. Whereas, I think Supercloud is something that is a strategy and then the other nuance that I keep bringing up is it's industries that are- as part of their digital transformation, are building clouds. Now, whether or not they become superclouds, I'm not convinced. I mean, what Goldman Sachs is doing, you know, with AWS, what Walmart's doing with Azure connecting their on-prem tools to those public clouds, you know, is that a supercloud? I mean, we're going to have to go back and really look at that definition. Or is it just kind of a SAS that spans on-prem and cloud. So, as I said, the further you go up the stack, the business case seems to wane a little bit but there's no question in my mind that from an infrastructure standpoint, to your point about operations, there's a real requirement for super- what we call Supercloud. >> Well, we're going to keep the conversation going, Dave. I want to put a shout out to our founding supporters of this initiative. Again, we put this together really fast kind of like a pilot series, an inaugural event. We want to have a face-to-face event as an industry event. Want to thank the founding supporters. These are the people who donated their time, their resource to contribute content, ideas and some cash, not everyone has committed some financial contribution but we want to recognize the names here. VMware, Intuit, Red Hat, Snowflake, Aisera, Alteryx, Confluent, Couchbase, Nutanix, Rafay Systems, Skyhigh Security, Aviatrix, Zscaler, Platform9, HashiCorp, F5 and all the media partners. Without their support, this wouldn't have happened. And there are more people that wanted to weigh in. There was more demand than we could pull off. We'll certainly continue the Supercloud conversation series here on "theCUBE" and we'll add more people in. And now, after this session, the Ecosystem Speaks session, we're going to run all the videos of the big name companies. We have the Nutanix CEOs weighing in, Aviatrix to name a few. >> Yeah. Let me, let me chime in, I mean you got Couchbase talking about Edge, Platform 9's going to be on, you know, everybody, you know Insig was poopoo-ing Oracle, but you know, Oracle and Azure, what they did, two technical guys, developers are coming on, we dig into what they did. Howie Xu from Zscaler, Paula Hansen is going to talk about going to market in the multi-cloud world. You mentioned Rajiv, the CEO of Nutanix, Ramesh is going to talk about multi-cloud infrastructure. So that's going to run now for, you know, quite some time here and some of the pre-record so super excited about that and I just want to thank the crew. I hope guys, I hope you have a list of credits there's too many of you to mention, but you know, awesome jobs really appreciate the work that you did in a very short amount of time. >> Well, I'm excited. I learned a lot and my takeaway was that Supercloud's a thing, there's a kind of sense that people want to talk about it and have real conversations, not BS or FUD. They want to have real substantive conversations and we're going to enable that on "theCUBE". Dave, final thoughts for you. >> Well, I mean, as I say, we put this together very quickly. It was really a phenomenal, you know, enlightening experience. I think it confirmed a lot of the concepts and the premises that we've put forth, that David Floyer helped evolve, that a lot of these analysts have helped evolve, that even Charles Fitzgerald with his antagonism helped to really sharpen our knives. So, you know, thank you Charles. And- >> I like his blog, by the I'm a reader- >> Yeah, absolutely. And it was great to be back in Palo Alto. It was my first time back since pre-COVID, so, you know, great job. >> All right. I want to thank all the crew and everyone. Thanks for watching this first, inaugural Supercloud event. We are definitely going to be doing more of these. So stay tuned, maybe face-to-face in person. I'm John Furrier with Dave Vellante now for the Ecosystem chiming in, and they're going to speak and share their thoughts here with "theCUBE" our first live stage performance event in our studio. Thanks for watching. (gentle upbeat music)

Published Date : Aug 9 2022

SUMMARY :

and they're going to be having as did, by the way Ali Ghodsi you know, And the similarities on the Democrat side And I think VMware is very humble So the question on VMware is and we want to lose weight. they have to deal with the divorce. And I thought that was poignant. Not sure I see that the Mm. And I think that to me is where And so the winners are the ones that are of the Rings comment: the security founder Gee Rittenhouse, a lot of the things to do So, and Chris Hoff mentioned on the is the future of cloud. is so good that the public cloud to on-premise, basically. So, as I said, the further and all the media partners. So that's going to run now for, you know, I learned a lot and my takeaway was and the premises that we've put forth, since pre-COVID, so, you know, great job. and they're going to speak

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
TristanPERSON

0.99+

George GilbertPERSON

0.99+

JohnPERSON

0.99+

GeorgePERSON

0.99+

Steve MullaneyPERSON

0.99+

KatiePERSON

0.99+

David FloyerPERSON

0.99+

CharlesPERSON

0.99+

Mike DooleyPERSON

0.99+

Peter BurrisPERSON

0.99+

ChrisPERSON

0.99+

Tristan HandyPERSON

0.99+

BobPERSON

0.99+

Maribel LopezPERSON

0.99+

Dave VellantePERSON

0.99+

Mike WolfPERSON

0.99+

VMwareORGANIZATION

0.99+

MerimPERSON

0.99+

Adrian CockcroftPERSON

0.99+

AmazonORGANIZATION

0.99+

BrianPERSON

0.99+

Brian RossiPERSON

0.99+

Jeff FrickPERSON

0.99+

Chris WegmannPERSON

0.99+

Whole FoodsORGANIZATION

0.99+

EricPERSON

0.99+

Chris HoffPERSON

0.99+

Jamak DaganiPERSON

0.99+

Jerry ChenPERSON

0.99+

CaterpillarORGANIZATION

0.99+

John WallsPERSON

0.99+

Marianna TesselPERSON

0.99+

JoshPERSON

0.99+

EuropeLOCATION

0.99+

JeromePERSON

0.99+

GoogleORGANIZATION

0.99+

Lori MacVittiePERSON

0.99+

2007DATE

0.99+

SeattleLOCATION

0.99+

10QUANTITY

0.99+

fiveQUANTITY

0.99+

Ali GhodsiPERSON

0.99+

Peter McKeePERSON

0.99+

NutanixORGANIZATION

0.99+

Eric HerzogPERSON

0.99+

IndiaLOCATION

0.99+

MikePERSON

0.99+

WalmartORGANIZATION

0.99+

five yearsQUANTITY

0.99+

AWSORGANIZATION

0.99+

Kit ColbertPERSON

0.99+

PeterPERSON

0.99+

DavePERSON

0.99+

Tanuja RanderyPERSON

0.99+

Breaking Analysis: How the cloud is changing security defenses in the 2020s


 

>> Announcer: From theCUBE studios in Palo Alto in Boston, bringing you data-driven insights from theCUBE and ETR. This is "Breaking Analysis" with Dave Vellante. >> The rapid pace of cloud adoption has changed the way organizations approach cybersecurity. Specifically, the cloud is increasingly becoming the first line of cyber defense. As such, along with communicating to the board and creating a security aware culture, the chief information security officer must ensure that the shared responsibility model is being applied properly. Meanwhile, the DevSecOps team has emerged as the critical link between strategy and execution, while audit becomes the free safety, if you will, in the equation, i.e., the last line of defense. Hello, and welcome to this week's, we keep on CUBE Insights, powered by ETR. In this "Breaking Analysis", we'll share the latest data on hyperscale, IaaS, and PaaS market performance, along with some fresh ETR survey data. And we'll share some highlights and the puts and takes from the recent AWS re:Inforce event in Boston. But first, the macro. It's earning season, and that's what many people want to talk about, including us. As we reported last week, the macro spending picture is very mixed and weird. Think back to a week ago when SNAP reported. A player like SNAP misses and the Nasdaq drops 300 points. Meanwhile, Intel, the great semiconductor hope for America misses by a mile, cuts its revenue outlook by 15% for the year, and the Nasdaq was up nearly 250 points just ahead of the close, go figure. Earnings reports from Meta, Google, Microsoft, ServiceNow, and some others underscored cautious outlooks, especially those exposed to the advertising revenue sector. But at the same time, Apple, Microsoft, and Google, were, let's say less bad than expected. And that brought a sigh of relief. And then there's Amazon, which beat on revenue, it beat on cloud revenue, and it gave positive guidance. The Nasdaq has seen this month best month since the isolation economy, which "Breaking Analysis" contributor, Chip Symington, attributes to what he calls an oversold rally. But there are many unknowns that remain. How bad will inflation be? Will the fed really stop tightening after September? The Senate just approved a big spending bill along with corporate tax hikes, which generally don't favor the economy. And on Monday, August 1st, the market will likely realize that we are in the summer quarter, and there's some work to be done. Which is why it's not surprising that investors sold the Nasdaq at the close today on Friday. Are people ready to call the bottom? Hmm, some maybe, but there's still lots of uncertainty. However, the cloud continues its march, despite some very slight deceleration in growth rates from the two leaders. Here's an update of our big four IaaS quarterly revenue data. The big four hyperscalers will account for $165 billion in revenue this year, slightly lower than what we had last quarter. We expect AWS to surpass 83 billion this year in revenue. Azure will be more than 2/3rds the size of AWS, a milestone from Microsoft. Both AWS and Azure came in slightly below our expectations, but still very solid growth at 33% and 46% respectively. GCP, Google Cloud Platform is the big concern. By our estimates GCP's growth rate decelerated from 47% in Q1, and was 38% this past quarter. The company is struggling to keep up with the two giants. Remember, both GCP and Azure, they play a shell game and hide the ball on their IaaS numbers, so we have to use a survey data and other means of estimating. But this is how we see the market shaping up in 2022. Now, before we leave the overall cloud discussion, here's some ETR data that shows the net score or spending momentum granularity for each of the hyperscalers. These bars show the breakdown for each company, with net score on the right and in parenthesis, net score from last quarter. lime green is new adoptions, forest green is spending up 6% or more, the gray is flat, pink is spending at 6% down or worse, and the bright red is replacement or churn. Subtract the reds from the greens and you get net score. One note is this is for each company's overall portfolio. So it's not just cloud. So it's a bit of a mixed bag, but there are a couple points worth noting. First, anything above 40% or 40, here as shown in the chart, is considered elevated. AWS, as you can see, is well above that 40% mark, as is Microsoft. And if you isolate Microsoft's Azure, only Azure, it jumps above AWS's momentum. Google is just barely hanging on to that 40 line, and Alibaba is well below, with both Google and Alibaba showing much higher replacements, that bright red. But here's the key point. AWS and Azure have virtually no churn, no replacements in that bright red. And all four companies are experiencing single-digit numbers in terms of decreased spending within customer accounts. People may be moving some workloads back on-prem selectively, but repatriation is definitely not a trend to bet the house on, in our view. Okay, let's get to the main subject of this "Breaking Analysis". TheCube was at AWS re:Inforce in Boston this week, and we have some observations to share. First, we had keynotes from Steven Schmidt who used to be the chief information security officer at Amazon on Web Services, now he's the CSO, the chief security officer of Amazon. Overall, he dropped the I in his title. CJ Moses is the CISO for AWS. Kurt Kufeld of AWS also spoke, as did Lena Smart, who's the MongoDB CISO, and she keynoted and also came on theCUBE. We'll go back to her in a moment. The key point Schmidt made, one of them anyway, was that Amazon sees more data points in a day than most organizations see in a lifetime. Actually, it adds up to quadrillions over a fairly short period of time, I think, it was within a month. That's quadrillion, it's 15 zeros, by the way. Now, there was drill down focus on data protection and privacy, governance, risk, and compliance, GRC, identity, big, big topic, both within AWS and the ecosystem, network security, and threat detection. Those are the five really highlighted areas. Re:Inforce is really about bringing a lot of best practice guidance to security practitioners, like how to get the most out of AWS tooling. Schmidt had a very strong statement saying, he said, "I can assure you with a 100% certainty that single controls and binary states will absolutely positively fail." Hence, the importance of course, of layered security. We heard a little bit of chat about getting ready for the future and skating to the security puck where quantum computing threatens to hack all of the existing cryptographic algorithms, and how AWS is trying to get in front of all that, and a new set of algorithms came out, AWS is testing. And, you know, we'll talk about that maybe in the future, but that's a ways off. And by its prominent presence, the ecosystem was there enforced, to talk about their role and filling the gaps and picking up where AWS leaves off. We heard a little bit about ransomware defense, but surprisingly, at least in the keynotes, no discussion about air gaps, which we've talked about in previous "Breaking Analysis", is a key factor. We heard a lot about services to help with threat detection and container security and DevOps, et cetera, but there really wasn't a lot of specific talk about how AWS is simplifying the life of the CISO. Now, maybe it's inherently assumed as AWS did a good job stressing that security is job number one, very credible and believable in that front. But you have to wonder if the world is getting simpler or more complex with cloud. And, you know, you might say, "Well, Dave, come on, of course it's better with cloud." But look, attacks are up, the threat surface is expanding, and new exfiltration records are being set every day. I think the hard truth is, the cloud is driving businesses forward and accelerating digital, and those businesses are now exposed more than ever. And that's why security has become such an important topic to boards and throughout the entire organization. Now, the other epiphany that we had at re:Inforce is that there are new layers and a new trust framework emerging in cyber. Roles are shifting, and as a direct result of the cloud, things are changing within organizations. And this first hit me in a conversation with long-time cyber practitioner and Wikibon colleague from our early Wikibon days, and friend, Mike Versace. And I spent two days testing the premise that Michael and I talked about. And here's an attempt to put that conversation into a graphic. The cloud is now the first line of defense. AWS specifically, but hyperscalers generally provide the services, the talent, the best practices, and automation tools to secure infrastructure and their physical data centers. And they're really good at it. The security inside of hyperscaler clouds is best of breed, it's world class. And that first line of defense does take some of the responsibility off of CISOs, but they have to understand and apply the shared responsibility model, where the cloud provider leaves it to the customer, of course, to make sure that the infrastructure they're deploying is properly configured. So in addition to creating a cyber aware culture and communicating up to the board, the CISO has to ensure compliance with and adherence to the model. That includes attracting and retaining the talent necessary to succeed. Now, on the subject of building a security culture, listen to this clip on one of the techniques that Lena Smart, remember, she's the CISO of MongoDB, one of the techniques she uses to foster awareness and build security cultures in her organization. Play the clip >> Having the Security Champion program, so that's just, it's like one of my babies. That and helping underrepresented groups in MongoDB kind of get on in the tech world are both really important to me. And so the Security Champion program is purely purely voluntary. We have over 100 members. And these are people, there's no bar to join, you don't have to be technical. If you're an executive assistant who wants to learn more about security, like my assistant does, you're more than welcome. Up to, we actually, people grade themselves when they join us. We give them a little tick box, like five is, I walk on security water, one is I can spell security, but I'd like to learn more. Mixing those groups together has been game-changing for us. >> Now, the next layer is really where it gets interesting. DevSecOps, you know, we hear about it all the time, shifting left. It implies designing security into the code at the dev level. Shift left and shield right is the kind of buzz phrase. But it's getting more and more complicated. So there are layers within the development cycle, i.e., securing the container. So the app code can't be threatened by backdoors or weaknesses in the containers. Then, securing the runtime to make sure the code is maintained and compliant. Then, the DevOps platform so that change management doesn't create gaps and exposures, and screw things up. And this is just for the application security side of the equation. What about the network and implementing zero trust principles, and securing endpoints, and machine to machine, and human to app communication? So there's a lot of burden being placed on the DevOps team, and they have to partner with the SecOps team to succeed. Those guys are not security experts. And finally, there's audit, which is the last line of defense or what I called at the open, the free safety, for you football fans. They have to do more than just tick the box for the board. That doesn't cut it anymore. They really have to know their stuff and make sure that what they sign off on is real. And then you throw ESG into the mix is becoming more important, making sure the supply chain is green and also secure. So you can see, while much of this stuff has been around for a long, long time, the cloud is accelerating innovation in the pace of delivery. And so much is changing as a result. Now, next, I want to share a graphic that we shared last week, but a little different twist. It's an XY graphic with net score or spending velocity in the vertical axis and overlap or presence in the dataset on the horizontal. With that magic 40% red line as shown. Okay, I won't dig into the data and draw conclusions 'cause we did that last week, but two points I want to make. First, look at Microsoft in the upper-right hand corner. They are big in security and they're attracting a lot of dollars in the space. We've reported on this for a while. They're a five-star security company. And every time, from a spending standpoint in ETR data, that little methodology we use, every time I've run this chart, I've wondered, where the heck is AWS? Why aren't they showing up there? If security is so important to AWS, which it is, and its customers, why aren't they spending money with Amazon on security? And I asked this very question to Merrit Baer, who resides in the office of the CISO at AWS. Listen to her answer. >> It doesn't mean don't spend on security. There is a lot of goodness that we have to offer in ESS, external security services. But I think one of the unique parts of AWS is that we don't believe that security is something you should buy, it's something that you get from us. It's something that we do for you a lot of the time. I mean, this is the definition of the shared responsibility model, right? >> Now, maybe that's good messaging to the market. Merritt, you know, didn't say it outright, but essentially, Microsoft they charge for security. At AWS, it comes with the package. But it does answer my question. And, of course, the fact is that AWS can subsidize all this with egress charges. Now, on the flip side of that, (chuckles) you got Microsoft, you know, they're both, they're competing now. We can take CrowdStrike for instance. Microsoft and CrowdStrike, they compete with each other head to head. So it's an interesting dynamic within the ecosystem. Okay, but I want to turn to a powerful example of how AWS designs in security. And that is the idea of confidential computing. Of course, AWS is not the only one, but we're coming off of re:Inforce, and I really want to dig into something that David Floyer and I have talked about in previous episodes. And we had an opportunity to sit down with Arvind Raghu and J.D. Bean, two security experts from AWS, to talk about this subject. And let's share what we learned and why we think it matters. First, what is confidential computing? That's what this slide is designed to convey. To AWS, they would describe it this way. It's the use of special hardware and the associated firmware that protects customer code and data from any unauthorized access while the data is in use, i.e., while it's being processed. That's oftentimes a security gap. And there are two dimensions here. One is protecting the data and the code from operators on the cloud provider, i.e, in this case, AWS, and protecting the data and code from the customers themselves. In other words, from admin level users are possible malicious actors on the customer side where the code and data is being processed. And there are three capabilities that enable this. First, the AWS Nitro System, which is the foundation for virtualization. The second is Nitro Enclaves, which isolate environments, and then third, the Nitro Trusted Platform Module, TPM, which enables cryptographic assurances of the integrity of the Nitro instances. Now, we've talked about Nitro in the past, and we think it's a revolutionary innovation, so let's dig into that a bit. This is an AWS slide that was shared about how they protect and isolate data and code. On the left-hand side is a classical view of a virtualized architecture. You have a single host or a single server, and those white boxes represent processes on the main board, X86, or could be Intel, or AMD, or alternative architectures. And you have the hypervisor at the bottom which translates instructions to the CPU, allowing direct execution from a virtual machine into the CPU. But notice, you also have blocks for networking, and storage, and security. And the hypervisor emulates or translates IOS between the physical resources and the virtual machines. And it creates some overhead. Now, companies like VMware have done a great job, and others, of stripping out some of that overhead, but there's still an overhead there. That's why people still like to run on bare metal. Now, and while it's not shown in the graphic, there's an operating system in there somewhere, which is privileged, so it's got access to these resources, and it provides the services to the VMs. Now, on the right-hand side, you have the Nitro system. And you can see immediately the differences between the left and right, because the networking, the storage, and the security, the management, et cetera, they've been separated from the hypervisor and that main board, which has the Intel, AMD, throw in Graviton and Trainium, you know, whatever XPUs are in use in the cloud. And you can see that orange Nitro hypervisor. That is a purpose-built lightweight component for this system. And all the other functions are separated in isolated domains. So very strong isolation between the cloud software and the physical hardware running workloads, i.e., those white boxes on the main board. Now, this will run at practically bare metal speeds, and there are other benefits as well. One of the biggest is security. As we've previously reported, this came out of AWS's acquisition of Annapurna Labs, which we've estimated was picked up for a measly $350 million, which is a drop in the bucket for AWS to get such a strategic asset. And there are three enablers on this side. One is the Nitro cards, which are accelerators to offload that wasted work that's done in traditional architectures by typically the X86. We've estimated 25% to 30% of core capacity and cycles is wasted on those offloads. The second is the Nitro security chip, which is embedded and extends the root of trust to the main board hardware. And finally, the Nitro hypervisor, which allocates memory and CPU resources. So the Nitro cards communicate directly with the VMs without the hypervisors getting in the way, and they're not in the path. And all that data is encrypted while it's in motion, and of course, encryption at rest has been around for a while. We asked AWS, is this an, we presumed it was an Arm-based architecture. We wanted to confirm that. Or is it some other type of maybe hybrid using X86 and Arm? They told us the following, and quote, "The SoC, system on chips, for these hardware components are purpose-built and custom designed in-house by Amazon and Annapurna Labs. The same group responsible for other silicon innovations such as Graviton, Inferentia, Trainium, and AQUA. Now, the Nitro cards are Arm-based and do not use any X86 or X86/64 bit CPUs. Okay, so it confirms what we thought. So you may say, "Why should we even care about all this technical mumbo jumbo, Dave?" Well, a year ago, David Floyer and I published this piece explaining why Nitro and Graviton are secret weapons of Amazon that have been a decade in the making, and why everybody needs some type of Nitro to compete in the future. This is enabled, this Nitro innovations and the custom silicon enabled by the Annapurna acquisition. And AWS has the volume economics to make custom silicon. Not everybody can do it. And it's leveraging the Arm ecosystem, the standard software, and the fabrication volume, the manufacturing volume to revolutionize enterprise computing. Nitro, with the alternative processor, architectures like Graviton and others, enables AWS to be on a performance, cost, and power consumption curve that blows away anything we've ever seen from Intel. And Intel's disastrous earnings results that we saw this past week are a symptom of this mega trend that we've been talking about for years. In the same way that Intel and X86 destroyed the market for RISC chips, thanks to PC volumes, Arm is blowing away X86 with volume economics that cannot be matched by Intel. Thanks to, of course, to mobile and edge. Our prediction is that these innovations and the Arm ecosystem are migrating and will migrate further into enterprise computing, which is Intel's stronghold. Now, that stronghold is getting eaten away by the likes of AMD, Nvidia, and of course, Arm in the form of Graviton and other Arm-based alternatives. Apple, Tesla, Amazon, Google, Microsoft, Alibaba, and others are all designing custom silicon, and doing so much faster than Intel can go from design to tape out, roughly cutting that time in half. And the premise of this piece is that every company needs a Nitro to enable alternatives to the X86 in order to support emergent workloads that are data rich and AI-based, and to compete from an economic standpoint. So while at re:Inforce, we heard that the impetus for Nitro was security. Of course, the Arm ecosystem, and its ascendancy has enabled, in our view, AWS to create a platform that will set the enterprise computing market this decade and beyond. Okay, that's it for today. Thanks to Alex Morrison, who is on production. And he does the podcast. And Ken Schiffman, our newest member of our Boston Studio team is also on production. Kristen Martin and Cheryl Knight help spread the word on social media and in the community. And Rob Hof is our editor in chief over at SiliconANGLE. He does some great, great work for us. Remember, all these episodes are available as podcast. Wherever you listen, just search "Breaking Analysis" podcast. I publish each week on wikibon.com and siliconangle.com. Or you can email me directly at David.Vellante@siliconangle.com or DM me @dvellante, comment on my LinkedIn post. And please do check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching. Be well, and we'll see you next time on "Breaking Analysis." (upbeat theme music)

Published Date : Jul 30 2022

SUMMARY :

This is "Breaking Analysis" and the Nasdaq was up nearly 250 points And so the Security Champion program the SecOps team to succeed. of the shared responsibility model, right? and it provides the services to the VMs.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Alex MorrisonPERSON

0.99+

David FloyerPERSON

0.99+

Mike VersacePERSON

0.99+

MichaelPERSON

0.99+

AWSORGANIZATION

0.99+

Steven SchmidtPERSON

0.99+

AmazonORGANIZATION

0.99+

Kurt KufeldPERSON

0.99+

AppleORGANIZATION

0.99+

Dave VellantePERSON

0.99+

TeslaORGANIZATION

0.99+

AlibabaORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

MicrosoftORGANIZATION

0.99+

J.D. BeanPERSON

0.99+

Ken SchiffmanPERSON

0.99+

Arvind RaghuPERSON

0.99+

Lena SmartPERSON

0.99+

Kristen MartinPERSON

0.99+

Cheryl KnightPERSON

0.99+

40%QUANTITY

0.99+

Rob HofPERSON

0.99+

DavePERSON

0.99+

SchmidtPERSON

0.99+

Palo AltoLOCATION

0.99+

2022DATE

0.99+

fiveQUANTITY

0.99+

NvidiaORGANIZATION

0.99+

two daysQUANTITY

0.99+

Annapurna LabsORGANIZATION

0.99+

6%QUANTITY

0.99+

SNAPORGANIZATION

0.99+

five-starQUANTITY

0.99+

Chip SymingtonPERSON

0.99+

47%QUANTITY

0.99+

AnnapurnaORGANIZATION

0.99+

$350 millionQUANTITY

0.99+

BostonLOCATION

0.99+

Merrit BaerPERSON

0.99+

CJ MosesPERSON

0.99+

40QUANTITY

0.99+

MerrittPERSON

0.99+

15%QUANTITY

0.99+

25%QUANTITY

0.99+

AMDORGANIZATION

0.99+

Breaking Analysis: Broadcom, Taming the VMware Beast


 

>> From theCUBE studios in Palo Alto in Boston, bringing you data driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> In the words of my colleague CTO David Nicholson, Broadcom buys old cars, not to restore them to their original luster and beauty. Nope. They buy classic cars to extract the platinum that's inside the catalytic converter and monetize that. Broadcom's planned 61 billion acquisition of VMware will mark yet another new era and chapter for the virtualization pioneer, a mere seven months after finally getting spun out as an independent company by Dell. For VMware, this means a dramatically different operating model with financial performance and shareholder value creation as the dominant and perhaps the sole agenda item. For customers, it will mean a more focused portfolio, less aspirational vision pitches, and most certainly higher prices. Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, we'll share data, opinions and customer insights about this blockbuster deal and forecast the future of VMware, Broadcom and the broader ecosystem. Let's first look at the key deal points, it's been well covered in the press. But just for the record, $61 billion in a 50/50 cash and stock deal, resulting in a blended price of $138 per share, which is a 44% premium to the unaffected price, i.e. prior to the news breaking. Broadcom will assume 8 billion of VMware debt and promises that the acquisition will be immediately accretive and will generate 8.5 billion in EBITDA by year three. That's more than 4 billion in EBITDA relative to VMware's current performance today. In a classic Broadcom M&A approach, the company promises to dilever debt and maintain investment grade ratings. They will rebrand their software business as VMware, which will now comprise about 50% of revenues. There's a 40 day go shop and importantly, Broadcom promises to continue to return 60% of its free cash flow to shareholders in the form of dividends and buybacks. Okay, with that out of the way, we're going to get to the money slide literally in a moment that Broadcom shared on its investor call. Broadcom has more than 20 business units. It's CEO Hock Tan makes it really easy for his business unit managers to understand. Rule number one, you agreed to an operating plan with targets for revenue, growth, EBITDA, et cetera, hit your numbers consistently and we're good. You'll be very well compensated and life will be wonderful for you and your family. Miss the number, and we're going to have a frank and uncomfortable bottom line discussion. You'll four, perhaps five quarters to turn your business around, if you don't, we'll kill it or sell it if we can. Rule number two, refer to rule number one. Hello, VMware, here's the money slide. I'll interpret the bullet points on the left for clarity. Your fiscal year 2022 EBITDA was 4.7 billion. By year three, it will be 8.5 billion. And we Broadcom have four knobs to turn with you, VMware to help you get there. First knob, if it ain't recurring revenue with rubber stamp renewals, we're going to convert that revenue or kill it. Knob number two, we're going to focus R&D in the most profitable areas of the business. AKA expect the R&D budget to be cut. Number three, we're going to spend less on sales and marketing by focusing on existing customers. We're not going to lose money today and try to make it up many years down the road. And number four, we run Broadcom with 1% GNA. You will too. Any questions? Good. Now, just to give you a little sense of how Broadcom runs its business and how well run a company it is, let's do a little simple comparison with this financial snapshot. All we're doing here is taking the most recent quarterly earnings reports from Broadcom and VMware respectively. We take the quarterly revenue and multiply by four X to get the revenue run rate and then we calculate the ratios off of the most recent quarters revenue. It's worth spending some time on this to get a sense of how profitable the Broadcom business actually is and what the spreadsheet gurus at Broadcom are seeing with respect to the possibilities for VMware. So combined, we're talking about a 40 plus billion dollar company. Broadcom is growing at more than 20% per year. Whereas VMware's latest quarter showed a very disappointing 3% growth. Broadcom is mostly a hardware company, but its gross margin is in the high seventies. As a software company of course VMware has higher gross margins, but FYI, Broadcom's software business, the remains of Symantec and what they purchased as CA has 90% gross margin. But the I popper is operating margin. This is all non gap. So it excludes things like stock based compensation, but Broadcom had 61% operating margin last quarter. This is insanely off the charts compared to VMware's 25%. Oracle's non gap operating margin is 47% and Oracle is an incredibly profitable company. Now the red box is where the cuts are going to take place. Broadcom doesn't spend much on marketing. It doesn't have to. It's SG&A is 3% of revenue versus 18% for VMware and R&D spend is almost certainly going to get cut. The other eye popper is free cash flow as a percentage of revenue at 51% for Broadcom and 29% for VMware. 51%. That's incredible. And that my dear friends is why Broadcom a company with just under 30 billion in revenue has a market cap of 230 billion. Let's dig into the VMware portfolio a bit more and identify the possible areas that will be placed under the microscope by Hock Tan and his managers. The data from ETR's latest survey shows the net score or spending momentum across VMware's portfolio in this chart, net score essentially measures the net percent of customers that are spending more on a specific product or vendor. The yellow bar is the most recent survey and compares the April 22 survey data to April 21 and January of 22. Everything is down in the yellow from January, not surprising given the economic outlook and the change in spending patterns that we've reported. VMware Cloud on AWS remains the product in the ETR survey with the most momentum. It's the only offering in the portfolio with spending momentum above the 40% line, a level that we consider highly elevated. Unified Endpoint Management looks more than respectable, but that business is a rock fight with Microsoft. VMware Cloud is things like VMware Cloud foundation, VCF and VMware's cross cloud offerings. NSX came from the Nicira acquisition. Tanzu is not yet pervasive and one wonders if VMware is making any money there. Server is ESX and vSphere and is the bread and butter. That is where Broadcom is going to focus. It's going to look at VSAN and NSX, which is software probably profitable. And of course the other products and see if the investments are paying off, if they are Broadcom will keep, if they are not, you can bet your socks, they will be sold off or killed. Carbon Black is at the far right. VMware paid $2.1 billion for Carbon Black. And it's the lowest performer on this list in terms of net score or spending momentum. And that doesn't mean it's not profitable. It just doesn't have the momentum you'd like to see, so you can bet that is going to get scrutiny. Remember VMware's growth has been under pressure for the last several years. So it's been buying companies, dozens of them. It bought AirWatch, bought Heptio, Carbon Black, Nicira, SaltStack, Datrium, Versedo, Bitnami, and on and on and on. Many of these were to pick up engineering teams. Some of them were to drive new revenue. Now this is definitely going to be scrutinized by Broadcom. So that helps explain why Michael Dell would sell VMware. And where does VMware go from here? It's got great core product. It's an iconic name. It's got an awesome ecosystem, fantastic distribution channel, but its growth is slowing. It's got limited developer chops in a world that developers and cloud native is all the rage. It's got a far flung R&D agenda going at war with a lot of different places. And it's increasingly fighting this multi front war with cloud companies, companies like Cisco, IBM Red Hat, et cetera. VMware's kind of becoming a heavy lift. It's a perfect acquisition target for Broadcom and why the street loves this deal. And we titled this Breaking Analysis taming the VMware beast because VMware is a beast. It's ubiquitous. It's an epic software platform. EMC couldn't control it. Dell used it as a piggy bank, but really didn't change its operating model. Broadcom 100% will. Now one of the things that we get excited about is the future of systems architectures. We published a breaking analysis about a year ago, talking about AWS's secret weapon with Nitro and it's Annapurna custom Silicon efforts. Remember it acquired Annapurna for a measly $350 million. And we talked about how there's a new architecture and a new price performance curve emerging in the enterprise, driven by AWS and being followed by Microsoft, Google, Alibaba, a trend toward custom Silicon with the arm based Nitro and which is AWS's hypervisor and Nick strategy, enabling processor diversity with things like Graviton and Trainium and other diverse processors, really diversifying away from x86 and how this leads to much faster product cycles, faster tape out, lower costs. And our premise was that everyone in the data center is going to competes, is going to need a Nitro to be competitive long term. And customers are going to gravitate toward the most economically favorable platform. And as we describe the landscape with this chart, we've updated this for this Breaking Analysis and we'll come back to nitro in a moment. This is a two dimensional graphic with net score or spending momentum on the vertical axis and overlap formally known as market share or presence within the survey, pervasiveness that's on the horizontal axis. And we plot various companies and products and we've inserted VMware's net score breakdown. The granularity in those colored bars on the bottom right. Net score is essentially the green minus the red and a couple points on that. VMware in the latest survey has 6% new adoption. That's that lime green. It's interesting. The question Broadcom is going to ask is, how much does it cost you to acquire that 6% new. 32% of VMware customers in the survey are increasing spending, meaning they're increasing spending by 6% or more. That's the forest green. And the question Broadcom will dig into is what percent of that increased spend (chuckles) you're capturing is profitable spend? Whatever isn't profitable is going to be cut. Now that 52% gray area flat spending that is ripe for the Broadcom picking, that is the fat middle, and those customers are locked and loaded for future rent extraction via perpetual renewals and price increases. Only 8% of customers are spending less, that's the pinkish color and only 3% are defecting, that's the bright red. So very, very sticky profile. Perfect for Broadcom. Now the rest of the chart lays out some of the other competitor names and we've plotted many of the VMware products so you can see where they fit. They're all pretty respectable on the vertical axis, that's spending momentum. But what Broadcom wants is that core ESX vSphere base where we've superimposed the Broadcom logo. Broadcom doesn't care so much about spending momentum. It cares about profitability potential and then momentum. AWS and Azure, they're setting the pace in this business, in the upper right corner. Cisco very huge presence in the data center, as does Intel, they're not in the ETR survey, but we've superimposed them. Now, Intel of course, is in a dog fight within Nvidia, the Arm ecosystem, AMD, don't forget China. You see a Google cloud platform is in there. Oracle is also on the chart as well, somewhat lower on the vertical axis, but it doesn't have that spending momentum, but it has a big presence. And it owns a cloud as we've talked about many times and it's highly differentiated. It's got a strategy that allows it to differentiate from the pack. It's very financially driven. It knows how to extract lifetime value. Safra Catz operates in many ways, similar to what we're seeing from Hock Tan and company, different from a portfolio standpoint. Oracle's got the full stack, et cetera. So it's a different strategy. But very, very financially savvy. You could see IBM and IBM Red Hat in the mix and then Dell and HP. I want to come back to that momentarily to talk about where value is flowing. And then we plotted Nutanix, which with Acropolis could suck up some V tax avoidance business. Now notice Symantec and CA, relatively speaking in the ETR survey, they have horrible spending momentum. As we said, Broadcom doesn't care. Hock Tan is not going for growth at the expense of profitability. So we fully expect VMware to come down on the vertical axis over time and go up on the profit scale. Of course, ETR doesn't measure the profitability here. Now back to Nitro, VMware has this thing called Project Monterey. It's essentially their version of Nitro and will serve as their future architecture diversifying off x86 and accommodating alternative processors. And a much more efficient performance, price in energy consumption curve. Now, one of the things that we've advocated for, we said this about Dell and others, including VMware to take a page out of AWS and start developing custom Silicon to better integrate hardware and software and accelerate multi-cloud or what we call supercloud. That layer above the cloud, not just running on individual clouds. So this is all about efficiency and simplicity to own this space. And we've challenged organizations to do that because otherwise we feel like the cloud guys are just going to have consistently better costs, not necessarily price, but better cost structures, but it begs the question. What happens to Project Monterey? Hock Tan and Broadcom, they don't invest in something that is unproven and doesn't throw off free cash flow. If it's not going to pay off for years to come, they're probably not going to invest in it. And yet Project Monterey could help secure VMware's future in not only the data center, but at the edge and compete more effectively with cloud economics. So we think either Project Monterey is toast or the VMware team will knock on the door of one of Broadcom's 20 plus business units and say, guys, what if we work together with you to develop a version of Monterey that we can use and sell to everyone, it'd be the arms dealer to everyone and be competitive with the cloud and other players out there and create the de facto standard for data center performance and supercloud. I mean, it's not outrageously expensive to develop custom Silicon. Tesla is doing it for example. And Broadcom obviously is capable of doing it. It's got good relationships with semiconductor fabs. But I think this is going to be a tough sell to Broadcom, unless VMware can hide this in plain site and make it profitable fast, like AWS most likely has with Nitro and Graviton. Then Project Monterey and our pipe dream of alternatives to Nitro in the data center could happen but if it can't, it's going to be toast. Or maybe Intel or Nvidia will take it over or maybe the Monterey team will spin out a VMware and do a Pensando like deal and demonstrate the viability of this concept and then Broadcom will buy it back in 10 years. Here's a double click on that previous data that we put in tabular form. It's how the data on that previous slide was plotted. I just want to give you the background data here. So net score spending momentum is the sorted on the left. So it's sorted by net score in the left hand chart, that was the y-axis in the previous data set and then shared and or presence in the data set is the right hand chart. In other words, it's sorted on the right hand chart, right hand table. That right most column is shared and you can see it's sorted top to bottom, and that was the x-axis on the previous chart. The point is not many on the left hand side are above the 40% line. VMware Cloud on AWS is, it's expensive, so it's probably profitable and it's probably a keeper. We'll see about the rest of VMware's portfolio. Like what happens to Tanzu for example. On the right, we drew a red line, just arbitrarily at those companies and products with more than a hundred mentions in the survey, everything but Tanzu from VMware makes that cut. Again, this is no indication of profitability here, and that's what's going to matter to Broadcom. Now let's take a moment to address the question of Broadcom as a software company. What the heck do they know about software, right. Well, they're not dumb over there and they know how to run a business, but there is a strategic rationale to this move beyond just doing portfolios and extracting rents and cutting R&D, et cetera, et cetera. Why, for example, isn't Broadcom going after coming back to Dell or HPE, it could pick up for a lot less than VMware, and they got way more revenue than VMware. Well, it's obvious, software's more profitable of course, and Broadcom wants to move up the stack, but there's a trend going on, which Broadcom is very much in touch with. First, it sells to Dell and HPE and Cisco and all the OEM. so it's not going to disrupt that. But this chart shows that the value is flowing away from traditional servers and storage and networking to two places, merchant Silicon, which itself is morphing. Broadcom... We focus on the left hand side of this chart. Broadcom correctly believes that the world is shifting from a CPU centric center of gravity to a connectivity centric world. We've talked about this on theCUBE a lot. You should listen to Broadcom COO Charlie Kawwas speak about this. It's all that supporting infrastructure around the CPU where value is flowing, including of course, alternative GPUs and XPUs, and NPUs et cetera, that are sucking the value out of the traditional x86 architecture, offloading some of the security and networking and storage functions that traditionally have been done in x86 which are part of the waste right now in the data center. This is that shifting dynamic of Moore's law. Moore's law, not keeping pace. It's slowing down. It's slower relative to some of the combinatorial factors. When you add up in all the CPU and GPU and NPU and accelerators, et cetera. So we've talked about this a lot in Breaking Analysis episodes. So the value is shifting left within that middle circle. And it's shifting left within that left circle toward components, other than CPU, many of which Broadcom supplies. And then you go back to the middle, value is shifting from that middle section, that traditional data center up into hyperscale clouds, and then to the right toward infrastructure software to manage all that equipment in the data center and across clouds. And look Broadcom is an arms dealer. They simply sell to everyone, locking up key vectors of the value chain, cutting costs and raising prices. It's a pretty straightforward strategy, but not for the fate of heart. And Broadcom has become pretty good at it. Let's close with the customer feedback. I spoke with ETRs Eric Bradley this morning. He and I both reached out to VMware customers that we know and got their input. And here's a little snapshot of what they said. I'll just read this. Broadcom will be looking to invest in the core and divest of any underperforming assets, right on. It's just what we were saying. This doesn't bode well for future innovation, this is a CTO at a large travel company. Next comment, we're a Carbon Black customer. VMware didn't seem to interfere with Carbon Black, but now that we're concerned about short term disruption to their tech roadmap and long term, are they going to split and be sold off like Symantec was, this is a CISO at a large hospitality organization. Third comment, I got directly from a VMware practitioner, an IT director at a manufacturing firm. This individual said, moving off VMware would be very difficult for us. We have over 500 applications running on VMware, and it's really easy to manage. We're not going to move those into the cloud and we're worried Broadcom will raise prices and just extract rents. Last comment, we'll share as, Broadcom sees the cloud data center and IoT is their next revenue source. The VMware acquisition provides them immediate virtualization capabilities to support a lightweight IoT offering. Big concern for customers is what technology they will invest in and innovate, and which will be stripped off and sold. Interesting. I asked David Floyer to give me a back of napkin estimate for the following question. I said, David, if you're running mission critical applications on VMware, how much would it increase your operating cost moving those applications into the cloud? Or how much would it save? And he said, Dave, VMware's really easy to run. It can run any application pretty much anywhere, and you don't need an army of people to manage it. All your processes are tied to VMware, you're locked and loaded. Move that into the cloud and your operating cost would double by his estimates. Well, there you have it. Broadcom will pinpoint the optimal profit maximization strategy and raise prices to the point where customers say, you know what, we're still better off staying with VMware. And sadly, for many practitioners there aren't a lot of choices. You could move to the cloud and increase your cost for a lot of your applications. You could do it yourself with say Zen or OpenStack. Good luck with that. You could tap Nutanix. That will definitely work for some applications, but are you going to move your entire estate, your application portfolio to Nutanix? It's not likely. So you're going to pay more for VMware and that's the price you're going to pay for two decades of better IT. So our advice is get out ahead of this, do an application portfolio assessment. If you can move apps to the cloud for less, and you haven't yet, do it, start immediately. Definitely give Nutanix a call, but going to have to be selective as to what you actually can move, forget porting to OpenStack, or do it yourself Hypervisor, don't even go there. And start building new cloud native apps where it makes sense and let the VMware stuff go into manage decline. Let certain apps just die through attrition, shift your development resources to innovation in the cloud and build a brick wall around the stable apps with VMware. As Paul Maritz, the former CEO of VMware said, "We are building the software mainframe". Now marketing guys got a hold of that and said, Paul, stop saying that, but it's true. And with Broadcom's help that day we'll soon be here. That's it for today. Thanks to Stephanie Chan who helps research our topics for Breaking Analysis. Alex Myerson does the production and he also manages the Breaking Analysis podcast. Kristen Martin and Cheryl Knight help get the word out on social and thanks to Rob Hof, who was our editor in chief at siliconangle.com. Remember, these episodes are all available as podcast, wherever you listen, just search Breaking Analysis podcast. Check out ETRs website at etr.ai for all the survey action. We publish a full report every week on wikibon.com and siliconangle.com. You can email me directly at david.vellante@siliconangle.com. You can DM me at DVellante or comment on our LinkedIn posts. This is Dave Vellante for theCUBE Insights powered by ETR. Have a great week, stay safe, be well. And we'll see you next time. (upbeat music)

Published Date : May 28 2022

SUMMARY :

This is Breaking Analysis and promises that the acquisition

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavidPERSON

0.99+

Stephanie ChanPERSON

0.99+

CiscoORGANIZATION

0.99+

Dave VellantePERSON

0.99+

SymantecORGANIZATION

0.99+

Rob HofPERSON

0.99+

Alex MyersonPERSON

0.99+

April 22DATE

0.99+

HPORGANIZATION

0.99+

David FloyerPERSON

0.99+

AWSORGANIZATION

0.99+

DellORGANIZATION

0.99+

OracleORGANIZATION

0.99+

HPEORGANIZATION

0.99+

Paul MaritzPERSON

0.99+

BroadcomORGANIZATION

0.99+

VMwareORGANIZATION

0.99+

NvidiaORGANIZATION

0.99+

Eric BradleyPERSON

0.99+

April 21DATE

0.99+

NSXORGANIZATION

0.99+

IBMORGANIZATION

0.99+

Cheryl KnightPERSON

0.99+

DavePERSON

0.99+

JanuaryDATE

0.99+

$61 billionQUANTITY

0.99+

8.5 billionQUANTITY

0.99+

$2.1 billionQUANTITY

0.99+

MicrosoftORGANIZATION

0.99+

Palo AltoLOCATION

0.99+

EMCORGANIZATION

0.99+

AcropolisORGANIZATION

0.99+

Kristen MartinPERSON

0.99+

90%QUANTITY

0.99+

6%QUANTITY

0.99+

4.7 billionQUANTITY

0.99+

GoogleORGANIZATION

0.99+

Hock TanORGANIZATION

0.99+

60%QUANTITY

0.99+

44%QUANTITY

0.99+

40 dayQUANTITY

0.99+

61%QUANTITY

0.99+

8 billionQUANTITY

0.99+

Michael DellPERSON

0.99+

52%QUANTITY

0.99+

47%QUANTITY

0.99+

Analyst Power Panel: Future of Database Platforms


 

(upbeat music) >> Once a staid and boring business dominated by IBM, Oracle, and at the time newcomer Microsoft, along with a handful of wannabes, the database business has exploded in the past decade and has become a staple of financial excellence, customer experience, analytic advantage, competitive strategy, growth initiatives, visualizations, not to mention compliance, security, privacy and dozens of other important use cases and initiatives. And on the vendor's side of the house, we've seen the rapid ascendancy of cloud databases. Most notably from Snowflake, whose massive raises leading up to its IPO in late 2020 sparked a spate of interest and VC investment in the separation of compute and storage and all that elastic resource stuff in the cloud. The company joined AWS, Azure and Google to popularize cloud databases, which have become a linchpin of competitive strategies for technology suppliers. And if I get you to put your data in my database and in my cloud, and I keep innovating, I'm going to build a moat and achieve a hugely attractive lifetime customer value in a really amazing marginal economics dynamic that is going to fund my future. And I'll be able to sell other adjacent services, not just compute and storage, but machine learning and inference and training and all kinds of stuff, dozens of lucrative cloud offerings. Meanwhile, the database leader, Oracle has invested massive amounts of money to maintain its lead. It's building on its position as the king of mission critical workloads and making typical Oracle like claims against the competition. Most were recently just yesterday with another announcement around MySQL HeatWave. An extension of MySQL that is compatible with on-premises MySQLs and is setting new standards in price performance. We're seeing a dramatic divergence in strategies across the database spectrum. On the far left, we see Amazon with more than a dozen database offerings each with its own API and primitives. AWS is taking a right tool for the right job approach, often building on open source platforms and creating services that it offers to customers to solve very specific problems for developers. And on the other side of the line, we see Oracle, which is taking the Swiss Army Knife approach, converging database functionality, enabling analytic and transactional workloads to run in the same data store, eliminating the need to ETL, at the same time adding capabilities into its platform like automation and machine learning. Welcome to this database Power Panel. My name is Dave Vellante, and I'm so excited to bring together some of the most respected industry analyst in the community. Today we're going to assess what's happening in the market. We're going to dig into the competitive landscape and explore the future of database and database platforms and decode what it means to customers. Let me take a moment to welcome our guest analyst today. Matt Kimball is a vice president and principal analysts at Moor Insights and Strategy, Matt. He knows products, he knows industry, he's got real world IT expertise, and he's got all the angles 25 plus years of experience in all kinds of great background. Matt, welcome. Thanks very much for coming on theCUBE. Holgar Mueller, friend of theCUBE, vice president and principal analyst at Constellation Research in depth knowledge on applications, application development, knows developers. He's worked at SAP and Oracle. And then Bob Evans is Chief Content Officer and co-founder of the Acceleration Economy, founder and principle of Cloud Wars. Covers all kinds of industry topics and great insights. He's got awesome videos, these three minute hits. If you haven't seen 'em, checking them out, knows cloud companies, his Cloud Wars minutes are fantastic. And then of course, Marc Staimer is the founder of Dragon Slayer Research. A frequent contributor and guest analyst at Wikibon. He's got a wide ranging knowledge across IT products, knows technology really well, can go deep. And then of course, Ron Westfall, Senior Analyst and Director Research Director at Futurum Research, great all around product trends knowledge. Can take, you know, technical dives and really understands competitive angles, knows Redshift, Snowflake, and many others. Gents, thanks so much for taking the time to join us in theCube today. It's great to have you on, good to see you. >> Good to be here, thanks for having us. >> Thanks, Dave. >> All right, let's start with an around the horn and briefly, if each of you would describe, you know, anything I missed in your areas of expertise and then you answer the following question, how would you describe the state of the database, state of platform market today? Matt Kimball, please start. >> Oh, I hate going first, but that it's okay. How would I describe the world today? I would just in one sentence, I would say, I'm glad I'm not in IT anymore, right? So, you know, it is a complex and dangerous world out there. And I don't envy IT folks I'd have to support, you know, these modernization and transformation efforts that are going on within the enterprise. It used to be, you mentioned it, Dave, you would argue about IBM versus Oracle versus this newcomer in the database space called Microsoft. And don't forget Sybase back in the day, but you know, now it's not just, which SQL vendor am I going to go with? It's all of these different, divergent data types that have to be taken, they have to be merged together, synthesized. And somehow I have to do that cleanly and use this to drive strategic decisions for my business. That is not easy. So, you know, you have to look at it from the perspective of the business user. It's great for them because as a DevOps person, or as an analyst, I have so much flexibility and I have this thing called the cloud now where I can go get services immediately. As an IT person or a DBA, I am calling up prevention hotlines 24 hours a day, because I don't know how I'm going to be able to support the business. And as an Oracle or as an Oracle or a Microsoft or some of the cloud providers and cloud databases out there, I'm licking my chops because, you know, my market is expanding and expanding every day. >> Great, thank you for that, Matt. Holgar, how do you see the world these days? You always have a good perspective on things, share with us. >> Well, I think it's the best time to be in IT, I'm not sure what Matt is talking about. (laughing) It's easier than ever, right? The direction is going to cloud. Kubernetes has won, Google has the best AI for now, right? So things are easier than ever before. You made commitments for five plus years on hardware, networking and so on premise, and I got gray hair about worrying it was the wrong decision. No, just kidding. But you kind of both sides, just to be controversial, make it interesting, right. So yeah, no, I think the interesting thing specifically with databases, right? We have this big suite versus best of breed, right? Obviously innovation, like you mentioned with Snowflake and others happening in the cloud, the cloud vendors server, where to save of their databases. And then we have one of the few survivors of the old guard as Evans likes to call them is Oracle who's doing well, both their traditional database. And now, which is really interesting, remarkable from that because Oracle it was always the power of one, have one database, add more to it, make it what I call the universal database. And now this new HeatWave offering is coming and MySQL open source side. So they're getting the second (indistinct) right? So it's interesting that older players, traditional players who still are in the market are diversifying their offerings. Something we don't see so much from the traditional tools from Oracle on the Microsoft side or the IBM side these days. >> Great, thank you Holgar. Bob Evans, you've covered this business for a while. You've worked at, you know, a number of different outlets and companies and you cover the competition, how do you see things? >> Dave, you know, the other angle to look at this from is from the customer side, right? You got now CEOs who are any sort of business across all sorts of industries, and they understand that their future success is going to be dependent on their ability to become a digital company, to understand data, to use it the right way. So as you outline Dave, I think in your intro there, it is a fantastic time to be in the database business. And I think we've got a lot of new buyers and influencers coming in. They don't know all this history about IBM and Microsoft and Oracle and you know, whoever else. So I think they're going to take a long, hard look, Dave, at some of these results and who is able to help these companies not serve up the best technology, but who's going to be able to help their business move into the digital future. So it's a fascinating time now from every perspective. >> Great points, Bob. I mean, digital transformation has gone from buzzword to imperative. Mr. Staimer, how do you see things? >> I see things a little bit differently than my peers here in that I see the database market being segmented. There's all the different kinds of databases that people are looking at for different kinds of data, and then there is databases in the cloud. And so database as cloud service, I view very differently than databases because the traditional way of implementing a database is changing and it's changing rapidly. So one of the premises that you stated earlier on was that you viewed Oracle as a database company. I don't view Oracle as a database company anymore. I view Oracle as a cloud company that happens to have a significant expertise and specialty in databases, and they still sell database software in the traditional way, but ultimately they're a cloud company. So database cloud services from my point of view is a very distinct market from databases. >> Okay, well, you gave us some good meat on the bone to talk about that. Last but not least-- >> Dave did Marc, just say Oracle's a cloud company? >> Yeah. (laughing) Take away the database, it would be interesting to have that discussion, but let's let Ron jump in here. Ron, give us your take. >> That's a great segue. I think it's truly the era of the cloud database, that's something that's rising. And the key trends that come with it include for example, elastic scaling. That is the ability to scale on demand, to right size workloads according to customer requirements. And also I think it's going to increase the prioritization for high availability. That is the player who can provide the highest availability is going to have, I think, a great deal of success in this emerging market. And also I anticipate that there will be more consolidation across platforms in order to enable cost savings for customers, and that's something that's always going to be important. And I think we'll see more of that over the horizon. And then finally security, security will be more important than ever. We've seen a spike (indistinct), we certainly have seen geopolitical originated cybersecurity concerns. And as a result, I see database security becoming all the more important. >> Great, thank you. Okay, let me share some data with you guys. I'm going to throw this at you and see what you think. We have this awesome data partner called Enterprise Technology Research, ETR. They do these quarterly surveys and each period with dozens of industry segments, they track clients spending, customer spending. And this is the database, data warehouse sector okay so it's taxonomy, so it's not perfect, but it's a big kind of chunk. They essentially ask customers within a category and buy a specific vendor, you're spending more or less on the platform? And then they subtract the lesses from the mores and they derive a metric called net score. It's like NPS, it's a measure of spending velocity. It's more complicated and granular than that, but that's the basis and that's the vertical axis. The horizontal axis is what they call market share, it's not like IDC market share, it's just pervasiveness in the data set. And so there are a couple of things that stand out here and that we can use as reference point. The first is the momentum of Snowflake. They've been off the charts for many, many, for over two years now, anything above that dotted red line, that 40%, is considered by ETR to be highly elevated and Snowflake's even way above that. And I think it's probably not sustainable. We're going to see in the next April survey, next month from those guys, when it comes out. And then you see AWS and Microsoft, they're really pervasive on the horizontal axis and highly elevated, Google falls behind them. And then you got a number of well funded players. You got Cockroach Labs, Mongo, Redis, MariaDB, which of course is a fork on MySQL started almost as protest at Oracle when they acquired Sun and they got MySQL and you can see the number of others. Now Oracle who's the leading database player, despite what Marc Staimer says, we know, (laughs) and they're a cloud player (laughing) who happens to be a leading database player. They dominate in the mission critical space, we know that they're the king of that sector, but you can see here that they're kind of legacy, right? They've been around a long time, they get a big install base. So they don't have the spending momentum on the vertical axis. Now remember this is, just really this doesn't capture spending levels, so that understates Oracle but nonetheless. So it's not a complete picture like SAP for instance is not in here, no Hana. I think people are actually buying it, but it doesn't show up here, (laughs) but it does give an indication of momentum and presence. So Bob Evans, I'm going to start with you. You've commented on many of these companies, you know, what does this data tell you? >> Yeah, you know, Dave, I think all these compilations of things like that are interesting, and that folks at ETR do some good work, but I think as you said, it's a snapshot sort of a two-dimensional thing of a rapidly changing, three dimensional world. You know, the incidents at which some of these companies are mentioned versus the volume that happens. I think it's, you know, with Oracle and I'm not going to declare my religious affiliation, either as cloud company or database company, you know, they're all of those things and more, and I think some of our old language of how we classify companies is just not relevant anymore. But I want to ask too something in here, the autonomous database from Oracle, nobody else has done that. So either Oracle is crazy, they've tried out a technology that nobody other than them is interested in, or they're onto something that nobody else can match. So to me, Dave, within Oracle, trying to identify how they're doing there, I would watch autonomous database growth too, because right, it's either going to be a big plan and it breaks through, or it's going to be caught behind. And the Snowflake phenomenon as you mentioned, that is a rare, rare bird who comes up and can grow 100% at a billion dollar revenue level like that. So now they've had a chance to come in, scare the crap out of everybody, rock the market with something totally new, the data cloud. Will the bigger companies be able to catch up and offer a compelling alternative, or is Snowflake going to continue to be this outlier. It's a fascinating time. >> Really, interesting points there. Holgar, I want to ask you, I mean, I've talked to certainly I'm sure you guys have too, the founders of Snowflake that came out of Oracle and they actually, they don't apologize. They say, "Hey, we not going to do all that complicated stuff that Oracle does, we were trying to keep it real simple." But at the same time, you know, they don't do sophisticated workload management. They don't do complex joints. They're kind of relying on the ecosystems. So when you look at the data like this and the various momentums, and we talked about the diverging strategies, what does this say to you? >> Well, it is a great point. And I think Snowflake is an example how the cloud can turbo charge a well understood concept in this case, the data warehouse, right? You move that and you find steroids and you see like for some players who've been big in data warehouse, like Sentara Data, as an example, here in San Diego, what could have been for them right in that part. The interesting thing, the problem though is the cloud hides a lot of complexity too, which you can scale really well as you attract lots of customers to go there. And you don't have to build things like what Bob said, right? One of the fascinating things, right, nobody's answering Oracle on the autonomous database. I don't think is that they cannot, they just have different priorities or the database is not such a priority. I would dare to say that it's for IBM and Microsoft right now at the moment. And the cloud vendors, you just hide that right through scripts and through scale because you support thousands of customers and you can deal with a little more complexity, right? It's not against them. Whereas if you have to run it yourself, very different story, right? You want to have the autonomous parts, you want to have the powerful tools to do things. >> Thank you. And so Matt, I want to go to you, you've set up front, you know, it's just complicated if you're in IT, it's a complicated situation and you've been on the customer side. And if you're a buyer, it's obviously, it's like Holgar said, "Cloud's supposed to make this stuff easier, but the simpler it gets the more complicated gets." So where do you place your bets? Or I guess more importantly, how do you decide where to place your bets? >> Yeah, it's a good question. And to what Bob and Holgar said, you know, the around autonomous database, I think, you know, part of, as I, you know, play kind of armchair psychologist, if you will, corporate psychologists, I look at what Oracle is doing and, you know, databases where they've made their mark and it's kind of, that's their strong position, right? So it makes sense if you're making an entry into this cloud and you really want to kind of build momentum, you go with what you're good at, right? So that's kind of the strength of Oracle. Let's put a lot of focus on that. They do a lot more than database, don't get me wrong, but you know, I'm going to short my strength and then kind of pivot from there. With regards to, you know, what IT looks at and what I would look at you know as an IT director or somebody who is, you know, trying to consume services from these different cloud providers. First and foremost, I go with what I know, right? Let's not forget IT is a conservative group. And when we look at, you know, all the different permutations of database types out there, SQL, NoSQL, all the different types of NoSQL, those are largely being deployed by business users that are looking for agility or businesses that are looking for agility. You know, the reason why MongoDB is so popular is because of DevOps, right? It's a great platform to develop on and that's where it kind of gained its traction. But as an IT person, I want to go with what I know, where my muscle memory is, and that's my first position. And so as I evaluate different cloud service providers and cloud databases, I look for, you know, what I know and what I've invested in and where my muscle memory is. Is there enough there and do I have enough belief that that company or that service is going to be able to take me to, you know, where I see my organization in five years from a data management perspective, from a business perspective, are they going to be there? And if they are, then I'm a little bit more willing to make that investment, but it is, you know, if I'm kind of going in this blind or if I'm cloud native, you know, that's where the Snowflakes of the world become very attractive to me. >> Thank you. So Marc, I asked Andy Jackson in theCube one time, you have all these, you know, data stores and different APIs and primitives and you know, very granular, what's the strategy there? And he said, "Hey, that allows us as the market changes, it allows us to be more flexible. If we start building abstractions layers, it's harder for us." I think also it was not a good time to market advantage, but let me ask you, I described earlier on that spectrum from AWS to Oracle. We just saw yesterday, Oracle announced, I think the third major enhancement in like 15 months to MySQL HeatWave, what do you make of that announcement? How do you think it impacts the competitive landscape, particularly as it relates to, you know, converging transaction and analytics, eliminating ELT, I know you have some thoughts on this. >> So let me back up for a second and defend my cloud statement about Oracle for a moment. (laughing) AWS did a great job in developing the cloud market in general and everything in the cloud market. I mean, I give them lots of kudos on that. And a lot of what they did is they took open source software and they rent it to people who use their cloud. So I give 'em lots of credit, they dominate the market. Oracle was late to the cloud market. In fact, they actually poo-pooed it initially, if you look at some of Larry Ellison's statements, they said, "Oh, it's never going to take off." And then they did 180 turn, and they said, "Oh, we're going to embrace the cloud." And they really have, but when you're late to a market, you've got to be compelling. And this ties into the announcement yesterday, but let's deal with this compelling. To be compelling from a user point of view, you got to be twice as fast, offer twice as much functionality, at half the cost. That's generally what compelling is that you're going to capture market share from the leaders who established the market. It's very difficult to capture market share in a new market for yourself. And you're right. I mean, Bob was correct on this and Holgar and Matt in which you look at Oracle, and they did a great job of leveraging their database to move into this market, give 'em lots of kudos for that too. But yesterday they announced, as you said, the third innovation release and the pace is just amazing of what they're doing on these releases on HeatWave that ties together initially MySQL with an integrated builtin analytics engine, so a data warehouse built in. And then they added automation with autopilot, and now they've added machine learning to it, and it's all in the same service. It's not something you can buy and put on your premise unless you buy their cloud customers stuff. But generally it's a cloud offering, so it's compellingly better as far as the integration. You don't buy multiple services, you buy one and it's lower cost than any of the other services, but more importantly, it's faster, which again, give 'em credit for, they have more integration of a product. They can tie things together in a way that nobody else does. There's no additional services, ETL services like Glue and AWS. So from that perspective, they're getting better performance, fewer services, lower cost. Hmm, they're aiming at the compelling side again. So from a customer point of view it's compelling. Matt, you wanted to say something there. >> Yeah, I want to kind of, on what you just said there Marc, and this is something I've found really interesting, you know. The traditional way that you look at software and, you know, purchasing software and IT is, you look at either best of breed solutions and you have to work on the backend to integrate them all and make them all work well. And generally, you know, the big hit against the, you know, we have one integrated offering is that, you lose capability or you lose depth of features, right. And to what you were saying, you know, that's the thing I found interesting about what Oracle is doing is they're building in depth as they kind of, you know, build that service. It's not like you're losing a lot of capabilities, because you're going to one integrated service versus having to use A versus B versus C, and I love that idea. >> You're right. Yeah, not only you're not losing, but you're gaining functionality that you can't get by integrating a lot of these. I mean, I can take Snowflake and integrate it in with machine learning, but I also have to integrate in with a transactional database. So I've got to have connectors between all of this, which means I'm adding time. And what it comes down to at the end of the day is expertise, effort, time, and cost. And so what I see the difference from the Oracle announcements is they're aiming at reducing all of that by increasing performance as well. Correct me if I'm wrong on that but that's what I saw at the announcement yesterday. >> You know, Marc, one thing though Marc, it's funny you say that because I started out saying, you know, I'm glad I'm not 19 anymore. And the reason is because of exactly what you said, it's almost like there's a pseudo level of witchcraft that's required to support the modern data environment right in the enterprise. And I need simpler faster, better. That's what I need, you know, I am no longer wearing pocket protectors. I have turned from, you know, break, fix kind of person, to you know, business consultant. And I need that point and click simplicity, but I can't sacrifice, you know, a depth of features of functionality on the backend as I play that consultancy role. >> So, Ron, I want to bring in Ron, you know, it's funny. So Matt, you mentioned Mongo, I often and say, if Oracle mentions you, you're on the map. We saw them yesterday Ron, (laughing) they hammered RedShifts auto ML, they took swipes at Snowflake, a little bit of BigQuery. What were your thoughts on that? Do you agree with what these guys are saying in terms of HeatWaves capabilities? >> Yes, Dave, I think that's an excellent question. And fundamentally I do agree. And the question is why, and I think it's important to know that all of the Oracle data is backed by the fact that they're using benchmarks. For example, all of the ML and all of the TPC benchmarks, including all the scripts, all the configs and all the detail are posted on GitHub. So anybody can look at these results and they're fully transparent and replicate themselves. If you don't agree with this data, then by all means challenge it. And we have not really seen that in all of the new updates in HeatWave over the last 15 months. And as a result, when it comes to these, you know, fundamentals in looking at the competitive landscape, which I think gives validity to outcomes such as Oracle being able to deliver 4.8 times better price performance than Redshift. As well as for example, 14.4 better price performance than Snowflake, and also 12.9 better price performance than BigQuery. And so that is, you know, looking at the quantitative side of things. But again, I think, you know, to Marc's point and to Matt's point, there are also qualitative aspects that clearly differentiate the Oracle proposition, from my perspective. For example now the MySQL HeatWave ML capabilities are native, they're built in, and they also support things such as completion criteria. And as a result, that enables them to show that hey, when you're using Redshift ML for example, you're having to also use their SageMaker tool and it's running on a meter. And so, you know, nobody really wants to be running on a meter when, you know, executing these incredibly complex tasks. And likewise, when it comes to Snowflake, they have to use a third party capability. They don't have the built in, it's not native. So the user, to the point that he's having to spend more time and it increases complexity to use auto ML capabilities across the Snowflake platform. And also, I think it also applies to other important features such as data sampling, for example, with the HeatWave ML, it's intelligent sampling that's being implemented. Whereas in contrast, we're seeing Redshift using random sampling. And again, Snowflake, you're having to use a third party library in order to achieve the same capabilities. So I think the differentiation is crystal clear. I think it definitely is refreshing. It's showing that this is where true value can be assigned. And if you don't agree with it, by all means challenge the data. >> Yeah, I want to come to the benchmarks in a minute. By the way, you know, the gentleman who's the Oracle's architect, he did a great job on the call yesterday explaining what you have to do. I thought that was quite impressive. But Bob, I know you follow the financials pretty closely and on the earnings call earlier this month, Ellison said that, "We're going to see HeatWave on AWS." And the skeptic in me said, oh, they must not be getting people to come to OCI. And then they, you remember this chart they showed yesterday that showed the growth of HeatWave on OCI. But of course there was no data on there, it was just sort of, you know, lines up and to the right. So what do you guys think of that? (Marc laughs) Does it signal Bob, desperation by Oracle that they can't get traction on OCI, or is it just really a smart tame expansion move? What do you think? >> Yeah, Dave, that's a great question. You know, along the way there, and you know, just inside of that was something that said Ellison said on earnings call that spoke to a different sort of philosophy or mindset, almost Marc, where he said, "We're going to make this multicloud," right? With a lot of their other cloud stuff, if you wanted to use any of Oracle's cloud software, you had to use Oracle's infrastructure, OCI, there was no other way out of it. But this one, but I thought it was a classic Ellison line. He said, "Well, we're making this available on AWS. We're making this available, you know, on Snowflake because we're going after those users. And once they see what can be done here." So he's looking at it, I guess you could say, it's a concession to customers because they want multi-cloud. The other way to look at it, it's a hunting expedition and it's one of those uniquely I think Oracle ways. He said up front, right, he doesn't say, "Well, there's a big market, there's a lot for everybody, we just want on our slice." Said, "No, we are going after Amazon, we're going after Redshift, we're going after Aurora. We're going after these users of Snowflake and so on." And I think it's really fairly refreshing these days to hear somebody say that, because now if I'm a buyer, I can look at that and say, you know, to Marc's point, "Do they measure up, do they crack that threshold ceiling? Or is this just going to be more pain than a few dollars savings is worth?" But you look at those numbers that Ron pointed out and that we all saw in that chart. I've never seen Dave, anything like that. In a substantive market, a new player coming in here, and being able to establish differences that are four, seven, eight, 10, 12 times better than competition. And as new buyers look at that, they're going to say, "What the hell are we doing paying, you know, five times more to get a poor result? What's going on here?" So I think this is going to rattle people and force a harder, closer look at what these alternatives are. >> I wonder if the guy, thank you. Let's just skip ahead of the benchmarks guys, bring up the next slide, let's skip ahead a little bit here, which talks to the benchmarks and the benchmarking if we can. You know, David Floyer, the sort of semiretired, you know, Wikibon analyst said, "Dave, this is going to force Amazon and others, Snowflake," he said, "To rethink actually how they architect databases." And this is kind of a compilation of some of the data that they shared. They went after Redshift mostly, (laughs) but also, you know, as I say, Snowflake, BigQuery. And, like I said, you can always tell which companies are doing well, 'cause Oracle will come after you, but they're on the radar here. (laughing) Holgar should we take this stuff seriously? I mean, or is it, you know, a grain salt? What are your thoughts here? >> I think you have to take it seriously. I mean, that's a great question, great point on that. Because like Ron said, "If there's a flaw in a benchmark, we know this database traditionally, right?" If anybody came up that, everybody will be, "Oh, you put the wrong benchmark, it wasn't audited right, let us do it again," and so on. We don't see this happening, right? So kudos to Oracle to be aggressive, differentiated, and seem to having impeccable benchmarks. But what we really see, I think in my view is that the classic and we can talk about this in 100 years, right? Is the suite versus best of breed, right? And the key question of the suite, because the suite's always slower, right? No matter at which level of the stack, you have the suite, then the best of breed that will come up with something new, use a cloud, put the data warehouse on steroids and so on. The important thing is that you have to assess as a buyer what is the speed of my suite vendor. And that's what you guys mentioned before as well, right? Marc said that and so on, "Like, this is a third release in one year of the HeatWave team, right?" So everybody in the database open source Marc, and there's so many MySQL spinoffs to certain point is put on shine on the speed of (indistinct) team, putting out fundamental changes. And the beauty of that is right, is so inherent to the Oracle value proposition. Larry's vision of building the IBM of the 21st century, right from the Silicon, from the chip all the way across the seven stacks to the click of the user. And that what makes the database what Rob was saying, "Tied to the OCI infrastructure," because designed for that, it runs uniquely better for that, that's why we see the cross connect to Microsoft. HeatWave so it's different, right? Because HeatWave runs on cheap hardware, right? Which is the breadth and butter 886 scale of any cloud provider, right? So Oracle probably needs it to scale OCI in a different category, not the expensive side, but also allow us to do what we said before, the multicloud capability, which ultimately CIOs really want, because data gravity is real, you want to operate where that is. If you have a fast, innovative offering, which gives you more functionality and the R and D speed is really impressive for the space, puts away bad results, then it's a good bet to look at. >> Yeah, so you're saying, that we versus best of breed. I just want to sort of play back then Marc a comment. That suite versus best of breed, there's always been that trade off. If I understand you Holgar you're saying that somehow Oracle has magically cut through that trade off and they're giving you the best of both. >> It's the developing velocity, right? The provision of important features, which matter to buyers of the suite vendor, eclipses the best of breed vendor, then the best of breed vendor is in the hell of a potential job. >> Yeah, go ahead Marc. >> Yeah and I want to add on what Holgar just said there. I mean the worst job in the data center is data movement, moving the data sucks. I don't care who you are, nobody likes it. You never get any kudos for doing it well, and you always get the ah craps, when things go wrong. So it's in- >> In the data center Marc all the time across data centers, across cloud. That's where the bleeding comes. >> It's right, you get beat up all the time. So nobody likes to move data, ever. So what you're looking at with what they announce with HeatWave and what I love about HeatWave is it doesn't matter when you started with it, you get all the additional features they announce it's part of the service, all the time. But they don't have to move any of the data. You want to analyze the data that's in your transactional, MySQL database, it's there. You want to do machine learning models, it's there, there's no data movement. The data movement is the key thing, and they just eliminate that, in so many ways. And the other thing I wanted to talk about is on the benchmarks. As great as those benchmarks are, they're really conservative 'cause they're underestimating the cost of that data movement. The ETLs, the other services, everything's left out. It's just comparing HeatWave, MySQL cloud service with HeatWave versus Redshift, not Redshift and Aurora and Glue, Redshift and Redshift ML and SageMaker, it's just Redshift. >> Yeah, so what you're saying is what Oracle's doing is saying, "Okay, we're going to run MySQL HeatWave benchmarks on analytics against Redshift, and then we're going to run 'em in transaction against Aurora." >> Right. >> But if you really had to look at what you would have to do with the ETL, you'd have to buy two different data stores and all the infrastructure around that, and that goes away so. >> Due to the nature of the competition, they're running narrow best of breed benchmarks. There is no suite level benchmark (Dave laughs) because they created something new. >> Well that's you're the earlier point they're beating best of breed with a suite. So that's, I guess to Floyer's earlier point, "That's going to shake things up." But I want to come back to Bob Evans, 'cause I want to tap your Cloud Wars mojo before we wrap. And line up the horses, you got AWS, you got Microsoft, Google and Oracle. Now they all own their own cloud. Snowflake, Mongo, Couchbase, Redis, Cockroach by the way they're all doing very well. They run in the cloud as do many others. I think you guys all saw the Andreessen, you know, commentary from Sarah Wang and company, to talk about the cost of goods sold impact of cloud. So owning your own cloud has to be an advantage because other guys like Snowflake have to pay cloud vendors and negotiate down versus having the whole enchilada, Safra Catz's dream. Bob, how do you think this is going to impact the market long term? >> Well, Dave, that's a great question about, you know, how this is all going to play out. If I could mention three things, one, Frank Slootman has done a fantastic job with Snowflake. Really good company before he got there, but since he's been there, the growth mindset, the discipline, the rigor and the phenomenon of what Snowflake has done has forced all these bigger companies to really accelerate what they're doing. And again, it's an example of how this intense competition makes all the different cloud vendors better and it provides enormous value to customers. Second thing I wanted to mention here was look at the Adam Selipsky effect at AWS, took over in the middle of May, and in Q2, Q3, Q4, AWS's growth rate accelerated. And in each of those three quotas, they grew faster than Microsoft's cloud, which has not happened in two or three years, so they're closing the gap on Microsoft. The third thing, Dave, in this, you know, incredibly intense competitive nature here, look at Larry Ellison, right? He's got his, you know, the product that for the last two or three years, he said, "It's going to help determine the future of the company, autonomous database." You would think he's the last person in the world who's going to bring in, you know, in some ways another database to think about there, but he has put, you know, his whole effort and energy behind this. The investments Oracle's made, he's riding this horse really hard. So it's not just a technology achievement, but it's also an investment priority for Oracle going forward. And I think it's going to form a lot of how they position themselves to this new breed of buyer with a new type of need and expectations from IT. So I just think the next two or three years are going to be fantastic for people who are lucky enough to get to do the sorts of things that we do. >> You know, it's a great point you made about AWS. Back in 2018 Q3, they were doing about 7.4 billion a quarter and they were growing in the mid forties. They dropped down to like 29% Q4, 2020, I'm looking at the data now. They popped back up last quarter, last reported quarter to 40%, that is 17.8 billion, so they more doubled and they accelerated their growth rate. (laughs) So maybe that pretends, people are concerned about Snowflake right now decelerating growth. You know, maybe that's going to be different. By the way, I think Snowflake has a different strategy, the whole data cloud thing, data sharing. They're not trying to necessarily take Oracle head on, which is going to make this next 10 years, really interesting. All right, we got to go, last question. 30 seconds or less, what can we expect from the future of data platforms? Matt, please start. >> I have to go first again? You're killing me, Dave. (laughing) In the next few years, I think you're going to see the major players continue to meet customers where they are, right. Every organization, every environment is, you know, kind of, we use these words bespoke in Snowflake, pardon the pun, but Snowflakes, right. But you know, they're all opinionated and unique and what's great as an IT person is, you know, there is a service for me regardless of where I am on my journey, in my data management journey. I think you're going to continue to see with regards specifically to Oracle, I think you're going to see the company continue along this path of being all things to all people, if you will, or all organizations without sacrificing, you know, kind of richness of features and sacrificing who they are, right. Look, they are the data kings, right? I mean, they've been a database leader for an awful long time. I don't see that going away any time soon and I love the innovative spirit they've brought in with HeatWave. >> All right, great thank you. Okay, 30 seconds, Holgar go. >> Yeah, I mean, the interesting thing that we see is really that trend to autonomous as Oracle calls or self-driving software, right? So the database will have to do more things than just store the data and support the DVA. It will have to show it can wide insights, the whole upside, it will be able to show to one machine learning. We haven't really talked about that. How in just exciting what kind of use case we can get of machine learning running real time on data as it changes, right? So, which is part of the E5 announcement, right? So we'll see more of that self-driving nature in the database space. And because you said we can promote it, right. Check out my report about HeatWave latest release where I post in oracle.com. >> Great, thank you for that. And Bob Evans, please. You're great at quick hits, hit us. >> Dave, thanks. I really enjoyed getting to hear everybody's opinion here today and I think what's going to happen too. I think there's a new generation of buyers, a new set of CXO influencers in here. And I think what Oracle's done with this, MySQL HeatWave, those benchmarks that Ron talked about so eloquently here that is going to become something that forces other companies, not just try to get incrementally better. I think we're going to see a massive new wave of innovation to try to play catch up. So I really take my hat off to Oracle's achievement from going to, push everybody to be better. >> Excellent. Marc Staimer, what do you say? >> Sure, I'm going to leverage off of something Matt said earlier, "Those companies that are going to develop faster, cheaper, simpler products that are going to solve customer problems, IT problems are the ones that are going to succeed, or the ones who are going to grow. The one who are just focused on the technology are going to fall by the wayside." So those who can solve more problems, do it more elegantly and do it for less money are going to do great. So Oracle's going down that path today, Snowflake's going down that path. They're trying to do more integration with third party, but as a result, aiming at that simpler, faster, cheaper mentality is where you're going to continue to see this market go. >> Amen brother Marc. >> Thank you, Ron Westfall, we'll give you the last word, bring us home. >> Well, thank you. And I'm loving it. I see a wave of innovation across the entire cloud database ecosystem and Oracle is fueling it. We are seeing it, with the native integration of auto ML capabilities, elastic scaling, lower entry price points, et cetera. And this is just going to be great news for buyers, but also developers and increased use of open APIs. And so I think that is really the key takeaways. Just we're going to see a lot of great innovation on the horizon here. >> Guys, fantastic insights, one of the best power panel as I've ever done. Love to have you back. Thanks so much for coming on today. >> Great job, Dave, thank you. >> All right, and thank you for watching. This is Dave Vellante for theCube and we'll see you next time. (soft music)

Published Date : Mar 31 2022

SUMMARY :

and co-founder of the and then you answer And don't forget Sybase back in the day, the world these days? and others happening in the cloud, and you cover the competition, and Oracle and you know, whoever else. Mr. Staimer, how do you see things? in that I see the database some good meat on the bone Take away the database, That is the ability to scale on demand, and they got MySQL and you I think it's, you know, and the various momentums, and Microsoft right now at the moment. So where do you place your bets? And to what Bob and Holgar said, you know, and you know, very granular, and everything in the cloud market. And to what you were saying, you know, functionality that you can't get to you know, business consultant. you know, it's funny. and all of the TPC benchmarks, By the way, you know, and you know, just inside of that was of some of the data that they shared. the stack, you have the suite, and they're giving you the best of both. of the suite vendor, and you always get the ah In the data center Marc all the time And the other thing I wanted to talk about and then we're going to run 'em and all the infrastructure around that, Due to the nature of the competition, I think you guys all saw the Andreessen, And I think it's going to form I'm looking at the data now. and I love the innovative All right, great thank you. and support the DVA. Great, thank you for that. And I think what Oracle's done Marc Staimer, what do you say? or the ones who are going to grow. we'll give you the last And this is just going to Love to have you back. and we'll see you next time.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
David FloyerPERSON

0.99+

Dave VellantePERSON

0.99+

Ron WestfallPERSON

0.99+

DavePERSON

0.99+

Marc StaimerPERSON

0.99+

MicrosoftORGANIZATION

0.99+

IBMORGANIZATION

0.99+

MarcPERSON

0.99+

EllisonPERSON

0.99+

Bob EvansPERSON

0.99+

OracleORGANIZATION

0.99+

MattPERSON

0.99+

Holgar MuellerPERSON

0.99+

AWSORGANIZATION

0.99+

Frank SlootmanPERSON

0.99+

RonPERSON

0.99+

StaimerPERSON

0.99+

Andy JacksonPERSON

0.99+

BobPERSON

0.99+

Matt KimballPERSON

0.99+

GoogleORGANIZATION

0.99+

100%QUANTITY

0.99+

Sarah WangPERSON

0.99+

San DiegoLOCATION

0.99+

AmazonORGANIZATION

0.99+

RobPERSON

0.99+

Does Intel need a Miracle?


 

(upbeat music) >> Welcome everyone, this is Stephanie Chan with theCUBE. Recently analyst Dave Ross RADIO entitled, Pat Gelsinger has a vision. It just needs the time, the cash and a miracle where he highlights why he thinks Intel is years away from reversing position in the semiconductor industry. Welcome Dave. >> Hey thanks, Stephanie. Good to see you. >> So, Dave you been following the company closely over the years. If you look at Wall Street Journal most analysts are saying to hold onto Intel. can you tell us why you're so negative on it? >> Well, you know, I'm not a stock picker Stephanie, but I've seen the data there are a lot of... some buys some sells, but most of the analysts are on a hold. I think they're, who knows maybe they're just hedging their bets they don't want to a strong controversial call that kind of sitting in the fence. But look, Intel still an amazing company they got tremendous resources. They're an ICON and they pay a dividend. So, there's definitely an investment case to be made to hold onto the stock. But I would generally say that investors they better be ready to hold on to Intel for a long, long time. I mean, Intel's they're just not the dominant player that it used to be. And the challenges have been mounting for a decade and look competitively Intel's fighting a five front war. They got AMD in both PCs and the data center the entire Arm Ecosystem` and video coming after with the whole move toward AI and GPU they're dominating there. Taiwan Semiconductor is by far the leading fab in the world with terms of output. And I would say even China is kind of the fifth leg of that stool, long term. So, lot of hurdles to jump competitively. >> So what are other sources of Intel's trouble sincere besides what you just mentioned? >> Well, I think they started when PC volumes peaked which was, or David Floyer, Wikibon wrote back in 2011, 2012 that he tells if it doesn't make some moves, it's going to face some trouble. So, even though PC volumes have bumped up with the pandemic recently, they pair in comparison to the wafer volume that are coming out of the Arm Ecosystem, and TSM and Samsung factories. The volumes of the Arm Ecosystem, Stephanie they dwarf the output of Intel by probably 10 X in semiconductors. I mean, the volume in semiconductors is everything. And because that's what costs down and Intel they just knocked a little cost manufacture any anymore. And in my view, they may never be again, not without a major change in the volume strategy, which of course Gelsinger is doing everything he can to affect that change, but they're years away and they're going to have to spend, north of a 100 billion dollars trying to get there, but it's all about volume in the semiconductor game. And Intel just doesn't have it right now. >> So you mentioned Pat Gelsinger he was a new CEO last January. He's a highly respected CEO and in truth employed more than four decades, I think he has knowledge and experience. including 30 years at Intel where he began his career. What's your opinion on his performance thus far besides the volume and semiconductor industry position of Intel? >> Well, I think Gelsinger is an amazing executive. He's a technical visionary, he's an execution machine, he's doing all the right things. I mean, he's working, he was at the state of the union address and looking good in a suit, he's saying all the right things. He's spending time with EU leaders. And he's just a very clear thinker and a super strong strategist, but you can't change Physics. The thing about Pat is he's known all along what's going on with Intel. I'm sure he's watched it from not so far because I think it's always been his dream to run the company. So, the fact that he's made a lot of moves. He's bringing in new management, he's repairing some of the dead wood at Intel. He's launched, kind of relaunched if you will, the Foundry Business. But I think they're serious about that. You know, this time around, they're spinning out mobile eye to throw off some cash mobile eye was an acquisition they made years ago to throw off some more cash to pay for the fabs. They have announced things like; a fabs in Ohio, in the Heartland, Ze in Heartland which is strikes all the right chords with the various politicians. And so again, he's doing all the right things. He's trying to inject. He's calling out his best Andrew Grove. I like to say who's of course, The Iconic CEO of Intel for many, many years, but again you can't change Physics. He can't compress the cycle any faster than the cycle wants to go. And so he's doing all the right things. It's just going to take a long, long time. >> And you said that competition is better positioned. Could you elaborate on why you think that, and who are the main competitors at this moment? >> Well, it's this Five Front War that I talked about. I mean, you see what's happened in Arm changed everything, Intel remember they passed on the iPhone didn't think it could make enough money on smartphones. And that opened the door for Arm. It was eager to take Apple's business. And because of the consumer volumes the semiconductor industry changed permanently just like the PC volume changed the whole mini computer business. Well, the smartphone changed the economics of semiconductors as well. Very few companies can afford the capital expense of building semiconductor fabrication facilities. And even fewer can make cutting edge chips like; five nanometer, three nanometer and beyond. So companies like AMD and Invidia, they don't make chips they design them and then they ship them to foundries like TSM and Samsung to manufacture them. And because TSM has such huge volumes, thanks to large part to Apple it's further down or up I guess the experience curve and experience means everything in terms of cost. And they're leaving Intel behind. I mean, the best example I can give you is Apple would look at the, a series chip, and now the M one and the M one ultra, I think about the traditional Moore's law curve that we all talk about two X to transistor density every two years doubling. Intel's lucky today if can keep that pace up, let's assume it can. But meanwhile, look at Apple's Arm based M one to M one Ultra transition. It occurred in less than two years. It was more like, 15 or 18 months. And it went from 16 billion transistors on a package to over a 100 billion. And so we're talking about the competition Apple in this case using Arm standards improving it six to seven X inside of a two year period while Intel's running it two X. And that says it all. So Intel is on a curve that's more expensive and slower than the competition. >> Well recently, until what Lujan Harrison did with 5.4 billion So it can make more check order companies last February I think the middle of February what do you think of that strategic move? >> Well, it was designed to help with Foundry. And again, I said left that out of my things that in Intel's doing, as Pat's doing there's a long list actually and many more. Again I think, it's an Israeli based company they're a global company, which is important. One of the things that Pat stresses is having a a presence in Western countries, I think that's super important, he'd like to get the percentage of semiconductors coming out of Western countries back up to at least maybe not to where it was previously but by the end of the decade, much more competitive. And so that's what that acquisition was designed to do. And it's a good move, but it's, again it doesn't change Physics. >> So Dave, you've been putting a lot of content out there and been following Intel for years. What can Intel do to go back on track? >> Well, I think first it needs great leadership and Pat Gelsinger is providing that. Since we talked about it, he's doing all the right things. He's manifesting his best. Andrew Grove, as I said earlier, splitting out the Foundry business is critical because we all know Moore's law. This is Right Law talks about volume in any business not just semiconductors, but it's crucial in semiconductors. So, splitting out a separate Foundry business to make chips is important. He's going to do that. Of course, he's going to ask Intel's competitors to allow Intel to manufacture their chips which they very well may well want to do because there's such a shortage right now of supply and they need those types of manufacturers. So, the hope is that that's going to drive the volume necessary for Intel to compete cost effectively. And there's the chips act. And it's EU cousin where governments are going to possibly put in some money into the semiconductor manufacturing to make the west more competitive. It's a key initiative that Pat has put forth and a challenge. And it's a good one. And he's making a lot of moves on the design side and committing tons of CapEx in these new fabs as we talked about but maybe his best chance is again the fact that, well first of all, the market's enormous. It's a trillion dollar market, but secondly there's a very long term shortage in play here in semiconductors. I don't think it's going to be cleared up in 2022 or 2023. It's just going to be keep being an explotion whether it's automobiles and factory devices and cameras. I mean, virtually every consumer device and edge device is going to use huge numbers of semiconductor chip. So, I think that's in Pat's favor, but honestly Intel is so far behind in my opinion, that I hope by the end of this decade, it's going to be in a position maybe a stronger number two position, and volume behind TSM maybe number three behind Samsung maybe Apple is going to throw Intel some Foundry business over time, maybe under pressure from the us government. And they can maybe win that account back but that's still years away from a design cycle standpoint. And so again, maybe in the 2030's, Intel can compete for top dog status, but that in my view is the best we can hope for this national treasure called Intel. >> Got it. So we got to leave it right there. Thank you so much for your time, Dave. >> You're welcome Stephanie. Good to talk to you >> So you can check out Dave's breaking analysis on theCUBE.net each Friday. This is Stephanie Chan for theCUBE. We'll see you next time. (upbeat music)

Published Date : Mar 22 2022

SUMMARY :

It just needs the time, Good to see you. closely over the years. but most of the analysts are on a hold. I mean, the volume in far besides the volume And so he's doing all the right things. And you said that competition And because of the consumer volumes I think the middle of February but by the end of the decade, What can Intel do to go back on track? And so again, maybe in the 2030's, Thank you so much for your time, Dave. Good to talk to you So you can check out

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
SamsungORGANIZATION

0.99+

DavePERSON

0.99+

Stephanie ChanPERSON

0.99+

StephaniePERSON

0.99+

TSMORGANIZATION

0.99+

David FloyerPERSON

0.99+

OhioLOCATION

0.99+

Pat GelsingerPERSON

0.99+

2022DATE

0.99+

2023DATE

0.99+

30 yearsQUANTITY

0.99+

Andrew GrovePERSON

0.99+

AppleORGANIZATION

0.99+

InvidiaORGANIZATION

0.99+

iPhoneCOMMERCIAL_ITEM

0.99+

AMDORGANIZATION

0.99+

5.4 billionQUANTITY

0.99+

GelsingerPERSON

0.99+

10 XQUANTITY

0.99+

less than two yearsQUANTITY

0.99+

sixQUANTITY

0.99+

M oneCOMMERCIAL_ITEM

0.99+

IntelORGANIZATION

0.99+

PatPERSON

0.99+

M one ultraCOMMERCIAL_ITEM

0.99+

fifth legQUANTITY

0.99+

15QUANTITY

0.99+

five nanometerQUANTITY

0.99+

MoorePERSON

0.99+

HeartlandLOCATION

0.99+

EUORGANIZATION

0.99+

18 monthsQUANTITY

0.99+

sevenQUANTITY

0.99+

IconicORGANIZATION

0.98+

five frontQUANTITY

0.98+

three nanometerQUANTITY

0.98+

Dave RossPERSON

0.98+

two yearQUANTITY

0.98+

CapExORGANIZATION

0.98+

last FebruaryDATE

0.97+

last JanuaryDATE

0.97+

Lujan HarrisonPERSON

0.97+

middle of FebruaryDATE

0.97+

firstQUANTITY

0.96+

OneQUANTITY

0.96+

16 billion transistorsQUANTITY

0.96+

100 billion dollarsQUANTITY

0.96+

todayDATE

0.96+

theCUBEORGANIZATION

0.96+

theCUBE.netOTHER

0.95+

both PCsQUANTITY

0.94+

Five Front WarEVENT

0.94+

Breaking Analysis: Snowflake’s Wild Ride


 

from the cube studios in palo alto in boston bringing you data driven insights from the cube and etr this is breaking analysis with dave vellante snowflake they love the stock at 400 and hated at 165 that's the nature of the business i guess especially in this crazy cycle over the last two years of lockdowns free money exploding demand and now rising inflation and rates but with the fed providing some clarity on its actions the time has come to really dig into the fundamentals of companies and there's no tech company that's more fun to analyze than snowflake hello and welcome to this week's wikibon cube insights powered by etr in this breaking analysis we look at the action of snowflake stock since its ipo why it's behaved the way it has how some sharp traders are looking at the stock and most importantly what customer demand looks like the stock has really provided some great theater since its ipo i know people who got in at 120 before the open and i know lots of people who kind of held their noses and bought the stock on day one at over 300 a day when it closed at around 240 that first day of trading snowflake hit 164 this week it's all-time low as a public company as my college roommate chip simonton a long time trader told me when great companies trade at all times time lows because of panic it's worth taking a shot he did now of course the stock could go lower there's geopolitical risk and the stock with a 64 billion market cap is expensive for a company that's forecast to do around 2 billion in product revenue this year and remember i don't recommend stocks you shouldn't take my advice and my comments you got to do your own research but i have lots of data and i have opinions and i'm willing to share that with you stocks like snowflake crowdstrike z-scaler octa and companies like this are highly volatile when markets are moving up they're going to move up faster than the mean when they're declining they're going to drop more severely and that's clearly what's happened to snowflake so with a company like this you when you see panic selling you'll also see panic buying sometimes like we we've seen with this name it went from 220 to 320 in a very short period earlier snowflake put in a short-term bottom this week and many traders feel the issue was oversold so they bought okay but not everyone felt this way and you can see this in the headlines snowflake hits low but cloud stocks rise and we're going to come back to that is it a buy don't buy the dip buy the dip and what snowflake investors can learn from microsoft and from the street.com snow stock is sliding on the back of ill-conceived guidance and to that i would say that conservative guidance these days is anything but ill-conceived now let's unpack all this a bit and to do so i reached out to ivana delevska who has been on this program before she's with spear invest a female-led etf that goes deep into understanding supply chains she came on breaking analysis and laid out her thesis to buy the dip on snowflake this is a while ago she told me currently spear still likes snowflake and has doubled its position let me share her analysis she called out two drivers for the downside interest rates you know rising of course in snowflakes guidance which my own publication called weak in that previous chart that i just showed you so let's dig into that a bit snowflake guided for product revenues of 67 year on year which was below buy side expectations but i believe within sell side consensus regardless the guide was nuanced and driven by snowflake's decision to pass along price efficiencies to customers from optimizing processor price performance predominantly from aws's graviton too this is going to hit snowflakes revenue a net of about a hundred million dollars this year but the timing's not precise because it's going to hit 165 million but they're going to make up 65 million in increased demand frank slootman on the earnings call made this very clear he said quote this is not philanthropy this stimulates demand classic slootman the point is spear and other bulls believe that this will result in a gain for snowflake over the medium term and we would agree price goes down roi gets better you throw more projects at snowflakes customers going to buy more snowflake and when that happens and it gives the company an advantage as they continue to build their moat it's a longer term bet on cloud and data which are good bets now some of this could also be competitive pressures there have been you know studies that are out there from competitors attacking snowflakes pricing and price performance and they make comparisons oracle's been pretty aggressive as have others but so far the company's customers continue to consume now at a very fast rate now on on this front what can we learn from microsoft that applies to snowflake that's the headline here from benzinga so the article quoted a wealth manager named josh brown talking about what happened to microsoft after the dot-com bubble burst and how they quadrupled earnings over the next decade and the stock went sideways suggesting the same thing could happen to snowflake now i'd like to make a couple of comments here first at the time microsoft was a 23 billion dollar company and it had a monopoly and was already highly profitable steve ballmer became the ceo of microsoft right after the dot-com bubble burst and he hugged onto windows for dear life and lived off of microsoft's pc software monopoly microsoft became an extremely profitable and remarkably uninteresting caretaker of a pc in on-prem software estate during balmer's tenure so i just don't see the comparison as relevant snowflake you know they're going to make struggle for other reasons but that one didn't really resonate with me what's interesting is this chart it poses the question do cloud and data markets behave differently it's a chart that shows aws growth rates over time and superimposes the revenue in the red in q1 2018 aws generated 5.4 billion dollars in revenue and that was growing at the time at nearly a 50 rate now that rate as you can see decelerated quite significantly as aws grew to a 50 billion dollar run rate company that down below where you see it bottoms now it makes sense right law of large numbers you can't keep growing that fast when you get that big well oops look what happened in 2021 aws's growth rate bottoms in the high 20s and then rockets back up to 40 this past quarter as aws surpasses a 70 billion dollar run rate so you have to ask is cloud different is data different is cloud data different or data cloud different let's put it in the snowflake parlance can cloud because of its consumption model and the speed of innovation and ecosystem depth and breadth enable snowflake to exhibit lots of variability in its growth rates versus a say progressive and somewhat linear decline as the company grows revenue which is what you would expect historically and part of the answer relates to its market size here's a chart we've shared before with some additions it's our version of snowflake's total available market they're tam which snowflake's version that that blue data cloud thing superimposed on the right it shows the various layers of market opportunity that we came up with that that snowflake and others we think have in front of them emerging from the disruption of legacy data lakes and data warehouses to what snowflake refers to as its data cloud we think about the data mesh concept and decentralized data architectures with domain ownership and data product and service builders as consistent with snowflake's data cloud vision where snowflake data stores are nodes they're just simply discoverable nodes on the mesh you could have you know data bricks data lakes you know s3 buckets on that mesh it doesn't matter they can be discovered they can be shared and of course they're governed in a federated model now in snowflake's model it's all inside the snowflake data cloud that's fine then you'll go to the out years it gets a little fuzzy you know from edge locations and ai inference it becomes massive and decision making occurs in real time where machines and machine data take over the world instead of you know clicks and keystrokes sounds out there but it's real and how exactly snowflake plays there at this point is unclear but one thing's for sure there'll be a lot of data and it's going to find its way into snowflake you know snowflake's not a real-time engine it's an analytical system it's moving into the realm of data science and you know we've talked about the need for you know semantic layer between those those two worlds of analytics and data science but expanding the scope further out we think that snowflake is a big role to play in this future and the future is massive okay check you got the big tam now as someone that looks at companies through a fundamentals prism you've got to look obviously at the markets in the tan which we just did but you also want to understand customers and it's not hard to find snowflake customers capital one disney micron alliance sainsbury sonos and hundreds of other companies i've talked to snowflake customers who have also been customers of oracle teradata ibm neteza vertica serious database practitioners and they tell me it's consistent soulflake is different they say it's simpler it's more agile it's less complicated to secure and it's disruptive to their traditional ways of doing data management now of course there are naysayers i've spoken to a number of analysts that feel snowflake is deficient in areas like workload management and course complex joins and it's too specialized in a world where we're seeing the convergence of analytics and transactional workloads our own david floyer believes that what oracle is doing with mysql heatwave is radically disruptive to many of the database architectures and blows away anything out there and he believes that snowflake and the likes of aws are going to have to respond now this the other criticism here is that snowflake is not architected for real-time inference where a lot of that edge activity is is going to happen it's a multi-hundred billion dollar market and so look snowflake has a ton of competition that's the other thing all the major cloud players have very capable and competitive database platforms even though they all partner with snowflake except oracle of course but companies like databricks and have garnered tons of vc other vc funded companies have raised billions of dollars to do this kind of elastic consumption based separate compute from storage stuff so you have to always keep an open mind and be aware of potential blind spots for these companies but to the criticisms i would say look snowflake they got there first and watch their ecosystem it's a real key to its continued success snowflake's not going to go it alone and it's going to use its ecosystem partners to expand its reach and accelerate the network effects and fill those gaps and it will acquire its stock is valuable so it should be doing that just as it did with streamlit a zero revenue company that it bought for 800 million dollars in stock and cash just recently streamlit is an open source python library that gets snowflake further deeper into that data science space that data brick space and look watch what snowflake is doing with snowpark it's an api library for processing data and building data intensive applications we've talked about snowflake essentially being becoming the super cloud and building this sort of path-like layer across clouds rather than trying to do it all themselves it seems snowflake is really staring at the api economy and building its ecosystem to plug those holes so let's come back to the customers here's a chart that shows snowflakes customer spending momentum or net score on the the top line that's the vertical axis and pervasiveness in the data or market share and that bottom brown line snowflake has unprecedented net scores and held them up for many many quarters as you can see here going back you know a couple years all leading to its expanded market penetration and measured as pervasiveness of so-called market share within the etr survey it's not like idc market share it's pervasiveness in the data set now i'll say this i don't see how this is sustainable i've been waiting for this to moderate i wouldn't be surprised to see snowflake come back to earth a little bit i think they'll clearly still be highly elevated based on the data that i've seen but but i could see in in one or more of the etr surveys this year this starting to moderate as they get they get big it's just it has to happen um but i would again expect them to have a high spending velocity score but i think we're going to see snowflake you know maybe porpoise a bit here meaning you know it moderates it comes back up it's just really hard to sustain this piece of momentum and higher train retain and scale without absorbing some some friction and some head woods that's going to slow you down but back to the aws growth example it's entirely possible that we could see a similar dynamic with snowflake that you saw with aws and you kind of see it with salesforce and servicenow very successful large entrenched entrenched companies and it's very possible that snowflake could pull back moderate and then accelerate that growth even though people are concerned about the moderated guidance of 80 percent growth yeah that's that's the new definition of tepid i guess i look i like to look at other some other metrics the one that really called you know my my my attention was the remaining performance obligations this last quarter rpo snowflakes is up to something like 2.6 billion and that is a forward-looking indicator of of future revenues so i want to i'd like to see that growing and it's growing at a fast pace so you're going to see some ups and downs with snowflake i have no doubt but i think things are still looking pretty solid for the company growth companies like snowflake and octa and z scalar those other ones that i mentioned earlier have probably been repriced and refactored by investors while there's always going to be market and of course geopolitical risk especially in these times fundamentals matter you've got huge market well capitalized you got a leadership position great products and strong customer adoption you also have a great team team is something else that we look for we haven't touched on that but i'll leave you with this thought everyone knows about frank slootman mike scarpelli and what they've accomplished in their years of working together that's why the stock you know in ipo was was so overvalued they had seen these guys do it before slootman just documented in all this in his book amp it up which gives great insight into the history of of that though you know that pair and and the teams that they've built the companies that they've built how he thinks about building companies and markets and and how you know total available markets super important but the whole philosophy and culture that that he's building in his management style but you got to wonder right how long is this guy going to keep going what keeps him motivated you know i asked him that one time here's what he said why i mean are you in this for the sport what's the story here uh actually that that's not a bad way of characterizing it i think i am in it uh you know for the sport uh you know the only way to become the best version of yourself is to be uh to be under the gun and uh you know every single day and that's that's certainly uh what we are it sort of has its own rewards building great products building great companies uh you know regardless of you know uh what the spoils may be uh it has its own rewards and i i it's hard for people like us to get off the field and uh you know hang it up so here we are so there you have it he's in it for the sport how great is that he loves building companies and that my opinion that's how frank slootman thinks about success it's not about money money's the byproduct of success as earl nightingale would say success is the progressive realization of a worthy ideal i love that quote building great companies building products that change the world changing people's lives with data and insights creating jobs creating life-altering wealth opportunities not for himself but for thousands of employees and partners i'd say that's a pretty worthy ideal and i hope frank slootman sticks with it for a while okay that's it for today thanks to stephanie chan for the background research she does for breaking analysis alex meyerson on production kristen martin and cheryl knight on social with rob hoff on siliconangle and thanks to ivana delevska of spear invest and my friend chip symington for the angles from the money side of things remember all these episodes are available as podcasts just search breaking analysis podcast i publish weekly on wikibon.com and siliconangle.com and don't forget to check out etr.plus for all the survey data you can reach me at devolante or david.velante siliconangle.com and this is dave vellante for cube insights powered by etrbsafe stay well and we'll see you next time [Music] you

Published Date : Mar 18 2022

SUMMARY :

the history of of that though you know

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
microsoftORGANIZATION

0.99+

josh brownPERSON

0.99+

alex meyersonPERSON

0.99+

thousandsQUANTITY

0.99+

80 percentQUANTITY

0.99+

2021DATE

0.99+

slootmanPERSON

0.99+

rob hoffPERSON

0.99+

67 yearQUANTITY

0.99+

5.4 billion dollarsQUANTITY

0.99+

50 billion dollarQUANTITY

0.99+

64 billionQUANTITY

0.99+

800 million dollarsQUANTITY

0.99+

165 millionQUANTITY

0.99+

23 billion dollarQUANTITY

0.99+

stephanie chanPERSON

0.99+

david floyerPERSON

0.99+

ivana delevskaPERSON

0.99+

steve ballmerPERSON

0.99+

this yearDATE

0.99+

2.6 billionQUANTITY

0.99+

frank slootmanPERSON

0.99+

mike scarpelliPERSON

0.99+

billions of dollarsQUANTITY

0.99+

oracleORGANIZATION

0.99+

earl nightingalePERSON

0.99+

two driversQUANTITY

0.99+

multi-hundred billion dollarQUANTITY

0.99+

david.velanteOTHER

0.98+

bostonLOCATION

0.98+

dave vellantePERSON

0.98+

oneQUANTITY

0.98+

about a hundred million dollarsQUANTITY

0.98+

120QUANTITY

0.98+

awsORGANIZATION

0.98+

Snowflake’s Wild RideTITLE

0.98+

frank slootmanPERSON

0.98+

siliconangle.comOTHER

0.98+

this weekDATE

0.98+

around 2 billionQUANTITY

0.98+

70 billion dollarQUANTITY

0.97+

400QUANTITY

0.97+

320QUANTITY

0.97+

q1 2018DATE

0.97+

kristen martinPERSON

0.97+

220QUANTITY

0.97+

chip symingtonPERSON

0.96+

firstQUANTITY

0.96+

benzingaORGANIZATION

0.96+

164QUANTITY

0.96+

over 300 a dayQUANTITY

0.96+

first dayQUANTITY

0.95+

earthLOCATION

0.95+

windowsTITLE

0.95+

two worldsQUANTITY

0.95+

past quarterDATE

0.95+

165QUANTITY

0.94+

disneyORGANIZATION

0.94+

65 millionQUANTITY

0.94+

simontonLOCATION

0.94+

pythonTITLE

0.94+

street.comOTHER

0.93+

a lot of dataQUANTITY

0.92+

last quarterDATE

0.92+

cheryl knightPERSON

0.92+

todayDATE

0.92+

50 rateQUANTITY

0.91+

day oneQUANTITY

0.9+

zero revenueQUANTITY

0.9+

devolanteOTHER

0.9+

tonsQUANTITY

0.89+

wikibon.comOTHER

0.88+

one timeQUANTITY

0.88+

hundreds of other companiesQUANTITY

0.88+

etrORGANIZATION

0.87+

single dayQUANTITY

0.86+

balmerPERSON

0.85+

around 240QUANTITY

0.85+

ipoORGANIZATION

0.85+

20sQUANTITY

0.84+

lots of dataQUANTITY

0.83+

Breaking Analysis: Pat Gelsinger has the Vision Intel Just Needs Time, Cash & a Miracle


 

>> From theCUBE Studios in Palo Alto in Boston, bringing you data-driven insights from theCUBE and ETR, this is "Breaking Analysis" with Dave Vellante. >> If it weren't for Pat Gelsinger, Intel's future would be a disaster. Even with his clear vision, fantastic leadership, deep technical and business acumen, and amazing positivity, the company's future is in serious jeopardy. It's the same story we've been telling for years. Volume is king in the semiconductor industry, and Intel no longer is the volume leader. Despite Intel's efforts to change that dynamic With several recent moves, including making another go at its Foundry business, the company is years away from reversing its lagging position relative to today's leading foundries and design shops. Intel's best chance to survive as a leader in our view, will come from a combination of a massive market, continued supply constraints, government money, and luck, perhaps in the form of a deal with apple in the midterm. Hello, and welcome to this week's "Wikibon CUBE Insights, Powered by ETR." In this "Breaking Analysis," we'll update you on our latest assessment of Intel's competitive position and unpack nuggets from the company's February investor conference. Let's go back in history a bit and review what we said in the early 2010s. If you've followed this program, you know that our David Floyer sounded the alarm for Intel as far back as 2012, the year after PC volumes peaked. Yes, they've ticked up a bit in the past couple of years but they pale in comparison to the volumes that the ARM ecosystem is producing. The world has changed from people entering data into machines, and now it's machines that are driving all the data. Data volumes in Web 1.0 were largely driven by keystrokes and clicks. Web 3.0 is going to be driven by machines entering data into sensors, cameras. Other edge devices are going to drive enormous data volumes and processing power to boot. Every windmill, every factory device, every consumer device, every car, will require processing at the edge to run AI, facial recognition, inference, and data intensive workloads. And the volume of this space compared to PCs and even the iPhone itself is about to be dwarfed with an explosion of devices. Intel is not well positioned for this new world in our view. Intel has to catch up on the process, Intel has to catch up on architecture, Intel has to play catch up on security, Intel has to play catch up on volume. The ARM ecosystem has cumulatively shipped 200 billion chips to date, and is shipping 10x Intel's wafer volume. Intel has to have an architecture that accommodates much more diversity. And while it's working on that, it's years behind. All that said, Pat Gelsinger is doing everything he can and more to close the gap. Here's a partial list of the moves that Pat is making. A year ago, he announced IDM 2.0, a new integrated device manufacturing strategy that opened up its world to partners for manufacturing and other innovation. Intel has restructured, reorganized, and many executives have boomeranged back in, many previous Intel execs. They understand the business and have a deep passion to help the company regain its prominence. As part of the IDM 2.0 announcement, Intel created, recreated if you will, a Foundry division and recently acquired Tower Semiconductor an Israeli firm, that is going to help it in that mission. It's opening up partnerships with alternative processor manufacturers and designers. And the company has announced major investments in CAPEX to build out Foundry capacity. Intel is going to spin out Mobileye, a company it had acquired for 15 billion in 2017. Or does it try and get a $50 billion valuation? Mobileye is about $1.4 billion in revenue, and is likely going to be worth more around 25 to 30 billion, we'll see. But Intel is going to maybe get $10 billion in cash from that, that spin out that IPO and it can use that to fund more FABS and more equipment. Intel is leveraging its 19,000 software engineers to move up the stack and sell more subscriptions and high margin software. He got to sell what he got. And finally Pat is playing politics beautifully. Announcing for example, FAB investments in Ohio, which he dubbed Silicon Heartland. Brilliant! Again, there's no doubt that Pat is moving fast and doing the right things. Here's Pat at his investor event in a T-shirt that says, "torrid, bringing back the torrid pace and discipline that Intel is used to." And on the right is Pat at the State of the Union address, looking sharp in shirt and tie and suit. And he has said, "a bet on Intel is a hedge against geopolitical instability in the world." That's just so good. To that statement, he showed this chart at his investor meeting. Basically it shows that whereas semiconductor manufacturing capacity has gone from 80% of the world's volume to 20%, he wants to get it back to 50% by 2030, and reset supply chains in a market that has become important as oil. Again, just brilliant positioning and pushing all the right hot buttons. And here's a slide underscoring that commitment, showing manufacturing facilities around the world with new capacity coming online in the next few years in Ohio and the EU. Mentioning the CHIPS Act in his presentation in The US and Europe as part of a public private partnership, no doubt, he's going to need all the help he can get. Now, we couldn't resist the chart on the left here shows wafer starts and transistor capacity growth. For Intel, overtime speaks to its volume aspirations. But we couldn't help notice that the shape of the curve is somewhat misleading because it shows a two-year (mumbles) and then widens the aperture to three years to make the curve look steeper. Fun with numbers. Okay, maybe a little nitpick, but these are some of the telling nuggets we pulled from the investor day, and they're important. Another nitpick is in our view, wafers would be a better measure of volume than transistors. It's like a company saying we shipped 20% more exabytes or MIPS this year than last year. Of course you did, and your revenue shrank. Anyway, Pat went through a detailed analysis of the various Intel businesses and promised mid to high double digit growth by 2026, half of which will come from Intel's traditional PC they center in network edge businesses and the rest from advanced graphics HPC, Mobileye and Foundry. Okay, that sounds pretty good. But it has to be taken into context that the balance of the semiconductor industry, yeah, this would be a pretty competitive growth rate, in our view, especially for a 70 plus billion dollar company. So kudos to Pat for sticking his neck out on this one. But again, the promise is several years away, at least four years away. Now we want to focus on Foundry because that's the only way Intel is going to get back into the volume game and the volume necessary for the company to compete. Pat built this slide showing the baby blue for today's Foundry business just under a billion dollars and adding in another $1.5 billion for Tower Semiconductor, the Israeli firm that it just acquired. So a few billion dollars in the near term future for the Foundry business. And then by 2026, this really fuzzy blue bar. Now remember, TSM is the new volume leader, and is a $50 billion company growing. So there's definitely a market there that it can go after. And adding in ARM processors to the mix, and, you know, opening up and partnering with the ecosystems out there can only help volume if Intel can win that business, which you know, it should be able to, given the likelihood of long term supply constraints. But we remain skeptical. This is another chart Pat showed, which makes the case that Foundry and IDM 2.0 will allow expensive assets to have a longer useful life. Okay, that's cool. It will also solve the cumulative output problem highlighted in the bottom right. We've talked at length about Wright's Law. That is, for every cumulative doubling of units manufactured, cost will fall by a constant percentage. You know, let's say around 15% in semiconductor world, which is vitally important to accommodate next generation chips, which are always more expensive at the start of the cycle. So you need that 15% cost buffer to jump curves and make any money. So let's unpack this a bit. You know, does this chart at the bottom right address our Wright's Law concerns, i.e. that Intel can't take advantage of Wright's Law because it can't double cumulative output fast enough? Now note the decline in wafer starts and then the slight uptick, and then the flattening. It's hard to tell what years we're talking about here. Intel is not going to share the sausage making because it's probably not pretty, But you can see on the bottom left, the flattening of the cumulative output curve in IDM 1.0 otherwise known as the death spiral. Okay, back to the power of Wright's Law. Now, assume for a second that wafer density doesn't grow. It does, but just work with us for a second. Let's say you produce 50 million units per year, just making a number up. That gets you cumulative output to $100 million in, sorry, 100 million units in the second year to take you two years to get to that 100 million. So in other words, it takes two years to lower your manufacturing cost by, let's say, roughly 15%. Now, assuming you can get wafer volumes to be flat, which that chart showed, with good yields, you're at 150 now in year three, 200 in year four, 250 in year five, 300 in year six, now, that's four years before you can take advantage of Wright's Law. You keep going at that flat wafer start, and that simplifying assumption we made at the start and 50 million units a year, and well, you get to the point. You get the point, it's now eight years before you can get the Wright's Law to kick in, and you know, by then you're cooked. But now you can grow the density of transistors on a chip, right? Yes, of course. So let's come back to Moore's Law. The graphic on the left says that all the growth is in the new stuff. Totally agree with that. Huge term that Pat presented. Now he also said that until we exhaust the periodic table of elements, Moore's Law is alive and well, and Intel is the steward of Moore's Law. Okay, that's cool. The chart on the right shows Intel going from 100 billion transistors today to a trillion by 2030. Hold that thought. So Intel is assuming that we'll keep up with Moore's Law, meaning a doubling of transistors every let's say two years, and I believe it. So bring that back to Wright's Law, in the previous chart, it means with IDM 2.0, Intel can get back to enjoying the benefits of Wright's Law every two years, let's say, versus IDM 1.0 where they were failing to keep up. Okay, so Intel is saved, yeah? Well, let's bring into this discussion one of our favorite examples, Apple's M1 ARM-based chip. The M1 Ultra is a new architecture. And you can see the stats here, 114 billion transistors on a five nanometer process and all the other stats. The M1 Ultra has two chips. They're bonded together. And Apple put an interposer between the two chips. An interposer is a pathway that allows electrical signals to pass through it onto another chip. It's a super fast connection. You can see 2.5 terabytes per second. But the brilliance is the two chips act as a single chip. So you don't have to change the software at all. The way Intel's architecture works is it takes two different chips on a substrate, and then each has its own memory. The memory is not shared. Apple shares the memory for the CPU, the NPU, the GPU. All of it is shared, meaning it needs no change in software unlike Intel. Now Intel is working on a new architecture, but Apple and others are way ahead. Now let's make this really straightforward. The original Apple M1 had 16 billion transistors per chip. And you could see in that diagram, the recently launched M1 Ultra has $114 billion per chip. Now if you take into account the size of the chips, which are increasing, and the increase in the number of transistors per chip, that transistor density, that's a factor of around 6x growth in transistor density per chip in 18 months. Remember Intel, assuming the results in the two previous charts that we showed, assuming they were achievable, is running at 2x every two years, versus 6x for the competition. And AMD and Nvidia are close to that as well because they can take advantage of TSM's learning curve. So in the previous chart with Moore's Law, alive and well, Intel gets to a trillion transistors by 2030. The Apple ARM and Nvidia ecosystems will arrive at that point years ahead of Intel. That means lower costs and significantly better competitive advantage. Okay, so where does that leave Intel? The story is really not resonating with investors and hasn't for a while. On February 18th, the day after its investor meeting, the stock was off. It's rebound a little bit but investors are, you know, they're probably prudent to wait unless they have really a long term view. And you can see Intel's performance relative to some of the major competitors. You know, Pat talked about five nodes in for years. He made a big deal out of that, and he shared proof points with Alder Lake and Meteor Lake and other nodes, but Intel just delayed granite rapids last month that pushed it out from 2023 to 2024. And it told investors that we're going to have to boost spending to turn this ship around, which is absolutely the case. And that delay in chips I feel like the first disappointment won't be the last. But as we've said many times, it's very difficult, actually, it's impossible to quickly catch up in semiconductors, and Intel will never catch up without volume. So we'll leave you by iterating our scenario that could save Intel, and that's if its Foundry business can eventually win back Apple to supercharge its volume story. It's going to be tough to wrestle that business away from TSM especially as TSM is setting up shop in Arizona, with US manufacturing that's going to placate The US government. But look, maybe the government cuts a deal with Apple, says, hey, maybe we'll back off with the DOJ and FTC and as part of the CHIPS Act, you'll have to throw some business at Intel. Would that be enough when combined with other Foundry opportunities Intel could theoretically produce? Maybe. But from this vantage point, it's very unlikely Intel will gain back its true number one leadership position. If it were really paranoid back when David Floyer sounded the alarm 10 years ago, yeah, that might have made a pretty big difference. But honestly, the best we can hope for is Intel's strategy and execution allows it to get competitive volumes by the end of the decade, and this national treasure survives to fight for its leadership position in the 2030s. Because it would take a miracle for that to happen in the 2020s. Okay, that's it for today. Thanks to David Floyer for his contributions to this research. Always a pleasure working with David. Stephanie Chan helps me do much of the background research for "Breaking Analysis," and works with our CUBE editorial team. Kristen Martin and Cheryl Knight to get the word out. And thanks to SiliconANGLE's editor in chief Rob Hof, who comes up with a lot of the great titles that we have for "Breaking Analysis" and gets the word out to the SiliconANGLE audience. Thanks, guys. Great teamwork. Remember, these episodes are all available as podcast wherever you listen. Just search "Breaking Analysis Podcast." You'll want to check out ETR's website @etr.ai. We also publish a full report every week on wikibon.com and siliconangle.com. You could always get in touch with me on email, david.vellante@siliconangle.com or DM me @dvellante, and comment on my LinkedIn posts. This is Dave Vellante for "theCUBE Insights, Powered by ETR." Have a great week. Stay safe, be well, and we'll see you next time. (upbeat music)

Published Date : Mar 12 2022

SUMMARY :

in Palo Alto in Boston, and Intel is the steward of Moore's Law.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Stephanie ChanPERSON

0.99+

David FloyerPERSON

0.99+

Dave VellantePERSON

0.99+

Cheryl KnightPERSON

0.99+

Pat GelsingerPERSON

0.99+

NvidiaORGANIZATION

0.99+

PatPERSON

0.99+

Rob HofPERSON

0.99+

AppleORGANIZATION

0.99+

DavidPERSON

0.99+

TSMORGANIZATION

0.99+

OhioLOCATION

0.99+

February 18thDATE

0.99+

MobileyeORGANIZATION

0.99+

2012DATE

0.99+

$100 millionQUANTITY

0.99+

two yearsQUANTITY

0.99+

80%QUANTITY

0.99+

ArizonaLOCATION

0.99+

WrightPERSON

0.99+

18 monthsQUANTITY

0.99+

2017DATE

0.99+

2023DATE

0.99+

AMDORGANIZATION

0.99+

6xQUANTITY

0.99+

Kristen MartinPERSON

0.99+

Palo AltoLOCATION

0.99+

20%QUANTITY

0.99+

15%QUANTITY

0.99+

two chipsQUANTITY

0.99+

2xQUANTITY

0.99+

$50 billionQUANTITY

0.99+

100 millionQUANTITY

0.99+

$1.5 billionQUANTITY

0.99+

2030sDATE

0.99+

2030DATE

0.99+

IntelORGANIZATION

0.99+

CHIPS ActTITLE

0.99+

last yearDATE

0.99+

$10 billionQUANTITY

0.99+

2020sDATE

0.99+

50%QUANTITY

0.99+

2026DATE

0.99+

two-yearQUANTITY

0.99+

10xQUANTITY

0.99+

appleORGANIZATION

0.99+

FebruaryDATE

0.99+

two chipsQUANTITY

0.99+

15 billionQUANTITY

0.99+

david.vellante@siliconangle.comOTHER

0.99+

Tower SemiconductorORGANIZATION

0.99+

M1 UltraCOMMERCIAL_ITEM

0.99+

2024DATE

0.99+

70 plus billion dollarQUANTITY

0.99+

last monthDATE

0.99+

A year agoDATE

0.99+

200 billion chipsQUANTITY

0.99+

SiliconANGLEORGANIZATION

0.99+

iPhoneCOMMERCIAL_ITEM

0.99+

three yearsQUANTITY

0.99+

CHIPS ActTITLE

0.99+

second yearQUANTITY

0.99+

about $1.4 billionQUANTITY

0.99+

early 2010sDATE

0.99+

Breaking Analysis - How AWS is Revolutionizing Systems Architecture


 

from the cube studios in palo alto in boston bringing you data-driven insights from the cube and etr this is breaking analysis with dave vellante aws is pointing the way to a revolution in system architecture much in the same way that aws defined the cloud operating model last decade we believe it is once again leading in future systems design the secret sauce underpinning these innovations is specialized designs that break the stranglehold of inefficient and bloated centralized processing and allows aws to accommodate a diversity of workloads that span cloud data center as well as the near and far edge hello and welcome to this week's wikibon cube insights powered by etr in this breaking analysis we'll dig into the moves that aws has been making which we believe define the future of computing we'll also project what this means for customers partners and aws many competitors now let's take a look at aws's architectural journey the is revolution it started by giving easy access as we all know to virtual machines that could be deployed and decommissioned on demand amazon at the time used a highly customized version of zen that allowed multiple vms to run on one physical machine the hypervisor functions were controlled by x86 now according to werner vogels as much as 30 of the processing was wasted meaning it was supporting hypervisor functions and managing other parts of the system including the storage and networking these overheads led to aws developing custom asics that help to accelerate workloads now in 2013 aws began shipping custom chips and partnered with amd to announce ec2 c3 instances but as the as the aws cloud started to scale they really weren't satisfied with the performance gains that they were getting and they were hitting architectural barriers that prompted aws to start a partnership with anaperta labs this was back in 2014 and they launched then ec2 c4 instances in 2015. the asic in c4 optimized offload functions for storage and networking but still relied on intel xeon as the control point aws aws shelled out a reported 350 million dollars to acquire annapurna in 2015 which is a meager sum to acquire the secret sauce of its future system design this acquisition led to a modern version of project nitro in 2017 nitro nitro offload cards were first introduced in 2013 at this time aws introduced c5 instances and replaced zen with kvm and more tightly coupled the hypervisor with the asic vogels shared last year that this milestone offloaded the remaining components including the control plane the rest of the i o and enabled nearly a hundred percent of the processing to support customer workloads it also enabled a bare metal version of the compute that spawned the partnership the famous partnership with vmware to launch vmware cloud on aws then in 2018 aws took the next step and introduced graviton its custom designed arm-based chip this broke the dependency on x86 and launched a new era of architecture which now supports a wide variety of configurations to support data intensive workloads now these moves preceded other aws innovations including new chips optimized for machine learning and training and inferencing and all kinds of ai the bottom line is aws has architected an approach that offloaded the work currently done by the central processing unit in most general purpose workloads like in the data center it has set the stage in our view for the future allowing shared memory memory disaggregation and independent resources that can be configured to support workloads from the cloud all the way to the edge and nitro is the key to this architecture and to summarize aws nitro think of it as a set of custom hardware and software that runs on an arm-based platform from annapurna aws has moved the hypervisor the network the storage virtualization to dedicated hardware that frees up the cpu to run more efficiently this in our opinion is where the entire industry is headed so let's take a look at that this chart pulls data from the etr data set and lays out key players competing for the future of cloud data center and the edge now we've superimposed nvidia up top and intel they don't show up directly in the etr survey but they clearly are platform players in the mix we covered nvidia extensively in previous breaking analysis and won't go too deep there today but the data shows net scores on the vertical axis that's a measure of spending velocity and then it shows market share in the horizontal axis which is a measure of pervasiveness within the etr data set we're not going to dwell on the relative positions here rather let's comment on the players and start with aws we've laid out aws how they got here and we believe they are setting the direction for the future of the industry and aws is really pushing migration to its arm-based platforms pat morehead at the 6-5 summit spoke to dave brown who heads ec2 at aws and he talked extensively about migrating from x86 to aws's arm-based graviton 2. and he announced a new developer challenge to accelerate that migration to arm instances graviton instances and the end game for customers is a 40 better price performance so a customer running 100 server instances can do the same work with 60 servers now there's some work involved but for the by the customers to actually get there but the payoff if they can get 40 improvement in price performance is quite large imagine this aws currently offers 400 different ec2 instances last year as we reported sorry last year as we reported earlier this year nearly 50 percent of the new ec2 instances so nearly 50 percent of the new ec2 instances shipped in 2020 were arm based and aws is working hard to accelerate this pace it's very clear now let's talk about intel i'll just say it intel is finally responding in earnest and basically it's taking a page out of arm's playbook we're going to dig into that a bit today in 2015 intel paid 16.7 billion dollars for altera a maker of fpgas now also at the 6.5 summit nevin shenoy of intel presented details of what intel is calling an ipu it's infrastructure processing unit this is a departure from intel norms where everything is controlled by a central processing unit ipu's are essentially smart knicks as our dpus so don't get caught up in all the acronym soup as we've reported it's all about offloading work and disaggregating memory and evolving socs system-on-chip and sops system on package but just let this sink in a bit a bit for a moment intel's moves this past week it seems to us anyway are designed to create a platform that is nitro like and the basis of that platform is a 16.7 billion dollar acquisition just compare that to aws's 350 million dollar tuck-in of annapurna that is incredible now chenoy said in his presentation rough quote we've already deployed ipu's using fpgas in a in very high volume at microsoft azure and we've recently announced partnerships with baidu jd cloud and vmware so let's look at vmware vmware is the other you know really big platform player in this race in 2020 vmware announced project monterrey you might recall that it's based on the aforementioned fpgas from intel so vmware is in the mix and it chose to work with intel most likely for a variety of reasons one of the obvious ones is all the software that's running on on on vmware it's been built for x86 and there's a huge install base there the other is pat was heading vmware at the time and and you know when project monterey was conceived so i'll let you connect the dots if you like regardless vmware has a nitro like offering in our view its optionality however is limited by intel but at least it's in the game and appears to be ahead of the competition in this space aws notwithstanding because aws is clearly in the lead now what about microsoft and google suffice it to say that we strongly believe that despite the comments that intel made about shipping fpgas and volume to microsoft that both microsoft and google as well as alibaba will follow aws's lead and develop an arm-based platform like nitro we think they have to in order to keep pace with aws now what about the rest of the data center pack well dell has vmware so despite the split we don't expect any real changes there dell is going to leverage whatever vmware does and do it better than anyone else cisco is interesting in that it just revamped its ucs but we don't see any evidence that it has a nitro like plans in its roadmap same with hpe now both of these companies have history and capabilities around silicon cisco designs its own chips today for carrier class use cases and and hpe as we've reported probably has some remnants of the machine hanging around but both companies are very likely in our view to follow vmware's lead and go with an intel based design what about ibm well we really don't know we think the best thing ibm could do would be to move the ibm cloud of course to an arm-based nitro-like platform we think even the mainframe should move to arm as well i mean it's just too expensive to build a specialized mainframe cpu these days now oracle they're interesting if we were running oracle we would build an arm-based nitro-like database cloud where oracle the database runs cheaper faster and consumes less energy than any other platform that would would dare to run oracle and we'd go one step further and we would optimize for competitive databases in the oracle cloud so we would make oci run the table on all databases and be essentially the database cloud but you know back to sort of fpgas we're not overly excited about about the market amd is acquiring xi links for 35 billion dollars so i guess that's something to get excited about i guess but at least amd is using its inflated stock price to do the deal but we honestly we think that the arm ecosystem will will obliterate the fpga market by making it simpler and faster to move to soc with far better performance flexibility integration and mobility so again we're not too sanguine about intel's acquisition of altera and the moves that amd is making in in the long term now let's take a deeper look at intel's vision of the data center of the future here's a chart that intel showed depicting its vision of the future of the data center what you see is the ipu's which are intelligent nixed and they're embedded in the four blocks shown and they're communicating across a fabric now you have general purpose compute in the upper left and machine intelligent on the bottom left machine intelligence apps and up in the top right you see storage services and then the bottom right variation of alternative processors and this is intel's view of how to share resources and go from a world where everything is controlled by a central processing unit to a more independent set of resources that can work in parallel now gelsinger has talked about all the cool tech that this will allow intel to incorporate including pci and gen 5 and cxl memory interfaces and or cxl memory which are interfaces that enable memory sharing and disaggregation and 5g and 6g connectivity and so forth so that's intel's view of the future of the data center let's look at arm's vision of the future and compare them now there are definite similarities as you can see especially on the right hand side of this chart you've got the blocks of different process processor types these of course are programmable and you notice the high bandwidth memory the hbm3 plus the ddrs on the two sides kind of bookending the blocks that's shared across the entire system and it's connected by pcie gen 5 cxl or ccix multi-die socket so you know you may be looking to say okay two sets of block diagrams big deal well while there are similarities around disaggregation and i guess implied shared memory in the intel diagram and of course the use of advanced standards there are also some notable differences in particular arm is really already at the soc level whereas intel is talking about fpgas neoverse arms architecture is shipping in test mode and we'll have end market product by year end 2022 intel is talking about maybe 2024 we think that's aspirational or 2025 at best arm's road map is much more clear now intel said it will release more details in october so we'll pay attention then maybe we'll recalibrate at that point but it's clear to us that arm is way further along now the other major difference is volume intel is coming at this from a high data center perspective and you know presumably plans to push down market or out to the edge arm is coming at this from the edge low cost low power superior price performance arm is winning at the edge and based on the data that we shared earlier from aws it's clearly gaining ground in the enterprise history strongly suggests that the volume approach will win not only at the low end but eventually at the high end so we want to wrap by looking at what this means for customers and the partner ecosystem the first point we'd like to make is follow the consumer apps this capability the capabilities that we see in consumer apps like image processing and natural language processing and facial recognition and voice translation these inference capabilities that are going on today in mobile will find their way into the enterprise ecosystem ninety percent of the cost associated with machine learning in the cloud is around inference in the future most ai in the enterprise and most certainly at the edge will be inference it's not today because it's too expensive this is why aws is building custom chips for inferencing to drive costs down so it can increase adoption now the second point is we think that customers should start experimenting and see what you can do with arm-based platforms moore's law is accelerating at least the outcome of moore's law the doubling of performance every of the 18 to 24 months it's it's actually much higher than that now when you add up all the different components in these alternative processors just take a look at apple's a5 a15 chip and arm is in the lead in terms of performance price performance cost and energy consumption by moving some workloads onto graviton for example you'll see what types of cost savings you can drive for which applications and possibly generate new applications that you can deliver to your business put a couple engineers in the task and see what they can do in two or three weeks you might be surprised or you might say hey it's too early for us but you'll find out and you may strike gold we would suggest that you talk to your hybrid cloud provider as well and find out if they have a nitro we shared that vmware they've got a clear path as does dell because they're you know vmware cousins what about your other strategic suppliers what's their roadmap what's the time frame to move from where they are today to something that resembles nitro do they even think about that how do they think about that do they think it's important to get there so if if so or if not how are they thinking about reducing your costs and supporting your new workloads at scale now for isvs these consumer capabilities that we discussed earlier all these mobile and and automated systems and cars and and things like that biometrics another example they're going to find their way into your software and your competitors are porting to arm they're embedding these consumer-like capabilities into their apps are you we would strongly recommend that you take a look at that talk to your cloud suppliers and see what they can do to help you innovate run faster and cut costs okay that's it for now thanks to my collaborator david floyer who's been on this topic since early last decade thanks to the community for your comments and insights and hey thanks to patrick morehead and daniel newman for some timely interviews from your event nice job fellas remember i published each week on wikibon.com and siliconangle.com these episodes are all available as podcasts just search for breaking analysis podcasts you can always connect with me on twitter at d vallante or email me at david.velante at siliconangle.com i appreciate the comments on linkedin and clubhouse so follow us if you see us in a room jump in and let's riff on these topics and don't forget to check out etr.plus for all the survey data this is dave vellante for the cube insights powered by etr be well and we'll see you next time

Published Date : Jun 18 2021

SUMMARY :

and nitro is the key to this

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
2013DATE

0.99+

2015DATE

0.99+

dave brownPERSON

0.99+

2014DATE

0.99+

2020DATE

0.99+

2017DATE

0.99+

david floyerPERSON

0.99+

60 serversQUANTITY

0.99+

2018DATE

0.99+

last yearDATE

0.99+

18QUANTITY

0.99+

microsoftORGANIZATION

0.99+

twoQUANTITY

0.99+

daniel newmanPERSON

0.99+

35 billion dollarsQUANTITY

0.99+

alibabaORGANIZATION

0.99+

16.7 billion dollarsQUANTITY

0.99+

16.7 billion dollarQUANTITY

0.99+

2025DATE

0.99+

second pointQUANTITY

0.99+

ninety percentQUANTITY

0.99+

siliconangle.comOTHER

0.99+

octoberDATE

0.99+

350 million dollarsQUANTITY

0.99+

dave vellantePERSON

0.99+

2024DATE

0.99+

bothQUANTITY

0.99+

googleORGANIZATION

0.99+

nvidiaORGANIZATION

0.99+

amdORGANIZATION

0.99+

bostonLOCATION

0.99+

first pointQUANTITY

0.99+

both companiesQUANTITY

0.99+

three weeksQUANTITY

0.99+

24 monthsQUANTITY

0.99+

appleORGANIZATION

0.98+

30QUANTITY

0.98+

todayDATE

0.98+

gravitonTITLE

0.98+

each weekQUANTITY

0.98+

nearly 50 percentQUANTITY

0.98+

awsORGANIZATION

0.98+

earlier this yearDATE

0.98+

100 server instancesQUANTITY

0.98+

amazonORGANIZATION

0.98+

two sidesQUANTITY

0.98+

intelORGANIZATION

0.98+

400 differentQUANTITY

0.97+

early last decadeDATE

0.97+

twitterORGANIZATION

0.97+

linkedinORGANIZATION

0.97+

40 improvementQUANTITY

0.97+

x86TITLE

0.96+

last decadeDATE

0.96+

ciscoORGANIZATION

0.95+

oracleORGANIZATION

0.95+

chenoyPERSON

0.95+

40 betterQUANTITY

0.95+

vmwareORGANIZATION

0.95+

350 million dollarQUANTITY

0.94+

nitroORGANIZATION

0.92+

Breaking Analysis: How Nvidia Wins the Enterprise With AI


 

from the cube studios in palo alto in boston bringing you data-driven insights from the cube and etr this is breaking analysis with dave vellante nvidia wants to completely transform enterprise computing by making data centers run 10x faster at one tenth the cost and video's ceo jensen wang is crafting a strategy to re-architect today's on-prem data centers public clouds and edge computing installations with a vision that leverages the company's strong position in ai architectures the keys to this end-to-end strategy include a clarity of vision massive chip design skills a new arm-based architecture approach that integrates memory processors i o and networking and a compelling software consumption model even if nvidia is unsuccessful at acquiring arm we believe it will still be able to execute on this strategy by actively participating in the arm ecosystem however if its attempts to acquire arm are successful we believe it will transform nvidia from the world's most valuable chip company into the world's most valuable supplier of integrated computing architectures hello everyone and welcome to this week's wikibon cube insights powered by etr in this breaking analysis we'll explain why we believe nvidia is in the right position to power the world's computing centers and how it plans to disrupt the grip that x86 architectures have had on the data center for decades the data center market is in transition like the universe the cloud is expanding at an accelerated pace no longer is the cloud an opaque set of remote services i always say somewhere out there sitting in a mega data center no rather the cloud is extending to on-premises data centers data centers are moving into the cloud and they're connecting through adjacent locations that create hybrid interactions clouds are being meshed together across regions and eventually will stretch to the far edge this new definition or view of cloud will be hyper distributed and run by software kubernetes is changing the world of software development and enabling workloads to run anywhere open apis external applications expanding the digital supply chains and this expanding cloud they all increase the threat surface and vulnerability to the most sensitive information that resides within the data center and around the world zero trust has become a mandate we're also seeing ai being injected into every application and it's the technology area that we see with the most momentum coming out of the pandemic this new world will not be powered by general purpose x86 processors rather it will be supported by an ecosystem of arm-based providers in our opinion that are affecting an unprecedented increase in processor performance as we have been reporting and nvidia in our view is sitting in the poll position and is currently the favorite to dominate the next era of computing architecture for global data centers public clouds as well as the near and far edge let's talk about jensen wang's clarity of vision for this new world here's a chart that underscores some of the fundamental assumptions that he's leveraging to expand his market the first is that there's a lot of waste in the data center he claims that only half of the cpu cores deployed in the data center today actually support applications the other half are processing the infrastructure all around the applications that run the software defined data center and they're terribly under utilized nvidia's blue field three dpu the data processing unit was described in a blog post on siliconangle by analyst zias caravala as a complete mini server on a card i like that with software defined networking storage and security acceleration built in this product has the bandwidth and according to nvidia can replace 300 general purpose x86 cores jensen believes that every network chip will be intelligent programmable and capable of this type of acceleration to offload conventional cpus he believes that every server node will have this capability and enable every packed of every packet and every application to be monitored in real time all the time for intrusion and as servers move to the edge bluefield will be included as a core component in his view and this last statement by jensen is critical in our opinion he says ai is the most powerful force of our time whether you agree with that or not it's relevant because ai is everywhere an invidious position in ai and the architectures the company is building are the fundamental linchpin of its data center enterprise strategy so let's take a look at some etr spending data to see where ai fits on the priority list here's a set of data in a view that we often like to share the horizontal axis is market share or pervasiveness in the etr data but we want to call your attention to the vertical axis that's really really what really we want to pay attention today that's net score or spending momentum exiting the pandemic we've seen ai capture the number one position in the last two surveys and we think this dynamic will continue for quite some time as ai becomes the staple of digital transformations and automations an ai will be infused in every single dot you see on this chart nvidia's architectures it just so happens are tailor made for ai workloads and that is how it will enter these markets let's quantify what that means and lay out our view of how nvidia with the help of arm will go after the enterprise market here's some data from wikibon research that depicts the percent of worldwide spending on server infrastructure by workload type here are the key points first the market last year was around 78 billion dollars worldwide and is expected to approach 115 billion by the end of the decade this might even be a conservative figure and we've split the market into three broad workload categories the blue is ai and other related applications what david floyer calls matrix workloads the orange is general purpose think things like erp supply chain hcm collaboration basically oracle saps and microsoft work that's being supported today and of course many other software providers and the gray that's the area that jensen was referring to is about being wasted the offload work for networking and storage and all the software defined management in the data centers around the world okay you can see the squeeze that we think compute infrastructure is gonna gonna occur around that orange area that general-purpose workloads that we think is going to really get squeezed in the next several years on a percentage basis and on an absolute basis it's really not growing nearly as fast as the other two and video with arm in our view is well positioned to attack that blue area and the gray area those those workload offsets and the new emerging ai applications but even the orange as we've reported is under pressure as for example companies like aws and oracle they use arm-based designs to service general purpose workloads why are they doing that cost is the reason because x86 generally and intel specifically are not delivering the price performance and efficiency required to keep up with the demands to reduce data center costs and if intel doesn't respond which we believe it will but if it doesn't act arm we think will get 50 percent of the general purpose workloads by the end of the decade and with nvidia it will dominate the blue the ai and the gray the offload work when we say dominate we're talking like capture 90 percent of the available market if intel doesn't respond now intel they're not just going to sit back and let that happen pat gelsinger is well aware of this in moving intel to a new strategy but nvidia and arm are way ahead in the game in our view and as we've reported this is going to be a real challenge for intel to catch up now let's take a quick look at what nvidia is doing with relevant parts of its pretty massive portfolio here's a slide that shows nvidia's three chip strategy the company is shifting to arm-based architectures which we'll describe in more detail in a moment the slide shows at the top line nvidia's ampere architecture not to be confused with the company ampere computing nvidia is taking a gpu centric approach no surprise obvious reasons there that's their sort of stronghold but we think over time it may rethink this a little bit and lean more into npus the neural processing unit we look at what apple's doing what tesla are doing we see opportunities for companies like nvidia to really sort of go after that but we'll save that for another day nvidia has announced its grace cpu a nod to the famous computer scientist grace hopper grace is a new architecture that doesn't rely on x86 and much more efficiently uses memory resources we'll again describe this in more detail later and the bottom line there that roadmap line shows the bluefield dpu which we described is essentially a complete server on a card in this approach using arm will reduce the elapsed time to go from chip design to production by 50 we're talking about shaving years down to 18 months or less we don't have time to do a deep dive into nvidia's portfolio it's large but we want to share some things that we think are important and this next graphic is one of them this shows some of the details of nvidia's jetson architecture which is designed to accelerate those ai plus workloads that we showed earlier and the reason is that this is important in our view is because the same software supports from small to very large including edge systems and we think this type of architecture is very well suited for ai inference at the edge as well as core data center applications that use ai and as we've said before a lot of the action in ai is going to happen at the edge so this is a good example of leveraging an architecture across a wide spectrum of performance and cost now we want to take a moment to explain why the moved arm-based architectures is so critical to nvidia one of the biggest cost challenges for nvidia today is keeping the gpu utilized typical utilization of gpu is well below 20 percent here's why the left hand side of this chart shows essentially racks if you will of traditional compute and the bottlenecks that nvidia faces the processor and dram they're tied together in separate blocks imagine there are thousands thousands of cores in a rack and every time you need data that lives in another processor you have to send a request and go retrieve it it's very overhead intensive now technologies like rocky are designed to help but it doesn't solve the fundamental architectural bottleneck every gpu shown here also has its own dram and it has to communicate with the processors to get the data i.e they can't communicate with each other efficiently now the right hand side side shows where nvidia is headed start in the middle with system on chip socs cpus are packaged in with npus ipu's that's the image processing unit you know x dot dot dot x pu's the the alternative processors they're all connected with sram which is think of that as a high speed layer like an layer one cache the os for the system on a chip lives inside of this and that's where nvidia has this killer software model what they're doing is they're licensing the consumption of the operating system that's running this system on chip in this entire system and they're affecting a new and really compelling subscription model you know maybe they should just give away the chips and charge for the software like a razer blade model talk about disruptive now the outer layer is the the dpu and the shared dram and other resources like the ampere computing the company this time cpus ssds and other resources these are the processors that will manage the socs together this design is based on nvidia's three chip approach using bluefield dpu leveraging melanox that's the networking component the network enables shared dram across the cpus which will eventually be all arm based grace lives inside the system on a chip and also on the outside layers and of course the gpu lives inside the soc in a scaled-down version like for instance a rendering gpu and we show some gpus on the outer layer as well for ai workloads at least in the near term you know eventually we think they may reside solely in the system on chip but only time will tell okay so you as you can see nvidia is making some serious moves and by teaming up with arm and leaning into the arm ecosystem it plans to take the company to its next level so let's talk about how we think competition for the next era of compute stacks up here's that same xy graph that we love to show market share or pervasiveness on the horizontal tracking against next net score on the vertical net score again is spending velocity and we've cut the etr data to capture players that are that are big in compute and storage and networking we've plugged in a couple of the cloud players these are the guys that we feel are vying for data center leadership around compute aws is a very strong position we believe that more than half of its revenues comes from compute you know ec2 we're talking about more than 25 billion on a run rate basis that's huge the company designs its own silicon graviton 2 etc and is working with isvs to run general purpose workloads on arm-based graviton chips microsoft and google they're going to follow suit they're big consumers of compute they sell a lot but microsoft in particular you know they're likely to continue to work with oem partners to attack that on-prem data center opportunity but it's really intel that's the provider of compute to the likes of hpe and dell and cisco and the odms which are the odms are not shown here now hpe let's talk about them for a second they have architectures and i hate to bring it up but remember the machine i know it's the butt of many jokes especially from competitors it had been you know frankly hpe and hp they deserve some of that heat for all the fanfare and then that they they put out there and then quietly you know pulled the machine or put it out the pasture but hpe has a strong position in high performance computing and the work that it did on new computing architectures with the machine and shared memories that might be still kicking around somewhere inside of hp and could come in handy for some day in the future so hpe has some chops there plus hpe has been known hp historically has been known to design its own custom silicon so i would not count them out as an innovator in this race cisco is interesting because it not only has custom silicon designs but its entry into the compute business with ucs a decade ago was notable and they created a new way to think about integrating resources particularly compute and networking with partnerships to add in the storage piece initially it was within within emc prior to the dell acquisition but you know it continues with netapp and pure and others cisco invests they spend money investing in architectures and we expect the next generation of ucs oh ucs2 ucs 2.0 will mark another notable milestone in the company's data center business dell just had an amazing quarterly earnings report the company grew top line revenue by around 12 percent and it wasn't because of an easy compare to last year dells is simply executing despite continued softness in the legacy emc storage business laptop the laptop demand continued to soar in dell server business it's growing again but we don't see dell as an architectural innovator per se in compute rather we think the company will be content to partner with suppliers whether it's intel nvidia arm-based partners or all of the above dell we think will rely on its massive portfolio its excellent supply chain and execution ethos to compete now ibm is notable for historical reasons with its mainframe ibm created the first great compute monopoly before it unwind and wittingly handed it to intel along with microsoft we don't see ibm necessarily aspiring to retake that compute platform mantle that once once held with mainframes rather red hat in the march to hybrid cloud is the path that we think in our view is ibm's approach now let's get down to the elephants in the room intel nvidia and china inc china is of course relevant because of companies like alibaba and huawei and the chinese chinese government's desire to be self-sufficient in semiconductor technology and technology generally but our premise here is that the trends are favoring nvidia over intel in this picture because nvidia is making moves to further position itself for new workloads in the data center and compete for intel's stronghold intel is going to attempt to remake itself but it should have been doing this seven years ago what pat gelsinger is doing today intel is simply far behind and it's going to take at least a couple years for them to really start to to make inroads in this new model let's stay on the nvidia v intel comparison for a moment and take a snapshot of the two companies here's a quick chart that we put together with some basic kpis some of these figures are approximations or they're rounded so don't stress over it too much but you can see intel is an 80 billion dollar company 4x the size of nvidia but nvidia's market cap far exceeds that of intel why is that of course growth in our view it's justified due to that growth and nvidia's strategic positioning intel used to be the gross margin king but nvidia has much higher gross margins interesting now when it comes down to free cash flow intel is still dominant as it pertains to the balance sheet intel is way more capital intensive than nvidia and as it starts to build out its foundries that's going to eat into intel's cash position now what we did is we put together a little pro forma on the third column of nvidia plus arm circa let's say the end of 2022. we think they could get to a run rate that is about half the size of intel and that can propel the company's market cap to well over half a trillion dollars if they get any credit for arm they're paying 40 billion dollars for arm a company that's you know sub 2 billion the risk is that because of the arm because the arm deal is based on cash plus tons of stock it could put pressure on the market capitalization for some time arm has 90 percent gross margins because it pretty much has a pure license model so it helps the gross margin line a little bit for this in this pro forma and the balance sheet is a swag arm has said that it's not going to take on debt to do the transaction but we haven't had time to really dig into that and figure out how they're going to structure it so we took a took a swag in in what we would do with this low interest rate environment but but take that with a grain of salt we'll do more research in there the point is given the momentum and growth of nvidia its strategic position in ai is in its deep engineering they're aimed at all the right places and its potential to unlock huge value with arm on paper it looks like the horse to beat if it can execute all right let's wrap up here's a summary look the architectures on which nvidia is building its dominant ai business are evolving and nvidia is well positioned to drive a truck right to the enterprise in our view the power has shifted from intel to the arm ecosystem and nvidia is leaning in big time whereas intel it has to preserve its current business while recreating itself at the same time this is going to take a couple of years but intel potentially has the powerful backing of the us government too strategic to fail the wild card is will nvidia be successful in acquiring arm certain factions in the uk and eu are fighting the deal because they don't want the u.s dictating to whom arm can sell its technology for example the restrictions placed on huawei for many suppliers of arm-based chips based on u.s sanctions nvidia's competitors like broadcom qualcomm at all are nervous that if nvidia gets armed they will be at a competitive disadvantage they being invidious competitors and for sure china doesn't want nvidia controlling arm for obvious reasons and it will do what it can to block the deal and or put handcuffs on how business can be done in china we can see a scenario where the u.s government pressures the uk and eu regulators to let this deal go through look ai and semiconductors you can't get much more strategic than that for the u.s military and the u.s long-term competitiveness in exchange for maybe facilitating the deal the government pressures nvidia to guarantee some feed to the intel foundry business while at the same time imposing conditions that secure access to arm-based technology for nvidia's competitors and maybe as we've talked about before having them funnel business to intel's foundry actually we've talked about the us government enticing apple to do so but it could also entice nvidia's competitors to do so propping up intel's foundry business which is clearly starting from ground zero and is going to need help outside of intel's own semiconductor manufacturing internally look we don't have any inside information as to what's happening behind the scenes with the us government and so forth but on its earning call on its earnings call nvidia said they're working with regulators that are on track to complete the deal in early 2022. we'll see okay that's it for today thank you to david floyer who co-created this episode with me and remember i publish each week on wikibon.com and siliconangle.com these episodes they're all available as podcasts all you're going to do is search breaking analysis podcast and you can always connect with me on twitter at dvalante or email me at david.valante siliconangle.com i always appreciate the comments on linkedin and in the clubhouse please follow me so you can be notified when we start a room and riff on these topics and don't forget to check out etr.plus for all the survey data this is dave vellante for the cube insights powered by etr be well and we'll see you next time [Music] you

Published Date : May 30 2021

SUMMARY :

and it's the technology area that we see

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
alibabaORGANIZATION

0.99+

nvidiaORGANIZATION

0.99+

50 percentQUANTITY

0.99+

90 percentQUANTITY

0.99+

huaweiORGANIZATION

0.99+

microsoftORGANIZATION

0.99+

david floyerPERSON

0.99+

40 billion dollarsQUANTITY

0.99+

chinaLOCATION

0.99+

thousandsQUANTITY

0.99+

18 monthsQUANTITY

0.99+

appleORGANIZATION

0.99+

david.valanteOTHER

0.99+

last yearDATE

0.99+

two companiesQUANTITY

0.99+

bostonLOCATION

0.99+

googleORGANIZATION

0.99+

10xQUANTITY

0.99+

early 2022DATE

0.99+

jensenPERSON

0.99+

ibmORGANIZATION

0.99+

around 78 billion dollarsQUANTITY

0.99+

third columnQUANTITY

0.99+

80 billion dollarQUANTITY

0.99+

more than halfQUANTITY

0.99+

ukLOCATION

0.99+

firstQUANTITY

0.98+

around 12 percentQUANTITY

0.98+

a decade agoDATE

0.98+

115 billionQUANTITY

0.98+

todayDATE

0.98+

each weekQUANTITY

0.97+

dellsORGANIZATION

0.97+

seven years agoDATE

0.97+

50QUANTITY

0.97+

dellORGANIZATION

0.97+

jensen wangPERSON

0.97+

twoQUANTITY

0.97+

end of 2022DATE

0.97+

over half a trillion dollarsQUANTITY

0.97+

siliconangle.comOTHER

0.96+

intelORGANIZATION

0.96+

Breaking Analysis: Your Online Assets Aren’t Safe - Is Cloud the Problem or the Solution?


 

from the cube studios in palo alto in boston bringing you data-driven insights from the cube and etr this is breaking analysis with dave vellante the convenience of online access to bank accounts payment apps crypto exchanges and other transaction systems has created enormous risks which the vast majority of individuals either choose to ignore or simply don't understand the internet has become the new private network and unfortunately it's not so private apis scripts spoofing insider crime sloppy security hygiene by users and much more all increase our risks the convenience of cloud-based services in many respects exacerbates the problem but software built in the cloud is a big part of the solution hello everyone and welcome to this week's wikibon cube insights powered by etr in this breaking analysis we'll try to raise awareness about a growing threat to your liquid assets and hopefully inspire you to do some research and take actions to lower the probability of you losing thousands hundreds of thousands or millions of dollars let's go back to 2019 in an event that should have forced us to act but for most of us didn't in september of that year jack dorsey's twitter twitter account was hacked the hackers took over his account and posted racial slurs and other bizarre comments before twitter could regain control of the account and assure us that this wasn't a system-wide attack most concerning however was the manner in which the attackers got a hold of dorsey's twitter account they used an increasingly common and relatively easy to execute technique referred to as a sim hijack or a sim swap the approach allows cyber thieves to take control of a victim's phone number now they often will target high-profile individuals like ceos and celebrities to embarrass or harass them but increasingly they're going after people's money of course now just in the past month we've seen a spate of attacks where individuals have lost cash it's a serious problem of increasing frequency so let's talk a little bit about how it works now some of you are familiar with this technique but most people that we talk to either aren't aware of it or aren't concerned you should be in a sim hack like this one documented on medium in may of 2019 four months prior to the dorsey attack the hackers who have many of your credentials that have likely been posted on the dark web they have your email they have your frequently used passwords your phone number your address your mother's maiden name name of your favorite pet and so forth they go in and they spoof a mobile phone carrier rep into thinking that it's you and they convince the agent that they've switched phones or have some other ruse to get a new sim card sent to them or they pay insiders at the phone carrier to steal sim card details hey 100 bucks a card big money now once in possession of the sim card info the attacker now can receive sms messages as part of two-factor authentication systems that are often used to verify identity they can't use face id on mobile but what they can do is go into your web account and change the password or other information the website then sends an sms and now the attacker has the code and is in then the individual can lock you out and steal your money before you even know what hit you all right so what can you do about it first there's no system that is hack proof if the bad guys want to get you and the value is high enough they will get you but that's the key roi what's roi simply put it's a measure of return derived from dividing the value stolen by the cost of getting that value it's benefit divided by cost so a good way to dissuade a criminal is to increase the denominator if you make it harder to steal the value goes down the roi is less here's a layered system shared by jason floyer the son of our very own david floyer smart dna there so we appreciate his contribution to the cube the system involves three layers of protection first you got to think about all the high value online systems that you have here are just a few you got bank accounts you have investment accounts you might have betting sites that has cash in it e-commerce sites and so forth now many of these sites if not most will use sms-based two-factor authentication to identify you now that exposes you to the sim hack the system that jason proposes let's start in the middle of this chart the first thing is you got to acknowledge that the logins that you're using to access your critical systems are already public so the first thing you do is to get a in quotes secure email in other words one that no one knows about and isn't on the dark web find a provider that you trust maybe the one maybe one that doesn't sell ads but that look that's your call or maybe go out and buy a domain and create a private email address now the second step is to use a password manager now for those who don't know what that is you're probably already using one that comes with your chrome browser for example and it remembers your passwords and autofills them now if you on your iphone if you're an iphone user go to settings passwords and security recommendations or if you're on an android phone open your chrome app and go to settings passwords check passwords you're likely to see a number of recommendations as in dozens or maybe even hundreds that have been compromised reuse passwords and or or are the subject of a data breach so a password manager is a single cloud-based layer that works on your laptop and your mobile phone and allows you to largely automate the creation management and maintenance of your online credentials now the third layer here involves an external cloud-based or sometimes app-based two-factor authentication system that doesn't use sms one that essentially turns your phone into a hardware authentication device much like an external device that you would use like a yubikey now that's also a really good idea to use as that third layer that hardware fob so the system basically brings together all your passwords under one roof under one system with some layers that lower the probability of your money getting stolen again it doesn't go to zero percent but it's dramatically better than the protection that most people have here's another view of that system and this venn the password manager in the middle manages everything and yes there's a concern that all your passwords are in one place but once set up it's more secure than what you're likely doing today we'll explain that and it'll make your life a lot easier the key to this system is there's there's a single password that you have to remember for the password manager and it takes care of everything else now for many password managers you can also add a non-sms based third-party two-factor authentication capability we'll come back and talk about that in a moment so the mobile phone here uses facial recognition if it's enabled so it would require somebody they had either have you at gunpoint to use your phone and to stick it in front of your face to get into your accounts or you know eventually they'll become experts at deep fakes that's probably something we're going to have to contend with down the road so it's the desktop or laptop via web access that is of the greatest concern in this use case this is where the non-sms-based third-party two-factor authentication comes into play it's installed on your phone and if somebody comes into your account from an unauthorized device it forces a two-factor authentication not using sms but using a third-party app as you guessed it is running in the cloud this is where the cloud creates this problem but it's also here to help solve this problem but the key is this app it generates a verification code that changes on your phone every 20 seconds and you can't get into the website without entering that auto generated code well normal people can't get in there's probably some other back door if they really want to get you but i think you see that this is a better system than what 99 of the people have today but there's more to the story so just as with enterprise tech and dealing with the problem of ransomware air gaps are an essential tool in com combating our personal cyber crime so we've added a couple of items to jason's slide so the this air gap and the secure password notion what you want to do is make sure that that password manager is strong and it's easy for you to remember it's never used anywhere except for the password manager which also uses the secure email now if you've set up a non s if you've set up a two factor authentication sms or otherwise you're even more protected non-sms is better for the reasons we've described now for your crypto if you got a lot first of all get out of coinbase not only does coinbase gouge you on transaction costs but we'd recommend storing a good chunk of your crypto in an air-gapped vault now what you want to do is you want to make a few copies of this critical information you want to keep your secure password on you in one spot or memorize it but maybe keep a copy in your wallet your physical wallet and put the rest in a fireproof filing cabinet and a safety deposit box and or fire proof lock a lock box or a book in your library but but have multiple copies that somebody has to get to in order to hack you and you want to put also all your recovery codes so when you set all this up you're going to get recovery codes for the password manager in your crypto wallets that you own yeah it gets complicated and it's a pain but imagine having 30 percent or more of your liquid assets stolen now look we've really just scratched the surface here and you you're going to have to do some research and talk to people who have set this stuff up to get it right so figure out your secure email provider and then focus on the password manager now just google it and take your time deciding which one is the best for you here's a sample there are many some are free you know the better ones are for pay but carve out a full day to do research and set up your system take your time and think about how you use it before pulling the trigger on these tools and document everything offline air gap it now the other tooling that you want to use is the non-sms based third-party authentication app so in case you get sim hacked you've got further protection this turns your phone into a secure token generator without using sms unfortunately it's even more complicated because not only are there a lot of tools but not all your financial systems and apps we will support the same two-factor authentication app your password manager for example might only support duo your crypto exchange might support authy but your bank might only support symantec vip or it forces you to have a key fob or use sms so it's it's a mishmash so you may need to use multiple authentication apps to protect your liquid assets yeah i'm sorry but the consequences of not protecting your money and identity are worth the effort okay well i know there's a deviation from our normal enterprise tech discussions but look we're all the cios of our respective home i.t we're the network admin the storage admin the tech support help desk and we're the chief information security officer so as individuals we can only imagine the challenges of securing the enterprise and one of the things we talk about a lot in the cyber security space is complexity and fragmentation it's just the way it is now here's a chart from etr that we use frequently which lays out the security players in the etr data set on two dimensions net score or spending velocity in the vertical axis and market share or pervasiveness within the data set on the horizontal now for change i'm not going to elaborate on any of the specific vendors today you've seen a lot of this before but the chart underscores the complexity and fragmentation of this market and this is just really literally one tiny subset but the cloud which i said at the outset is a big reason that we got into this problem holds a key to solving it now here's one example listen to this clip of dave hatfield the longtime industry exec he's formerly an executive with pure storage he's now the ceo of laceworks lace work a very well-funded cloud-based security company that in our view is attacking one of the biggest problems in security and that's the fragmentation issue that we've often discussed take a listen so at the core of what we do you know you know it's um it's really trying to merge when we look at we look at security as a data problem security and compliance is the data problem and when you apply that to the cloud it's a massive data problem you know you literally have trillions of data points you know across shared infrastructure that we you need to be able to ingest and capture uh and then you need to be able to process efficiently and provide context back to the end user and so we approached it very differently than how legacy approaches have been uh in place you know largely rules-based engines that are written to be able to try and stop the bad guys and they miss a lot of things and so our data-driven approach uh that we patented is called uh polygraph it's it's a security architecture and there are three primary benefits it does a lot of things but the three things that we think are most profound first is it eliminates the need for you know dozens of point solutions um i was shocked when i you know kind of learned about security i was at symantec back in the day and just to see how fragmented this market is it's one of the biggest markets in tech 124 billion dollars in annual spend growing at 300 billion dollars in the next three years and it's massively fragmented and the average number of point solutions that customers have to deal with is dozens like literally 75 is the average number and so we wanted to take a platform approach to solve this problem where the larger the attack service that you put in the more data that you put into our machine learning algorithms the smarter that it gets and the higher the efficacies look hatfield nailed it in our view i mean the cloud and edge explodes the threat surface and this becomes a data problem at massive scale now is lace work going to solve all these problems no of course not but having researched this it's common for individuals to be managing dozens of tools and enterprises as hatfield said 75 on average with many hundreds being common the number one challenge we hear from csos and they'll tell you this is a lack of talent lack of human skills and bandwidth to solve the problem and a big part of that problem is fragmentation multiple apis scripts different standards that are constantly being updated and evolved so if the cloud can help us reduce tooling creep and simplify and automate at scale as the network continues to expand like the universe we can keep up with the adversaries they're never going to get ahead of them so look i know this topic is a bit off our normal swim lane but we think this is so important and no people that have been victimized so we wanted to call your attention to the exposure and try to get you to take some action even if it's baby steps so let's summarize you really want to begin by understanding where your credentials have been compromised because i promise they have been just look at your phone or look into your browser and see those recommendations and you're going to go whoa i got to get on this at least i hope you do that now you want to block out an entire day to focus on this and dig into it in order to protect you or your and your family's assets there's a lot of stake here and look one day is not going to kill you it's worth it then you want to begin building those three layers that we showed you choose a private email that is secure quote-unquote quote-unquote research the password manager that's find the one that's going to work for you do you want one that's web-based or an app that you download how does the password manager authenticate what do the reviews say how much does it cost don't rush into this you may want to test this out on a couple of low risk systems before fully committing because if you screw it up it's really a pain to unwind so don't rush into it then you want to figure out how to use your non-sms based two-factor authentication apps and identify which assets you want to protect you don't want to protect everything do you really care about your credentials on a site where you signed up years ago and never use it anymore it doesn't have any credit cards in it just delete it from your digital life and focus on your financial accounts your crypto and your sites where your credit card or other sensitive information lives and can be stolen also it's important to understand which institutions utilize which authentication methods really important that you make sure to document everything and air gap the most sensitive credentials and finally you're going to have to keep iterating and improving your security because this is a moving target you will never be 100 protected unfortunately this isn't a one-shot deal you're going to do a bunch of work it's hard but it's important work you're going to maintain your password you're going to change them every now and then maybe every few months six months maybe once a year whatever whatever is right for you and then a couple years down the road maybe two or three years down the road you might have to implement an entirely new system using the most modern tooling which we believe is going to be cloud-based or you could just ignore it and see what happens okay that's it for now thanks to the community for your comments and input and thanks again to jason floyer whose analysis around this topic was extremely useful remember i publish each week on wikibon.com and siliconangle.com these episodes are all available as podcasts all you can do is research breaking analysis podcasts or you can always connect on twitter i'm at d vallante or email me at david.velante siliconangle.com of course i always appreciate the comments on linkedin and clubhouse follow me so you're notified when we start a room and riff on these topics don't forget to check out etr.plus for all the survey data this is dave vellante for the cube insights powered by etr be well and we'll see you next time

Published Date : May 24 2021

SUMMARY :

so the first thing you do is to get a

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave ShacochisPERSON

0.99+

AmazonORGANIZATION

0.99+

Dave VelantePERSON

0.99+

GoogleORGANIZATION

0.99+

Dave VellantePERSON

0.99+

Lisa MartinPERSON

0.99+

AWSORGANIZATION

0.99+

IBMORGANIZATION

0.99+

Francis HaugenPERSON

0.99+

Justin WarrenPERSON

0.99+

David DantePERSON

0.99+

Ken RingdahlPERSON

0.99+

PWCORGANIZATION

0.99+

CenturylinkORGANIZATION

0.99+

Bill BelichikPERSON

0.99+

MicrosoftORGANIZATION

0.99+

Peter BurrisPERSON

0.99+

DeloitteORGANIZATION

0.99+

Frank SlootmanPERSON

0.99+

AndyPERSON

0.99+

Coca-ColaORGANIZATION

0.99+

Tom BradyPERSON

0.99+

appleORGANIZATION

0.99+

David ShacochisPERSON

0.99+

Amazon Web ServicesORGANIZATION

0.99+

Don JohnsonPERSON

0.99+

CelticsORGANIZATION

0.99+

DavePERSON

0.99+

MerckORGANIZATION

0.99+

KenPERSON

0.99+

BerniePERSON

0.99+

OracleORGANIZATION

0.99+

30 percentQUANTITY

0.99+

CelticORGANIZATION

0.99+

LisaPERSON

0.99+

Robert KraftPERSON

0.99+

John ChambersPERSON

0.99+

Silicon Angle MediaORGANIZATION

0.99+

San FranciscoLOCATION

0.99+

JohnPERSON

0.99+

John WallsPERSON

0.99+

$120 billionQUANTITY

0.99+

John FurrierPERSON

0.99+

January 6thDATE

0.99+

2007DATE

0.99+

DanielPERSON

0.99+

Andy McAfeePERSON

0.99+

FacebookORGANIZATION

0.99+

ClevelandORGANIZATION

0.99+

CavsORGANIZATION

0.99+

BrandonPERSON

0.99+

2014DATE

0.99+

Christian Craft, Oracle | CUBE Conversation


 

(upbeat music) >> Hello everyone, and welcome to this Cube conversation. We're going to dig into some of the more specific and sometimes gory details of managing the nuances of database, database management systems. You know, it's a lot of fun to get it to the daily buzz of cloud and database competition and get a little snarky on Twitter, but there are a lot of mundane issues that you have to address to really do proper database sizing, capacity planning, and you know whether or not database consolidation makes sense. These are not trivial issues. And decades ago they spawned an entire role around the database administrator. They had to do the dirty work of database management so that users and customers would be satisfied. And while automation and cloud are changing that role, at the end of the day, somebody actually has to make the databases work in the cloud and make sure that the business doesn't feel any impact on the transition along the way. So on that note, we have with us Oracle senior director of product management for mission critical databases. He works in Juan Loaiza's group, Chris Craft, and Steve Zivanic whom we know well on the cube says this guy is the Jedi master when it comes to consolidating databases in the cloud. Nobody knows more on the face of the planet Earth. So we're really excited Chris, to have you inside the Cube. Welcome. >> Thanks, thanks Dave. >> That's a very humble thanks. So when it comes to running databases in the cloud can you explain the difference between sizing and capacity planning? Aren't they two sides of the same coin? >> Yeah, you know, they really are. It's like, you know sizing is really part of capacity to planning. It's really, I look at sizing as a one-time effort whereas capacity planning is more your ongoing. You perform sizing initially when the application is deployed. And then, then when you're changing platforms, like going from on-prem to the Cloud you're going to go through a sizing exercise 'cause you're looking at going to a new platform. That's more of a one-time effort, and then ongoing, you're looking at your capacity management over time. So yeah, they are very related so. >> Okay, thank you. So we're going to talk about database consolidation. A lot of people would say, look the cloud makes consolidating databases maybe not irrelevant, but maybe not the best strategy because I got all these different purpose-built databases. Why consolidate databases if they're already going to consolidate it in the cloud in one location? >> Yeah. So, so we're really talking about in in the cloud, you're running virtual machines but consolidation still applies on the virtual machines. So if you have a virtual machine that's dedicated to a database that database is that server, that virtual machine is going to be under utilized over time. So what we're doing with consolidation is running multiple databases within a virtual machine or what it, Oracle virtual cluster. We do everything on clusters. So multiple machines multiple databases within that will drive up the utilization and improve your cost structure. So it's a sizing it's it's absolutely critical on even in the cloud. >> Okay. But, but wouldn't it, I might say to that, wouldn't it be better to have each database have a dedicated VM? I mean, from a performance perspective, it doesn't try to make the database do too much affect performance. >> Yeah. It, so whenever, so we know historically that a database on a dedicated server back in the day that was a physical server, today it's a virtual machine. When you do that, your utilization will be in the range of 15 to 20%. And that's, you know very highly under utilized systems when you do that. So we don't need to isolate things onto dedicated virtual machines for a performance perspective. There are other ways that we can manage that we have resource management built into Oracle and the Oracle database. And then on Exadata we have an integrated IO resource management as well so we can deal with that different ways. >> Okay. So you're basically proposing that you're putting these databases onto a single VM and managing it accordingly. Is there additional details you can provide on that? >> So, you know, we don't put everything into you know, literally one, one VM. You want to have some isolation built in there, but see and take a more pragmatic approach. You know, like every single database in one VM that's the wrong way to go. Each database in a dedicated VM is also the other extreme, also the wrong way to go. So we're kind of right down the middle and be more pragmatic about it, and do some level of consolidation to drive up utilization. >> I remember when I first started following tech I was studying up on, you know kind of how disc drives work and so forth. And there was probably like I can't even remember what it was. It was like probably like 10 megabytes under an actuator. And people were saying, Oh my God, that's so much data. You, you got your blast radius is, is so big. You got to split that up. So it's the same concept, apply with availability. Some would say, there's a problem because you're consolidating all this data and you've got this blast radius that increases. How do you address that? >> And so, you know, redundancy. So we have redundancy at all levels. So if you look at a single, so we're talking about Exadata here, taught in an Exadata machine we can lose up to 24 disc drives out of 30. 30 machines with 36 disc drives, we can use 24 of those. So that'd be 12 per storage cell. You can lose two storage cells as 24 out of 36 drives so we can lose and keep on running. We can also, we also cluster, we also do clustering. So the database servers are clustered together for high availability. So we can take, we can suffer multiple simultaneous failures and keep on running without performance impact either. So it's, so recovery, we handle that in different ways. So it's, look at blast radius from a standpoint, you want some, some isolation for blast radius but we have physical failures is just not something that we're concerned with. >> Why do you deal with taking down a VM? Doesn't that normally mean there's going to be some kind of disruption? >> Oh, so you know, the, so Oracle database, you're talking about real application clusters on on Oracle database, on Exadata. We've got, we have a very fast detection of of failures and then resolution of the failure. So we're looking at a small blip in performance, you know we're looking at a few milliseconds to detect failure and then maybe up around three seconds to actually affect the failover. So the applications that are not getting disconnected, they continue operating in the, in that kind of condition. So that's kind of unique to the Exadata platform. And so, you know, in our cloud, we're running Exadata. We have this built in there. So we're, we're resilient to that type of failure, so. >> And sorry, you mentioned real application clusters. You're saying because you're running real application clusters that's how you're able to become more resilient? >> So yeah, so we have, so Oracle database real application clusters runs on top of a clustered virtual machines on Exadata. We have integration then optimizes the fail over times of that clustering. So it's, it's not the cluster same, it's the optimizations are only built into Exadata. So we have much much faster, much better tighter integration, so much more scalability because of that, that integration that we have. >> Can I run rack in other clouds? Can I put that into Amazon's cloud? >> So, so real application clusters requires two things. It's a, you require shared storage in a fast interconnect, a fast networking interconnecting. And those things just don't exist in the other clouds. We have those built into Exadata in our cloud. And we also, we also allow real application clusters in our relational database, our database cloud service offering as well. But it's, really the highest implementation of that is in Exadata. >> Well, of course I was tongue in cheek joking but this is, this is why, you know, I was listening to Arvind Krishna the other day in IBM Think. And he was saying only 25% of mission critical applications have moved into the cloud. I didn't think it's that high. I mean, but, but what you're doing is basically building a mission critical, you know, cloud or a cloud for mission critical databases. And that's, that's unique. I mean, I would expect other cloud vendors that eventually you know, are going to get there, but you're kind of starting with the hard stuff and working backwards. But, that is what I've always interpreted is unique to Oracle, but how does that affect cost? Isn't that more expensive? >> Actually, no. We're taking services that that start out at a very similar price point. And then we drive. So what we've seen from other customers that are running in like Amazon, for example, we see databases on dedicated virtual machines that run anywhere from 15 to 20% utilization. So what we do is, that low, low utilization, what we do is take that and triple that. So we run, so we run maybe 50% utilization. At that point we still have full redundancy, but we've now made the service one third of the cost. So we're starting at a third, we're starting at a very similar cost. And then we drive it to, you know three times a utilization. This is not crazy numbers. This is, you know, 50% is, is fine and retain the redundancy at that level as well. >> Got it, well so. >> What we've seen is about a third the cost. >> Really? Okay. Well, so, but, what about, like for instance, on AWS, couldn't I run this in a multi availability zone, running RDS or some other cloud database? >> So, so you can run a Multi-AZ environment like in in Amazon, for example, you can run locals. That's what we call local standby. If you do that, you're now instead of being one third, instead of being three times more expensive, you're now six times more expensive. Because that is another copy of the entire platform, the entire instance, the storage, everything on the other availability zone instead of being three times more, it's now six. >> Because you're essentially replicating everything in a brute force mode, right? >> Yeah. It's a data guard standby, local standby in another AZ, or what we call availability domain in our cloud. >> So let's maybe geek out a little bit. So, let's talk more about availability. You know, for years, I mean, I remember going back to reading about this stuff with tandem computers, you know, coincident failures. How are you dealing with those in today's modern world? >> So what we call simultaneous failures is, so we, we deal with that with redundancy in the system. So we have redundancy at all layers in the storage. Like I said earlier, we can take across, you know, two storage cells and each storage cell has a dozen drives. So that's 24 disc drives. That's eight flashcard failures simultaneously. And we keep on running no data loss, no loss of service. That's at the storage layer. We have multiple, multiple redundant networking switches at that, at the networking layer, the internal network. Then we go up into the database server. We then have redundancy across the nodes of a cluster. You have multiple virtual machines that comprise a virtual cluster. So it's at each and every level, we have redundancy. And then we drive the redundancy into the application using what's called application continuity. So the application connections have knowledge of the failure, failure modes of the database. They can follow to the surviving node, and continue operating. >> And you do this with math, you're doing some kind of magic bit slicing, or how do you do that? >> That, so that is that particular thing, application continuity, so technology that's been built into Oracle database since, since 12c, and that it's been around for quite a long time. And it allows the application to follow the rack cluster, any kind of issues with the rack cluster. We can drain connections off. It's very well-proven technology in, you know, prior to to proactive maintenance, we can drain connections over and then it will also handle a failure of a connection as well. And the application following that, yes. >> I learned from my old mainframe days and hanging around with David Floyer. It's always ask, what happens when something goes wrong and it's all about recovery. And you guys have the gold standard there. I mean, we've talked about this a lot. So you got Exadata. That's what is behind your Exadata cloud service, X8M I think you call it, and you've got autonomous database. I'm not great with model numbers, but, but talk about the way you can handle simultaneous failures. I mean, are there like triple redundancies that you've built in? >> Yeah. So everything what we do in our cloud is everything is triple redundancy by default. So we, you can suffer, that way we can suffer two failures and continue operating. So the, the other thing, so recovery, if you look at transaction recovery, when a failure occurs a transaction will flip that session, will flip to the machine that keeps running. It'll reposition all in the work that's in flight, any kind of inflight transactions, any in flight queries that are going on, reposition and continue operating. >> So you've essentially created like the old three site data centers, but you're in a single platform because you're synchronous. But, that same concept in a package. >> It's, you know, it's a lot of times you show a picture of an Exadata. It looks like a single box, but in the box there's some redundancy built in the box. And in fact, in the cloud it's actually across an entire aisle. So it's, we kind of obscure that a little bit, from your provisioning, you know, our database nodes and our storage cells and in the cloud but it's actually across an entire aisle of a dataset. >> Okay, and of course, that's within a synchronous location. Let's talk about disaster recovery, and what you're doing in that area, around Oracle Cloud What are my options there? What's different from other cloud providers we were talking earlier about, AZs, how are you different and what are you doing there? >> Yeah, so we, we talked earlier about the Multi-AZ deployment, what we call it availability domain, AD, so a little different terminology. But we can deploy another, another copy of the database into another availability domain, if you like. It's not often that you lose an entire AZ or AD, it's more, we're protecting from regional failures. So across another region. And that's where we look at, we really look at that as that technology, as a standby, as a data, disaster recovery solution not for HA. HA, we build HA into the machine itself. >> So you're saying, we were talking earlier about AZ, you're saying that's for HA versus DR. Is that, is that what you're contending? >> Yeah, like, you know again, pick on Amazon for a second here. Amazon uses a standby database. What we would normally use for disaster recovery, they're using that for availability. And you're looking at a few minutes of time to flip over to another AZ, whereas within an Exadata frame, we can flip over in milliseconds. We keep continue running. There is no loss of conductivity. And then we use the standby in another region for disaster. That's a true disaster solution. >> As opposed to incurring that penalty of latency, or whatever, to spin up the other resource. >> Right, right. >> Okay, so that's clear how kind of you guys address that, that challenge. Last question, maybe you could give us your take, again folks, coming out of Oracle's mouth, but what's the bottom line cost Delta based on your experience between your service and competitive services? I love these conversations because you're not afraid to talk about the competition, so bring it on. >> I've seen, so we've just based on what we've seen with customers deploying databases in Amazon, versus what, you know we've replaced that within, in our cloud service. We're seeing from just a list price perspective. Now, you know, we discount, I know Amazon discounts, but the only thing I can really speak to is list price perspective. It's about a third the cost. So we're talking about a more powerful platform, runs faster. We get these incredible, we haven't even talked about performance here. Talk about availability, performance where we're getting IO rates, IO latencies in the 19 microsecond range. Now with Exadata, that's going to be 50 times faster than what you get with these traditional cloud vendors. So much, much faster, and a third the cost. >> So talk about discounts, I mean, I know Oracle discounts, Oracle from list price, Oracle provides significant discounts. I'm not as familiar with your cloud pricing but I mean, Amazon's discounts are really in the form of like reserved instances. Is your pricing similar in that regard or different? I mean, if I'm just paying on demand, I'm paying through the nose. I presume it's same with you. If I, but if I buy in bulk getting a discount, is that what you mean by discount? Or is it more similar to the way you've traditionally discounted, you know large customers, the more you spend, the more you you get kind of thing. >> It's a, there's a discount structure. So it's, we don't have the same kind of lock-in like with reserved instance structure, but yeah, it's, there are discounts and that's going to be very customer specific. >> Right. >> So, but I think that the end result we're starting at, a three X differential on the price. >> But the reason I'm asking the question is that the stats you gave me are for list price, right? >> Yeah, yes, yeah. >> Okay, and sure, you're saying that at list price you're, you're less expensive. I, and again, my contention would be just by experiences that your discounts would be more aggressive traditionally in Oracle's traditional business. You know, I've done a lot of Oracle negotiation in my days. And if you're, you know, if you're a big customer you can get good deals. And again, I'm not as familiar with the cloud pricing, but still that's, that's good. If you're doing it on a list price basis, to me, that's a conservative statement if that makes any sense. >> Right, that's where it starts. We know that's where it's starting out. So I, you know, once you get into discounts, it's very customer specific. >> Right. >> We know the starting point is at three X differential. Before you do something in the Multi-AZ would be a six X differential, by the way, so. >> Yeah, okay. All right, Chris. Well, Hey, I appreciate you taking us through this, good stuff, and best of luck, good work. You know, you guys keep, I always say Oracle invest you guys spend a lot of money in RD and, and, you know you're quiet for a while in the cloud and all of a sudden you came out like you invented it. So good job! >> All right. >> All right, thanks. Thanks for coming on. All right. >> Thanks. >> Thank you for watching everybody. This is Dave Vellante for Cube conversations. We'll see you next time. (upbeat music)

Published Date : May 14 2021

SUMMARY :

So on that note, we have with databases in the cloud Yeah, you know, they really are. maybe not the best strategy So if you have a virtual I might say to that, in the range of 15 to 20%. you can provide on that? So, you know, we So it's the same concept, So if you look at a So the applications that are And sorry, you mentioned So it's, it's not the cluster exist in the other clouds. building a mission critical, you know, And then we drive it to, you know about a third the cost. Well, so, but, what If you do that, you're now or what we call availability you know, coincident failures. So the application And it allows the application about the way you can handle So we, you can suffer, like the old three site data And in fact, in the cloud what are you doing there? It's not often that you So you're saying, we were Yeah, like, you know again, that penalty of latency, kind of you guys address that, but the only thing I can really speak to is that what you mean by discount? So it's, we don't have the So, but I think that the you can get good deals. So I, you know, once We know the starting point and all of a sudden you came Thanks for coming on. Thank you for watching everybody.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Steve ZivanicPERSON

0.99+

Dave VellantePERSON

0.99+

ChrisPERSON

0.99+

AmazonORGANIZATION

0.99+

DavePERSON

0.99+

15QUANTITY

0.99+

36 drivesQUANTITY

0.99+

sixQUANTITY

0.99+

50%QUANTITY

0.99+

50 timesQUANTITY

0.99+

three timesQUANTITY

0.99+

OracleORGANIZATION

0.99+

24QUANTITY

0.99+

David FloyerPERSON

0.99+

six timesQUANTITY

0.99+

36 disc drivesQUANTITY

0.99+

10 megabytesQUANTITY

0.99+

Chris CraftPERSON

0.99+

30. 30 machinesQUANTITY

0.99+

one-timeQUANTITY

0.99+

one thirdQUANTITY

0.99+

two sidesQUANTITY

0.99+

Arvind KrishnaPERSON

0.99+

two failuresQUANTITY

0.99+

each storage cellQUANTITY

0.99+

IBMORGANIZATION

0.99+

19 microsecondQUANTITY

0.99+

two storage cellsQUANTITY

0.99+

Christian CraftPERSON

0.99+

DeltaORGANIZATION

0.99+

25%QUANTITY

0.99+

20%QUANTITY

0.99+

Juan LoaizaPERSON

0.99+

single platformQUANTITY

0.99+

Each databaseQUANTITY

0.98+

AWSORGANIZATION

0.98+

each databaseQUANTITY

0.98+

decades agoDATE

0.98+

thirdQUANTITY

0.97+

ExadataORGANIZATION

0.97+

AZLOCATION

0.97+

around three secondsQUANTITY

0.97+

three timesQUANTITY

0.96+

12 per storage cellQUANTITY

0.96+

two thingsQUANTITY

0.96+

24 disc drivesQUANTITY

0.95+

single boxQUANTITY

0.95+

todayDATE

0.94+

TwitterORGANIZATION

0.94+

one locationQUANTITY

0.93+

three XQUANTITY

0.93+

oneQUANTITY

0.92+

one VMQUANTITY

0.91+

firstQUANTITY

0.9+

single VMQUANTITY

0.89+

threeQUANTITY

0.88+

Oracle CloudTITLE

0.85+

single databaseQUANTITY

0.83+

three site data centersQUANTITY

0.83+

dozen drivesQUANTITY

0.81+

eight flashcardQUANTITY

0.79+

Breaking Analysis: Why Apple Could be the Key to Intel's Future


 

>> From theCUBE studios in Palo Alto, in Boston bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante >> The latest Arm Neoverse announcement further cements our opinion that it's architecture business model and ecosystem execution are defining a new era of computing and leaving Intel in it's dust. We believe the company and its partners have at least a two year lead on Intel and are currently in a far better position to capitalize on a major waves that are driving the technology industry and its innovation. To compete our view is that Intel needs a new strategy. Now, Pat Gelsinger is bringing that but they also need financial support from the US and the EU governments. Pat Gelsinger was just noted as asking or requesting from the EU government $9 billion, sorry, 8 billion euros in financial support. And very importantly, Intel needs a volume for its new Foundry business. And that is where Apple could be a key. Hello, everyone. And welcome to this week's weekly bond Cube insights powered by ETR. In this breaking analysis will explain why Apple could be the key to saving Intel and America's semiconductor industry leadership. We'll also further explore our scenario of the evolution of computing and what will happen to Intel if it can't catch up. Here's a hint it's not pretty. Let's start by looking at some of the key assumptions that we've made that are informing our scenarios. We've pointed out many times that we believe Arm wafer volumes are approaching 10 times those of x86 wafers. This means that manufacturers of Arm chips have a significant cost advantage over Intel. We've covered that extensively, but we repeat it because when we see news reports and analysis and print it's not a factor that anybody's highlighting. And this is probably the most important issue that Intel faces. And it's why we feel that Apple could be Intel's savior. We'll come back to that. We've projected that the chip shortage will last no less than three years, perhaps even longer. As we reported in a recent breaking analysis. Well, Moore's law is waning. The result of Moore's law, I.e the doubling of processor performance every 18 to 24 months is actually accelerating. We've observed and continue to project a quadrupling of performance every two years, breaking historical norms. Arm is attacking the enterprise and the data center. We see hyperscalers as the tip of their entry spear. AWS's graviton chip is the best example. Amazon and other cloud vendors that have engineering and software capabilities are making Arm-based chips capable of running general purpose applications. This is a huge threat to x86. And if Intel doesn't quickly we believe Arm will gain a 50% share of an enterprise semiconductor spend by 2030. We see the definition of Cloud expanding. Cloud is no longer a remote set of services, in the cloud, rather it's expanding to the edge where the edge could be a data center, a data closet, or a true edge device or system. And Arm is by far in our view in the best position to support the new workloads and computing models that are emerging as a result. Finally geopolitical forces are at play here. We believe the U S government will do, or at least should do everything possible to ensure that Intel and the U S chip industry regain its leadership position in the semiconductor business. If they don't the U S and Intel could fade to irrelevance. Let's look at this last point and make some comments on that. Here's a map of the South China sea in a way off in the Pacific we've superimposed a little pie chart. And we asked ourselves if you had a hundred points of strategic value to allocate, how much would you put in the semiconductor manufacturing bucket and how much would go to design? And our conclusion was 50, 50. Now it used to be because of Intel's dominance with x86 and its volume that the United States was number one in both strategic areas. But today that orange slice of the pie is dominated by TSMC. Thanks to Arm volumes. Now we've reported extensively on this and we don't want to dwell on it for too long but on all accounts cost, technology, volume. TSMC is the clear leader here. China's president Xi has a stated goal of unifying Taiwan by China's Centennial in 2049, will this tiny Island nation which dominates a critical part of the strategic semiconductor pie, go the way of Hong Kong and be subsumed into China. Well, military experts say it was very hard for China to take Taiwan by force, without heavy losses and some serious international repercussions. The US's military presence in the Philippines and Okinawa and Guam combined with support from Japan and South Korea would make it even more difficult. And certainly the Taiwanese people you would think would prefer their independence. But Taiwanese leadership, it ebbs and flows between those hardliners who really want to separate and want independence and those that are more sympathetic to China. Could China for example, use cyber warfare to over time control the narrative in Taiwan. Remember if you control the narrative you can control the meme. If you can crawl the meme you control the idea. If you control the idea, you control the belief system. And if you control the belief system you control the population without firing a shot. So is it possible that over the next 25 years China could weaponize propaganda and social media to reach its objectives with Taiwan? Maybe it's a long shot but if you're a senior strategist in the U S government would you want to leave that to chance? We don't think so. Let's park that for now and double click on one of our key findings. And that is the pace of semiconductor performance gains. As we first reported a few weeks ago. Well, Moore's law is moderating the outlook for cheap dense and efficient processing power has never been better. This slideshows two simple log lines. One is the traditional Moore's law curve. That's the one at the bottom. And the other is the current pace of system performance improvement that we're seeing measured in trillions of operations per second. Now, if you calculate the historical annual rate of processor performance improvement that we saw with x86, the math comes out to around 40% improvement per year. Now that rate is slowing. It's now down to around 30% annually. So we're not quite doubling every 24 months anymore with x86 and that's why people say Moore's law is dead. But if you look at the (indistinct) effects of packaging CPU's, GPU's, NPUs accelerators, DSPs and all the alternative processing power you can find in SOC system on chip and eventually system on package it's growing at more than a hundred percent per annum. And this means that the processing power is now quadrupling every 24 months. That's impressive. And the reason we're here is Arm. Arm has redefined the core process of model for a new era of computing. Arm made an announcement last week which really recycle some old content from last September, but it also put forth new proof points on adoption and performance. Arm laid out three components and its announcement. The first was Neoverse version one which is all about extending vector performance. This is critical for high performance computing HPC which at one point you thought that was a niche but it is the AI platform. AI workloads are not a niche. Second Arm announced the Neoverse and two platform based on the recently introduced Arm V9. We talked about that a lot in one of our earlier Breaking Analysis. This is going to performance boost of around 40%. Now the third was, it was called CMN-700 Arm maybe needs to work on some of its names, but Arm said this is the industry's most advanced mesh interconnect. This is the glue for the V1 and the N2 platforms. The importance is it allows for more efficient use and sharing of memory resources across components of the system package. We talked about this extensively in previous episodes the importance of that capability. Now let's share with you this wheel diagram underscores the completeness of the Arm platform. Arms approach is to enable flexibility across an open ecosystem, allowing for value add at many levels. Arm has built the architecture in design and allows an open ecosystem to provide the value added software. Now, very importantly, Arm has created the standards and specifications by which they can with certainty, certify that the Foundry can make the chips to a high quality standard, and importantly that all the applications are going to run properly. In other words, if you design an application, it will work across the ecosystem and maintain backwards compatibility with previous generations, like Intel has done for years but Arm as we'll see next is positioning not only for existing workloads but also the emerging high growth applications. To (indistinct) here's the Arm total available market as we see it, we think the end market spending value of just the chips going into these areas is $600 billion today. And it's going to grow to 1 trillion by 2030. In other words, we're allocating the value of the end market spend in these sectors to the marked up value of the Silicon as a percentage of the total spend. It's enormous. So the big areas are Hyperscale Clouds which we think is around 20% of this TAM and the HPC and AI workloads, which account for about 35% and the Edge will ultimately be the largest of all probably capturing 45%. And these are rough estimates and they'll ebb and flow and there's obviously some overlap but the bottom line is the market is huge and growing very rapidly. And you see that little red highlighted area that's enterprise IT. Traditional IT and that's the x86 market in context. So it's relatively small. What's happening is we're seeing a number of traditional IT vendors, packaging x86 boxes throwing them over the fence and saying, we're going after the Edge. And what they're doing is saying, okay the edge is this aggregation point for all these end point devices. We think the real opportunity at the Edge is for AI inferencing. That, that is where most of the activity and most of the spending is going to be. And we think Arm is going to dominate that market. And this brings up another challenge for Intel. So we've made the point a zillion times that PC volumes peaked in 2011. And we saw that as problematic for Intel for the cost reasons that we've beat into your head. And lo and behold PC volumes, they actually grew last year thanks to COVID and we'll continue to grow it seems for a year or so. Here's some ETR data that underscores that fact. This chart shows the net score. Remember that's spending momentum it's the breakdown for Dell's laptop business. The green means spending is accelerating and the red is decelerating. And the blue line is net score that spending momentum. And the trend is up and to the right now, as we've said this is great news for Dell and HP and Lenovo and Apple for its laptops, all the laptops sellers but it's not necessarily great news for Intel. Why? I mean, it's okay. But what it does is it shifts Intel's product mix toward lower margin, PC chips and it squeezes Intel's gross margins. So the CFO has to explain that margin contraction to wall street. Imagine that the business that got Intel to its monopoly status is growing faster than the high margin server business. And that's pulling margins down. So as we said, Intel is fighting a war on multiple fronts. It's battling AMD in the core x86 business both PCs and servers. It's watching Arm mop up in mobile. It's trying to figure out how to reinvent itself and change its culture to allow more flexibility into its designs. And it's spinning up a Foundry business to compete with TSMC. So it's got to fund all this while at the same time propping up at stock with buybacks Intel last summer announced that it was accelerating it's $10 billion stock buyback program, $10 billion. Buy stock back, or build a Foundry which do you think is more important for the future of Intel and the us semiconductor industry? So Intel, it's got to protect its past while building his future and placating wall street all at the same time. And here's where it gets even more dicey. Intel's got to protect its high-end x86 business. It is the cash cow and funds their operation. Who's Intel's biggest customer Dell, HP, Facebook, Google Amazon? Well, let's just say Amazon is a big customer. Can we agree on that? And we know AWS is biggest revenue generator is EC2. And EC2 was powered by microprocessors made from Intel and others. We found this slide in the Arm Neoverse deck and it caught our attention. The data comes from a data platform called lifter insights. The charts show, the rapid growth of AWS is graviton chips which are they're custom designed chips based on Arm of course. The blue is that graviton and the black vendor A presumably is Intel and the gray is assumed to be AMD. The eye popper is the 2020 pie chart. The instant deployments, nearly 50% are graviton. So if you're Pat Gelsinger, you better be all over AWS. You don't want to lose this customer and you're going to do everything in your power to keep them. But the trend is not your friend in this account. Now the story gets even gnarlier and here's the killer chart. It shows the ISV ecosystem platforms that run on graviton too, because AWS has such good engineering and controls its own stack. It can build Arm-based chips that run software designed to run on general purpose x86 systems. Yes, it's true. The ISV, they got to do some work, but large ISV they have a huge incentives because they want to ride the AWS wave. Certainly the user doesn't know or care but AWS cares because it's driving costs and energy consumption down and performance up. Lower cost, higher performance. Sounds like something Amazon wants to consistently deliver, right? And the ISV portfolio that runs on our base graviton and it's just going to continue to grow. And by the way, it's not just Amazon. It's Alibaba, it's Oracle, it's Marvell. It's 10 cents. The list keeps growing Arm, trotted out a number of names. And I would expect over time it's going to be Facebook and Google and Microsoft. If they're not, are you there? Now the last piece of the Arm architecture story that we want to share is the progress that they're making and compare that to x86. This chart shows how Arm is innovating and let's start with the first line under platform capabilities. Number of cores supported per die or, or system. Now die is what ends up as a chip on a small piece of Silicon. Think of the die as circuit diagram of the chip if you will, and these circuits they're fabricated on wafers using photo lithography. The wafers then cut up into many pieces each one, having a chip. Each of these pieces is the chip. And two chips make up a system. The key here is that Arm is quadrupling the number of cores instead of increasing thread counts. It's giving you cores. Cores are better than threads because threads are shared and cores are independent and much easier to virtualize. This is particularly important in situations where you want to be as efficient as possible sharing massive resources like the Cloud. Now, as you can see in the right hand side of the chart under the orange Arm is dramatically increasing the amount of capabilities compared to previous generations. And one of the other highlights to us is that last line that CCIX and CXL support again Arm maybe needs to name these better. These refer to Arms and memory sharing capabilities within and between processors. This allows CPU's GPU's NPS, et cetera to share resources very often efficiently especially compared to the way x86 works where everything is currently controlled by the x86 processor. CCIX and CXL support on the other hand will allow designers to program the system and share memory wherever they want within the system directly and not have to go through the overhead of a central processor, which owns the memory. So for example, if there's a CPU, GPU, NPU the CPU can say to the GPU, give me your results at a specified location and signal me when you're done. So when the GPU is finished calculating and sending the results, the GPU just signals the operation is complete. Versus having to ping the CPU constantly, which is overhead intensive. Now composability in that chart means the system it's a fixed. Rather you can programmatically change the characteristics of the system on the fly. For example, if the NPU is idle you can allocate more resources to other parts of the system. Now, Intel is doing this too in the future but we think Arm is way ahead. At least by two years this is also huge for Nvidia, which today relies on x86. A major problem for Nvidia has been coherent memory management because the utilization of its GPU is appallingly low and it can't be easily optimized. Last week, Nvidia announced it's intent to provide an AI capability for the data center without x86 I.e using Arm-based processors. So Nvidia another big Intel customer is also moving to Arm. And if it's successful acquiring Arm which is still a long shot this trend is only going to accelerate. But the bottom line is if Intel can't move fast enough to stem the momentum of Arm we believe Arm will capture 50% of the enterprise semiconductor spending by 2030. So how does Intel continue to lead? Well, it's not going to be easy. Remember we said, Intel, can't go it alone. And we posited that the company would have to initiate a joint venture structure. We propose a triumvirate of Intel, IBM with its power of 10 and memory aggregation and memory architecture And Samsung with its volume manufacturing expertise on the premise that it coveted in on US soil presence. Now upon further review we're not sure the Samsung is willing to give up and contribute its IP to this venture. It's put a lot of money and a lot of emphasis on infrastructure in South Korea. And furthermore, we're not convinced that Arvind Krishna who we believe ultimately made the call to Jettisons. Jettison IBM's micro electronics business wants to put his efforts back into manufacturing semi-conductors. So we have this conundrum. Intel is fighting AMD, which is already at seven nanometer. Intel has a fall behind in process manufacturing which is strategically important to the United States it's military and the nation's competitiveness. Intel's behind the curve on cost and architecture and is losing key customers in the most important market segments. And it's way behind on volume. The critical piece of the pie that nobody ever talks about. Intel must become more price and performance competitive with x86 and bring in new composable designs that maintain x86 competitive. And give the ability to allow customers and designers to add and customize GPU's, NPUs, accelerators et cetera. All while launching a successful Foundry business. So we think there's another possibility to this thought exercise. Apple is currently reliant on TSMC and is pushing them hard toward five nanometer, in fact sucking up a lot of that volume and TSMC is maybe not servicing some other customers as well as it's servicing Apple because it's a bit destructive, it is distracted and you have this chip shortage. So Apple because of its size gets the lion's share of the attention but Apple needs a trusted onshore supplier. Sure TSMC is adding manufacturing capacity in the US and Arizona. But back to our precarious scenario in the South China sea. Will the U S government and Apple sit back and hope for the best or will they hope for the best and plan for the worst? Let's face it. If China gains control of TSMC, it could block access to the latest and greatest process technology. Apple just announced that it's investing billions of dollars in semiconductor technology across the US. The US government is pressuring big tech. What about an Apple Intel joint venture? Apple brings the volume, it's Cloud, it's Cloud, sorry. It's money it's design leadership, all that to the table. And they could partner with Intel. It gives Intel the Foundry business and a guaranteed volume stream. And maybe the U S government gives Apple a little bit of breathing room and the whole big up big breakup, big tech narrative. And even though it's not necessarily specifically targeting Apple but maybe the US government needs to think twice before it attacks big tech and thinks about the long-term strategic ramifications. Wouldn't that be ironic? Apple dumps Intel in favor of Arm for the M1 and then incubates, and essentially saves Intel with a pipeline of Foundry business. Now back to IBM in this scenario, we've put a question mark on the slide because maybe IBM just gets in the way and why not? A nice clean partnership between Intel and Apple? Who knows? Maybe Gelsinger can even negotiate this without giving up any equity to Apple, but Apple could be a key ingredient to a cocktail of a new strategy under Pat Gelsinger leadership. Gobs of cash from the US and EU governments and volume from Apple. Wow, still a long shot, but one worth pursuing because as we've written, Intel is too strategic to fail. Okay, well, what do you think? You can DM me @dvellante or email me at david.vellante@siliconangle.com or comment on my LinkedIn post. Remember, these episodes are all available as podcasts so please subscribe wherever you listen. I publish weekly on wikibon.com and siliconangle.com. And don't forget to check out etr.plus for all the survey analysis. And I want to thank my colleague, David Floyer for his collaboration on this and other related episodes. This is Dave Vellante for theCUBE insights powered by ETR. Thanks for watching, be well, and we'll see you next time. (upbeat music)

Published Date : May 1 2021

SUMMARY :

This is Breaking Analysis and most of the spending is going to be.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
David FloyerPERSON

0.99+

Dave VellantePERSON

0.99+

HPORGANIZATION

0.99+

AppleORGANIZATION

0.99+

MicrosoftORGANIZATION

0.99+

SamsungORGANIZATION

0.99+

DellORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

TSMCORGANIZATION

0.99+

IBMORGANIZATION

0.99+

2011DATE

0.99+

LenovoORGANIZATION

0.99+

FacebookORGANIZATION

0.99+

Pat GelsingerPERSON

0.99+

$10 billionQUANTITY

0.99+

NvidiaORGANIZATION

0.99+

Palo AltoLOCATION

0.99+

GoogleORGANIZATION

0.99+

50%QUANTITY

0.99+

AlibabaORGANIZATION

0.99+

$600QUANTITY

0.99+

AWSORGANIZATION

0.99+

45%QUANTITY

0.99+

two chipsQUANTITY

0.99+

10 timesQUANTITY

0.99+

10 centsQUANTITY

0.99+

South KoreaLOCATION

0.99+

USLOCATION

0.99+

Last weekDATE

0.99+

OracleORGANIZATION

0.99+

ArizonaLOCATION

0.99+

U SORGANIZATION

0.99+

BostonLOCATION

0.99+

david.vellante@siliconangle.comOTHER

0.99+

1 trillionQUANTITY

0.99+

2030DATE

0.99+

MarvellORGANIZATION

0.99+

ChinaORGANIZATION

0.99+

Arvind KrishnaPERSON

0.99+

two yearsQUANTITY

0.99+

MoorePERSON

0.99+

$9 billionQUANTITY

0.99+

10QUANTITY

0.99+

EUORGANIZATION

0.99+

last yearDATE

0.99+

last weekDATE

0.99+

twiceQUANTITY

0.99+

first lineQUANTITY

0.99+

OkinawaLOCATION

0.99+

last SeptemberDATE

0.99+

Hong KongLOCATION

0.99+

Breaking Analysis: Arm Lays Down the Gauntlet at Intel's Feet


 

>> Announcer: From the Cube's studios in Palo Alto in Boston, bringing you data-driven insights from The Cube and ETR. This is "Breaking Analysis" with Dave Vellante. >> Exactly one week after Pat Gelsinger's announcement of his plans to reinvent Intel. Arm announced version nine of its architecture and laid out its vision for the next decade. We believe this vision is extremely strong as it combines an end-to-end capability from Edge to Cloud, to the data center, to the home and everything in between. Arms aspirations are ambitious and powerful. Leveraging its business model, ecosystem and software compatibility with previous generations. Hello every one and welcome to this week's Wikibon Cube Insights powered by ETR. And this breaking analysis will explain why we think this announcement is so important and what it means for Intel and the broader technology landscape. We'll also share with you some feedback that we received from the Cube Community on last week's episode and a little inside baseball on how Intel, IBM, Samsung, TSMC and the U.S. government might be thinking about the shifting landscape of semiconductor technology. Now, there were two notable announcements this week that were directly related to Intel's announcement of March 23rd. The Armv9 news and TSMC's plans to invest a $100 billion in chip manufacturing and development over the next three years. That is a big number. It appears to tramp Intel's plan $20 billion investment to launch two new fabs in the U.S. starting in 2024. You may remember back in 2019, Samsung pledged to invest a $116 billion to diversify its production beyond memory trip, memory chips. Why are all these companies getting so aggressive? And won't this cause a glut in chips? Well, first, China looms large and aims to dominate its local markets, which in turn is going to confer advantages globally. The second, there's a huge chip shortage right now. And the belief is that it's going to continue through the decade and possibly beyond. We are seeing a new inflection point in the demand as we discussed last week. Stemming from digital, IOT, cloud, autos in new use cases in the home as so well presented by Sarjeet Johal in our community. As to the glut, these manufacturers believe that demand will outstrip supply indefinitely. And I understand that a lack of manufacturing capacity is actually more deadly than an oversupply. Look, if there's a glut, manufacturers can cut production and take the financial hit. Whereas capacity constraints mean you can miss entire cycles of growth and really miss out on the demand and the cost reductions. So, all these manufacturers are going for it. Now let's talk about Arm, its approach and the announcements that it made this week. Now last week, we talked about how Pat Gelsinger his vision of a system on package was an attempt to leapfrog system on chip SOC, while Arm is taking a similar system approach. But in our view, it's even broader than the vision laid out by Pat at Intel. Arm is targeting a wide variety of use cases that are shown here. Arm's fundamental philosophy is that the future will require highly specialized chips and Intel as you recall from Pat's announcement, would agree. But Arm historically takes an ecosystem approach that is different from Intel's model. Arm is all about enabling the production of specialized chips to really fit a specific application. For example, think about the amount of AI going on iPhones. They move if I remember from fingerprint to face recognition. This requires specialized neural processing units, NPUs that are designed by Apple for that particular use case. Arm is facilitating the creation of these specialized chips to be designed and produced by the ecosystem. Intel on the other hand has historically taken a one size fits all approach. Built around the x86. The Intel's design has always been about improving the processor. For example, in terms of speed, density, adding vector processing to accommodate AI, et cetera. And Intel does all the design and the manufacturing in any specialization for the ecosystem is done by Intel. Much of the value, that's added from the ecosystem is frankly been bending metal or adding displays or other features at the margin. But, the advantage is that the x86 architecture is well understood. It's consistent, reliable, and let's face it. Most enterprise software runs on x86. So, but very, very different models historically, which we heard from Gelsinger last week they're going to change with a new trusted foundry strategy. Now let's go through an example that might help explain the power of Arm's model. Let's say, your AWS and you're doing graviton. Designing graviton and graviton2. Or Apple, designing the M1 chip, or Tesla designing its own chip, or any other company in in any one of these use cases that are shown here. Tesla is a really good example. In order to optimize for video processing, Tesla needed to add specialized code firmware in the NPU for it's specific use case within autos. It was happy to take off the shelf CPU or GPU or whatever, and leverage Arm's standards there. And then it added its own value in the NPU. So the advantage of this model is Tesla could go from tape out in less or, or, or or in less than a year versus get the tape out in less than a year versus what would normally take many years. Arm is, think of Arm is like customize a Lego blocks that enable unique value add by the ecosystem with a much faster time to market. So like I say, the Tesla goes from logical tape out if you will, to Samsung and then says, okay run this against your manufacturing process. And it should all work as advertised by Arm. Tesla, interestingly, just as an aside chose the 14 nanometer process to keep its costs down. It didn't need the latest and greatest density. Okay, so you can see big difference in philosophies historically between Arm and Intel. And you can see Intel vectoring toward the Arm model based on what Gelsinger said last week for its foundry business. Essentially it has to. Now, Arm announced a new Arm architecture, Armv9. v9 is backwards compatible with previous generations. Perhaps Arm learned from Intel's failed, Itanium effort for those remember that word. Had no backward compatibility and it really floundered. As well, Arm adds some additional capabilities. And today we're going to focus on the two areas that have highlighted, machine learning piece and security. I'll take note of the call out, 300 billion chips. That's Arm's vision. That's a lot. And we've said, before, Arm's way for volumes are 10X those of x86. Volume, we sound like a broken record. Volume equals cost reduction. We'll come back to that a little bit later. Now let's have a word on AI and machine learning. Arm is betting on AI and ML. Big as are many others. And this chart really shows why, it's a graphic that shows ETR data and spending momentum and pervasiveness in the dataset across all the different sectors that ETR tracks within its taxonomy. Note that ML/AI gets the top spot on the vertical axis, which represents net score. That's a measure of spending momentum or spending velocity. The horizontal axis is market share presence in the dataset. And we give this sector four stars to signify it's consistent lead in the data. So pretty reasonable bet by Arm. But the other area that we're going to talk about is security. And its vision day, Arm talked about confidential compute architecture and these things called realms. Note in the left-hand side, showing data traveling all over the different use cases and around the world and the call-out from the CISO below, it's a large public airline CISO that spoke at an ETR Venn round table. And this individual noted that the shifting end points increase the threat vectors. We all know that. Arm said something that really resonated. Specifically, they said today, there's far too much trust on the OS and the hypervisor that are running these applications. And their broad access to data is a weakness. Arm's concept of realms as shown in the right-hand side, underscores the company strategy to remove the assumption that privileged software. Like the hypervisor needs to be able to see the data. So by creating realms, in a virtualized multi-tenant environment, data can be more protected from memory leaks which of course is a major opportunity for hackers that they exploit. So it's a nice concept in a way for the system to isolate attendance data from other users. Okay, we want, we want to share some feedback that we got last week from the community on our analysis of Intel. A tech exec from city pointed out that, Intel really didn't miss a mobile, as we said, it really missed smartphones. In fact, whell, this is a kind of a minor distinction, it's important to recognize we think. Because Intel facilitated WIFI with Centrino, under the direction of Paul Alini. Who by the way, was not an engineer. I think he was the first non-engineer to be the CEO of Intel. He was a marketing person by background. Ironically, Intel's work in wifi connectivity enabled, actually enabled the smartphone revolution. And maybe that makes the smartphone missed by Intel all that more egregious, I don't know. Now the other piece of feedback we received related to our IBM scenario and our three-way joint venture prediction bringing together Intel, IBM, and Samsung in a triumvirate where Intel brings the foundry and it's process manufacturing. IBM brings its dis-aggregated memory technology and Samsung brings its its volume and its knowledge of of volume down the learning curve. Let's start with IBM. Remember we said that IBM with power 10 has the best technology in terms of this notion of dis-aggregating compute from memory and sharing memory in a pool across different processor types. So a few things in this regard, IBM when it restructured its micro electronics business under Ginni Rometty, catalyzed the partnership with global foundries and you know, this picture in the upper right it shows the global foundries facility outside of Albany, New York in Malta. And the partnership included AMD and Samsung. But we believe that global foundries is backed away from some of its contractual commitments with IBM causing a bit of a rift between the companies and leaving a hole in your original strategy. And evidently AMD hasn't really leaned in to move the needle in any way and so the New York foundry, is it a bit of a state of limbo with respect to its original vision. Now, well, Arvind Krishna was the face of the Intel announcement. It clearly has deep knowledge of IBM semiconductor strategy. Dario Gill, we think is a key player in the mix. He's the senior vice president director of IBM research. And it is in a position to affect some knowledge sharing and maybe even knowledge transfer with Intel possibly as it relates to disaggregated architecture. His questions remain as to how open IBM will be. And how protected it will be with its IP. It's got, as we said, last week, it's got to have an incentive to do so. Now why would IBM do that? Well, it wants to compete more effectively with VMware who has done a great job leveraging x86 and that's the biggest competitor in threat to open shift. So Arvind needs Intel chips to really execute on IBM's cloud strategy. Because almost all of IBM's customers are running apps on x86. So IBM's cloud and hybrid cloud. Strategy really need to leverage that Intel partnership. Now Intel for its part has great FinFET technology. FinFET is a tactic goes beyond CMOs. You all mainframes might remember when IBM burned the boat on ECL, Emitter-coupled Logic. And then moved to CMOs for its mainframes. Well, this is the next gen beyond, and it could give Intel a leg up on AMD's chiplet intellectual properties. Especially as it relates to latency. And there could be some benefits there for IBM. So maybe there's a quid pro quo going on. Now, where it really gets interesting is New York Senator, Chuck Schumer, is keen on building up an alternative to Silicon Valley in New York now it is Silicon Alley. So it's possible that Intel, who by the way has really good process technology. This is an aside, it really allowed TSMC to run the table with the whole seven nanometers versus 10 minute nanometer narrative. TSMC was at seven nanometer. Intel was at 10 nanometer. And really, we've said in the past that Intel's 10 nanometer tech is pretty close to TSMC seven. So Intel's ahead in that regard, even though in terms of, you know, the intervener thickness density, it's it's not, you know. These are sort of games that the semiconductor companies play, but you know it's possible that Intel with the U.S. government and IBM and Samsung could make a play for that New York foundry as part of Intel's trusted foundry strategy and kind of reshuffle that deck in Albany. Sounds like a "Game of Thrones," doesn't it? By the way, TSMC has been so consumed servicing Apple for five nanometer and eventually four nanometer that it's dropped the ball on some of its other's customers, namely Nvidia. And remember, a long-term competitiveness and cost reductions, they all come down to volume. And we think that Intel can't get to volume without an Arm strategy. Okay, so maybe the JV, the Joint Venture that we talked about, maybe we're out on a limb there and that's a stretch. And perhaps Samsung's not willing to play ball, given it's made huge investments in fabs and infrastructure and other resources, locally, but we think it's still viable scenario because we think Samsung definitely would covet a presence in the United States. No good to do that directly but maybe a partnership makes more sense in terms of gaining ground on TSMC. But anyway, let's say Intel can become a trusted foundry with the help of IBM and the U.S. government. Maybe then it could compete on volume. Well, how would that work? Well, let's say Nvidia, let's say they're not too happy with TSMC. Maybe with entertain Intel as a second source. Would that do it? In and of itself, no. But what about AWS and Google and Facebook? Maybe this is a way to placate the U.S. government and call off the antitrust dogs. Hey, we'll give Intel Foundry our business to secure America's semiconductor leadership and future and pay U.S. government. Why don't you chill out, back off a little bit. Microsoft even though, you know, it's not getting as much scrutiny from the U.S. government, it's anti trustee is maybe perhaps are behind it, who knows. But I think Microsoft would be happy to play ball as well. Now, would this give Intel a competitive volume posture? Yes, we think it would, for sure. If it can gain the trust of these companies and the volume we think would be there. But as we've said, currently, this is a very, very long shot because of the, the, the new strategy, the distance the difference in the Foundry business all those challenges that we laid out last week, it's going to take years to play out. But the dots are starting to connect in this scenario and the stakes are exceedingly high hence the importance of the U.S. government. Okay, that's it for now. Thanks to the community for your comments and insights. And thanks again to David Floyer whose analysis around Arm and semiconductors. And this work that he's done for the past decade is of tremendous help. Remember I publish each week on wikibon.com and siliconangle.com. And these episodes are all available as podcasts, just search for braking analysis podcast and you can always connect on Twitter. You can hit the chat right here or this live event or email me at david.vellante@siliconangle.com. Look, I always appreciate the comments on LinkedIn and Clubhouse. You can follow me so you're notified when we start a room and riff on these topics as well as others. And don't forget to check out etr.plus where all the survey data. This is Dave Vellante for the Cube Insights powered by ETR. Be well, and we'll see you next time. (cheerful music) (cheerful music)

Published Date : Apr 5 2021

SUMMARY :

Announcer: From the Cube's studios And maybe that makes the

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
IBMORGANIZATION

0.99+

SamsungORGANIZATION

0.99+

David FloyerPERSON

0.99+

Dario GillPERSON

0.99+

AMDORGANIZATION

0.99+

Dave VellantePERSON

0.99+

TSMCORGANIZATION

0.99+

Pat GelsingerPERSON

0.99+

March 23rdDATE

0.99+

PatPERSON

0.99+

AlbanyLOCATION

0.99+

Palo AltoLOCATION

0.99+

AWSORGANIZATION

0.99+

Paul AliniPERSON

0.99+

New YorkLOCATION

0.99+

$116 billionQUANTITY

0.99+

AppleORGANIZATION

0.99+

2019DATE

0.99+

TeslaORGANIZATION

0.99+

10 nanometerQUANTITY

0.99+

MicrosoftORGANIZATION

0.99+

last weekDATE

0.99+

NvidiaORGANIZATION

0.99+

ArvindPERSON

0.99+

less than a yearQUANTITY

0.99+

IntelORGANIZATION

0.99+

Arvind KrishnaPERSON

0.99+

$100 billionQUANTITY

0.99+

Game of ThronesTITLE

0.99+

Ginni RomettyPERSON

0.99+

GoogleORGANIZATION

0.99+

10 nanometerQUANTITY

0.99+

10XQUANTITY

0.99+

iPhonesCOMMERCIAL_ITEM

0.99+

david.vellante@siliconangle.comOTHER

0.99+

seven nanometersQUANTITY

0.99+

United StatesLOCATION

0.99+

FacebookORGANIZATION

0.99+

2024DATE

0.99+

14 nanometerQUANTITY

0.99+

this weekDATE

0.99+

last weekDATE

0.99+

Silicon ValleyLOCATION

0.99+

$20 billionQUANTITY

0.99+

secondQUANTITY

0.99+

Sarjeet JohalPERSON

0.99+

New YorkLOCATION

0.99+

U.S.LOCATION

0.99+