Search Results for Floyer:

Renen Hallak & David Floyer | CUBE Conversation 2021


 

(upbeat music) >> In 2010 Wikibon predicted that the all-flash data center was coming. The forecast at the time was that flash memory consumer volumes would drive prices of enterprise flash down faster than those of high spin speed hard disks, and by mid-decade buyers would opt for flash over 15K HDD for virtually all active data. That call was pretty much dead on, and the percentage of flash in the data center continues to accelerate faster than that of spinning disk. Now, the analyst that made this forecast was David Floyer, and he's with me today along with Renen Hallak, who is the founder and CEO of Vast Data. And they're going to discuss these trends and what it means for the future of data and the data center. Gentlemen, welcome to the program. Thanks for coming on. >> Great to be here. >> Thank you for having me. >> You're very welcome. Now David, let's start with you. You've been looking at this for over a decade and, you know, frankly, your predictions have caused some friction in the marketplace, but where do you see things today? >> Well, what I was forecasting was based on the fact that the key driver in any technology is volume. Volume reduces the cost over time, and the volume comes from the consumers. So flash has been driven over the years, initially by the iPod in 2006, the Nano, where Steve Jobs did a great job with Samsung in introducing large volumes of flash, and then the iPhone in 2008. And since then, all of mobile has been flash, and mobile has been taking a greater and greater percentage share. To begin with, the PC dropped. But now over 90% of PCs are using flash when they're delivered. So flash has taken over the consumer market very aggressively, and that has driven down the cost of flash much, much faster than the declining market of HDD. >> Okay, and now, so Renen, I wonder if we could come to you. I want you to talk about the innovations that you're doing, but before we get there, talk about why you started Vast. >> Sure, so it was five years ago and it was basically about the kill of the hard drive. I think what David is saying resonates very, very well. In fact, if you look at our original presentation for Vast Data, it showed flash and tape. There was no hard drive in the middle. And we said 10 years from now, and this was five years ago, so even the dates match up pretty well, we're not going to have hard drives anymore. Any piece of information that needs to be accessible at all will be on flash, and anything that is dormant and never gets read will be on tape. >> So, okay. So we're entering this kind of new phase now, which is being driven by QLC. David, maybe you could give us a quick what is QLC? Just give us the bumper sticker there. >> There's 3D NAND, which is the thing that's growing very, very fast, and it's growing on several dimensions. One dimension is the number of layers. Another dimension is the size of each of those pieces. And the third dimension is the number of bits, which for QLC is four bits per cell. So those three dimensions have all been improving. And the result of that is that more and more data can be stored on the whole wafer, on the chip that comes from that wafer. And so QLC is the latest set of 3D NAND flash that's coming off the lines at the moment. >> Okay, so my understanding is that there are new architectures entering the data center space that can take advantage of QLC. Enter Vast.
That sounds like a nice setup for you, Renen, and maybe before we get into the architecture, can you talk a little bit more about the company? I mean, maybe not everybody's familiar with Vast. You shared why you started it, but what can you tell us about the business performance, and any metrics you can share would be great. >> Sure, so the company, as I said, is five years old, about 170, 180 people today. We started selling product just around two years ago and have just hit $150 million in run rate. That's with eight salespeople. And so, as you can imagine, there's a lot of demand for flash all the way down the stack, in the way that David predicted. >> Wow, okay. So you got pretty comfortable. I think you've got product-market fit, right? And now you're going to scale. I would imagine you're going to go after escape velocity and you're going to build your moat. Now part of that, I mean a lot of that, is product, right? Product and sales, those are the two golden pillars. But, and David, when you think back to your early forecast last decade, it was really about block storage. That was really what was under attack. You know, kind of Fusion-io got it started with Facebook, they were trying to solve their SQL database performance problems. And then we saw Pure Storage. They hit escape velocity. They drove a truck through EMC's Symmetrix HDD-based install base, which precipitated the acquisition of XtremIO by EMC, something Renen knows a little bit about having led development of the product. But flash was late to the NAS party, guys. Renen, let me start with you. Why is that? And what is the relevance of QLC in that regard? >> The way storage has always been, it looks like a pyramid, and you have your block devices up at the top and then your NAS underneath. And today you have object down at the bottom of that pyramid. And the pyramid basically represents capacity, and the Y axis is price performance. And so if you could only serve a small subset of the capacity, you would go for block, and that is the subset that needed high performance. But as you go to QLC, and PLC will soon follow, the price of all-flash systems goes down to a point where it can compete on the lower ends of that pyramid. And the capacity grows to a point where there's enough flash to support those workloads. And so now with QLC, and a lot of innovation that goes with it, it makes sense to build an all-flash NAS and object store. >> Yeah, okay. And David, you and I have talked about the volumes, and Renen sort of just alluded to that, the higher volumes of NAS, not to mention the fact that NAS is hard, you know, files are difficult. But that's another piece of the equation here, isn't it? >> Absolutely, NAS is difficult. It's at a very large scale. We're talking about petabytes of data. You're talking about very important data. And you're talking about data which is at the moment very difficult to manage. It takes a lot of people to manage it, it takes a lot of resources, and it takes up a lot, a lot of space as well. So of all of those issues with NAS, complexity is probably the biggest single problem. >> So maybe we could geek out a little bit here. You guys go at it, but Renen, talk about the Vast architecture. I presume it was built from the ground up for flash, since you were trying to kill HDD. What else do we need to know? >> It was built for flash. It was also built for XPoint, which is a new technology that came out from Intel and Micron about three years ago.
XPoint is basically another level of persistent media above flash and below RAM. But what we really set out to do is, as I said, to kill the hard drive, and for that what you need is to get to price parity. And of course, flash and hard drives are not at price parity today. As David said, they probably will be in a few years from now. And so we wanted to jumpstart that, to accelerate that. And so we spent a lot of time in building a new type of architecture, with a lot of new metadata structures and algorithms on top, to bring that effective price down to a point where it's competitive today. And in fact, two years ago the way we did it was by going out to talk to these vendors, Intel with 3D XPoint and QLC flash, Mellanox with NVMe over Fabrics and very fast ethernet networks. And we took those building blocks and we thought, how can we use this to build a completely different type of architecture that doesn't just take flash one level down the stack, but actually allows us to break that pyramid, to collapse it down, and to build a single system that is as fast as your fastest all-flash block device or faster, but as affordable as your hard drive based archives. And once that happens you don't need to think about storage anymore. You have a single system that's big enough and cheap enough to throw everything at it. And it's fast enough such that everything is accessible at sub-millisecond latencies. The way the architecture is built is pretty much the opposite of the way scale-out storage has been done. It's not based on shared nothing, the way XtremIO was, the way Isilon is, the way Hadoop and the Google file system are. We're basing it on a concept called Disaggregated Shared Everything. And what that means is that we have the media on one set of devices, the logic running in containers, just software, and you can scale each of those independently. So you can scale capacity independently from performance, and you have this shared metadata space that all of the containers can see. So the containers don't actually have to talk to each other in the synchronous path. That means that it's much more scalable. You can go up to hundreds of thousands of nodes rather than just a few dozen. It's much more resilient. You can have all of them fail and you still didn't lose any data. And it's much easier to use, to David's point about complexity. >> Thank you for that. And then you, you mentioned up front that you not only built for flash, but built for XPoint. So you're using XPoint today. It's interesting. There has always been this sort of debate about XPoint. It's less expensive than RAM, or maybe I got that wrong, but it's persistent. >> It is. >> Okay, but it's more expensive than flash. And it was sort of thought it was a fence sitter 'cause it didn't have the volume, but you're using it today successfully. That's interesting. >> We're using it both to offset the deficiencies of the low-cost flash. And the nice thing about QLC and PLC is that you get the same levels of read performance as you would from high-end flash. The only difference between high-cost and low-cost flash today is in write cycles and in write performance. And so XPoint helps us offset both of those. We use it as a large write buffer and we use it as a large metadata store. And that allows us not just to arrange the information in a very large persistent write buffer before we need to place it on the low-cost flash.
But it also allows us to develop new types of metadata structures and algorithms that allow us to make better use of the low-cost flash and reduce the effective price down even lower than the raw capacity. >> Very cool. David, what are your thoughts on the architecture? Give us kind of the independent perspective. >> I think it's a brilliant architecture. I'd like to just go one step down on the network side of things. The whole use of NVMe over Fabrics allows the users, all of the servers, to get at any data across this whole network directly. So you've got great performance right away across the stack. And then the other thing is that by using RDMA for NAS, you're able, if you need to, to get down in microseconds to the data. So overall that's a thousand times faster than any HDD system could manage. So this architecture really allows an any-to-any, simple, single level of storage, which is so much easier to think about, architect, use or manage. It's just so much simpler. >> I don't know if there's an answer to this question, but if you had to pick one thing, Renen, that you really were dogmatic about and you bet on from an architectural standpoint, what would that be? >> I think what we bet on in the early days is the fact that the pyramid doesn't work anymore and that tiering doesn't work anymore. In fact, we stole Johnson and Johnson's tagline, No More Tears. Only, it's not spelled the same way; for us it's tiers. The reason for that is not because of storage, it's because of the applications. As we move more and more to applications that are machine-based, and machines are now not just generating the data, they're also reading the data and analyzing it and providing insights for humans to consume, the workloads have changed dramatically. And the one thing that we saw is that you can't choose which pieces of information need to be accessible anymore. These new algorithms, especially around AI and machine learning and deep learning, need fast access to the entirety of the dataset, and they want to read it over and over and over again in order to generate those insights. And so that was the driving force behind us building this new type of architecture. And we're seeing every single day, when we talk to customers, how the old architectures simply break down in the face of these new applications. >> Very cool. Speaking of customers, I wonder if you could talk about use cases, customers, you know, in this NAS arena. Maybe you could add some color there. >> Sure, our customers are large in data. We start at half a petabyte and we grow into the exabyte range. The system likes to be big; as it grows, it grows super linearly. If you have 100 nodes or 1000 nodes, you get more than 10X in performance, in capacity efficiency and resilience, et cetera. And so that's where we thrive. And those workloads today are mainly analytics workloads, although not entirely. If you look at it geographically, we have a lot of life sciences in Boston: research institutes, medical imaging, genomics, universities, pharmaceutical companies. Here in New York we have a lot of financials, hedge funds analyzing everything from satellite imagery to trade data to Twitter feeds. Out in California, a lot of AI and autonomous driving vehicles, as well as media and entertainment, both the generation of films like animation as well as content distribution, are being done on top of Vast.
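Before the conversation moves on to pricing, a rough number on the write-buffer and endurance point Renen made a moment ago: staging small random writes in persistent memory and destaging them to QLC as large, full stripes keeps write amplification near one, which is what stretches the limited erase cycles of low-cost flash. The cycle rating, capacity, write rate and amplification factors in this sketch are illustrative assumptions, not figures quoted in the interview.

```python
# Hypothetical figures, chosen only to illustrate why a large persistent
# write buffer matters for QLC endurance.
PE_CYCLES = 1_000            # order-of-magnitude program/erase rating for QLC
CAPACITY_TB = 15.36          # one drive
HOST_WRITES_TB_PER_DAY = 2.0

def drive_lifetime_years(write_amplification: float) -> float:
    """Years until the drive's rated write volume is consumed."""
    rated_writes_tb = CAPACITY_TB * PE_CYCLES
    written_per_year_tb = HOST_WRITES_TB_PER_DAY * write_amplification * 365
    return rated_writes_tb / written_per_year_tb

print(f"small random writes straight to QLC (WA ~4): {drive_lifetime_years(4.0):.1f} years")
print(f"buffered and destaged as full stripes (WA ~1.1): {drive_lifetime_years(1.1):.1f} years")
```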
>> Great, thank you. And David, when you look at the forecasts that you've made over the years, I imagine that they match nicely with your assumptions. And so, okay, I get that, but not everybody agrees, David. I mean, certainly the HDD guys don't agree, but they're obviously fighting to hang on to their awesome 50-year run, and as well there are others doing hybrids and the like, and they kind of challenge your assumptions. And you don't have a dog in this fight. We just want the truth and try to do our best to report it. But let me start with this. One of the things I've seen is that you're comparing deduped and compressed flash with raw HDD. Is that true or false? >> In terms of the fundamentals of the forecast, et cetera, it's false. What I'm taking is the Newegg price. And I did it this morning, and I looked up a two terabyte disk drive, a NAS disk drive. I think it was $54. And if you look at the cost of NAND for two terabytes, it's about $200. So it's a four to one ratio. >> So, >> And that's coming down from what people saw last year, which was five or six, and every year that ratio has been coming down. >> So the ratio, the cost delta, is coming down, but HDD is still cheaper. So Renen, I wonder, one of the other things that Floyer has said is that because of the advantages of flash, not only performance but also data sharing, et cetera, which really drives other factors like TCO, it doesn't have to be at parity in order for customers to consume it. I certainly saw that on my laptop. I could have got more storage and it could have been cheaper per bit for my laptop, but I took the flash. I mean, no problem. That was an intelligence test. But what are you seeing from customers? And by the way, Floyer I think is forecasting by what, 2026, there will actually be a raw-to-raw crossover. So then it's game over. But what are you seeing in terms of what customers are telling you, or any evidence you have that it doesn't have to be at parity, even that customers actually get more value from flash even if it's more expensive? What are you seeing? >> Yeah, in the enterprise space customers aren't buying raw flash, they're buying storage systems. And so even if the raw numbers, flash versus hard drive, are still not there, there are a lot of things that can be done at the system level to equalize those two. In fact, a lot of our IP is based on that. We are taking flash that today is, as David said, more expensive than hard drives, but at the system level it doesn't remain more expensive. And the reason for that is storage systems waste space. They waste it on metadata, they waste it on redundancy. We built our new metadata structures such that everything lives in XPoint and is so much smaller, because of the way XPoint is accessible at byte-level granularity. We built our erasure codes in a way where you can sustain 10, 20, 30 drive failures but you only pay two or 1% in overhead. We built our data reduction mechanisms such that they can reduce down data even if the application has already compressed it and already de-duplicated it. And so there's a lot of innovation that can happen at the software level, as part of this new disaggregated shared everything architecture, that allows us to bridge that cost gap today without having customers do fancy TCO calculations. And of course, as prices of flash over the next few years continue declining, all of those advantages remain and it will just widen the gap between hard drives and flash. (The sketch below walks through that raw-price and erasure-overhead arithmetic.)
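A quick sketch of that arithmetic, for readers following along. The $54 and $200 raw prices are the figures David quotes above; the stripe geometries are hypothetical examples rather than Vast's published layout, and are only meant to show how very wide erasure-code stripes push parity overhead down toward the 1 to 2% range Renen describes, versus roughly 20% for a conventional RAID-6 group.

```python
# Raw cost-per-terabyte gap, using the figures quoted in the conversation.
hdd_per_tb = 54 / 2     # $54 for a 2 TB NAS hard drive
nand_per_tb = 200 / 2   # about $200 for 2 TB of NAND
print(f"raw flash/HDD price ratio: {nand_per_tb / hdd_per_tb:.1f}x")

# Parity overhead of an erasure-code stripe is parity / (data + parity).
def parity_overhead(data_strips: int, parity_strips: int) -> float:
    return parity_strips / (data_strips + parity_strips)

print(f"RAID-6 style 8+2:    {parity_overhead(8, 2):.1%} overhead, survives 2 drive failures")
print(f"wide stripe 146+4:   {parity_overhead(146, 4):.1%} overhead, survives 4 drive failures")
print(f"wide stripe 980+20:  {parity_overhead(980, 20):.1%} overhead, survives 20 drive failures")
```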
And there really is no advantage to hard drives once the price thing is solved. >> So thank you. So David, the other thing I've seen around these forecasts is the comment that you can't really data reduce hard disks effectively, and I understand why, the overhead. And of course in flash you can use all kinds of data reduction techniques and not affect performance, or it's not even noticeable, like the cloud guys do it upstream, others do it upstream. What's your comment on that? >> Yes, if you take sequential data and you do a lot of work upfront, you can write it out in very big blocks, and that's a perfectly good way of doing it sequentially. The challenge for the HDD people is, if they go for that sort of sequential type of application, the cheapest way of doing that is to use tape, which comes back to the discussion that the two things that are going to remain are tape and flash. So that part of the HDD market, in my assertion, will go towards tape and tape libraries. And those are serving very well at the moment. >> Yeah, I mean, the economics of tape are really attractive. I just feel like, I've said this many times, that the marketing of tape is lacking. Like I'd like to see better thinking around how it could play, 'cause I think customers have this perception of tape, but there's actually a lot of value there. I want to carry on. >> Small point there. Yeah, I mean, there's an opportunity, in the same way that Vast have created an architecture for flash, there's an opportunity out there for the tape people, with flash, to make an architecture that allows you to take that workload and really lower the price enormously. >> You've called it Flape. >> Flape, yes. >> There are some interesting metadata opportunities there, but we won't go into that. And then David, I want to ask you about NAND shortages. We saw this in 2016 and 2017. A lot of people are saying there's a NAND shortage again. So is that a flaw in your forecast? You're assuming prices of flash continue to come down faster than those of HDD, but the shortages of NAND could be problematic. What do you say to that? >> Well, I've looked at that in some detail, and one of the big, important things is what's happening in the flash market. The Chinese company YMTC has introduced a lot more volume into the market. They're making 100,000 wafers a month this year. That's around six to 8% of the NAND market this year. As a result, Samsung, Micron, Intel, Hynix, they're all increasing their volumes of NAND, they're all investing. So I don't see that NAND itself is going to be a problem. There is certainly a shortage of processor chips, which drive the intelligence in the NAND itself, but that's a problem for everybody. That's a problem for cars. It's a problem for disk drives. >> You could argue that's going to create an oversupply, potentially. Let's not go there, but you know what, at the end of the day it comes back to the customer in all this stuff. It's interesting, I love talking about the architecture, but it's really all about customer value. And so, so Renen, I want you to sort of close there. What should customers be paying attention to? And what should observers of Vast Data really watch as indicators of progress for you guys, milestones and things in the market that we should be paying attention to? But start with the customers. What's your advice to them? >> Sure, for any customer that I talk to, I always ask the same thing.
Imagine where you'll be five years from now, because you're making an investment now that is at least five years long. In our case, we guarantee the lifespan of the devices for a decade, such that you know it's going to be there for you, and imagine what is going to happen over those next five years. What we're seeing in most customers is that they have a lot of dormant data, and with the advances in analytics and AI they want to make use of that data. They want to turn it from a cost center to a profit center, and to gain insight from that data and to improve their business based on that information that they have, the same way the hyperscalers are doing. In order to do that, you need one thing: you need fast access to all of that information. Once you have that, you have the foundation to step into this next-generation type world where you can actually make money off of your information. And the best way to get very, very fast access to all of your information is to put it on fast media like flash and XPoint. If I can give one example, hedge funds. Hedge funds do a lot of back-testing on Vast. And what makes sense for them is to test back as much information as they possibly can, but because of storage limitations, they can't do that. And the other thing that's important to them is to have a real-time experience, to be able to run those simulations in a few minutes and not as a batch process overnight, but because of storage limitations, they can't do that either. The third thing is, if you have many different applications and many different users on the same system, they usually step on each other's toes. And so the Vast architecture solves those three problems. It allows you a lot of information, very fast access and fast processing, and an amazing quality of service, where different users of the system don't even notice that somebody else is accessing the same piece of information. And so hedge funds is one example. Any one of these verticals that makes use of a lot of information will benefit from this architecture and this system. And if it doesn't cost any more, there's really no real reason to delay this transition into all flash. >> Excellent, very clear thinking. Thanks for laying that out. And what about, you know, how should we judge you? What are the things that we should watch? >> I think the most important way to judge us is to look at customer adoption. And what we're seeing, and what we're showing investors, is a very high net dollar retention number. What that means is basically, a customer buys a piece of kit today, how much more will they buy over the next year, over the next two years? And we're seeing them buy more than three times more within a year of the initial purchase. And we see more than 90% of them buying more within that first year. And that to me indicates that we're solving a real problem and that they're making strategic decisions to stop buying any other type of storage system and to just put everything on Vast. Over the next few years we're going to expand beyond just storage services and provide a full stack for these AI applications. We'll expand into other areas of infrastructure and develop the best possible vertically integrated system to allow those new applications to thrive. >> Nice, yeah. I think investors love that lifetime value story. If you can get lifetime value above 3X of the customer acquisition cost, you're on your way to IPO. Hey guys, thanks so much for coming on theCUBE. We had a great conversation and really appreciate your time. >> Thank you.
>> Thank you. >> All right, thanks for watching everybody. This is Dave Vellante for theCUBE. We'll see you next time. (gentle music)
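As a footnote to the retention discussion at the end of this interview: net dollar retention is simple cohort arithmetic. The dollar amounts below are invented for illustration; only the "more than three times" multiple comes from Renen's comments.

```python
# Hypothetical cohort, sized only to illustrate the metric Renen cites.
initial_cohort_spend = 1_000_000    # what a set of new customers buys on first purchase, in dollars
spend_one_year_later = 3_200_000    # cumulative spend by the same customers a year on, in dollars

net_dollar_retention = spend_one_year_later / initial_cohort_spend
print(f"net dollar retention: {net_dollar_retention:.0%}")  # over 300%, i.e. "more than three times more"
```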

Published Date : Apr 5 2021


Dec 10th Keynote Analysis Dave Vellante & Dave Floyer | AWS re:Invent 2020


 

>>From around the globe. It's the queue with digital coverage of AWS reinvent 2020 sponsored by Intel, AWS and our community partners. >>Hi, this is Dave Volante. Welcome back to the cubes. Continuous coverage of AWS reinvent 2020, the virtual version of the cube and reinvent. I'm here with David foyer. Who's the CTO Wiki Bon, and we're going to break down today's infrastructure keynote, which was headlined by Peter DeSantis. David. Good to see you. Good to see you. So David, we have a very tight timeframe and I just want to cover a couple of things. Something that I've learned for many, many years, working with you is the statement. It's all about recovery. And that really was the first part of Peter's discussion today. It was, he laid out the operational practices of AWS and he talked a lot about, he actually had some really interesting things up there. You know, you use the there's no compression algorithm for experience, but he talked a lot about availability and he compared AWS's availability philosophy with some of its competitors. >>And he talked about generators being concurrent and maintainable. He got, he took it down to the batteries and the ups and the thing that impressed me, most of the other thing that you've taught me over the years is system thinking. You've got to look at the entire system. That one little component could have Peter does emphasis towards a huge blast radius. So what AWS tries to do is, is constrict that blast radius so he can sleep at night. So non-disruptive replacements of things like batteries. He talked a lot about synchronous versus asynchronous trade-offs and it was like, kind of async versus sync one-on-one synchronous. You got latency asynchronous, you got your data loss to exposure. So a lot of discussions around that, but what was most interesting is he CA he compared and contrasted AWS's philosophy on availability zones, uh, with the competition. And he didn't specifically call out Microsoft and Google, but he showed some screenshots of their websites and the competition uses terms like usually available and generally available this meaning that certain regions and availability zone may not be available. That's not the case with AWS, your thoughts on that. >>They have a very impressive track record, uh, despite the, a beta the other day. Um, but they've got a very impressive track record. I, I think there is a big difference, however, between a general purpose computing and, uh, mission critical computing. And when you've got to bring up, uh, databases and everything else like that, then I think there are other platforms, uh, which, uh, which in the longterm, uh, AWS in my view, should be embracing that do a better job in mission critical areas, uh, in terms of bringing things up and not using data and recovery. So that's, that's an area which I think AWS will need to partner with in the past. >>Yeah. So, um, the other area of the keynote that was critical was, um, he spent a lot of time on custom Silicon and you and I have talked about this a lot, of course, AWS and Intel are huge partners. Uh, but, but we know that Intel owns its own fabs, uh, it's competitors, you know, we'll outsource to the other, other manufacturers. So Intel is motivated to put as much function on the real estate as possible to create general purpose processors and, and get as much out of that real estate as they possibly can. So what AWS has been been doing, and they certainly didn't throw Intel under the bus. 
They were very complimentary and, and friendly, but they also lay it out that they're developing a number of components that are custom Silicon. They talked about the nitro controllers, uh, inferential, which is, you know, specialized chips around, around inference to do things like PI torch, uh, and TensorFlow. >>Uh, they talked about training them, you know, the new training ship for training AI models or ML models. They spent a lot of time on Gravatar, which is 64 bit, like you say, everything's 64 bit these days, but it's the arm processor. And so, you know, they, they didn't specifically mention Moore's law, but they certainly taught, they gave, uh, a microprocessor one Oh one overview, which I really enjoyed. They talked about, they didn't specifically talk about Moore's law, but they talked about the need to put, put on more, more cores, uh, and then running multithreaded apps and the whole new programming models that, that brings out. Um, and, and, and basically laid out the case that these specialized processors that they're developing are more efficient. They talked about all these cores and the overhead that, that those cores bring in the difficulty of keeping those processors, those cores busy. >>Uh, and so they talked about symmetric, uh, uh, a simultaneous multi-threading, uh, and sharing cores, which like, it was like going back to the old days of, of microprocessor development. But the point being that as you add more cores and you have that overhead, you get non-linear, uh, performance improvements. And so, so it defeats the notion of scale out, right? And so what I, what I want to get to is to get your take on this as you've been talking for a long, long time about arm in the data center, and remind me just like object storage. We talked for years about object storage. It never went anywhere until Amazon brought forth simple storage service. And then object storage obviously is, you know, a mainstream mainstream storage. Now I see the same thing happening, happening with, with arm and the data center specifically, of course, alternative processes are taking off, but, but what's your take on all this? You, you listened to the keynote, uh, give us your takeaways. >>Well, let's go back to first principles for a second. Why is this happening? It's happening because of volume, volume, volume, volume is incredibly important, obviously in terms of cost. Um, and if you, if you're, if you look at a volume, uh, arm is, is, was based on the volumes that came from that from the, uh, from the, um, uh, handhelds and all of their, all of the mobile stuff that's been generating. So there's billions of chips being made, uh, on that. >>I can interrupt you for a second, David. So we're showing a slide here, uh, and, and it's, it's, it, it, it relates to volume and somewhat, I mean, we, we talk a lot about the volume that flash for instance gained from the consumer. Uh, and, and, and now we're talking about these emerging workloads. You call them matrix workloads. These are things like AI influencing edge work, and this gray area shows these alternative workloads. And that's really what Amazon is going after. So you show in this chart, you know, basically very small today, 2020, but you show a very large and growing position, uh, by the end of this decade, really eating into traditional, the traditional space. >>That, that that's absolutely correct. And, and that's being led by what's happening in the mobile market. 
If you look at all of the work that's going on, on your, on your, uh, Apple, uh, Apple iPhone, there's a huge amount of, uh, modern, uh, matrix workloads are going there to help you with your photography and everything like that. And that's going to come into the, uh, into the data center within, within two years. Uh, and that's what, what, uh, AWS is focusing on is capabilities of doing this type of new workload in real time. And, and it's hundreds of times, hundreds of times more processing, uh, to do these workloads and it's gotta be done in real time. >>Yeah. So we have a, we have a chart on that this bar chart that you've, you've produced. Uh, I don't know if you can see the bars here. Um, I can't see them, but, but maybe we can, we can editorialize. So on the left-hand side, you basically have traditional workloads, uh, on blue and you have matrix workloads. What you calling these emerging workloads and red you, so you show performance 0.9, five versus 50, then price performance for traditional 3.6. And it's more than 150 times greater for ARM-based workload. >>Yeah. And that's a analysis of the previous generation of arm. And if you take the new ones, the M one, for example, which has come in to the, uh, to the PC area, um, that's going to be even higher. So the arm is producing hybrid computers, uh, multi, uh, uh, uh, heterogeneous computers with multiple different things inside the computer. And that is making life a lot more efficient. And especially in the inference world, they're using NPUs instead of GPU's, they conferred about four times more NPUs that you can GPU's. And, um, uh, it, it's just a, uh, it's a different world and, uh, arm is ahead because it's done all the work in the volume area, and that's now going to go into PCs and, and it's going to, going to go into the data center. >>Okay, great. Now, yeah, if we could, uh, uh, guys bring up the, uh, the, the other chart that's titled workloads moving to ARM-based servers, this one is just amazing to me, David, you'll see that I, for some reason, the slides aren't translating, so, uh, forget that, forget the slides. So, um, but, but basically you have the revenue coming from arm as to be substantially higher, uh, in the out years, uh, or certainly substantially growing more than the traditional, uh, workload revenue. Now that's going to take a decade, but maybe you could explain, you know, why you see that. >>Yeah, the, the, the, the, the reason is that these matrix workloads, uh, and also, uh, the offload of like nitro is doing it's the offload of the storage and the networking from the, the main CPU's, uh, the dis-aggregation of computing, uh, plus the traditional workloads, which can move, uh, over or are moving over and where AWS, uh, and, and Microsoft and the PC and Apple, and the PC where those leaders are leading us is that they are doing the hard work of making sure that their software, uh, and their API APIs can utilize the capabilities of arm. Uh, so, uh, it's, it's the it, and the advantage that AWS has of course, is that enormous economies of scale, across many, many users. Uh, that's going to take longer to go into the, the enterprise data center much longer, but the, the, uh, Microsoft, Google and AWS, they're going to be leading the charge of this movement, all of arm into the data center. Uh, it was amazing some of the people or what some of the arm customers or the AWS customers were seeing today with much faster performance and much lower price. It was, they were, they were affirming. 
Uh, and, and the fundamental reason is that arm are two generations of production. They are in at the moment at five nano meters, whereas, um, Intel is still at 10. Uh, so that's a big, big issue that, uh, Intel have to address. Yeah. And so >>You get, you've been getting this core creep, I'll call it, which brings a lot of overhead. And now you're seeing these very efficient, specialized processes in your premises. We're going to see these explode for these new workloads. And in particular, the edge is such an enormous opportunity. I think you've pointed out that you see a big, uh, uh, market for edge, these edge emergent edge workloads kind of start in the data center and then push out to the edge. Andy Jassy says that the edge, uh, or, or we're going to bring AWS to the edge of the data center is just another edge node. I liked that vision, your thoughts. >>Uh, I, I think that is a, a compelling vision. I think things at the edge, you have many different form factors. So, uh, you, you will need an edge and a car for example, which is cheap enough to fit into a car and it's, but it's gotta be a hundred times more processing than it is in the, in the computers, in the car at the moment, that's a big leap and, and for, to get to automated driving, uh, but that's going to happen. Um, and it's going to happen on ARM-based systems and the amount of work that's going to go out to the edge is enormous. And the amount of data that's generated at the edge is enormous. That's not going to come back to the center, that's going to be processed at the edge, and the edge is going to be the center. If you're like of where computing is done. Uh, it doesn't mean to say that you're not going to have a lot of inference work inside the data center, but a lot of, lot of work in terms of data and processing is move, is going to move into the edge over the next decade. >>Yeah, well, many of, uh, AWS is edge offerings today, you know, assume data is going to be sent back. Although of course you see outpost and then smaller versions of outposts. That's a, to me, that's a clue of what's coming. Uh, basically again, bringing AWS to, to, to the edge. I want to also touch on, uh, Amazon's, uh, comments on renewable. Peter has talked a lot about what they're doing to reduce carbon. Uh, one of the interesting things was they're actually reusing their cooling water that they clean and reuse. I think, I think you said three or multiple times, uh, and then they put it back out and they were able to purify it and reuse it. So, so that's a really great sustainable story. There was much more to it. Uh, but I think, you know, companies like Amazon, especially, you know, large companies really have a responsibility. So it's great to see Amazon stepping up. Uh, anyway, we're out of time, David, thanks so much for coming on and sharing your insights really, really appreciate it. Those, by the way, those slides of Wiki bond.com has a lot of David's work on there. Apologize for some of the data not showing through, but, uh, working in real time here. This is Dave Volante for David foyer. Are you watching the cubes that continuous coverage of AWS reinvent 2020, we'll be right back.
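One way to picture the "non-linear performance improvements" from simply adding cores that Dave and David discuss in this keynote analysis is Amdahl's law. The 95% parallel fraction below is an assumption chosen for illustration, not a number from the keynote; the point is only that past a few dozen cores, each extra core returns less and less, which is part of the case for offloading work to specialized silicon instead.

```python
def amdahl_speedup(cores: int, parallel_fraction: float) -> float:
    """Upper bound on speedup when only part of the work can use extra cores."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

for cores in (8, 16, 32, 64, 128):
    print(f"{cores:>3} cores -> {amdahl_speedup(cores, parallel_fraction=0.95):.1f}x speedup")
```

In this sketch, doubling from 64 to 128 cores buys less than a 10% gain, which is the kind of diminishing return the keynote's core-scaling discussion points at.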

Published Date : Dec 18 2020


Marc Staimer, Dragon Slayer Consulting & David Floyer, Wikibon | December 2020


 

>> Announcer: From theCUBE studios in Palo Alto, in Boston, connecting with thought leaders all around the world. This is theCUBE conversation. >> Hi everyone, this is Dave Vellante and welcome to this CUBE conversation where we're going to dig in to this, the area of cloud databases. And Gartner just published a series of research in this space. And it's really a growing market, rapidly growing, a lot of new players, obviously the big three cloud players. And with me are three experts in the field, two long time industry analysts. Marc Staimer is the founder, president, and key principal at Dragon Slayer Consulting. And he's joined by David Floyer, the CTO of Wikibon. Gentlemen great to see you. Thanks for coming on theCUBE. >> Good to be here. >> Great to see you too Dave. >> Marc, coming from the great Northwest, I think first time on theCUBE, and so it's really great to have you. So let me set this up, as I said, you know, Gartner published these, you know, three giant tomes. These are, you know, publicly available documents on the web. I know you guys have been through them, you know, several hours of reading. And so, night... (Dave chuckles) Good night time reading. The three documents where they identify critical capabilities for cloud database management systems. And the first one we're going to talk about is, operational use cases. So we're talking about, you know, transaction oriented workloads, ERP financials. The second one was analytical use cases, sort of an emerging space to really try to, you know, the data warehouse space and the like. And, of course, the third is the famous Gartner Magic Quadrant, which we're going to talk about. So, Marc, let me start with you, you've dug into this research just at a high level, you know, what did you take away from it? >> Generally, if you look at all the players in the space they all have some basic good capabilities. What I mean by that is ultimately when you have, a transactional or an analytical database in the cloud, the goal is not to have to manage the database. Now they have different levels of where that goes to as how much you have to manage or what you have to manage. But ultimately, they all manage the basic administrative, or the pedantic tasks that DBAs have to do, the patching, the tuning, the upgrading, all of that is done by the service provider. So that's the number one thing they all aim at, from that point on every database has different capabilities and some will automate a whole bunch more than others, and will have different primary focuses. So it comes down to what you're looking for or what you need. And ultimately what I've learned from end users is what they think they need upfront, is not what they end up needing as they implement. >> David, anything you'd add to that, based on your reading of the Gartner work. >> Yes. It's a thorough piece of work. It's taking on a huge number of different types of uses and size of companies. And I think those are two parameters which really change how companies would look at it. If you're a Fortune 500 or Fortune 2000 type company, you're going to need a broader range of features, and you will need to deal with size and complexity in a much greater sense, and a lot of probably higher levels of availability, and reliability, and recoverability. Again, on the workload side, there are different types of workload and there're... 
There is as well as having the two transactional and analytic workloads, I think there's an emerging type of workload which is going to be very important for future applications where you want to combine transactional with analytic in real time, in order to automate business processes at a higher level, to make the business processes synchronous as opposed to asynchronous. And that degree of granularity, I think is missed, in a broader view of these companies and what they offer. It's in my view trying in some ways to not compare like with like from a customer point of view. So the very nuance, what you talked about, let's get into it, maybe that'll become clear to the audience. So like I said, these are very detailed research notes. There were several, I'll say analysts cooks in the kitchen, including Henry Cook, whom I don't know, but four other contributing analysts, two of whom are CUBE alum, Don Feinberg, and Merv Adrian, both really, you know, awesome researchers. And Rick Greenwald, along with Adam Ronthal. And these are public documents, you can go on the web and search for these. So I wonder if we could just look at some of the data and bring up... Guys, bring up the slide one here. And so we'll first look at the operational side and they broke it into four use cases. The traditional transaction use cases, the augmented transaction processing, stream/event processing and operational intelligence. And so we're going to show you there's a lot of data here. So what Gartner did is they essentially evaluated critical capabilities, or think of features and functions, and gave them a weighting, or a weighting, and then a rating. It was a weighting and rating methodology. On a s... The rating was on a scale of one to five, and then they weighted the importance of the features based on their assessment, and talking to the many customers they talk to. So you can see here on the first chart, we're showing both the traditional transactions and the augmented transactions and, you know, the thing... The first thing that jumps out at you guys is that, you know, Oracle with Autonomous is off the charts, far ahead of anybody else on this. And actually guys, if you just bring up slide number two, we'll take a look at the stream/event processing and operational intelligence use cases. And you can see, again, you know, Oracle has a big lead. And I don't want to necessarily go through every vendor here, but guys, if you don't mind going back to the first slide 'cause I think this is really, you know, the core of transaction processing. So let's look at this, you've got Oracle, you've got SAP HANA. You know, right there interestingly Amazon Web Services with the Aurora, you know, IBM Db2, which, you know, it goes back to the good old days, you know, down the list. But so, let me again start with Marc. So why is that? I mean, I guess this is no surprise, Oracle still owns the Mission-Critical for the database space. They earned that years ago. One that, you know, over the likes of Db2 and, you know, Informix and Sybase, and, you know, they emerged as number one there. But what do you make of this data Marc? >> If you look at this data in a vacuum, you're looking at specific functionality, I think you need to look at all the slides in total. And the reason I bring that up is because I agree with what David said earlier, in that the use case that's becoming more prevalent is the integration of transaction and analytics. 
And more importantly, it's not just your traditional data warehouse, but it's AI analytics. It's big data analytics. It's users are finding that they need more than just simple reporting. They need more in-depth analytics so that they can get more actionable insights into their data where they can react in real time. And so if you look at it just as a transaction, that's great. If you're going to just as a data warehouse, that's great, or analytics, that's fine. If you have a very narrow use case, yes. But I think today what we're looking at is... It's not so narrow. It's sort of like, if you bought a streaming device and it only streams Netflix and then you need to get another streaming device 'cause you want to watch Amazon Prime. You're not going to do that, you want one, that does all of it, and that's kind of what's missing from this data. So I agree that the data is good, but I don't think it's looking at it in a total encompassing manner. >> Well, so before we get off the horses on the track 'cause I love to do that. (Dave chuckles) I just kind of let's talk about that. So Marc, you're putting forth the... You guys seem to agree on that premise that the database that can do more than just one thing is of appeal to customers. I suppose that makes, certainly makes sense from a cost standpoint. But, you know, guys feel free to flip back and forth between slides one and two. But you can see SAP HANA, and I'm not sure what cloud that's running on, it's probably running on a combination of clouds, but, you know, scoring very strongly. I thought, you know, Aurora, you know, given AWS says it's one of the fastest growing services in history and they've got it ahead of Db2 just on functionality, which is pretty impressive. I love Google Spanner, you know, love the... What they're trying to accomplish there. You know, you go down to Microsoft is, they're kind of the... They're always good enough a database and that's how they succeed and et cetera, et cetera. But David, it sounds like you agree with Marc. I would say, I would think though, Amazon kind of doesn't agree 'cause they're like a horses for courses. >> I agree. >> Yeah, yeah. >> So I wonder if you could comment on that. >> Well, I want to comment on two vectors. The first vector is that the size of customer and, you know, a mid-sized customer versus a global $2,000 or global 500 customer. For the smaller customer that's the heart of AWS, and they are taking their applications and putting pretty well everything into their cloud, the one cloud, and Aurora is a good choice. But when you start to get to a requirements, as you do in larger companies have very high levels of availability, the functionality is not there. You're not comparing apples and... Apples with apples, it's two very different things. So from a tier one functionality point of view, IBM Db2 and Oracle have far greater capability for recovery and all the features that they've built in over there. >> Because of their... You mean 'cause of the maturity, right? maturity and... >> Because of their... Because of their focus on transaction and recovery, et cetera. >> So SAP though HANA, I mean, that's, you know... (David talks indistinctly) And then... >> Yeah, yeah. >> And then I wanted your comments on that, either of you or both of you. I mean, SAP, I think has a stated goal of basically getting its customers off Oracle that's, you know, there's always this urinary limping >> Yes, yes. >> between the two companies by 2024. Larry has said that ain't going to happen. 
You know, Amazon, we know still runs on Oracle. It's very hard to migrate Mission-Critical, David, you and I know this well, Marc you as well. So, you know, people often say, well, everybody wants to get off Oracle, it's too expensive, blah, blah, blah. But we talked to a lot of Oracle customers there, they're very happy with the reliability, availability, recoverability feature set. I mean, the core of Oracle seems pretty stable. >> Yes. >> But I wonder if you guys could comment on that, maybe Marc you go first. >> Sure. I've recently done some in-depth comparisons of Oracle and Aurora, and all their other RDS services and Snowflake and Google and a variety of them. And ultimately what surprised me is you made a statement it costs too much. It actually comes in half of Aurora for in most cases. And it comes in less than half of Snowflake in most cases, which surprised me. But no matter how you configure it, ultimately based on a couple of things, each vendor is focused on different aspects of what they do. Let's say Snowflake, for example, they're on the analytical side, they don't do any transaction processing. But... >> Yeah, so if I can... Sorry to interrupt. Guys if you could bring up the next slide that would be great. So that would be slide three, because now we get into the analytical piece Marc that you're talking about that's what Snowflake specialty is. So please carry on. >> Yeah, and what they're focused on is sharing data among customers. So if, for example, you're an automobile manufacturer and you've got a huge supply chain, you can supply... You can share the data without copying the data with any of your suppliers that are on Snowflake. Now, can you do that with the other data warehouses? Yes, you can. But the focal point is for Snowflake, that's where they're aiming it. And whereas let's say the focal point for Oracle is going to be performance. So their performance affects cost 'cause the higher the performance, the less you're paying for the performing part of the payment scale. Because you're paying per second for the CPUs that you're using. Same thing on Snowflake, but the performance is higher, therefore you use less. I mean, there's a whole bunch of things to come into this but at the end of the day what I've found is Oracle tends to be a lot less expensive than the prevailing wisdom. So let's talk value for a second because you said something, that yeah the other databases can do that, what Snowflake is doing there. But my understanding of what Snowflake is doing is they built this global data mesh across multiple clouds. So not only are they compatible with Google or AWS or Azure, but essentially you sign up for Snowflake and then you can share data with anybody else in the Snowflake cloud, that I think is unique. And I know, >> Marc: Yes. >> Redshift, for instance just announced, you know, Redshift data sharing, and I believe it's just within, you know, clusters within a customer, as opposed to across an ecosystem. And I think that's where the network effect is pretty compelling for Snowflake. So independent of costs, you and I can debate about costs and, you know, the tra... The lack of transparency of, because AWS you don't know what the bill is going to be at the end of the month. And that's the same thing with Snowflake, but I find that... And by the way guys, you can flip through slides three and four, because we've got... Let me just take a quick break and you have data warehouse, logical data warehouse. 
And then the next slide four you got data science, deep learning and operational intelligent use cases. And you can see, you know, Teradata, you know, law... Teradata came up in the mid 1980s and dominated in that space. Oracle does very well there. You can see Snowflake pop-up, SAP with the Data Warehouse, Amazon with Redshift. You know, Google with BigQuery gets a lot of high marks from people. You know, Cloud Data is in there, you know, so you see some of those names. But so Marc and David, to me, that's a different strategy. They're not trying to be just a better data warehouse, easier data warehouse. They're trying to create, Snowflake that is, an incremental opportunity as opposed to necessarily going after, for example, Oracle. David, your thoughts. >> Yeah, I absolutely agree. I mean, ease of use is a primary benefit for Snowflake. It enables you to do stuff very easily. It enables you to take data without ETL, without any of the complexity. It enables you to share a number of resources across many different users and know... And be able to bring in what that particular user wants or part of the company wants. So in terms of where they're focusing, they've got a tremendous ease of use, tremendous focus on what the customer wants. And you pointed out yourself the restrictions there are of doing that both within Oracle and AWS. So yes, they have really focused very, very hard on that. Again, for the future, they are bringing in a lot of additional functions. They're bringing in Python into it, not Python, JSON into the database. They can extend the database itself, whether they go the whole hog and put in transaction as well, that's probably something they may be thinking about but not at the moment. >> Well, but they, you know, they obviously have to have TAM expansion designs because Marc, I mean, you know, if they just get a 100% of the data warehouse market, they're probably at a third of their stock market valuation. So they had better have, you know, a roadmap and plans to extend there. But I want to come back Marc to this notion of, you know, the right tool for the right job, or, you know, best of breed for a specific, the right specific, you know horse for course, versus this kind of notion of all in one, I mean, they're two different ends of the spectrum. You're seeing, you know, Oracle obviously very successful based on these ratings and based on, you know their track record. And Amazon, I think I lost count of the number of data stores (Dave chuckles) with Redshift and Aurora and Dynamo, and, you know, on and on and on. (Marc talks indistinctly) So they clearly want to have that, you know, primitive, you know, different APIs for each access, completely different philosophies it's like Democrats or Republicans. Marc your thoughts as to who ultimately wins in the marketplace. >> Well, it's hard to say who is ultimately going to win, but if I look at Amazon, Amazon is an all-cart type of system. If you need time series, you go with their time series database. If you need a data warehouse, you go with Redshift. If you need transaction, you go with one of the RDS databases. If you need JSON, you go with a different database. Everything is a different, unique database. Moving data between these databases is far from simple. If you need to do a analytics on one database from another, you're going to use other services that cost money. So yeah, each one will do what they say it's going to do but it's going to end up costing you a lot of money when you do any kind of integration. 
And you're going to add complexity and you're going to have errors. There's all sorts of issues there. So if you need more than one, probably not your best route to go, but if you need just one, it's fine. And if, and on Snowflake, you raise the issue that they're going to have to add transactions, they're going to have to rewrite their database. They have no indexes whatsoever in Snowflake. I mean, part of the simplicity that David talked about is because they had to cut corners, which makes sense. If you're focused on the data warehouse you cut out the indexes, great. You don't need them. But if you're going to do transactions, you kind of need them. So you're going to have to do some more work there. So... >> Well... So, you know, I don't know. I have a different take on that guys. I think that, I'm not sure if Snowflake will add transactions. I think maybe, you know, their hope is that the market that they're creating is big enough. I mean, I have a different view of this in that, I think the data architecture is going to change over the next 10 years. As opposed to having a monolithic system where everything goes through that big data platform, the data warehouse and the data lake. I actually see what Snowflake is trying to do and, you know, I'm sure others will join them, is to put data in the hands of product builders, data product builders or data service builders. I think they're betting that that market is incremental and maybe they don't try to take on... I think it would maybe be a mistake to try to take on Oracle. Oracle is just too strong. I wonder David, if you could comment. So it's interesting to see how strong Gartner rated Oracle in cloud database, 'cause you don't... I mean, okay, Oracle has got OCI, but you know, you think a cloud, you think Google, or Amazon, Microsoft and Google. But if I have a transaction database running on Oracle, very risky to move that, right? And so we've seen that, it's interesting. Amazon's a big customer of Oracle, Salesforce is a big customer of Oracle. You know, Larry is very outspoken about those companies. SAP customers are many, most are using Oracle. I don't, you know, it's not likely that they're going anywhere. My question to you, David, is first of all, why do they want to go to the cloud? And if they do go to the cloud, is it logical that the least risky approach is to stay with Oracle, if you're an Oracle customer, or Db2, if you're an IBM customer, and then move those other workloads that can move whether it's more data warehouse oriented or incremental transaction work that could be done in a Aurora? >> I think the first point, why should Oracle go to the cloud? Why has it gone to the cloud? And if there is a... >> Moreso... Moreso why would customers of Oracle... >> Why would customers want to... >> That's really the question. >> Well, Oracle have got Oracle Cloud@Customer and that is a very powerful way of doing it. Where exactly the same Oracle system is running on premise or in the cloud. You can have it where you want, you can have them joined together. That's unique. That's unique in the marketplace. So that gives them a very special place in large customers that have data in many different places. The second point is that moving data is very expensive. Marc was making that point earlier on. Moving data from one place to another place between two different databases is a very expensive architecture. 
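Marc's observation about indexes is worth making concrete. The toy sketch below, in plain Python and deliberately vendor-neutral, contrasts the full-scan access pattern an index-free analytical store relies on with the keyed lookup a transactional workload expects. It is an illustration of the trade-off he describes, not a description of Snowflake's internals.

```python
# Toy contrast: scanning every row (fine for analytics over columnar data)
# versus a keyed lookup (what per-row transactional access wants).

rows = [{"order_id": i, "total": i * 1.5} for i in range(1_000_000)]

def lookup_by_scan(order_id: int) -> dict | None:
    """Touch rows until a match is found -- O(n) per lookup."""
    for row in rows:
        if row["order_id"] == order_id:
            return row
    return None

# Building an index once makes each subsequent lookup O(1),
# at the cost of maintaining that structure on every write.
index = {row["order_id"]: row for row in rows}

def lookup_by_index(order_id: int) -> dict | None:
    return index.get(order_id)

assert lookup_by_scan(999_999) == lookup_by_index(999_999)
```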
Having the data in one place where you don't have to move it where you can go directly to it, gives you enormous capabilities for a single database, single database type. And I'm sure that from a transact... From an analytic point of view, that's where Snowflake is going, to a large single database. But where Oracle is going to is where, you combine both the transactional and the other one. And as you say, the cost of migration of databases is incredibly high, especially transaction databases, especially large complex transaction databases. >> So... >> And it takes a long time. So at least a two year... And it took five years for Amazon to actually succeed in getting a lot of their stuff over. And five years they could have been doing an awful lot more with the people that they used to bring it over. So it was a marketing decision as opposed to a rational business decision. >> It's the holy grail of the vendors, they all want your data in their database. That's why Amazon puts so much effort into it. Oracle is, you know, in obviously a very strong position. It's got growth and it's new stuff, it's old stuff. It's, you know... The problem with Oracle it has like many of the legacy vendors, it's the size of the install base is so large and it's shrinking. And the new stuff is.... The legacy stuff is shrinking. The new stuff is growing very, very fast but it's not large enough yet to offset that, you see that in all the learnings. So very positive news on, you know, the cloud database, and they just got to work through that transition. Let's bring up slide number five, because Marc, this is to me the most interesting. So we've just shown all these detailed analysis from Gartner. And then you look at the Magic Quadrant for cloud databases. And, you know, despite Amazon being behind, you know, Oracle, or Teradata, or whomever in every one of these ratings, they're up to the right. Now, of course, Gartner will caveat this and say, it doesn't necessarily mean you're the best, but of course, everybody wants to be in the upper, right. We all know that, but it doesn't necessarily mean that you should go by that database, I agree with what Gartner is saying. But look at Amazon, Microsoft and Google are like one, two and three. And then of course, you've got Oracle up there and then, you know, the others. So that I found that very curious, it is like there was a dissonance between the hardcore ratings and then the positions in the Magic Quadrant. Why do you think that is Marc? >> It, you know, it didn't surprise me in the least because of the way that Gartner does its Magic Quadrants. The higher up you go in the vertical is very much tied to the amount of revenue you get in that specific category which they're doing the Magic Quadrant. It doesn't have to do with any of the revenue from anywhere else. Just that specific quadrant is with that specific type of market. So when I look at it, Oracle's revenue still a big chunk of the revenue comes from on-prem, not in the cloud. So you're looking just at the cloud revenue. Now on the right side, moving to the right of the quadrant that's based on functionality, capabilities, the resilience, other things other than revenue. So visionary says, hey how far are you on the visionary side? Now, how they weight that again comes down to Gartner's experts and how they want to weight it and what makes more sense to them. But from my point of view, the right side is as important as the vertical side, 'cause the vertical side doesn't measure the growth rate either. 
And if we look at these, some of these are growing much faster than the others. For example, Snowflake is growing incredibly fast, and that doesn't reflect in these numbers from my perspective. >> Dave: I agree. >> Oracle is growing incredibly fast in the cloud. As David pointed out earlier, it's not just in their cloud where they're growing, but it's Cloud@Customer, which is basically an extension of their cloud. I don't know if that's included these numbers or not in the revenue side. So there's... There're a number of factors... >> Should it be in your opinion, Marc, would you include that in your definition of cloud? >> Yeah. >> The things that are hybrid and on-prem would that cloud... >> Yes. >> Well especially... Well, again, it depends on the hybrid. For example, if you have your own license, in your own hardware, but it connects to the cloud, no, I wouldn't include that. If you have a subscription license and subscription hardware that you don't own, but it's owned by the cloud provider, but it connects with the cloud as well, that I would. >> Interesting. Well, you know, to your point about growth, you're right. I mean, it's probably looking at, you know, revenues looking, you know, backwards from guys like Snowflake, it will be double, you know, the next one of these. It's also interesting to me on the horizontal axis to see Cloud Data and Databricks further to the right, than Snowflake, because that's kind of the data lake cloud. >> It is. >> And then of course, you've got, you know, the other... I mean, database used to be boring, so... (David laughs) It's such a hot market space here. (Marc talks indistinctly) David, your final thoughts on all this stuff. What does the customer take away here? What should I... What should my cloud database management strategy be? >> Well, I was positive about Oracle, let's take some of the negatives of Oracle. First of all, they don't make it very easy to rum on other platforms. So they have put in terms and conditions which make it very difficult to run on AWS, for example, you get double counts on the licenses, et cetera. So they haven't played well... >> Those are negotiable by the way. Those... You bring it up on the customer. You can negotiate that one. >> Can be, yes, They can be. Yes. If you're big enough they are negotiable. But Aurora certainly hasn't made it easy to work with other plat... Other clouds. What they did very... >> How about Microsoft? >> Well, no, that is exactly what I was going to say. Oracle with adjacent workloads have been working very well with Microsoft and you can then use Microsoft Azure and use a database adjacent in the same data center, working with integrated very nicely indeed. And I think Oracle has got to do that with AWS, it's got to do that with Google as well. It's got to provide a service for people to run where they want to run things not just on the Oracle cloud. If they did that, that would in my term, and my my opinion be a very strong move and would make make the capabilities available in many more places. >> Right. Awesome. Hey Marc, thanks so much for coming to theCUBE. Thank you, David, as well, and thanks to Gartner for doing all this great research and making it public on the web. You can... If you just search critical capabilities for cloud database management systems for operational use cases, that's a mouthful, and then do the same for analytical use cases, and the Magic Quadrant. There's the third doc for cloud database management systems. 
You'll get about two hours of reading and I learned a lot and I learned a lot here too. I appreciate the context guys. Thanks so much. >> My pleasure. All right, thank you for watching everybody. This is Dave Vellante for theCUBE. We'll see you next time. (upbeat music)

Published Date : Dec 18 2020


11 25 19 HPE Launch Floyer 5 (Do not make public)


 

[upbeat funk music] >> [Female Announcer] From our studios In the heart of Silicon Valley, Palo Alto California This is a Cube Conversation! >> Welcome to the Cube Studios for another Cube Conversation, where we go in depth with thought leaders driving business outcomes with technology. I'm your host, Peter Burris. When we have considered solving storage-related challenges, we found ourselves worrying about things like, how far does the device sit from the server? What kinds of wiring we were going to utilize, what kind of protocol was gonna run over that wiring. These are very physical concerns that were largely driven by the nature of the devices we were using. In a digital business that's using data as an asset, we can't think about storage the same way. We can't approach storage challenges the same way, we need a new mindset to help us better understand how to approach these storage issues, so that we're better serving the business outcomes and not just the device characteristics. Now to have that conversation about this new data services approach, we've got David Floyer, CTO and co-founder of Wikibon and my colleague, here on the Cube with us today. David, welcome to the Cube. >> Many thanks, yes. >> So David, I said upfront that we need a new mindset. Now I know you agree with us, but explain what that new mindset is. >> Yes, I completely agree that that new mindset is required. And it starts with, you want to be able to deal with data, wherever it's gonna be. We are in a hybrid world, a hybrid cloud world. Your own clouds, other public clouds, partner clouds, all of these need to be integrated and data is at the core of it. So that the requirement then is to have rather than think about each individual piece, is to think about services, which are going to be applied to that data and can be applied, not only to the data in one place, but across all of that data. And there isn't such a thing as just one set of services. There're going to be multiple sets of these services available. >> But hope we will see some degree of conversion so- >> Absolutely, yeah, there'll be the same ... >> Lexicon and conceptual, et cetera. >> Yeah, there'll be the same levels of things that are needed within each of these architectures, but there'll be different emphasis on different areas. If you've got a very, very high performance requirement, and recovery, speed of recovery is absolutely paramount with complex databases, then you're going to be thinking about, you know, oracle, cloud per customer as a way of being able to do that sort of thing. If you're wanting to manage containers in an area where it's stateless, then you've got a different set of priorities and requirements that you're gonna put together. >> But you wanna come instead of services. >> Yes. Let me give you an example. So I was talking to a CIO not too long ago, a client, guy I've worked with a lot and I was talking about the development world, and made the observation that you could build really rotten applications in Cobalt but you could also build really rotten applications with Containers. And he totally agreed and the observation he made to me was, you know, what microservices really is, it's an approach to solving a problem, that then suggest new technologies like in Containers, as opposed to being the product that you use to create the new applications. And so in many respects I think it's analogous to notion of data services. 
We need to look at the way we administer data as a set of services that create outcomes for the business, as opposed to, that are then translated into individual devices. So let's jump into this notion of what those services look like. It seems though we can list off a couple of them. >> Sure, yeah so we must have data reduction techniques. So you must have deduplication, compression, type of techniques and you want apply that across as big an amount of data as you can. The more data you apply those, the higher the levels of compression and deduplication you can get. So that's clearly, you've got those sort of sets of services across there. You must backup and restore data in another place and be able to restore it quickly and easily. There's that again is a service. How quickly, how integrated that recovery, again that's gonna be a variable. >> That's a differentiation in the service. >> Different, exactly. You're gonna need data protection in general. End to end protection of one sort or another. For example, you need end to end encryption across there. It's not longer good enough to say, this bit's been encrypted and then this bit's encrypted. It's gotta be an end to end, from one location to another location, seamlessly provided, that sort of data protection. >> Well let me press on that 'cause I think it's a really important point and it's, you know, the notion that weakest link determines the strength of the chain, right? >> Yeah, yep. >> What you just described says, if you have encryption here and you don't have encryption there, but because of the nature of digital you can start bringing that data together, guess what? The weakest link determines the protection of the old world data. >> The protection of the, absolutely, yes. And then you need services like snapshots, like other services which provide much better usage of that data. One of the great things about Flash and has brought this about is that, you can take a copy of that in real time and use that for a totally different purpose and have that being changed in a different way, so there are some really significantly great improvements you can have with services like snapshots. And then you need some other services which are becoming even more important in my opinion. The advent of bad actors in the world has really brought about the requirement for things like air gaps. To have your data with the metadata all in one place, and completely separated from everything else. There are such things as called logical air gaps, I think as long as they're real, in the real sense that the two paths can't interfere with each other, those are gonna be services which become very, very important indeed. >> And that's generally as an example of a general class of security data service is gonna be required. >> Correct, yes. So ultimately what we're describing is, we're describing a new mindset that says, that a storage administrator has to think about the services that the applications and the business requires and then seek out technologies that can provide those services at the price point, with the degree power consumption, in the space, or the environmentals, or with the type of maintenance and services, really the support that are required, based on the physical location, the degree to which it's under the control, et cetera. Is that kinda how we're thinking about this? >> I think absolutely and again, if there're gonna be multiple of these around in the market place, one size is not gonna fit all. 
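David lists deduplication first among the data services, and a small sketch helps show why it is a service rather than a device feature: the same content-addressing logic can run wherever the data lives. This is a minimal, hypothetical illustration of block-level dedup over fixed-size chunks; production systems use variable-length chunking and far richer metadata handling.

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunks; real systems often chunk on content boundaries

def dedup_store(data: bytes, store: dict[str, bytes]) -> list[str]:
    """Split data into chunks, keep each unique chunk once, return the recipe."""
    recipe = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # only new content consumes capacity
        recipe.append(digest)
    return recipe

def rehydrate(recipe: list[str], store: dict[str, bytes]) -> bytes:
    """Reassemble the original data from its chunk recipe."""
    return b"".join(store[d] for d in recipe)

store: dict[str, bytes] = {}
payload = b"A" * 10_000 + b"B" * 10_000 + b"A" * 10_000  # repeated content
recipe = dedup_store(payload, store)
assert rehydrate(recipe, store) == payload
print(f"logical bytes: {len(payload)}, stored bytes: {sum(len(c) for c in store.values())}")
```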
If you're wanting super fast response time at an edge and if you don't get that response in time, it's gonna be no use whatsoever, you're gonna have a different architecture, a different way of doing it, than if you need to be a hundred percent certain that every bit is captured in a financial sort of environment. >> But from the service standpoint you wanna be able to look at that specific solution in a common way across policies and capabilities. >> Correct, correct. >> David Floyer! Once again thanks again for being on the Cube and talking about this important issue and thank you for joining us again. I'm Peter Burris, see you next time. [upbeat funk music]

Published Date : May 1 2019


11 25 19 HPE Launch Floyer 4 (Do not make public)


 

>> From our studios in the heart of Silicon Valley, Palo Alto, California, this is a Cube Conversation. >> Welcome to the Cube Studios for the Cube Conversation, where we go in-depth with thought leaders driving business outcomes with technology. I'm your host, Peter Burris. Digital business, and the need to drive the value of data within organizations, is creating an explosion of technology in multiple domains: systems, networking, and storage. We've seen advances in flash, we've seen advances in HDDs, we've seen advances in all kinds of different elements. But it's essential that users and enterprises still think in terms not just of these individual technologies piecemeal, but of solutions that are applied to use cases. Now, you always have to be aware of what the underlying technology components are, but it's still important to think about how systems integration is going to bring them together and apply them to serve business outcomes. Now, to have that conversation we've got David Floyer, who's the CTO and co-founder of Wikibon and my colleague. David, welcome to the Cube. >> Thank you very much, Peter. >> All right, so I've just laid out this proposition that systems integration as a discipline is not gonna go away when we think about how to build these capabilities that businesses need in digital business. So let's talk about that. What are some of the key features of systems integration, especially in the storage world, that will continue to help differentiate between winners and losers? >> Absolutely. So you need to be able to use software to combine all these different layers, and it has to be an architected software solution that will work wherever you've got equipment and wherever you've got data. So it needs to work in the cloud, it needs to work in a private cloud, it needs to work at the edge. All of this needs to be architected in a way which is available to the users, to put where the data is going to be created, as opposed to bringing it all into one super large collection of data. And so we've got different types of technology. At the very fastest we've got DRAM, we've got non-volatile DRAM which is coming very fast indeed, we've got flash, and there are many different sorts of flash, there's Optane from Intel that may be trying to get in between there as well, and then there are different HDDs as well. So we've got a long hierarchy. The important thing is that we protect the application and the operations from all of that complexity by having an overall hierarchy and utilizing software from an integration standpoint. >> But it suggests that when an enterprise thinks about a solution for how they store their data, they need to think in terms of, as you said, first off, physically where it is going to be; secondly, what kinds of services at the software level am I going to utilize to ensure that I can have a common administrative experience and a differentiated usage experience based on the physical characteristics of where it's being used; and then, obviously and very importantly from an administration standpoint, I need to ensure that I'm not having to learn new and unique administration practices everywhere, because I would just blow everything up. >> Absolutely, but there is going to be, in my opinion, a large number of these solutions out there. I mean, one data architecture is not going to be sufficient for all applications. There are gonna be many different architectures out there. I think it's probably useful just to start with one as an example in this area. Just let's take one as an example and then we can
see what the major characteristics of it are. So let's take something that would fit in most places, a mid-range type solution. Let's take Nimble, Nimble Storage, which has a very specific architecture. It started off by being a virtualization of all those different layers, so the application sees that everything is in flash and in cache, or whatever it is, but where it actually is is totally different; it can be anywhere within that hierarchy. >> So the application sees effectively a pool of resources that it can call. >> Yes, that's all it sees, and it doesn't know, and it doesn't need to know, that it's on a hard disk, or in memory, or in a cache inside the controller, or wherever it is. >> So, using Nimble as an example, Nimble is successfully masking the complexities and specificities of that storage hardware from the application. >> Right. And that's an advantage because it's simpler, but it also needs to cover more things. You need to be able to do everything within that virtualized environment. So you need, for example, to be able to take snapshots, and the metadata about the snapshots needs to be put in a separate place. So one of the things you find that comes from this sort of architecture is that the metadata is separated out, completely different from the actual data itself. >> But still proximate to the data, because data locality still matters. >> Absolutely, it has to be there, but it's in a different part of the hierarchy; it's much further up the hierarchy, all the metadata. So we've got the metadata, we've got the high-speed flash, we've got the fastest, which is the DRAM itself, which for writes has a protection mechanism for that part of the DRAM, specialized hardware in that area, so that allows you to do writes very, very quickly indeed. And then you come down to the next layer, which is flash, and indeed, taking the Nimble example, you have two sorts of flash: you can have the high-speed flash at the top, and if you want to, you can have lower performance flash, you know, using the 3D quad-level flash or whatever it is, if that's what you need. And then going lower down you have HDDs, and the architecture combines the characteristics of flash with the benefits of HDD, which is much lower cost, but with the characteristics of HDD, which are slower but very suited to writing out large volumes or reading in large volumes. So that's read out to the disk, but where it's all held is held in the metadata. >> So it's really looking at the workloads that are gonna hit the data, and then, without making the application aware of it, utilizing the underlying storage hierarchy to best support those workloads, again with a virtualized interface that keeps it really simple from an administration, development, and runtime perspective. All right, David Floyer, thanks very much for being on the Cube and talking about some of these new solution-oriented requirements for thinking about storage over the next few years. Once again, I'm Peter Burris. See you next time. [Music]
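To make the idea of a virtualized hierarchy concrete, here is a small, hypothetical placement policy in Python. It is not Nimble's algorithm, which is not described in enough detail here to reproduce; it only sketches the general pattern the conversation describes: metadata tracks where each block lives, hot small-block data stays on flash, and cold or large sequential data is written down to HDD.

```python
import time
from dataclasses import dataclass, field

@dataclass
class BlockMeta:
    """Metadata kept separately from the data, as described above."""
    tier: str = "flash"
    last_access: float = field(default_factory=time.time)
    size_bytes: int = 4096

def choose_tier(meta: BlockMeta, now: float,
                cold_after_s: float = 3600, large_io_bytes: int = 1 << 20) -> str:
    """Hypothetical policy: large sequential or cold blocks go to HDD, hot blocks stay on flash."""
    if meta.size_bytes >= large_io_bytes:
        return "hdd"    # big streaming writes and reads suit spinning disk
    if now - meta.last_access > cold_after_s:
        return "hdd"    # cold data can live on cheaper capacity
    return "flash"      # hot, small-block data stays on the fast tier

catalog = {"blk-1": BlockMeta(size_bytes=4096),
           "blk-2": BlockMeta(size_bytes=8 << 20),
           "blk-3": BlockMeta(last_access=time.time() - 86400)}

now = time.time()
for block_id, meta in catalog.items():
    meta.tier = choose_tier(meta, now)
    print(block_id, "->", meta.tier)
```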

Published Date : May 1 2019


11 25 19 HPE Launch Floyer 2


 

(upbeat jazz music) >> From our studios, in the heart of Silicon Valley, Palo Alta, California. This is a Cube Conversation. >> Hi, welcome to the Cube Studio for another Cube Conversation where we go in-depth with thought leaders driving business outcomes with technology. I'm your host Peter Buriss. As enterprise is look to take advantage of new classes of applications like AI and others that make possible this notion of a data first or data driven enterprise in a digital business world. They absolutely have to consider what they need to do with their stored resources to modernize them to make possible new types of performance today, but also sustain and keep open options for how they use data in the future. To have that conversation we're here with David Floyer, CTO and co-founder of Wikibon. David welcome to the conversation. >> Thank you. >> So David you've been looking at this notion of modern storage architectures for 10 years now. >> Yeah. >> And you've been relatively prescient in understanding what's gonna happen. You were one of the first guys to predict well in advance of everybody else that the crossover between flash and HDD was gonna happen sooner rather than later. So I'm not going to spend a lot of time quizzing you. What do you see as a modern storage architecture? Let's, just let it rip. >> Okay well let's start with one simple observation. The days of standalone systems for data have gone we're in a software defined world and you wanna be able to run those data architectures anywhere where the data is. And that means in your data center where it was created or in the cloud or in a public cloud or at the edge. You want to be able to be flexible enough to be able to do all of the data services where the best place is and that means everything has to be software driven. >> Software defined is the first proposition of modern data storage facility? >> Absolutely. >> Second. >> So the second thing is that there are different types of technology. You have the very fastest storage which is in the in the DIRUM itself. You have NVDIMM which is the next one down from that expensive but a lot cheaper than the DIMM. And then you have different sorts of flash. You have the high performance flash and you have the 3D flash, you know as many layers as you can which is much cheaper flash and then at the bottom you have HDD and even tape as storage devices. So how. The key question is how do you manage that sort of environment. >> Where do we start because it still sounds like we still have a storage hierarchy. >> Absolutely. >> And it still sounds like that hierarchy is defined largely in terms of access speeds >> Yeap. >> And price points. >> Price points. Yes. >> Those are the two Mason and bandwidth and latency as well are within that. >> which are tied into that? >> which are tied into those. Yes. So what you, if you're gonna have this everywhere and you need services everywhere what you have to have is an architecture which takes away all of that complexity, so that you, all you see from an application point of view is data and how it gets there and how is put away and how it's stored and how it's protected that's under the covers. So the first thing is you need a virtualization of that data layer. >> The physical layer? >> The virtualization of that physical layer. >> Right right. >> Yes. And secondly you need that physical layer to extend to all the places that may be using this data. You don't wanna be constrained to this data set lives here. 
You want to be able to say Okay, I wanna move this piece of programming to the data as quickly as I can, that's much much faster than moving the data to the processing. So I want to be able to know where all the data is for this particular dataset or file or whatever it is, where they all are, how they connect together, what the latency is between everything. I wanna understand that architecture and I want to virtualize view of that across that whole the nodes that make up my hybrid cloud. >> So let me be clear here so, so we are going to use a software defined infrastructure >> Yeah. that allows us to place the physical devices that have the right cost performance characteristics where they need to be based on the physical realities of latency power availability, hardening, et cetera. >> And the network >> And the network. But we wanna mask that complexity from the application, application developer and application administrator. >> Yes. >> And software defined helps do that, but doesn't completely do it. >> No. Well you want services which say >> Exactly, so their services on top of all that. >> On top of all that. >> Absolutely. >> That are recognizable by the developer, by the business person, by the administrator, as they think about how they use data towards those outcomes not use storage or user device but use the data. >> Data to reach application outcomes. That's absolutely right. And that's what I call the data plane which is a series of services which enable that to happen and driven by the application requirements themselves. >> So we've looked at this and some of the services include end end compression, duplication, >> Duplication. backup restore, security, data protection. >> Protection. Yeah. So that's kind of, that's kind of the services that now the enterprise buyer needs to think about. >> Yes. >> So that those services can be applied by policy. >> Yes. >> Wherever they're required based on the utilization of the data >> Correct. >> Where the event takes place. >> And then you still have at the bottom of that you have the different types of devices. You still have you still won't >> A lot of hamsters making stuff work. >> You still want hard disk for example they're not disappearing, but if you're gonna use hard disks then you want to use it in the right way for using a hard disk. You wanna give it large box. You want to have it going sequentially in and out all the time. >> So the storage administration and the physical schema and everything else is still important in all these? >> Absolutely. But it's less important, less a centerpiece of the buying decision. >> Correct. >> Increasingly it's how well does this stuff prove support the services that the business is using to achieve your outcomes. >> And you want to use costs the lowest cost that you can and they'll be many different options open, more more options open. But the automation of that is absolutely key and that automation from a vendor point of view one of the key things they have to do is to be able to learn from the usage by their customers, across as broad a number of customers as they can. Learn what works or doesn't work, learn so that they can put automation into their own software their own software service. >> So it sounds like we talking four things. We got software defined, still have a storage hierarchy defined by cost and performance, but with mainly semiconductor stuff. We've got great data services that are relevant to the business and automation that mask the complexity from everything. 
>> And a lot of the artificial AI there is, automated >> Running things. Fantastic. David Floyer, talking about modern storage architectures. Once again thanks for joining us on the Cube Conversation. And I'm your host Peter Burris. See you next time. (jazz music)
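One way to picture "services applied by policy" rather than per-device configuration is a short sketch like the one below. The service names and policy fields are assumptions for illustration, not an actual product API; the point is only that the same business-facing policy object can be evaluated wherever the data happens to sit.

```python
from dataclasses import dataclass

@dataclass
class DataPolicy:
    """Business-facing requirements, independent of any particular device."""
    encrypt_at_rest: bool = True
    snapshot_interval_min: int = 60
    replicate_to: str | None = None      # e.g. a second site or a public cloud region
    max_recovery_time_min: int = 15

def services_for(policy: DataPolicy) -> list[str]:
    """Translate a policy into the data services a platform would have to supply."""
    services = []
    if policy.encrypt_at_rest:
        services.append("end-to-end encryption")
    if policy.snapshot_interval_min:
        services.append(f"snapshots every {policy.snapshot_interval_min} min")
    if policy.replicate_to:
        services.append(f"replication to {policy.replicate_to}")
    if policy.max_recovery_time_min <= 15:
        services.append("fast restore tier")
    return services

# The same policy can be attached to a dataset on-prem, in a public cloud, or at the edge.
print(services_for(DataPolicy(replicate_to="cloud-region-x")))
```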

Published Date : May 1 2019


11 25 19 HPE Launch Floyer 1 (Do not make public)


 

(lively funk music) >> From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBEConversation. >> Hi, welcome to the CUBE Studio for another CUBEConversation where we go in-depth with the thought leaders driving outcomes with technology. I'm your host, Peter Burris. One of the biggest challenges that enterprises face is how to appropriately apply artificial intelligence. Now, let's be clear, the basic precepts and concepts and approaches to artificial intelligence have been around for a long time. One might argue decades. It's happening now because the technology can perform it. And one of the technologies that's especially important, and is absolutely essential to determining success or failure in AI, is storage. So what we're gonna do now is have a conversation with David Floyer, the CTO and co-founder of Wikibon, about that crucial relationship between AI and storage. David, welcome to the conversation. >> Thanks very much, indeed, Peter. Interesting subject. >> Oh, very interesting subject, so let's get right into it, David. >> Sure. >> What is it about AI and storage that makes the two of them so essential to the co-evolution of each? >> Absolutely, so first of all, you've got different parts of AI. So you've got the part where you're developing all of the models themselves, where you've got a large amount of data. You're trying to capture that data. You're trying to find out what's important in that data. And then you're developing models which you're going to use to do something. Either automate something or give information to somebody about the business process. >> All right, so that's the first one. What's the second one? >> So the second one, they're both concerned with inferencing. There's inferencing close to that data, the overall data, and there's inferencing right at the Edge itself. And they both important and driven in different ways. The inferencing close to the applications, the centralized applications-- >> So inferencing in the data center, so to speak. >> In the data center itself. Those are going to be, essentially, most of them, real-time decisions that are being made. For example, if I am trying to find out what sort of customer you are, what sort of price that I'm gonna give you, what sort of delivery, what sort of terms I'm gonna give you, that's information that I'm gonna have to get from a whole number of different sources, push them all together, and give that information to my systems of record. They are gonna make those decisions and they're gonna push them down to maybe an Edge or Apple-type device to give you the answer to that. That's going on in real-time and has to be extremely rapidly done. >> And now we've got inferencing at the Edge. >> And then you've got inferencing at the Edge. Now here's all of the data coming in, whether it be a mobile Edge or a stationery Edge, huge amounts of data coming in to cameras to other senses of one sort or another. >> Or being generated right there where the-- >> Absolutely, generated, that's the first time that status has ever existed. And what you want to do with that is put the inference there and take what's important from that data. Because 99% or 99.9% of that data is absolutely free of value. So you're trying to extract that 0.01% of data and do actions locally with that and also pass those up the line. So you're actually getting rid of a huge amount of data at the Edge. >> All right, so that's an overall AI taxonomy. >> Yeah. 
>> How does storage influence what happens at the modeling and development level? What's the relationship between AI modeling and storage? >> So AI modeling is about lots and lots of data. Lots and lots of small files. Imagine thousands of millions of pictures going through millions of any sort of artificial intelligence you're trying to generate on that. So, that's one thing is, it's large amounts of data and you don't do modeling just once. You reuse the data. You run it again. You check it against something else. You're constantly looking for new types of data, new data, large amounts of data, lot of large-scale processing of that data to create models of one sort or another. >> You're not gonna do that on disk. >> You're not gonna to that on disk. That has to be flash. Has to be fast flash. And what you want, if you can, is to integrate the processing and the data, all as one, so that it fits in, it can be viewed as a system for the data scientists, which it sits there and does what they want to do and then can be managed from a storage point-of-view by the professionals. >> So in the center, it's gonna be very fast, very high-performance, very scalable, and flash. >> Yes. >> What about at the Edge? >> So, well, (laughs) >> What about at the activity Edge, let's call it? >> Yeah, activity, that again, is here you've got real-time processing. So again, the emphasis is on flash most of the time. And you've, in fact, got other technologies like, for example, envidems, which are coming in and increasing. So you've got a hierarchy there which you want to be able to use the right sort of storage for that job. But a lot of that's gonna be extremely rapid. And you want to be able to take your current systems of record, squeeze those down to allow space for all this inference work to be added in so that everything is real-time. So that's really, it's much faster. Of course, it doesn't mean you get rid of all of the things like data services and all of the things which you've collected. >> Well, on the contrary, doesn't it mean that those types of things become more important? >> Become more important. >> Well, so here's a hypothesis that I've had for a while and we've talked about, that the traditional storage notion of data, which was size, class, format-- >> Latency. >> IOPs. >> Yeah. >> Those types of things-- >> Bandwidth. >> Means nothing to the data scientists. >> Correct. >> AI is a business problem driving business observations so data services, in many respects, are a way of mediating the performance and other realities at the device level with the business and tool chain requirements at the AI level, right? >> Absolutely, absolutely, and you've gotta have those services. And, indeed, with hybrid computing, you want to move that processing to where the data is created, as much as you can. So if it's created in the Cloud, you go to the Cloud. If it's created-- >> Created or used? >> If you can, you want to do it where the data is created. The less data you move around, the better. So it's much better to send a request to that data where it's created, as close as possible to that. >> Okay, subject to the realities of latency. >> Absolutely. >> So, in many respects, it's still gonna be you want the data where it's gonna be used, but if you don't have to move it to where it's used, because the latency envelope is large enough, then keep it where it's created. >> Keep it where it's created. >> Got it. >> Absolutely, yes. 
And now, if we go to the Edge, there you really want to avoid having to store data at all. There's 99% of that data is useless. 99.9% of that data is useless. You wanna get rid of that. You want to use the inferencing to store only what is necessary. Now, to begin with, when you're still in the data modeling stage of AI, you may want to send some of that back, quite a lot of it back. But once you get into a normal running of it, you want to get rid of as much of that possible data as you can, take the core of that data, what it matters, the exceptions, etc. Send that up and get rid of it. Just destroy it. >> Well, this is one area where you and I, we generally agree. You say 99%, maybe it's 95%, maybe it's 90% of the data gets, you know, gotten rid of. Because there's always gonna be derivative opportunities to use data in valuable ways. But that's something we're gonna discover over the next few years. >> Sure. >> But we're not gonna go through that process if we don't have storage that can handle these workloads. >> Absolutely. >> All right. >> Yep. >> David Floyer, talking about the relationship between AI and storage. Thanks again for being on the CUBE. >> You're welcome. >> And thanks for joining us for another CUBEConversation. I'm Peter Burris. See you next time. (lively funk music)
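Here is a minimal sketch of the edge pattern David describes: keep only the exceptions and a compact summary, and discard the bulk of the raw stream locally. The threshold and message format are assumptions for illustration; real edge inferencing would typically run a trained model rather than a fixed bound.

```python
def filter_at_edge(readings: list[float], low: float = 10.0, high: float = 90.0) -> dict:
    """Keep only out-of-bounds readings plus a summary; discard the rest locally."""
    exceptions = [(i, r) for i, r in enumerate(readings) if r < low or r > high]
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings) if readings else None,
        "exceptions": exceptions,  # the small fraction worth sending upstream
    }

# Simulated sensor stream: mostly normal values with a couple of anomalies.
stream = [42.0] * 998 + [3.2, 97.5]
upstream_message = filter_at_edge(stream)
print(f"kept {len(upstream_message['exceptions'])} of {upstream_message['count']} readings")
```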

Published Date : May 1 2019


Old Version: James Kobielus & David Floyer, Wikibon | VMworld 2018


 

from Las Vegas it's the queue covering VMworld 2018 brought to you by VMware and its ecosystem partners and we're back here at the Mandalay Bay in somewhat beautiful Las Vegas where we're doing third day of VMworld on the cube and on Peterborough and I'm joined by my two lead analysts here at Ricky bond with me Jim Camilo's who's looking at a lot of the software stuff David floor who's helping to drive a lot of our hardware's research guys you've spent an enormous amount of time talking to an enormous number of customers a lot of partners and we all participated in the Analyst Day on Monday let me give you my first impressions and I want to ask you guys some questions here you thought so I have it this is you know my third I guess VMworld in or in a row and and my impression is that this has been the most coherent of the VM worlds I've seen you can tell when a company's going through a transition because they're reaching to try to bring a story together and that sets the tone but this one hot calendar did a phenomenal job of setting up the story it makes sense it's coherent possibly because it aligns so well with what we think is going to happen in the industry so I want to ask you guys based on three days of one around and talking to customers David foyer what's been the high point what have you found is the most interesting thing well I think the most interesting thing is the excitement that there is over VMware if you if you contrast that with a two three years ago the degree of commitment of customers to viennois the degree of integration they're wanting to make the degree rate of change and ideas that have come out of VMware it's like two different companies totally different companies some of the highlights for me were the RDS the bringing from AWS to on site as well as on the AWS cloud RDS capabilities I think that's a very very interesting thing that's the relational database is services the Maria DB and all the other services that's a very exciting thing to me and a hint to me that AWS is going to have to get serious about well Moore's gone out I think it's a really interesting point that after a lot of conversations with a lot of folks saying all AWS it's all going to go up to the cloud and wondering whether that also is a one-way street for VMware Casta Moore's right but now we're seeing it's much more of a bilateral relationship it's a moving it to the right place and that's the second thing the embracing of multi-cloud by everybody one cloud is not going to do everything they're going to be SAS clouds they're going to be multiple places where people are gonna put certain workloads because that's the best strategic fit for it and the acceptance in the marketplace that that is where it's going to go I think that again is a major change so hybrid cloud and multi cloud environments and then the third thing is I think the richness of the ecosystem is amazing the the going on the floor and the number of people that have come to talk to us with new ideas really fascinating ideas is something I haven't seen at all for the last last three four years and so I'm gonna come back to you on that but it goes back to the first point that you make that yeah there is a palpable excitement here about VMware that two-three years ago the conversation was how much longer is the franchise gonna be around Jim but now it's clear yeah it's gonna be around Jim how about you yeah actually I'm like you guys I'm a newbie to VM world this is my very first remember I'm a big data analyst I'm a data science 
an AI guy, but obviously I've been aware of VMware and I've had many contacts with them over the years. My primary takeaway, and I like Pat Gelsinger's take, I agree with you, Peter, is that it's a really coherent take, and I like that phrase, even though it sounds clunky and they kind of apologize for it: they are the dial tone to the multi-cloud. That really gives you a strong sense, because who else can you characterize that way in this whole market space? Cloud computing essentially needs a multi-cloud provider who provides the unifying virtualization glue to help customers who are investing in AWS, and maybe, you know, adopting a bit of Google and Microsoft Azure and so forth, providing a virtualization layer that's above server virtualization, network virtualization, VDI, all the way to the edge. Nobody is putting it all together in quite the way that VMware is. One of my chief takeaways is similar to David's, which is that in terms of the notion of a hybrid cloud, VMware, with what it's doing with RDS, but also projects like Project Dimension, which is a project in progress, is taking essentially the entire VMware virtualization stack and putting it onto an appliance for deployment on the edges, and then managing it for you. VMware describes the plan as an end-to-end managed edge cloud service, and so forth. >> Wow, the blurring of public and private cloud. I don't even think the term hybrid cloud applies; it's just blurring into a common cloud. >> Yeah, the cloud is moving to the workload, the cloud's moving to the data, which is exactly what we say. >> They are halfway there in terms of that vision. Halfway in the sense that RDS has been announced, you know, on VMware, and this Project Dimension, they're well along with that; there were briefings for the analysts. I'm really impressed with how they're architecting this. I think they've got a shot to really dominate. >> Well, I'll tell you, I would agree with you, and just maybe provide a slightly different version of one of the things you said. I definitely agree. I think what VMware hopes to do, and I think they're not alone, is to have AWS look like an appliance to their console, to have Azure look like an appliance to their console. So through VMware you can get access to whatever services you need, including your VMware machines, your VMs, inside those clouds. But increasingly their goal is to be that control point, that management point, for all of these different resources that are being built, and it is very compelling. I think there's one area where I still think we need more. As analysts we've always got to look at what's missing and what more is required, and I hear what you say about Project Dimension, but I think that the edge story still requires a fair amount of work. Yes, there's a project in place, but the edge is going to be an increasingly important locus of how architectures get laid out, how people think about applications in the future, how design happens, how methodologies for building software work. David, what do you think? When you look out, what more is needed, for you? >> So really, I think there are two things that give me a small concern. The edge, that's a long-term view, so they've got time to get that right. But the edge view is very much an IT view, top-down, and they are looking to put in place everything that they think the OT people should fit in with. I think that, personally, is not going to be a winning strategy. You have to take it from the bottom up. The world is going to go towards devices, very rich devices and sensors, with lots of software right on the device and the inference work done on those devices, and the job of IT will be to integrate those devices. It won't be those devices taking on the standards of IT; it'll be IT that has to shape itself to look after all those devices. So that's the main viewpoint I think needs adjustment, and it will come, I'm sure, over time. But as you said, there's a lot of computer science there, and an enormous amount of new partnerships are going to have to be formed to make this happen. >> Jim, what do you think? >> Yeah, I agree. In terms of partnerships, one big gap for both VMware and Dell Technologies, in partnerships and in their own technology, is AI. Now they have a project, VMware calls it Project Magna, which is really AIOps. In fact, I published a Wikibon report this week on AIOps, AI to drive IT service management end to end. They're doing some stuff, they're working on that project, but it's just, you know, in the beginning stages. I think what's going to happen is that VMware and Dell Technologies are going to have to make strategic acquisitions of AI solution providers to build up that capability, because that's going to be fundamental to their ability to manage this complex multi-cloud fabric from end to end, continuously. They need that competency internally; that can't simply be a partner providing it, that's got to be their core competency. >> So, you know, I'm going to push on it. I'll give you the contrarian point of view, okay? We've had conversations with VMware, we've had a lot of conversations about this. Is that a reflection of David's point about top-down, buying things and pushing them down, as opposed to other conversations we've had about how the edge is going to evolve, where a lot of OT guys are going to combine with business expertise and technology expertise to create specialized solutions, and then VMware is going to have to reach out to them and make VMware relevant to them? Do you think it's going to be VMware buying a bunch of stuff and aggregating a solution, or is it going to be the solutions coming from elsewhere and VMware just becoming more relevant to them? Now, they could still be buying a bunch of stuff to get that horizontal layer in place, but which way do you think it's going to go? >> I think it's going to be top-down; they're going to buy stuff. Because if I talk to the channel, one of the channel people this morning, about, well, you know, they've got an IoT connected bundle and so forth that they announced at this show, I think they'd agree with me that the core AI technology needs to be built into the fundamentals, like the IoT stack bundle, that they then provide to the channel partners with channel-specific content that the partners can then tweak and customize to their specific needs. But, you know, the core requirements for AI are horizontal: the ability to run neural networks, to do predictive analysis, anomaly detection and so forth. This is all cross-cutting, across all domains. It has to be in the core application stack; it can't simply be something they source for particular channel opportunities. It has to be leveraged across, you know, the same core TensorFlow models for anomaly detection, for manufacturing, for logistics, for, you know, customer relationship management, whatever. >> Or are you saying, essentially, that VMware then becomes that horizontal play, even if the solution providers are increasingly close to the actual action, where the edge is? >> I'm going to disagree, gently, on that, but we'll still be friends. You know, I'm an OT guy at heart, I suppose, and I think that is going to be a stronger force. In terms of VMware, there will be some places where it will be top-down, but other places where it's going to need to adjust. But I think there's one other very interesting area I'd like to bring up in terms of this question of acquisition. What we heard about beforehand was excellent results, and VMware has been adding, you know, a billion dollars a year in terms of free cash, and they have thirteen billion in short-term cash, and the refinancing from Dell is going to take eleven of that thirteen and put it towards the company... >> Towards Dell Tech? >> Yes, well, towards Dell as a holding company, and Silver Lake, towards those partners. I personally believe that there is such a lot of opportunity out there. If you take NSX, for example, it has the potential to do things in new areas. They're going to need to provide solutions in those new areas and aggressively go after those new areas, and that's going to mean big investments, and there are many other areas where I think they are going to need acquisitions to strengthen the whole story they have, the whole multi-cloud story. >> Like this real-time operating system idea. NSX has a network routing virtualization backplane; it needs to go real-time, latency-sensitive, guaranteed latencies. They need that, big investments. >> Guaranteed, yeah, they need to go there. >> So we're agreeing on that, and I get concerned that it's not going to be given the right resources, you know, to be able to actually go after the opportunities that they have genuinely created. It's going to be interesting to see how that plays out. >> I think what we're all saying, though, is that there is going to be a set of solution players that VMware is going to have to make significant moves to make itself relevant to, and then the question is, what's the value story, what's the value proposition? It's probably going to be like all partnerships: some are going to claim that they are doing it all, and VMware is going to claim that they do more of it, but at the end of the day VMware has to make itself relevant to the edge, however that happens. I want to pick up on NSX, because I'm a pretty big believer that NSX may be the very special crown jewel in a lot of this stuff. This notion of hybrid cloud, whatever we call it, let's just call it extended cloud for lack of a better word, is predicated on the idea that I also have a network that can naturally and easily not just bridge, but truly multi-network, interoperate, internetwork with a lot of different cloud sources, but also a lot of different cloud locations. And there are not a lot of technologies out there that are great candidates to do that. And I look at NSX and I'm wondering, is it going to be, and I don't want to take the metaphor too far, but is it going to be kind of a new TCP/IP for the cloud? In the sense that you're still going to run over TCP/IP and you're still going to run over the Internet, but now we're going to get greater visibility into jobs, into workloads, into management infrastructures, into data locations and data placement, predictive movement, and NSX is going to be at the vanguard of showing how that's going to work. And the security side of that especially, to be able to know what is connected to what, and what shouldn't be connected to what, and to be able to have that... >> Yeah, they need stateful structured streaming and the others, Kafka, Flink, whatever. They need that to be baked into the whole NSX virtualization layer, making it that much more programmable, and that provides that much better a target for applications. >> All right, last question, then we've got to wrap, guys. David, as you walk out the door and get on the plane, what are you taking away? What's your last impression? >> My last impression is one of genuine excitement, wanting to work with, wanting to follow up with, so many of the smaller organizations, the partners, that have been here and who are genuinely providing, in this ecosystem, a very rich tapestry of capability. >> That's great. Jim? >> My takeaway is I want to see their roadmap for Kubernetes and serverless. Last year they made an announcement of a serverless project, I forget what the code name is, and we didn't hear a whole lot about it this year. But they're going up the app stack; they've got a Kubernetes distribution. They need a developer story. I mean, developers are building functional apps and so forth, and they're also containerizing them. They need a developer story and they need a serverless story, and they need to bring us up to speed on where they're going in that regard. Because AWS, their predominant partner, has Lambda functions and all that stuff, and that's the development platform of the present and the future, and I'm not hearing an intersection of that story with VMware's story. >> Yeah. My last thing that I'll say is that I think that, for the next five years, VMware is going to be one of the companies that shapes the future of the cloud, and I don't think we would have said that a couple of years ago. >> No, we wouldn't. I agree with you. >> All right, so this has been the Wikibon research leadership team talking about what we've heard at VMworld this year. A lot of great conversation. Feel free to reach out to us, and if you want to spend more time with Wikibon, we'd love to have you. Once again, Peter Burris, for David Floyer and Jim Kobielus, thank you very much for watching theCUBE. We'll talk to you again. [Music]
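Jim's point about AI being horizontal is easy to see in miniature: the same anomaly-detection routine can run against a factory sensor feed or a logistics feed without changing. The sketch below is purely illustrative (hypothetical data and thresholds, and a simple z-score test rather than a trained TensorFlow model); it is not anything VMware or Dell has announced.

```python
# Illustrative only: a domain-agnostic anomaly detector of the kind described above,
# applied unchanged to two different "domains". All data here is made up.
import numpy as np

def zscore_anomalies(values, threshold=2.0):
    """Flag points more than `threshold` standard deviations from the mean."""
    values = np.asarray(values, dtype=float)
    mean, std = values.mean(), values.std()
    if std == 0:
        return np.zeros(len(values), dtype=bool)
    return np.abs(values - mean) / std > threshold

# The same function serves a manufacturing feed and a logistics feed.
vibration_mm_s = [0.8, 0.9, 0.85, 0.87, 4.2, 0.88, 0.86]   # factory sensor readings (hypothetical)
delivery_hours = [24, 26, 25, 23, 27, 71, 25, 24]           # shipment times (hypothetical)

print(zscore_anomalies(vibration_mm_s))  # flags the 4.2 spike
print(zscore_anomalies(delivery_hours))  # flags the 71-hour outlier
```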

Published Date : Aug 29 2018

**Summary and Sentiment Analysis are not shown because of an improper transcript**


David Floyer, Wikibon | Pure Storage Accelerate 2018


 

>> Narrator: Live from the Bill Graham Auditorium in San Francisco, it's theCUBE, covering Pure Storage Accelerate, 2018, brought to you by Pure Storage. >> Welcome back to theCUBE's coverage of Pure Storage Accelerate 2018. I'm Lisa Martin. Been here all day with Dave Vellante. We're joined by David Floyer now. Guys, really interesting, very informative day. We got to talk to a lot of puritans, but also a breadth of customers, from Mercedes Formula One, to Simpson Strong-Tie to UCLA's School of Medicine. Lot of impact that data is making in a diverse set of industries. Dave, you've been sitting here, with me, all day. What are some of the key takeaways that you have from today? >> Well, Pure's winning in the marketplace. I mean, Pure said, "We're not going to bump along. "We're going to go for it. "We're going to drive growth. "We don't care if we lose money, early on." They bet that the street would reward that model, and it has. Kind of a little mini Amazon, a version of the Amazon model. Grow, grow, grow, worry about profits down the road. They're eking out a slight, little positive free cashflow, on a non-GAAP basis, so that's good. And they were first with All-Flash, really kind of early on. They kind of won that game. You heard David, today. The NVMe, the first with NVMe. No uplifts on pricing for NVMe. So everybody's going to follow that. They can do the Evergreen model. They can do these things and claim these things as "we were first." Of course, we know, David Floyer, you were first to make the call, back in 2008, (laughs) on Flash and the All-Flash data center, but Pure was right there with you. So they're winning in that respect. Their ecosystem is growing. But, you know, storage companies never really have this massive ecosystem that follows them. They really have to do integration. So that's, that's a good thing. So, you know, we're watching growth, we're watching continued execution. It seems like they are betting that their product portfolio, their platform, can serve a lot of different workloads. And it's going to be interesting to see if they can get to two billion, the kind of, the next milestone. They hit a billion. Can they get to two billion with the existing sort of product portfolio and roadmap, or do they have to do M&A? >> David: You're right. >> That's one thing to watch. The other is, can Pure remain independent? David, you know well, we used to have this conversation, all the time, with the likes of David Scott, at 3PAR, and the guys at Compellent, Phil Soran and company. They weren't able, Frank Slootman at Data Domain, they weren't able to stay independent. They got taken out. They weren't pricey enough for the market not to buy them. They got bought out. You know, Pure, five billion dollar market cap, that's kind of rich for somebody to absorb. So it was kind of like NetApp. NetApp got too expensive to get acquired. So, can they achieve that next milestone, two billion? Can they get to five billion? The big difference-- >> Or is there any hiccup, on the way, which will-- >> Yeah, right, exactly. Well the other thing, too, is that, you know, NetApp's market was growing, pretty substantially, at the time, even though they got hit in the dot-com boom. The overall market for Pure isn't really growing. So they have to gain share in order to get to that two billion, three billion, five billion dollar mark. >> If you break the market into flash and non-flash, then they're in the much better half of the market. That one is still growing, from that perspective. 
>> Well, I kind of like to look at the Server SAN piece of it. I mean, they use this term, by Gartner, today, the something-accelerated, it's a new Gartner term, in 2018-- >> Shared Accelerated Storage >> Shared Accelerated Storage. Gartner finally came up with a category that we called Server SAN. I've been joking all day. Gartner has a better V.P. of naming than we do. (chuckles) We were lookin' at Server SAN. I mean, I started, first talking about it, in 2009, thanks to your guidance. But that chart that you have that shows the sort of Server SAN, which is essentially Pure, right? It's the, it's not-- >> Yes. It's a little more software than Pure is. But Pure is an awful lot of software, yes. And showing it growing, at the expense of the other segments, you know. >> David: Particularly sad. >> Particularly sad. Very particularly sad. >> So they're really well positioned, from that standpoint. And, you know, the other thing, Lisa, that was really interesting, we heard from customers today, that they switched for simplicity. Okay, not a surprise. But they were relatively unhappy with some of their existing suppliers. >> Right. >> They got kind of crummy service from some of their existing suppliers. >> Right. >> Now these are, maybe, smaller companies. One customer called out SimpliVity, specifically. He said, "I loved 'em when they were an independent company, "now they're part of HPE, meh, "I don't get service like the way I used to." So, that's a sort of a warning sign and a concern. Maybe, you know, HPE's prioritizing the bigger customers, maybe the more profitable customers, but that can come back to bite you. >> Lisa: Right. >> So Pure, the point is, Pure has the luxury of being able to lose money, service like crazy those customers that might not be as profitable, and grow from its position of a smaller company, on up. >> Yeah, besides the Evergreen model and the simplicity being, resoundingly, drivers and benefits that customers across, you know, from Formula One to medical schools, are having, you're right. The independence that Pure has currently is a selling factor for them. And it's also probably a big factor in retention. I mean, they've got a Net Promoter Score of over 83, which is extremely high. >> It's fantastic, isn't it? I think there would be VMI, that I know of, has an even higher one, but it's a very, very high score. >> It's very high. They added 300 new customers, last quarter alone, bringing their global customer count to over 4800. And that was a resounding benefit that we were hearing. They, no matter how small, if it's Mercedes Formula One or the Department of Revenue in Mississippi, they all feel important. They feel like they're supported. And that's really key for driving something like a Net Promoter Score. >> Pure has definitely benefited; it's taken share from EMC. It did early on with VMAX and Symmetrix and VNX. We've seen Dell EMC storage business, you know, decline. It probably has hit bottom, maybe it starts to grow again. When it starts to grow again, I think, even last quarter, its growth, in dollars, was probably the size of Pure. (chuckles) You know, so, but Pure has definitely benefited from stealing share. The flip side of all this is, when you talk to, you know, the CxOs, the big customers, they're doing these big digital transformations. They're not buying products, you know, they're buying transformations. They're buying sets of services. 
They're buying relationships, and big companies like Dell and IBM and HPE, who have large services arms, can vie for certain business that Pure, necessarily, can't. So, they've got the advantage of being smaller, nimbler, a best of breed product, but they don't have this huge portfolio of capabilities that gives them a seat at the CxO table. And you saw that, today. Charlie Giancarlo, his talk, he's a techie. The guys here, Kicks, Hat, they're techies. They're hardcore storage guys. They love storage. It reminds me of the early days of EMC, you know, it's-- >> David: Or NetApp. Yeah. Yeah, or NetApp, right. They're really focused on that. So there's plenty of market for them, right now. But I wonder, David, if you could talk about, sort of architecturally, people used to criticize the two controller, you know, approach. It obviously seems to be doing very well. People take shots at their, the Evergreen model, saying "Oh, we can do that too." But, again, Pure was first. Architecturally, what's your assessment of Pure? >> So, the Evergreen, I think, is excellent. They've gone about that well. I think, from a straightforward architecture, they kept it very simple. They made a couple of slightly odd decisions. They went with their own NAND chips, putting them into their own stuff, which made them much smaller, much more compact, completely in charge of the storage stack. And that was a very important choice they made, and it's come out well for them. I have a feeling. My own view is that M.2 is actually going to be the form factor of the future, not the SSD. The SSD just fitted into a hard disk slot. That was its only benefit. So, when that comes along, and the NAND vendors want to increase the value that they get from these stacks, etc., I'm a little bit nervous about that. But, having said that, they can convert back. >> Yeah, I mean, that seems like something they could respond to, right? >> Yeah, absolutely. >> I was at the Micron financial analysts' meeting, this week. And a lot of people were expecting that, you know, the memory business has always been very cyclical, it's like the disk drive business. But, it looks like, because of the huge capital expenses required, it looks like they've got a good handle on supply. Micron made a good strong case to the street that, you know, the pricing is probably going to stay pretty favorable for them. So, I don't know what your thoughts are on that, but that could be a little bit of a head wind for some of the systems suppliers. >> I take that with a pinch of salt. They always want to have the market saying it's not going to go down. >> Of course, yeah. And then it crashes. (chuckles) >> The normal marketplace, for any of that, is to go through this series of S-curves. As you reach a certain point of volume, and 3D NAND has reached that point, it will go down, inevitably, and then QLC comes in, and then that in turn will go down, again, through that curve. So, I don't see the marketplace changing. I also think that there's plenty of room in the marketplace for enterprise, because the biggest majority of NAND production is for consumer, 80% goes to consumer. So there's plenty of space, in the marketplace, for enterprise to grow. >> But clearly, the prices have not come down as fast as expected because of supply constraints. And the way in which companies like Pure have competed with spinning disks is through excellent data reduction algorithms, right? >> Yes. 
>> So, at one point, you had predicted there would be a crossover between the cost per bit of flash and spinning disk. Has that crossover occurred, or-- >> Well, I added in the concept of sharing. >> Raw. >> Yeah, raw. But, added in the cost of sharing, the cost-benefit of sharing, and one of the things that really impresses me is their focus on sharing, which is to be able to share that data, for multiple workloads, in one place. And that's excellent technology, they have. And they're extending that from snapshots to cloud snaps, as well. >> Right. >> And I understand that benefit, but from a pure cost per bit standpoint, the crossover hasn't occurred? >> Oh no. No, they're never going to. I don't think they'll ever get to that. The second that happens, disks will just disappear, completely. >> Gosh, guys, I wish we had more time to wrap things up, but thanks, so much, Dave, for joining me all day-- >> Pleasure, Lisa. >> And sporting The Who to my Prince symbol. >> Awesome. >> David, thanks for joining us in the wrap. We appreciate you watching theCUBE, from Pure Storage Accelerate, 2018. I'm Lisa Martin, for Dave and David, thanks for watching.
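A rough way to see David's distinction between raw and effective cost per bit: once an assumed data-reduction ratio and the ability to share a single copy across workloads are factored in, flash can land below disk on an effective basis even while raw flash stays several times more expensive. All of the numbers below are illustrative assumptions, not figures from this segment.

```python
# Purely illustrative numbers: the point is that effective $/GB, not raw $/GB,
# is what flash competes on once data reduction and sharing are factored in.
raw_cost_per_gb = {"flash": 0.20, "hdd": 0.03}   # hypothetical raw $/GB
data_reduction  = {"flash": 4.0,  "hdd": 1.0}    # assumed dedupe/compression ratio
copies_needed   = {"flash": 1,    "hdd": 3}      # one shared copy vs. per-workload copies (assumed)

for media in ("flash", "hdd"):
    effective = raw_cost_per_gb[media] * copies_needed[media] / data_reduction[media]
    print(f"{media}: raw ${raw_cost_per_gb[media]:.2f}/GB -> effective ${effective:.3f}/GB")
# flash: raw $0.20/GB -> effective $0.050/GB
# hdd: raw $0.03/GB -> effective $0.090/GB
```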

Published Date : May 24 2018


Action Item Quick Take | David Floyer | Flash and SSD, April 2018


 

>> Hi, I'm Peter Burris with another Wikibon Action Item Quick Take. David Floyer, you've been at the vanguard of talking about the role that Flash, SSDs, and other technologies are going to have in the technology industry, predicting early on that it was going to eclipse HDD, even though you got a lot of blow back about the "We're going to remain expensive and small". That's changed. What's going on? >> Well, I've got a prediction that we'll have petabyte drives, SSD drives, within five years. Let me tell you a little bit why. So there's this new type of SSD that's coming into town. It's the mega SSD, and Nimbus Data has just announced this mega SSD. It's a hundred terabyte drive. It's very high density, obviously. It has fewer IOPS and less bandwidth than a standard SSD. The access density is much better than HDD, but still obviously lower than high-performance SSD. Much, much lower space and power than either SSD or HDD in terms of environmentals. It's three and a half inch. That's compatible with HDD. It's obviously looking to go into the same slots. A hundred terabytes today, two hundred terabytes coming, 10x that of the HAMR drives that are coming in from HDDs in 2019, 2020, and the delta will increase over time. It's still more expensive than HDD per bit, and it's not a direct replacement, but it has much greater ability to integrate with data services and other things like that. So the prediction, then, is get ready for mega SSDs. It's going to carve out a space at the low end of SSDs and into the HDDs, and we're going to have one petabyte, or more, drives within five years. >> Big stuff from small things. David Floyer, thank you very much. And, once again, this has been a Wikibon Action Item Quick Take. (chill techno music)
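As a back-of-the-envelope check on that petabyte call: starting from a roughly 100 terabyte drive today, the prediction implies capacity growth on the order of 10x in five years, which works out to an annual growth rate of around 60%. The scenarios below use assumed growth rates purely for illustration.

```python
# Back-of-the-envelope check on the petabyte-in-five-years call, using assumed growth rates.
capacity_tb = 100          # roughly today's largest mega SSD, per the segment
years = 5
for annual_growth in (0.45, 0.60, 0.75):   # hypothetical capacity CAGR scenarios
    future = capacity_tb * (1 + annual_growth) ** years
    print(f"{annual_growth:.0%} per year -> ~{future:,.0f} TB")
# 45% per year -> ~641 TB
# 60% per year -> ~1,049 TB  (about a petabyte)
# 75% per year -> ~1,641 TB
```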

Published Date : Apr 6 2018


David Floyer | Action Item Quick Take - March 30, 2018


 

>> Hi, this is Peter Burris with another Wikibon Action Item Quick Take. David Floyer, big news from Redmond, what's going on? >> Well, big Microsoft announcement. If we go back a few years before Nadella took over, Ballmer was a great believer in one Microsoft. They bought Nokia, they were looking at putting Windows into everything, it was a Windows led, one Microsoft organization. And a lot of ambitious ideas were cut off because they didn't get the sign off by, for example, the Windows group. Nadella's first action, and I actually was there, was to announce Office on the iPhone. A major, major thing that had been proposed for a long time was being held up internally. And now he's gone even further. The focus, clear focus of Microsoft is on the cloud, you know 50% plus CAGR on the cloud, Office 365 CAGR 41% and AI, focusing on AI and obviously the intelligent age as well. So Windows 10, Myerson, the leader there, is out, 2% CAGR, he missed his one billion Windows target, by a long way, something like 50%. Windows functionality is being distributed, essentially, across the whole of Microsoft. So hardware is taking the Xbox and the Surface. Windows server itself is going to the cloud. So, big change from the historical look of Microsoft, but, a trimming down of the organization and a much clearer focus on the key things driving Microsoft's fantastic increase in net worth. >> So Microsoft retooling to take advantage and be more relevant, sustain it's relevance in the new era of computing. Once again, this has been a Wikibon Action Item Quick Take. (soft electronic music)

Published Date : Mar 30 2018


Wikibon Action Item Quick Take | David Floyer | OCP Summit, March 2018


 

>> Hi, I'm Peter Burris, and welcome once again to another Wikibon Action Item Quick Take. David Floyer, you were at OCP, the Open Compute Project summit, this week, wandered the floor, talked to a lot of people, and one company in particular stood out, Nimbus Data. What'd you hear? >> Well, they had a very interesting announcement of their 100 terabyte, three and a half inch SSD, called the ExaDrive. That's a lot of storage in a very small space. These high capacity SSDs, in my opinion, are going to be very important. They are denser, much less power, much less space, not as much performance, but they fit in very nicely between the lowest level of disk, hard disk storage, and the upper level. So they are going to be very useful in lower tier two applications. Very low friction for adoption there. They're going to be useful in tier three, but they're not a direct replacement for disk. They work in a slightly different way. So the friction is going to be a little bit higher there. And then in tier four, it's again very interesting to take the metadata about large amounts of data and put that metadata on high capacity SSD, to enable much faster access at a tier four level. So the action item for me is have a look at my research, and have a look at the general pricing: it's about half of what a standard SSD is. >> Excellent, so this is once again a Wikibon Action Item Quick Take, David Floyer talking about Nimbus Data and their new high capacity, slightly lower performance, cost effective SSD. (upbeat music)

Published Date : Mar 23 2018


Action Item Quick Take | David Floyer - Feb 2018


 

(groovy music) >> Hi, I'm Peter Burris, welcome to a Wikibon Action Item Quick Take. David Floyer, you and I visited Half Moon Bay this week for announcements. What happened? >> Well, there were a number of IBM, Spectrum, and NVMe over Fabrics announcements, and they were, I thought, good. The first one was a broad range of Spectrum software announcements working on any hardware, not just IBM, and it's a good step towards the hyperconverged, software-led services and environments that we've been talking about. The second, they filled in the IBM NAS gap with Spectrum NAS, so that's always a good thing to fill in. There's a lot of practical reasons for using that. The third is they announced an IBM 900 storage product with fantastic IO performance: 95 microseconds, including inline compression. And for the hardware people, that's really, really good. And the last one is, I thought, the most interesting of all, which is a good IBM announcement on the commitment to NVMe over Fabrics. They announced a very fast solution with the POWER9, with Gen 4 PCIe, and the 900 storage, that's best of breed in terms of speed, and they guarantee that all of their current products will support NVMe over Fabrics as it comes out in 2018 and some of 2019. So, a very good overall announcement, and it puts IBM back into storage. >> Great, so a very aggressive announcement from IBM. Good to see them back in the storage world. This has been Peter Burris talking with David Floyer, and a Wikibon Action Item Quick Take. (groovy music)

Published Date : Feb 23 2018


David Floyer, Wikibon | Action Item Quick Take: Storage Networks, Feb 2018


 

>> Hi, I'm Peter Burris, and this is a Wikibon Action Item Quick Take. (techno music) David Floyer, lot of new opportunities for thinking about how we can spread data. That puts new types of pressure on networks. What's going on? >> So, what's interesting is the future of networks and in particular one type of network. So, if we generalize about networks you can have simplicity, which is N-F-V, for example, Network Function Virtualization is incredibly important for. You can have scale, reach, the number of different places that you place data and how you can have the same admin for that. And you can have performance. Those are three things and there's usually a trade-off between those. You can't ... very, very difficult to have all three. What's interesting is that Mellanox have defined one piece of that network, the storage network, as a place where performance is absolutely critical. And they've defined the storage network with an emphasis on this performance using ethernet. Why? Because now ethernet can offer the same point-to-point capabilities, no lost capabilities. The fastest switches are in ethernet now. They go up to 400 has been announced, which is much ... >> David: 400 ... >> Gigabits per second, which is much faster than anybody else for any other protocol. So, and the reason for, one of the major reasons for this is that volume is coming from the Cloud providers. So they are providing a statement that storage networks are different from other networks. They need to have very low latency, they need to have high bandwidth, they need to have no loss, they need this point-to-point capability so that things can be done very, very fast indeed. I think their vision of where storage networks go is very sound and that is what all storage vendors need to take heed of, and C-I-Os, C-T-Os need to take heed of, is that type of network is going to be what is in the Cloud and is going to come to the Enterprise Data Center very quickly. >> David Floyer, thank you very much. Bottom line, ethernet, storage area networks, segmentation, still going to happen. >> Yup. >> I'm Peter Burris, this has been a Wikibon Action Item Quick Take. (techno music)

Published Date : Feb 16 2018


Breaking Analysis: Databricks faces critical strategic decisions…here’s why


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Spark became a top level Apache project in 2014, and then shortly thereafter, burst onto the big data scene. Spark, along with the cloud, transformed and in many ways, disrupted the big data market. Databricks optimized its tech stack for Spark and took advantage of the cloud to really cleverly deliver a managed service that has become a leading AI and data platform among data scientists and data engineers. However, emerging customer data requirements are shifting into a direction that will cause modern data platform players generally and Databricks, specifically, we think, to make some key directional decisions and perhaps even reinvent themselves. Hello and welcome to this week's wikibon theCUBE Insights, powered by ETR. In this Breaking Analysis, we're going to do a deep dive into Databricks. We'll explore its current impressive market momentum. We're going to use some ETR survey data to show that, and then we'll lay out how customer data requirements are changing and what the ideal data platform will look like in the midterm future. We'll then evaluate core elements of the Databricks portfolio against that vision, and then we'll close with some strategic decisions that we think the company faces. And to do so, we welcome in our good friend, George Gilbert, former equities analyst, market analyst, and current Principal at TechAlpha Partners. George, good to see you. Thanks for coming on. >> Good to see you, Dave. >> All right, let me set this up. We're going to start by taking a look at where Databricks sits in the market in terms of how customers perceive the company and what it's momentum looks like. And this chart that we're showing here is data from ETS, the emerging technology survey of private companies. The N is 1,421. What we did is we cut the data on three sectors, analytics, database-data warehouse, and AI/ML. The vertical axis is a measure of customer sentiment, which evaluates an IT decision maker's awareness of the firm and the likelihood of engaging and/or purchase intent. The horizontal axis shows mindshare in the dataset, and we've highlighted Databricks, which has been a consistent high performer in this survey over the last several quarters. And as we, by the way, just as aside as we previously reported, OpenAI, which burst onto the scene this past quarter, leads all names, but Databricks is still prominent. You can see that the ETR shows some open source tools for reference, but as far as firms go, Databricks is very impressively positioned. Now, let's see how they stack up to some mainstream cohorts in the data space, against some bigger companies and sometimes public companies. This chart shows net score on the vertical axis, which is a measure of spending momentum and pervasiveness in the data set is on the horizontal axis. You can see that chart insert in the upper right, that informs how the dots are plotted, and net score against shared N. And that red dotted line at 40% indicates a highly elevated net score, anything above that we think is really, really impressive. And here we're just comparing Databricks with Snowflake, Cloudera, and Oracle. And that squiggly line leading to Databricks shows their path since 2021 by quarter. And you can see it's performing extremely well, maintaining an elevated net score and net range. 
Now it's comparable in the vertical axis to Snowflake, and it consistently is moving to the right and gaining share. Now, why did we choose to show Cloudera and Oracle? The reason is that Cloudera got the whole big data era started and was disrupted by Spark. And of course the cloud, Spark and Databricks and Oracle in many ways, was the target of early big data players like Cloudera. Take a listen to Cloudera CEO at the time, Mike Olson. This is back in 2010, first year of theCUBE, play the clip. >> Look, back in the day, if you had a data problem, if you needed to run business analytics, you wrote the biggest check you could to Sun Microsystems, and you bought a great big, single box, central server, and any money that was left over, you handed to Oracle for a database licenses and you installed that database on that box, and that was where you went for data. That was your temple of information. >> Okay? So Mike Olson implied that monolithic model was too expensive and inflexible, and Cloudera set out to fix that. But the best laid plans, as they say, George, what do you make of the data that we just shared? >> So where Databricks has really come up out of sort of Cloudera's tailpipe was they took big data processing, made it coherent, made it a managed service so it could run in the cloud. So it relieved customers of the operational burden. Where they're really strong and where their traditional meat and potatoes or bread and butter is the predictive and prescriptive analytics that building and training and serving machine learning models. They've tried to move into traditional business intelligence, the more traditional descriptive and diagnostic analytics, but they're less mature there. So what that means is, the reason you see Databricks and Snowflake kind of side by side is there are many, many accounts that have both Snowflake for business intelligence, Databricks for AI machine learning, where Snowflake, I'm sorry, where Databricks also did really well was in core data engineering, refining the data, the old ETL process, which kind of turned into ELT, where you loaded into the analytic repository in raw form and refine it. And so people have really used both, and each is trying to get into the other. >> Yeah, absolutely. We've reported on this quite a bit. Snowflake, kind of moving into the domain of Databricks and vice versa. And the last bit of ETR evidence that we want to share in terms of the company's momentum comes from ETR's Round Tables. They're run by Erik Bradley, and now former Gartner analyst and George, your colleague back at Gartner, Daren Brabham. And what we're going to show here is some direct quotes of IT pros in those Round Tables. There's a data science head and a CIO as well. Just make a few call outs here, we won't spend too much time on it, but starting at the top, like all of us, we can't talk about Databricks without mentioning Snowflake. Those two get us excited. Second comment zeros in on the flexibility and the robustness of Databricks from a data warehouse perspective. And then the last point is, despite competition from cloud players, Databricks has reinvented itself a couple of times over the year. And George, we're going to lay out today a scenario that perhaps calls for Databricks to do that once again. >> Their big opportunity and their big challenge for every tech company, it's managing a technology transition. The transition that we're talking about is something that's been bubbling up, but it's really epical. 
First time in 60 years, we're moving from an application-centric view of the world to a data-centric view, because decisions are becoming more important than automating processes. So let me let you sort of develop. >> Yeah, so let's talk about that here. We're going to put up some bullets on precisely that point and the changing sort of customer environment. So you've got IT stacks shifting, as George just said, from application-centric silos to data-centric stacks, where the priority is shifting from automating processes to automating decisions. You know, look at RPA, there's still a lot of automation going on, but that focus on application centricity, with the data locked into those apps, that's changing. Data has historically been on the outskirts in silos, but organizations, you think of Amazon, think Uber, Airbnb, they're putting data at the core, and logic is increasingly being embedded in the data instead of the reverse. In other words, today, the data's locked inside the app, which is why you need to extract that data and stick it into a data warehouse. The point, George, is we're putting forth this new vision for how data is going to be used. And you've used this Uber example to underscore the future state. Please explain? >> Okay, so this is hopefully an example everyone can relate to. The idea is first, you're automating things that are happening in the real world and decisions that make those things happen autonomously without humans in the loop all the time. So to use the Uber example, on your phone, you call a car, you call a driver. Automatically, the Uber app then looks at what drivers are in the vicinity, what drivers are free, matches one, calculates an ETA to you, calculates a price, calculates an ETA to your destination, and then directs the driver once they're there. The point of this is that that cannot happen in an application-centric world very easily because all these little apps, the drivers, the riders, the routes, the fares, those call on data locked up in many different apps, but they have to sit on a layer that makes it all coherent. >> But George, so if Uber's doing this, doesn't this tech already exist? Isn't there a tech platform that does this already? >> Yes, and the mission of the entire tech industry is to build services that make it possible to compose and operate similar platforms and tools, but with the skills of mainstream developers in mainstream corporations, not the rocket scientists at Uber and Amazon. >> Okay, so we're talking about horizontally scaling across the industry, and actually giving a lot more organizations access to this technology. So by way of review, let's summarize the trend that's going on today in terms of the modern data stack that is propelling the likes of Databricks and Snowflake, which we just showed you in the ETR data, and really is a tailwind for them. So the trend is toward this common repository for analytic data, that could be multiple virtual data warehouses inside of Snowflake, but you're in that Snowflake environment, or Lakehouses from Databricks, or multiple data lakes. And we've talked about what JP Morgan Chase is doing with the data mesh and gluing data lakes together, you've got various public clouds playing in this game, and then the data is annotated to have a common meaning. In other words, there's a semantic layer that enables applications to talk to the data elements and know that they have common and coherent meaning. 
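To make the decision-automation idea concrete, here is a toy sketch of the kind of loop George describes: match a rider to the nearest free driver, then derive an ETA and a fare from the same shared data. It is purely illustrative, with hypothetical names, positions, and pricing, and is not Uber's actual system.

```python
# Toy sketch of the decision loop described above (not Uber's actual system):
# match a rider to the nearest free driver, then derive ETA and fare from shared data.
import math

drivers = [  # hypothetical operational data that would normally live across many apps
    {"id": "d1", "pos": (0.0, 0.0), "free": True},
    {"id": "d2", "pos": (2.0, 1.0), "free": True},
    {"id": "d3", "pos": (0.5, 0.2), "free": False},
]

def dispatch(rider_pos, dest_pos, speed_km_min=0.6, base_fare=2.5, per_km=1.2):
    free = [d for d in drivers if d["free"]]
    driver = min(free, key=lambda d: math.dist(d["pos"], rider_pos))  # nearest free driver
    pickup_eta = math.dist(driver["pos"], rider_pos) / speed_km_min
    trip_km = math.dist(rider_pos, dest_pos)
    return {"driver": driver["id"],
            "pickup_eta_min": round(pickup_eta, 1),
            "fare": round(base_fare + per_km * trip_km, 2)}

print(dispatch(rider_pos=(0.3, 0.1), dest_pos=(5.0, 4.0)))
# {'driver': 'd1', 'pickup_eta_min': 0.5, 'fare': 9.83}
```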
So George, the good news is this approach is more effective than the legacy monolithic models that Mike Olson was talking about, so what's the problem with this in your view? >> So today's data platforms added immense value 'cause they connected the data that was previously locked up in these monolithic apps or on all these different microservices, and that supported traditional BI and AI/ML use cases. But now if we want to build apps like Uber or Amazon.com, where they've got essentially an autonomously running supply chain and e-commerce app where humans only care and feed it. But the thing is figuring out what to buy, when to buy, where to deploy it, when to ship it. We needed a semantic layer on top of the data. So that, as you were saying, the data that's coming from all those apps, the different apps that's integrated, not just connected, but it means the same. And the issue is whenever you add a new layer to a stack to support new applications, there are implications for the already existing layers, like can they support the new layer and its use cases? So for instance, if you add a semantic layer that embeds app logic with the data rather than vice versa, which we been talking about and that's been the case for 60 years, then the new data layer faces challenges that the way you manage that data, the way you analyze that data, is not supported by today's tools. >> Okay, so actually Alex, bring me up that last slide if you would, I mean, you're basically saying at the bottom here, today's repositories don't really do joins at scale. The future is you're talking about hundreds or thousands or millions of data connections, and today's systems, we're talking about, I don't know, 6, 8, 10 joins and that is the fundamental problem you're saying, is a new data error coming and existing systems won't be able to handle it? >> Yeah, one way of thinking about it is that even though we call them relational databases, when we actually want to do lots of joins or when we want to analyze data from lots of different tables, we created a whole new industry for analytic databases where you sort of mung the data together into fewer tables. So you didn't have to do as many joins because the joins are difficult and slow. And when you're going to arbitrarily join thousands, hundreds of thousands or across millions of elements, you need a new type of database. We have them, they're called graph databases, but to query them, you go back to the prerelational era in terms of their usability. >> Okay, so we're going to come back to that and talk about how you get around that problem. But let's first lay out what the ideal data platform of the future we think looks like. And again, we're going to come back to use this Uber example. In this graphic that George put together, awesome. We got three layers. The application layer is where the data products reside. The example here is drivers, rides, maps, routes, ETA, et cetera. The digital version of what we were talking about in the previous slide, people, places and things. The next layer is the data layer, that breaks down the silos and connects the data elements through semantics and everything is coherent. And then the bottom layers, the legacy operational systems feed that data layer. George, explain what's different here, the graph database element, you talk about the relational query capabilities, and why can't I just throw memory at solving this problem? 
>> Some of the graph databases do throw memory at the problem and maybe without naming names, some of them live entirely in memory. And what you're dealing with is a prerelational in-memory database system where you navigate between elements, and the issue with that is we've had SQL for 50 years, so we don't have to navigate, we can say what we want without how to get it. That's the core of the problem. >> Okay. So if I may, I just want to drill into this a little bit. So you're talking about the expressiveness of a graph. Alex, if you'd bring that back out, the fourth bullet, expressiveness of a graph database with the relational ease of query. Can you explain what you mean by that? >> Yeah, so graphs are great because when you can describe anything with a graph, that's why they're becoming so popular. Expressive means you can represent anything easily. They're conducive to, you might say, in a world where we now want like the metaverse, like with a 3D world, and I don't mean the Facebook metaverse, I mean like the business metaverse when we want to capture data about everything, but we want it in context, we want to build a set of digital twins that represent everything going on in the world. And Uber is a tiny example of that. Uber built a graph to represent all the drivers and riders and maps and routes. But what you need out of a database isn't just a way to store stuff and update stuff. You need to be able to ask questions of it, you need to be able to query it. And if you go back to prerelational days, you had to know how to find your way to the data. It's sort of like when you give directions to someone and they didn't have a GPS system and a mapping system, you had to give them turn by turn directions. Whereas when you have a GPS and a mapping system, which is like the relational thing, you just say where you want to go, and it spits out the turn by turn directions, which let's say, the car might follow or whoever you're directing would follow. But the point is, it's much easier in a relational database to say, "I just want to get these results. You figure out how to get it." The graph database, they have not taken over the world because in some ways, it's taking a 50 year leap backwards. >> Alright, got it. Okay. Let's take a look at how the current Databricks offerings map to that ideal state that we just laid out. So to do that, we put together this chart that looks at the key elements of the Databricks portfolio, the core capability, the weakness, and the threat that may loom. Start with the Delta Lake, that's the storage layer, which is great for files and tables. It's got true separation of compute and storage, I want you to double click on that George, as independent elements, but it's weaker for the type of low latency ingest that we see coming in the future. And some of the threats highlighted here. AWS could add transactional tables to S3, Iceberg adoption is picking up and could accelerate, that could disrupt Databricks. George, add some color here please? >> Okay, so this is the sort of a classic competitive forces where you want to look at, so what are customers demanding? What's competitive pressure? What are substitutes? Even what your suppliers might be pushing. Here, Delta Lake is at its core, a set of transactional tables that sit on an object store. So think of it in a database system, this is the storage engine. So since S3 has been getting stronger for 15 years, you could see a scenario where they add transactional tables. 
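A small way to see the declarative-versus-navigational trade-off George is describing: with a relational engine you state the result you want and the engine plans the joins; with a navigational, graph-style structure you hand-code the traversal. The sketch below uses a toy SQLite schema, with all tables and values hypothetical; with two hops it is trivial either way, but that gap is exactly what bites when queries span hundreds or thousands of connections.

```python
# Toy sketch of "declarative vs. navigational" query, using a hypothetical two-hop
# question: which drivers served riders who live in a given city?
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE riders  (rider_id INTEGER PRIMARY KEY, city TEXT);
CREATE TABLE rides   (ride_id INTEGER PRIMARY KEY, rider_id INTEGER, driver_id INTEGER);
CREATE TABLE drivers (driver_id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO riders  VALUES (1,'Oakland'), (2,'San Jose');
INSERT INTO drivers VALUES (10,'Ana'), (11,'Raj');
INSERT INTO rides   VALUES (100,1,10), (101,2,11), (102,1,11);
""")

# Relational: declare the result you want; the engine plans the joins.
sql = """SELECT DISTINCT d.name
         FROM riders r JOIN rides x ON x.rider_id = r.rider_id
                       JOIN drivers d ON d.driver_id = x.driver_id
         WHERE r.city = 'Oakland'"""
print([row[0] for row in con.execute(sql)])          # ['Ana', 'Raj'] (order may vary)

# Navigational (graph-style, pre-relational): you spell out how to walk the links.
rides_by_rider = {1: [10, 11], 2: [11]}              # adjacency: rider -> drivers
driver_names = {10: "Ana", 11: "Raj"}
oakland_riders = [1]
print(sorted({driver_names[d] for r in oakland_riders for d in rides_by_rider[r]}))
```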
We have an open source alternative in Iceberg, which Snowflake and others support. But at the same time, Databricks has built an ecosystem out of tools, their own and others, that read and write to Delta tables, that's what makes the Delta Lake and ecosystem. So they have a catalog, the whole machine learning tool chain talks directly to the data here. That was their great advantage because in the past with Snowflake, you had to pull all the data out of the database before the machine learning tools could work with it, that was a major shortcoming. They fixed that. But the point here is that even before we get to the semantic layer, the core foundation is under threat. >> Yep. Got it. Okay. We got a lot of ground to cover. So we're going to take a look at the Spark Execution Engine next. Think of that as the refinery that runs really efficient batch processing. That's kind of what disrupted the DOOp in a large way, but it's not Python friendly and that's an issue because the data science and the data engineering crowd are moving in that direction, and/or they're using DBT. George, we had Tristan Handy on at Supercloud, really interesting discussion that you and I did. Explain why this is an issue for Databricks? >> So once the data lake was in place, what people did was they refined their data batch, and Spark has always had streaming support and it's gotten better. The underlying storage as we've talked about is an issue. But basically they took raw data, then they refined it into tables that were like customers and products and partners. And then they refined that again into what was like gold artifacts, which might be business intelligence metrics or dashboards, which were collections of metrics. But they were running it on the Spark Execution Engine, which it's a Java-based engine or it's running on a Java-based virtual machine, which means all the data scientists and the data engineers who want to work with Python are really working in sort of oil and water. Like if you get an error in Python, you can't tell whether the problems in Python or where it's in Spark. There's just an impedance mismatch between the two. And then at the same time, the whole world is now gravitating towards DBT because it's a very nice and simple way to compose these data processing pipelines, and people are using either SQL in DBT or Python in DBT, and that kind of is a substitute for doing it all in Spark. So it's under threat even before we get to that semantic layer, it so happens that DBT itself is becoming the authoring environment for the semantic layer with business intelligent metrics. But that's again, this is the second element that's under direct substitution and competitive threat. >> Okay, let's now move down to the third element, which is the Photon. Photon is Databricks' BI Lakehouse, which has integration with the Databricks tooling, which is very rich, it's newer. And it's also not well suited for high concurrency and low latency use cases, which we think are going to increasingly become the norm over time. George, the call out threat here is customers want to connect everything to a semantic layer. Explain your thinking here and why this is a potential threat to Databricks? >> Okay, so two issues here. What you were touching on, which is the high concurrency, low latency, when people are running like thousands of dashboards and data is streaming in, that's a problem because SQL data warehouse, the query engine, something like that matures over five to 10 years. 
It's one of these things, the joke that Andy Jassy makes just in general, he's really talking about Azure, but there's no compression algorithm for experience. The Snowflake guys started more than five years earlier, and for a bunch of reasons, that lead is not something that Databricks can shrink. They'll always be behind. So that's why Snowflake has transactional tables now, and we can get into that in another show. But the key point is, near term, it's struggling to keep up with the use cases that are core to business intelligence, which is highly concurrent, lots of users doing interactive query. But then when you get to a semantic layer, that's when you need to be able to query data that might have thousands or tens of thousands or hundreds of thousands of joins. And a SQL query engine, a traditional SQL query engine, is just not built for that. That's the core problem of traditional relational databases. >> Now this is a quick aside. We always talk about Snowflake and Databricks in sort of the same context. We're not necessarily saying that Snowflake is in a position to tackle all these problems. We'll deal with that separately. So we don't mean to imply that, but we're just sort of laying out some of the things that Snowflake, or rather Databricks customers we think, need to be thinking about and having conversations with Databricks about, and we hope to have them as well. We'll come back to that in terms of sort of strategic options. But finally, when we come back to the table, we have Databricks' AI/ML Tool Chain, which has been an awesome capability for the data science crowd. It's comprehensive, it's a one-stop shop solution, but the kicker here is that it's optimized for supervised model building. And the concern is that foundational models like GPT could cannibalize the current Databricks tooling, but George, can't Databricks, like other software companies, integrate foundation model capabilities into its platform? >> Okay, so the sound bite answer to that is sure, IBM 3270 terminals could call out to a graphical user interface when they're running on the XT terminal, but they're not exactly good citizens in that world. The core issue is Databricks has this wonderful end-to-end tool chain for training, deploying, monitoring, running inference on supervised models. But the paradigm there is the customer builds and trains and deploys each model for each feature or application. In a world of foundation models, which are pre-trained and unsupervised, the entire tool chain is different. So it's not like Databricks can junk everything they've done and start over with all their engineers. They have to keep maintaining what they've done in the old world, but they have to build something new that's optimized for the new world. It's a classic technology transition and their mentality appears to be, "Oh, we'll support the new stuff from our old stuff." Which is suboptimal, and as we'll talk about, their biggest patron and the company that put them on the map, Microsoft, really stopped working on their old stuff three years ago so that they could build a new tool chain optimized for this new world.
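To illustrate the paradigm gap George is describing, here is a deliberately small sketch of our own: the supervised pattern trains and deploys one model per task, while the foundation-model pattern reuses a single pre-trained model and adapts it with prompts or fine-tuning. The library choices (scikit-learn, Hugging Face transformers) are just common examples, not Databricks' actual tool chain:

```python
# Supervised paradigm: collect labels, train a model per feature/application, deploy, monitor.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts, labels = ["great product", "terrible support"], [1, 0]   # per-task labeled data
per_task_model = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)
print(per_task_model.predict(["really great"]))

# Foundation-model paradigm: no per-task training loop; one large pre-trained model is reused.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")   # pulls a pre-trained model
print(sentiment("really great"))
```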
>> Yeah, and so let's sort of close with what we think the options are and decisions that Databricks has for its future architecture. They're smart people. I mean we've had Ali Ghodsi on many times, super impressive. I think they've got to be keenly aware of the limitations, what's going on with foundation models. But at any rate, here in this chart, we lay out sort of three scenarios. One is re-architect the platform by incrementally adopting new technologies. An example might be to layer a graph query engine on top of its stack. They could license key technologies like graph database, they could get aggressive on M&A and buy in relational knowledge graphs, semantic technologies, vector database technologies. George, as David Floyer always says, "A lot of ways to skin a cat." We've seen companies, even think about EMC, maintain their relevance through M&A for many, many years. George, give us your thought on each of these strategic options? >> Okay, I find this question the most challenging 'cause remember, I used to be an equity research analyst. I worked for Frank Quattrone, we were one of the top tech shops in the banking industry, although this is 20 years ago. But the M&A team was the top team in the industry and everyone wanted them on their side. And I remember going to meetings with these CEOs, where Frank and the bankers would say, "You want us for your M&A work because we can do better." And they really could do better. But in software, it's not like with EMC in hardware because with hardware, it's easier to connect different boxes. With software, the whole point of a software company is to integrate and architect the components so they fit together and reinforce each other, and that makes M&A harder. You can do it, but it takes a long time to fit the pieces together. Let me give you examples. If they put a graph query engine, let's say something like TinkerPop, on top of, I don't even know if it's possible, but let's say they put it on top of Delta Lake, then you have this graph query engine talking to their storage layer, Delta Lake. But if you want to do analysis, you got to put the data in Photon, which is not really ideal for highly connected data. If you license a graph database, then most of your data is in the Delta Lake and how do you sync it with the graph database? If you do sync it, you've got data in two places, which kind of defeats the purpose of having a unified repository. I find this semantic layer option in number three actually more promising, because that's something that you can layer on top of the storage layer that you have already. You just have to figure out then how to have your query engines talk to that. What I'm trying to highlight is, it's easy as an analyst to say, "You can buy this company or license that technology." But the really hard work is making it all work together and that is where the challenge is. >> Yeah, and well look, I thank you for laying that out. We've seen it, certainly Microsoft and Oracle. I guess you might argue that well, Microsoft had a monopoly in its desktop software and was able to throw off cash for a decade plus while its stock was going sideways. Oracle had won the database wars and had amazing margins and cash flow to be able to do that. Databricks hasn't even gone public yet, but I want to close with some of the players to watch. Alex, if you'd bring that back up, number four here. AWS, we talked about some of their options with S3, and it's not just AWS, it's blob storage, object storage. Microsoft, as you sort of alluded to, was an early go-to-market channel for Databricks. We didn't address that really. So maybe in the closing comments we can. Google obviously, Snowflake of course, we're going to dissect their options in future Breaking Analysis. dbt Labs, where do they fit? Bob Muglia's company, Relational.ai, why are these players to watch George, in your opinion?
>> So everyone is trying to assemble and integrate the pieces that would make building data applications, data products easy. And the critical part isn't just assembling a bunch of pieces, which is traditionally what AWS did. It's a Unix ethos, which is we give you the tools, you put 'em together, 'cause you then have the maximum choice and maximum power. So what the hyperscalers are doing is they're taking their key value stores, in the case of AWS it's DynamoDB, in the case of Azure it's Cosmos DB, and each are putting a graph query engine on top of those. So they have a unified storage and graph database engine, like all the data would be collected in the key value store. Then you have a graph database, that's how they're going to be presenting a foundation for building these data apps. dbt Labs is putting a semantic layer on top of data lakes and data warehouses and as we'll talk about, I'm sure in the future, that makes it easier to swap out the underlying data platform or swap in new ones for specialized use cases. Snowflake, what they're doing, they're so strong in data management and with their transactional tables, what they're trying to do is take in the operational data that used to be the province of state stores like MongoDB and say, "If you manage that data with us, it'll be connected to your analytic data without having to send it through a pipeline." And that's hugely valuable. Relational.ai is the wildcard, 'cause what they're trying to do, it's almost like a holy grail where you're trying to take the expressiveness of connecting all your data in a graph but making it as easy to query as you've always had it in a SQL database or I should say, in a relational database. And if they do that, it's sort of like, it'll be as easy to program these data apps as a spreadsheet was compared to procedural languages, like BASIC or Pascal. That's the implications of Relational.ai. >> Yeah, and again, we talked before, why can't you just throw this all in memory? We're talking in that example of really getting down to differences in how you lay the data out on disk in really, new database architecture, correct? >> Yes. And that's why it's not clear that you could take a data lake or even a Snowflake and why you can't put a relational knowledge graph on those. You could potentially put a graph database, but it'll be compromised because to really do what Relational.ai has done, which is the ease of relational on top of the power of graph, you actually need to change how you're storing your data on disk or even in memory. So you can't, in other words, it's not like, oh we can add graph support to Snowflake, 'cause if you did that, you'd have to change, or in your data lake, you'd have to change how the data is physically laid out. And then that would break all the tools that talk to that currently. >> What in your estimation, is the timeframe where this becomes critical for a Databricks and potentially Snowflake and others? I mentioned earlier midterm, are we talking three to five years here? Are we talking end of decade? What's your radar say? >> I think something surprising is going on that's going to sort of come up the tailpipe and take everyone by storm.
All the hype around business intelligence metrics, which is what we used to put in our dashboards where bookings, billings, revenue, customer, those things, those were the key artifacts that used to live in definitions in your BI tools, and DBT has basically created a standard for defining those so they live in your data pipeline or they're defined in their data pipeline and executed in the data warehouse or data lake in a shared way, so that all tools can use them. This sounds like a digression, it's not. All this stuff about data mesh, data fabric, all that's going on is we need a semantic layer and the business intelligence metrics are defining common semantics for your data. And I think we're going to find by the end of this year, that metrics are how we annotate all our analytic data to start adding common semantics to it. And we're going to find this semantic layer, it's not three to five years off, it's going to be staring us in the face by the end of this year. >> Interesting. And of course SVB today was shut down. We're seeing serious tech headwinds, and oftentimes in these sort of downturns or flat turns, which feels like this could be going on for a while, we emerge with a lot of new players and a lot of new technology. George, we got to leave it there. Thank you to George Gilbert for excellent insights and input for today's episode. I want to thank Alex Myerson who's on production and manages the podcast, of course Ken Schiffman as well. Kristin Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our EIC over at Siliconangle.com, he does some great editing. Remember all these episodes, they're available as podcasts. Wherever you listen, all you got to do is search Breaking Analysis Podcast, we publish each week on wikibon.com and siliconangle.com, or you can email me at David.Vellante@siliconangle.com, or DM me @DVellante. Comment on our LinkedIn post, and please do check out ETR.ai, great survey data, enterprise tech focus, phenomenal. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, and we'll see you next time on Breaking Analysis.

Published Date : Mar 10 2023

Breaking Analysis: re:Invent 2022 marks the next chapter in data & cloud


 

From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. The ascendancy of AWS under the leadership of Andy Jassy was marked by a tsunami of data and corresponding cloud services to leverage that data. Now, those services mainly came in the form of primitives, i.e. basic building blocks that were used by developers to create more sophisticated capabilities. AWS in the 2020s, being led by CEO Adam Selipsky, will be marked by four high-level trends in our opinion: one, a rush of data that will dwarf anything we've previously seen; two, a doubling or even tripling down on the basic elements of cloud, compute, storage, database, security, etc.; three, a greater emphasis on end-to-end integration of AWS services to simplify and accelerate customer adoption of cloud; and four, significantly deeper business integration of cloud beyond IT as an underlying element of organizational operations. Hello and welcome to this week's Wikibon Cube Insights powered by ETR. In this Breaking Analysis, we extract and analyze nuggets from John Furrier's annual sit-down with the CEO of AWS. We'll share data from ETR and other sources to set the context for the market and competition in cloud, and we'll give you our glimpse of what to expect at re:Invent in 2022. Now, before we get into the core of our analysis, Alibaba has announced earnings. They always announce after the big three, you know, a month later, and we've updated our Q3 slash November hyperscale computing forecast for the year, as seen here. We're not going to spend a lot of time on this, as most of you have seen the bulk of it already, but suffice to say Alibaba's cloud business is hitting that same macro trend that we're seeing across the board, but a more substantial slowdown than we expected, and more substantial than its peers. They're facing China headwinds, they've been restructuring their cloud business, and it's led to significantly slower growth, in the, you know, low double digits, as opposed to where we had it at 15%. This puts our year-end estimates for 2022 revenue at $161 billion, still a healthy 34% growth, with AWS surpassing $80 billion in 2022 revenue. Now, on a related note, one of the big themes in cloud that we've been reporting on is how customers are optimizing their cloud spend. It's a technique that they use when the economy looks a little shaky, and here's a graphic that we pulled from AWS's website which shows the various pricing plans at a high level. As you know, they're much more granular than that and more sophisticated, but for simplicity we'll just keep it here. Basically there are four levels. The first one here is on-demand, i.e. pay by the drink. Now we're going to jump down to what we've labeled as number two, spot instances; that's like the right place at the right time, I can use that extra capacity in the moment. The third is reserved instances, or RIs, where I pay up front to get a discount. And the fourth is sort of optimized savings plans, where customers commit to a one or three year term for a better price. Now, you'll notice we labeled the choices in a different order than AWS presented them on its website, and that's because we believe that the order that we chose is the natural progression for customers. This started on demand, they maybe experiment with spot instances, they move to reserved instances when the cloud bill becomes too onerous, and if you're large enough, you lock in for one or three years.
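As a quick editorial aside, the first two of those purchase options surface directly in the EC2 API, while reserved instances and savings plans are billing-side commitments purchased separately. A minimal sketch assuming the boto3 SDK, with a placeholder AMI ID, region, and instance type:

```python
# Sketch only: the AMI ID and sizes are placeholders, and this would launch billable instances.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
common = dict(ImageId="ami-0123456789abcdef0", InstanceType="m5.large", MinCount=1, MaxCount=1)

# 1. On-demand: the default when no market options are specified.
ec2.run_instances(**common)

# 2. Spot: the same call, but requesting spare capacity at a discount.
ec2.run_instances(**common, InstanceMarketOptions={"MarketType": "spot"})

# 3 & 4. Reserved instances and savings plans are commitments applied at billing time
# to whatever you run; they are not launch-time flags.
```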
Okay, the interesting thing is the order in which AWS presents them. We believe that on-demand accounts for the majority of AWS customer spending. Now, if you think about it, those on-demand customers, they're also at-risk customers. Yeah, sure, there's some switching costs like egress and learning curve, but many customers, they have multiple clouds and they've got experience, and so they're kind of already up the learning curve, and if you're not married to AWS with a longer term commitment, there's less friction to switch. Now, AWS here presents the most attractive plan from a financial perspective second, after on-demand, and it's also the plan that makes the greatest commitment from a lock-in standpoint. Now, in fairness to AWS, it's also true that there is a trend towards subscription-based pricing, and we have some data on that. This chart is from an ETR drill-down survey, the N is 300. Pay attention to the bars on the right. The left side is sort of busy, but the pink is subscription, and you can see the trend upward; the light blue is consumption-based or on-demand-based pricing, and you can see there's a steady trend toward subscription. Now, we'll dig into this in a later episode of Breaking Analysis, but we'll share with you some tidbits from the data that ETR provides. You can select which segment, IaaS and PaaS, or you can go up the stack, etc. So when you choose IaaS and PaaS, 44% of customers either prefer or are required to use on-demand pricing, whereas around 40 percent of customers say they either prefer or are required to use subscription pricing; again, that's for IaaS. So now, the further you move up the stack, the more prominent subscription pricing becomes, often with sixty percent or more for the software-based offerings that require or prefer subscription, and interestingly, cybersecurity tracks along with software at around 60 percent that prefer subscription. It's likely because, as with software, you're not shutting down your cyber protection on demand. All right, let's get into the expectations for re:Invent, and we're going to start with an observation on data. In his 2018 book "Seeing Digital," author David Moschella made the point that whereas most companies apply data on the periphery of their business, kind of as an add-on function, successful data companies like Google and Amazon and Facebook have placed data at the core of their operations. They've operationalized data and they apply machine intelligence to that foundational element. Why is this? The fact is, it's not easy to do what the internet giants have done: very, very sophisticated engineering and cultural discipline. And this brings us to re:Invent 2022. In the future of cloud, machine learning and AI will increasingly be infused into applications. We believe the data stack and the application stack are coming together as organizations build data apps and data products. Data expertise is moving from the domain of highly specialized individuals to everyday business people, and we are just at the cusp of this trend. This will, in our view, be a massive theme of not only re:Invent 22 but of cloud in the 2020s. The vision of data mesh, we believe Zhamak Dehghani's principles, will be realized in this decade. Now, what we'd like to do is share with you a glimpse of the thinking of Adam Selipsky from his sit-down with John Furrier. Each year, John has a one-on-one conversation with the CEO of AWS. He's been doing this for years, and the outcome is a better understanding of the directional thinking of the leader of the number one cloud platform. So we're now going to share some direct quotes. I'm going to run through them with some
commentary, and then bring in some ETR data to analyze the market implications. Here we go. This is from Selipsky, quote: "IT in general and data are moving from departments into becoming intrinsic parts of how businesses function." Okay, we're talking here about deeper business integration. Let's go on to the next one, quote: "In time we'll stop talking about people who have the word analyst" (we inserted "data," he meant data analyst) "in their title; rather we'll have hundreds of millions of people who analyze data as part of their day-to-day job, most of whom will not have the word analyst anywhere in their title. We're talking about graphic designers and pizza shop owners and product managers and data scientists as well." He threw that in; I'm going to come back to that, very interesting. So he's talking here about democratizing data, operationalizing data. Next quote: "Customers need to be able to take an end-to-end integrated view of their entire data journey, from ingestion to storage to harmonizing the data, to being able to query it, doing business intelligence and human-based analysis, and being able to collaborate and share data. And we've been putting together" (we being Amazon) "a broad suite of tools, from database to analytics to business intelligence, to help customers with that." And this last statement, it's true: Amazon has a lot of tools, and you know, they're beginning to become more and more integrated, but again, under Jassy there was not a lot of emphasis on that end-to-end integrated view. We believe it's clear from these statements that Selipsky's customer interactions are leading him to underscore that the time has come for this capability. Okay, continuing, quote: "If you have data in one place, you shouldn't have to move it every time you want to analyze that data." Couldn't agree more. "It would be much better if you could leave that data in place, avoid all the ETL, which has become a nasty three-letter word; more and more we're building capabilities where you can query that data in place," end quote. Okay, this we see a lot in the marketplace: Oracle with MySQL HeatWave, the entire trend toward converged database, Snowflake and Databricks extending their platforms into transaction and analytics respectively, and so forth. A lot of the partners are doing things as well in that vein. Let's go into the next quote: "The other phenomenon is infusing machine learning into all those capabilities." Yes, the comments from the Moschella graphic come into play here: infusing AI and machine intelligence everywhere. Next one, quote: "It's not a data cloud, it's not a separate cloud, it's a series of broad but integrated capabilities to help you manage the end-to-end life cycle of your data." There you go; we, AWS, are the cloud. We're going to come back to that in a moment as well. Next set of comments around data, very interesting here, quote: "Data governance is a huge issue. Really what customers need is to find the right balance for their organization between access to data and control. If you provide too much access, then you're nervous that your data is going to end up in places that it shouldn't, shouldn't be viewed by people who shouldn't be viewing it, and you feel like you lack security around that data, and by the way, what happens then is people overreact and they lock it down so that almost nobody can see it." It's those handcuffs; there's data as an asset or a liability, we've talked about that for years. Okay, very well put by Selipsky, but this is a gap, in our view, within AWS today, and we're hoping that they close it at re:Invent. It's
not easy to share data in a safe way within AWS today outside of your organization, so we're going to look for that at re:Invent 2022. Now, all this leads to the following statement by Selipsky, quote: "Data clean room is a really interesting area, and I think there's a lot of different industries in which clean rooms are applicable. I think that clean rooms are an interesting way of enabling multiple parties to share and collaborate on the data while completely respecting each party's rights and their privacy mandate." Okay, again, this is a gap currently within AWS today, in our view, and we know Snowflake is well down this path, and Databricks with Delta Sharing is also on this curve, so AWS has to address this and demonstrate this end-to-end data integration and the ability to safely share data, in our view. Now, let's bring in some ETR spending data to put some context around these comments, with reference points in the form of AWS itself and its competitors and partners. Here's a chart from ETR that shows Net Score, or spending momentum, on the x-axis and overlap, or pervasiveness in the survey... sorry, let me go back up: the Net Score is on the y-axis, and overlap, or pervasiveness in the survey, is on the x-axis. So spending momentum by pervasiveness, or, I should say, share within the data set. The table that's inserted there, with the reds and the greens, informs us as to how the dots are positioned; so it's Net Score, and then the shared Ns are how the plots are determined. Now, we've filtered the data on the three big data segments, analytics, database, and machine learning slash AI, and we've only selected one company with fewer than 100 Ns in the survey, and that's Databricks; you'll see why in a moment. The red dotted line indicates highly elevated customer spend at 40 percent. Now, as usual, Snowflake outperforms all players on the y-axis with a Net Score of 63 percent, off the charts. All three big U.S. cloud players are above that line, with Microsoft and AWS dominating the x-axis, so very impressive that they have such spending momentum and they're so large. And you see a number of other emerging data players like Grafana and Datadog; MongoDB is there in the mix; and then more established data players like Splunk and Tableau. Now, you've got Cisco, which is, you know, adjacent to their core networking business, but they're definitely into the analytics business. Then the really established players in data like Informatica, IBM, and Oracle, all with strong presence, but you'll notice in the red from the momentum standpoint. Now, what you're going to see in a moment is we put red highlights around Databricks, Snowflake, and AWS. Why? Let's bring that back up, Alex, if you would, and we'll explain. There's no way AWS is going to hit the brakes on innovating at the base service level, what we called primitives earlier. Selipsky told Furrier as much in their sit-down: AWS will serve the technical user and data science community, the traditional domain of Databricks, and at the same time address the end-to-end integration, data sharing, and business line requirements that Snowflake is positioned to serve. Now, people often ask Snowflake and Databricks, how will you compete with the likes of AWS? And we know the answer: focus on data exclusively; they have their multi-cloud plays. Perhaps the more interesting question is, how will AWS compete with the likes of specialists like Snowflake and Databricks? And the answer is depicted here in this chart. AWS is going to serve both the
technical and developer communities and the data science audience, and through end-to-end integrations and future services that simplify the data journey, they're going to serve the business lines as well. But the nuance is in all the other dots, in the hundreds or hundreds of thousands, that are not shown here, and that's the AWS ecosystem. You can see AWS has earned the status of the number one cloud platform that everyone wants to partner with; as they say, it has over a hundred thousand partners, and that ecosystem, combined with these capabilities that we're discussing, while perhaps behind in areas like data sharing and integrated governance, can wildly succeed by offering the capabilities and leveraging its ecosystem. Now, for their part, the Snowflakes of the world have to stay focused on the mission, build the best products possible, and develop their own ecosystems to compete and attract the mindshare of both developers and business users, and that's why it's so interesting to hear Selipsky basically say it's not a separate cloud, it's a set of integrated services. Well, Snowflake is, in our view, building a supercloud on top of AWS, Azure, and Google. When great products meet great sales and marketing, good things can happen, so this will be really fun to watch what AWS announces in this area at re:Invent. All right, one other topic that Selipsky talked about was the correlation between serverless and container adoption, and you know, I don't know if this gets into, there's certainly their hybrid play, maybe it starts to get into their multi-cloud, we'll see, but we have some data on this. So again, we're talking about the correlation between serverless and container adoption, but before we get into that, let's go back to 2017 and listen to what Andy Jassy said on theCUBE about serverless. Play the clip. "Very, very earliest days of AWS, Jeff used to say a lot, if I were starting Amazon today, I'd have built it on top of AWS. We didn't have all the capability and all the functionality at that very moment, but he knew what was coming and he saw what people were still able to accomplish even with where the services were at that point. I think the same thing is true here with Lambda, which is, I think if Amazon were starting today, it's a given they would build it on the cloud, and I think with a lot of the applications that comprise Amazon's consumer business, we would build those on our serverless capabilities. Now, we still have plenty of capabilities and features and functionality we need to add to Lambda and our various serverless services, so that may not be true from the get-go right now, but I think if you look at the hundreds of thousands of customers who are building on top of Lambda, and lots of real applications, you know, FINRA has built a good chunk of their market watch application on top of Lambda, and Thomson Reuters has built, you know, one of their key analytics apps; like, people are building real serious things on top of Lambda, and the pace of iteration you'll see there will increase as well, and I really believe that to be true over the next year or two." So years ago, Jassy gave a road map that serverless was going to be a key developer platform going forward, and Selipsky referenced the correlation between serverless and containers in the Furrier sit-down, so we wanted to test that within the ETR data set. Now, here's a screen grab of the view across 1,300 respondents from the October ETR survey, and what we've done here is we've isolated on the cloud computing
segment. Now, we've taken Google Cloud Functions, AWS Lambda, and Microsoft Azure Functions, all the serverless offerings, and we've got Net Score on the vertical axis, we've got presence in the data set; oh, by the way, 40 is highly elevated, remember that; and then on the horizontal axis we have the presence in the data set, the overlap, okay, that's relative to each other. So remember 40: all these guys are above that 40 mark, okay, so you see that. Now, what we're going to do, this is just for serverless, and what we're going to do is turn on containers to see the correlation and see what happens. So watch what happens when we click on containers: boom, everything moves to the right. You can see all three move to the right; Google drops a little bit, but all the others... now the filtered N drops as well, so you don't have as many people that are aggressively leaning into both, but all three move to the right. So watch again: containers off and then containers on, containers off, containers on. So you can see a really major correlation between containers and serverless. Okay, so to get a better understanding of what that means, I called my friend and former Cube co-host Stu Miniman. What he said was, people generally used to think of VMs, containers, and serverless as distinctly different architectures, but the lines are beginning to blur. Serverless makes things simpler for developers who don't want to worry about underlying infrastructure. As Selipsky and the data from ETR indicate, serverless and containers are coming together. But as Stu and I discussed, there's a spectrum, where on the left you have kind of native cloud VMs, in the middle you've got AWS Fargate, and the rightmost anchor is Lambda, AWS Lambda. Now, traditionally in the cloud, if you wanted to use containers, developers would have to build a container image, they have to select and deploy the EC2 images, or instances, that they wanted to use, they have to allocate a certain amount of memory and then fence off the apps in a virtual machine, and then run the EC2 instances against the apps, and then pay for all those EC2 resources. Now, with AWS Fargate, you can run containerized apps with less infrastructure management, but you still have some things that you can do with the infrastructure. So with Fargate, what you do is you'd build the container images, then you'd allocate your memory and compute resources, then run the app and pay for the resources only when they're used. So Fargate lets you control the runtime environment while at the same time simplifying the infrastructure management; you don't have to worry about isolating the app and other stuff like choosing server types and patching, AWS does all that for you. Then there's Lambda. With Lambda, you don't have to worry about any of the underlying server infrastructure, you're just running code as functions, so the developer spends their time worrying about the applications and the functions that they're calling.
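For reference, here is roughly what the far-right end of that spectrum looks like in practice: a minimal sketch of a Python Lambda handler, where the event shape and field names are hypothetical.

```python
# handler.py -- the entire deployable artifact for a simple Lambda function.
# AWS owns provisioning, scaling, and patching; the developer owns only this code path.
import json

def lambda_handler(event, context):
    # 'event' carries the invocation payload (for example, an API Gateway request).
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```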
The point is, there's a movement, and we saw it in the data, towards simplifying the development environment and allowing the cloud vendor, AWS in this case, to do more of the underlying management. Now, some folks will still want to turn knobs and dials, but increasingly we're going to see more higher-level service adoption. Now, re:Invent is always a fire hose of content, so let's do a rapid rundown of what to expect. We talked about optimizing data and the organization, we talked about cloud optimization; there'll be a lot of talk on the show floor about best practices and customer sharing data. Selipsky is leading AWS into the next phase of growth, and that means moving beyond IT transformation into deeper business integration and organizational transformation, not just digital transformation, organizational transformation. So he's leading a multi-vector strategy: serving the traditional peeps who want fine-grained access to core services, so we'll see continued innovation in compute, storage, AI, etc., and simplification through integration and horizontal apps further up the stack; Amazon Connect is an example that's often cited. Now, as we've reported many times, Databricks is moving from its stronghold realm of data science into business intelligence and analytics, where Snowflake is coming from its data analytics stronghold and moving into the world of data science. AWS is going down a path of Snowflake meets Databricks, with an underlying cloud IaaS and PaaS layer. That puts these three companies on a very interesting trajectory, and you can expect AWS to go right after the data sharing opportunity, and in doing so it will have to address data governance; they go hand in hand. Okay, price performance, that is a topic that will never go away, and it's something that we haven't mentioned today: silicon. It's an area we've covered extensively on Breaking Analysis, from Nitro to Graviton to the AWS acquisition of Annapurna, its secret weapon, and new specialized capabilities like Inferentia and Trainium. We'd expect something more at re:Invent, maybe new Graviton instances. David Floyer, our colleague, said he's expecting at some point a complete system on a chip, an SoC, from AWS, and maybe an Arm-based server, to eventually include high-speed CXL connections to devices and memories, all to address next-gen applications, data-intensive applications with low power requirements and lower cost overall. Now, of course, every year Swami gives his usual update on machine learning and AI, building on Amazon's years of SageMaker innovation, perhaps a focus on conversational AI or better support for vision, and maybe better integration across Amazon's portfolio of, you know, large language models, neural networks, generative AI, really infusing AI everywhere. Of course security, always high on the list at re:Invent, and Amazon even has re:Inforce, a conference dedicated to security. Now, here we'd like to see more on supply chain security and perhaps how AWS can help there, as well as tooling to make the CIO's life easier. But the key so far is AWS is much more partner-friendly in the security space than, say for instance, Microsoft traditionally, so firms like Okta and CrowdStrike and Palo Alto have plenty of room to play in the AWS ecosystem. We'd expect, of course, to hear something about ESG; it's an important topic, and hopefully not only how AWS is helping the environment, that's important, but also how they help customers save money and drive inclusion and diversity, again, very important topics. And finally, coming back to it, re:Invent is an ecosystem event. It's the Super Bowl of tech events, and the ecosystem will be out in full force. Every tech company on the planet will have a presence, and theCUBE will be featuring many of the partners from the show floor as well as AWS execs, and of course our own independent analysis. So you'll definitely want to tune into thecube.net and check out our re:Invent coverage. We start Monday evening, and then we go wall-to-wall through Thursday; hopefully my voice will come back. We have three sets at the show and our entire team will be there, so
please reach out or stop by and say hello. All right, we're going to leave it there for today. Many thanks to Stu Miniman and David Floyer for the input to today's episode, and of course John Furrier for extracting the signal from the noise in a sit-down with Adam Selipsky. Thanks to Alex Myerson, who is on production and manages the podcast, Ken Schiffman as well. Kristin Martin and Cheryl Knight helped get the word out on social and of course in our newsletters. Rob Hof is our editor-in-chief over at SiliconANGLE and does some great editing. Thanks to all of you. Remember, all these episodes are available as podcasts wherever you listen; you can pop in the headphones, go for a walk, just search Breaking Analysis Podcast. I publish each week on wikibon.com and siliconangle.com, or you can email me at david.vellante@siliconangle.com, or DM me @DVellante, or please comment on our LinkedIn posts, and do check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, we'll see you at re:Invent, or we'll see you next time on Breaking Analysis. (music)

Published Date : Nov 26 2022

Breaking Analysis: VMware Explore 2022 will mark the start of a Supercloud journey


 

>> From the Cube studios in Palo Alto and Boston, bringing you data driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. >> While the precise direction of VMware's future is unknown, given the planned Broadcom acquisition, one thing is clear. The topic of what Broadcom plans will not be the main focus of the agenda at the upcoming VMware Explore event next week in San Francisco. We believe that despite any uncertainty, VMware will lay out for its customers what it sees as its future. And that future is multi-cloud or cross-cloud services, what we call Supercloud. Hello, and welcome to this week's Wikibon Cube Insights powered by ETR. In this breaking analysis, we drill into the latest survey data on VMware from ETR. And we'll share with you the next iteration of the Supercloud definition based on feedback from dozens of contributors. And we'll give you our take on what to expect next week at VMware Explore 2022. Well, VMware is maturing. You can see it in the numbers. VMware had a solid quarter, announced just this week, beating earnings and growing the top line by 6%. But it's clear from its financials and the ETR data that we're showing here that VMware's halcyon glory days are behind it. This chart shows the spending profile from ETR's July survey of nearly 1500 IT buyers and CIOs. The survey included 722 VMware customers, with the green bars showing elevated spending momentum, i.e. growth, either new or growing at more than 6%. And the red bars show lower spending, either down 6% or worse, or defections. The gray bars, that's the flat spending crowd, and it really tells a story. Look, nobody's throwing away their VMware platforms. They're just not investing as rapidly as in previous years. The blue line shows net score or spending momentum and subtracts the reds from the greens. The yellow line shows market penetration or pervasiveness in the survey. So the data is pretty clear. It's steady, but it's not remarkable. Now, the timing of the acquisition, quite rightly, is quite good, I would say. Now, this next chart shows the net score and pervasiveness juxtaposed on an XY graph and breaks down the VMware portfolio in those dimensions, the product portfolio. And you can see the dominance of respondents citing VMware as the platform. They might not know exactly which services they use, but they just respond VMware. That's on the X axis. You can see it way to the right. And the spending momentum or the net score is on the Y axis. That red dotted line at 40%, that indicates elevated levels, and only VMware Cloud on AWS is above that line. Notably, Tanzu has jumped up significantly from previous quarters, with the rest of the portfolio showing steady, as you would expect from a maturing platform. Only Carbon Black is hovering in the red zone, kind of ironic given the name. We believe that VMware is going to be a major player in cross-cloud services, what we refer to as Supercloud. For months, we've been refining the concept and the definition. At Supercloud '22, we had discussions with more than 30 technology and business experts, and we've gathered input from many more. Based on that feedback, here's the definition we've landed on. It's somewhat refined from our earlier definition that we published a couple weeks ago. Supercloud is an emerging computing architecture that comprises a set of services abstracted from the underlying primitives of hyperscale clouds, e.g.
compute, storage, networking, security, and other native resources, to create a global system spanning more than one cloud. Supercloud is three essential properties, three deployment models, and three service models. So what are those essential elements, those properties? We've simplified the picture from our last report. We show them here. I'll review them briefly. We're not going to go super in depth here because we've covered this topic a lot. But supercloud, it runs on more than one cloud. It creates that common or identical experience across clouds. It contains a necessary capability that we call a superPaaS that acts as a cloud interpreter, and it has metadata intelligence to optimize for a specific purpose. We'll publish this definition in detail. So again, we're not going to spend a ton of time here today. Now, we've identified three deployment models for Supercloud. The first is a single instantiation, where a control plane runs on one cloud but supports interactions with multiple other clouds. An example we use is Kubernetes cluster management service that runs on one cloud but can deploy and manage clusters on other clouds. The second model is a multi-cloud, multi-region instantiation where a full stack of services is instantiated on multiple clouds and multiple cloud regions with a common interface across them. We've used cohesity as one example of this. And then a single global instance that spans multiple cloud providers. That's our snowflake example. Again, we'll publish this in detail. So we're not going to spend a ton of time here today. Finally, the service models. The feedback we've had is IaaS, PaaS, and SaaS work fine to describe the service models for Supercloud. NetApp's Cloud Volume is a good example in IaaS. VMware cloud foundation and what we expect at VMware Explore is a good PaaS example. And SAP HANA Cloud is a good example of SaaS running as a Supercloud service. That's the SAP HANA multi-cloud. So what is it that we expect from VMware Explore 2022? Well, along with what will be an exciting and speculation filled gathering of the VMware community at the Moscone Center, we believe VMware will lay out its future architectural direction. And we expect it will fit the Supercloud definition that we just described. We think VMware will show its hand on a set of cross-cloud services and will promise a common experience for users and developers alike. As we talked about at Supercloud '22, VMware kind of wants to have its cake, eat it too, and lose weight. And by that, we mean that it will not only abstract the underlying primitives of each of the individual clouds, but if developers want access to them, they will allow that and actually facilitate that. Now, we don't expect VMware to use the term Supercloud, but it will be a cross-cloud multi-cloud services model that they put forth, we think, at VMworld Explore. With IaaS comprising compute, storage, and networking, a very strong emphasis, we believe, on security, of course, a governance and a comprehensive set of data protection services. Now, very importantly, we believe Tanzu will play a leading role in any announcements this coming week, as a purpose-built PaaS layer, specifically designed to create a common experience for cross clouds for data and application services. This, we believe, will be VMware's most significant offering to date in cross-cloud services. And it will position VMware to be a leader in what we call Supercloud. 
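As a rough illustration of what "abstracting the underlying primitives" can look like at the code level, here is a small sketch of our own, not VMware's design: one storage interface, per-cloud adapters, and a toy control plane that treats every cloud identically. Bucket, container, and connection details are placeholders.

```python
# An illustrative sketch of the supercloud idea: callers program against one interface
# and never see which hyperscaler's primitives actually serve the request.
from typing import Protocol

class ObjectStore(Protocol):
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class S3Store:
    def __init__(self, bucket: str):
        import boto3
        self._s3, self._bucket = boto3.client("s3"), bucket
    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)
    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()

class AzureBlobStore:
    def __init__(self, conn_str: str, container: str):
        from azure.storage.blob import BlobServiceClient
        self._c = BlobServiceClient.from_connection_string(conn_str).get_container_client(container)
    def put(self, key: str, data: bytes) -> None:
        self._c.upload_blob(name=key, data=data, overwrite=True)
    def get(self, key: str) -> bytes:
        return self._c.download_blob(key).readall()

def replicate(stores: list, key: str, data: bytes) -> None:
    # A toy "control plane": the same write lands identically on every cloud.
    for store in stores:
        store.put(key, data)
```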
Now, while it remains to be seen what Broadcom exactly intends to do with VMware, we've speculated, others have speculated. We think this Supercloud is a substantial market opportunity generally and for VMware specifically. Look, if you don't own a public cloud, and very few companies do, in the tech business, we believe you better be supporting the build out of superclouds or building a supercloud yourself on top of hyperscale infrastructure. And we believe that as cloud matures, hyperscalers will increasingly eye cross-cloud services as an opportunity. We asked David Floyer to take a stab at a market model for Supercloud. He's really good at these types of things. What he did is he took the known players in cloud and estimated their IaaS and PaaS cloud services, their total revenue, and then took a percentage. So this is a superset of just the public cloud and the hyperscalers. And then what he did is he took a percentage to fit the Supercloud definition, as we just shared above. He then added another 20% on top to cover the long tail of Other. Other over time is most likely going to grow to, let's say, 30%. That's kind of how these markets work. Okay, so this is obviously an estimate, but it's an informed estimate by an individual who has done this many, many times and is pretty well respected in these types of forecasts, these long term forecasts. Now, by the definition we just shared, Supercloud revenue was estimated at about $3 billion in 2022 worldwide, growing to nearly $80 billion by 2030. Now remember, there's not one Supercloud market. It comprises a bunch of purpose-built superclouds that solve a specific problem. But the common attribute is it's built on top of hyperscale infrastructure. So overall, cloud services, including Supercloud, peak by the end of the decade. But Supercloud continues to grow and will take a higher percentage of the cloud market. The reasoning here is that the market will change, and compute will increasingly become distributed and embedded into edge devices, such as automobiles and robots and factory equipment, et cetera, and not necessarily be a discrete... I mean, it still will be, of course, but it's not going to be as much of a discrete component that is consumed via services like EC2; that will mature. And this will be a key shift to watch in spending dynamics and really importantly, computing economics, the things we've talked about around Arm and edge and AI inferencing and new low cost computing architectures at the edge. We're talking not the near edge, like Lowe's and Home Depot, we're talking far edge and embedded devices. Now, whether this becomes a seamless part of Supercloud remains to be seen. Look, that's how we see it, the current and the future state of Supercloud, and we're committed to keeping the discussion going with an inclusive model that gathers input from all parts of the industry. Okay, that's it for today. Thanks to Alex Myerson, who's on production, and he also manages the podcast. Ken Schiffman, as well, is on production in our Boston office. Kristin Martin and Cheryl Knight, they help us get the word out on social media and in our newsletters. And Rob Hof is our editor in chief over at SiliconANGLE and does some helpful editing. Thank you, all. Remember these episodes, they're all available as podcasts, wherever you listen. All you got to do is search Breaking Analysis Podcast. I publish each week on wikibon.com and siliconangle.com.
You can email me directly at david.vellante@siliconangle.com or DM me @Dvellante or comment on our LinkedIn posts. Please do check out etr.ai. They've got some great enterprise survey research. So please go there and poke around, And if you need any assistance, let them know. This is Dave Vellante for the Cube Insights powered by ETR. Thanks for watching, and we'll see you next time on Breaking Analysis. (lively music)

Published Date : Aug 27 2022

Closing Remarks | Supercloud22


 

(gentle upbeat music) >> Welcome back everyone, to "theCUBE"'s live stage performance here in Palo Alto, California at "theCUBE" Studios. I'm John Furrier with Dave Vellante, kicking off our first inaugural Supercloud event. It's an editorial event, we wanted to bring together the best in the business, the smartest, the biggest, the up-and-coming startups, venture capitalists, everybody, to weigh in on this new Supercloud trend, this structural change in the cloud computing business. We're about to run the Ecosystem Speaks, which is a bunch of pre-recorded companies that wanted to get their voices on the record, so stay tuned for the rest of the day. We'll be replaying all that content and they're going to be having some really good commentary and hear what they have to say. I had a chance to interview and so did Dave. Dave, this is our closing segment where we kind of unpack everything or kind of digest and report. So much to kind of digest from the conversations today, a wide range of commentary from Supercloud operating system to developers who are in charge to maybe it's an ops problem or maybe Oracle's a Supercloud. I mean, that was debated. So so much discussion, lot to unpack. What was your favorite moments? >> Well, before I get to that, I think, I go back to something that happened at re:Invent last year. Nick Sturiale came up, Steve Mullaney from Aviatrix; we're going to hear from him shortly in the Ecosystem Speaks. Nick Sturiale's VC said "it's happening"! And what he was talking about is this ecosystem is exploding. They're building infrastructure or capabilities on top of the CapEx infrastructure. So, I think it is happening. I think we confirmed today that Supercloud is a thing. It's a very immature thing. And I think the other thing, John is that, it seems to me that the further you go up the stack, the weaker the business case gets for doing Supercloud. We heard from Marianna Tessel, it's like, "Eh, you know, we can- it was easier to just do it all on one cloud." This is a point that, Adrian Cockcroft just made on the panel and so I think that when you break out the pieces of the stack, I think very clearly the infrastructure layer, what we heard from Confluent and HashiCorp, and certainly VMware, there's a real problem there. There's a real need at the infrastructure layer and then even at the data layer, I think Benoit Dageville did a great job of- You know, I was peppering him with all my questions, which I basically was going through, the Supercloud definition and they ticked the box on pretty much every one of 'em as did, by the way Ali Ghodsi you know, the big difference there is the philosophy of Republicans and Democrats- got open versus closed, not to apply that to either one side, but you know what I mean! >> And the similarities are probably greater than differences. >> Berkely, I would probably put them on the- >> Yeah, we'll put them on the Democrat side we'll make Snowflake the Republicans. But so- but as we say there's a lot of similarities as well in terms of what their objectives are. So, I mean, I thought it was a great program and a really good start to, you know, an industry- You brought up the point about the industry consortium, asked Kit Colbert- >> Yep. >> If he thought that was something that was viable and what'd they say? That hyperscale should lead it? >> Yeah, they said hyperscale should lead it and there also should be an industry consortium to get the voices out there. 
And I think VMware is very humble in how they're putting out their white paper because I think they know that they can't do it all and that they do not have a great track record relative to cloud. And I think, but they have a great track record of loyal installed base ops people using VMware vSphere all the time. >> Yeah. >> So I think they need a catapult moment where they can catapult to the cloud native which they've been working on for years under Raghu and the team. So the question on VMware is in the light of Broadcom, okay, acquisition of VMware, this is an opportunity or it might not be an opportunity or it might be a spin-out or something, I just think VMware's got way too much engineering culture to be ignored, Dave. And I think- well, I'm going to watch this very closely because they can pull off some sort of rallying moment. I think they could. And then you hear the upstarts like Platform9, Rafay Systems and others they're all like, "Yes, we need to unify behind something. There needs to be some sort of standard". You know, we heard the argument of you know, more standards bodies type thing. So, it's interesting, maybe "theCUBE" could be that but we're going to certainly keep the conversation going. >> I thought one of the most memorable statements was Vittorio who said we- for VMware, we want our cake, we want to eat it too and we want to lose weight. So they have a lot of that aspirations there! (John laughs) >> And then I thought, Adrian Cockcroft said you know, the devs, they want to get married. They were marrying everybody, and then the ops team, they have to deal with the divorce. >> Yeah. >> And I thought that was poignant. It's like, they want consistency, they want standards, they got to be able to scale And Lori MacVittie, I'm not sure you agree with this, I'd have to think about it, but she was basically saying, all we've talked about is devs devs devs for the last 10 years, going forward we're going to be talking about ops. >> Yeah, and I think one of the things I learned from this day and looking back, and some kind of- I've been sauteing through all the interviews. If you zoom out, for me it was the epiphany of developers are still in charge. And I've said, you know, the developers are doing great, it's an ops security thing. Not sure I see that the way I was seeing before. I think what I learned was the refactoring pattern that's emerging, In Sik Rhee brought this up from Vertex Ventures with Marianna Tessel, it's a nuanced point but I think he's right on which is the pattern that's emerging is developers want ease-of-use tooling, they're driving the change and I think the developers in the devs ops ethos- it's never going to be separate. It's going to be DevOps. That means developers are driving operations and then security. So what I learned was it's not ops teams leveling up, it's devs redefining what ops is. >> Mm. And I think that to me is where Supercloud's going to be interesting- >> Forcing that. >> Yeah. >> Forcing the change because the structural change is open sources thriving, devs are still in charge and they still want more developers, Vittorio "we need more developers", right? So the developers are in charge and that's clear. Now, if that happens- if you believe that to be true the domino effect of that is going to be amazing because then everyone who gets on the wrong side of history, on the ops and security side, is going to be fighting a trend that may not be fight-able, you know, it might be inevitable. 
And so the winners are the ones that are refactoring their business like Snowflake. Snowflake is a data warehouse that had nothing to do with Amazon at first. It was the developers who said "I'm going to refactor data warehouse on AWS". That is a developer-driven refactorization and a business model. So I think that's the pattern I'm seeing is that this concept refactoring, patterns and the developer trajectory is critical. >> I thought there was another great comment. Maribel Lopez, her Lord of the Rings comment: "there will be no one ring to rule them all". Now at the same time, Kit Colbert, you know what we asked him straight out, "are you the- do you want to be the, the Supercloud OS?" and he basically said, "yeah, we do". Now, of course they're confined to their world, which is a pretty substantial world. I think, John, the reason why Maribel is so correct is security. I think security's a really hard problem to solve. You've got cloud as the first layer of defense and now you've got multiple clouds, multiple layers of defense, multiple shared responsibility models. You've got different tools for XDR, for identity, for governance, for privacy all within those different clouds. I mean, that really is a confusing picture. And I think the hardest- one of the hardest parts of Supercloud to solve. >> Yeah, and I thought the security founder Gee Rittenhouse, Piyush Sharrma from Accurics, which sold to Tenable, and Tony Kueh, former head of product at VMware. >> Right. >> Who's now an investor kind of looking for his next gig or what he is going to do next. He's obviously been extremely successful. They brought up the, the OS factor. Another point that they made I thought was interesting is that a lot of the things to do to solve the complexity is not doable. >> Yeah. >> It's too much work. So managed services might field the bit. So, and Chris Hoff mentioned on the Clouderati segment that the higher level services being a managed service and differentiating around the service could be the key competitive advantage for whoever does it. >> I think the other thing is Chris Hoff said "yeah, well, Web 3, metaverse, you know, DAO, Superclouds" you know, "Stupercloud" he called it and this bring up- It resonates because one of the criticisms that Charles Fitzgerald laid on us was, well, it doesn't help to throw out another term. I actually think it does help. And I think the reason it does help is because it's getting people to think. When you ask people about Supercloud, they automatically- it resonates with them. They play back what they think is the future of cloud. So Supercloud really talks to the future of cloud. There's a lot of aspects to it that need to be further defined, further thought out and we're getting to the point now where we- we can start- begin to say, okay that is Supercloud or that isn't Supercloud. >> I think that's really right on. I think Supercloud at the end of the day, for me from the simplest way to describe it is making sure that the developer experience is so good that the operations just happen. And Marianna Tessel said, she's investing in making their developer experience high velocity, very easy. So if you do that, you have to run on premise and on the cloud. So hybrid really is where Supercloud is going right now. It's not multi-cloud. Multi-cloud was- that was debunked on this session today. I thought that was clear. >> Yeah. Yeah, I mean I think- >> It's not about multi-cloud. 
It's about seamless operations across environments, public cloud to on-premise, basically. >> I think we got consensus across the board that multi-cloud, you know, is a symptom- Chuck Whitten's thing of multi-cloud by default versus by design- and that multi-cloud has not been a strategy, Kit Colbert said, up until the last couple of years. Because people said, "Oh, we got all these multiple clouds, what do we do with it?" and we got this mess that we have to solve. Whereas I think Supercloud is something that is a strategy, and then the other nuance that I keep bringing up is it's industries that are, as part of their digital transformation, building clouds. Now, whether or not they become superclouds, I'm not convinced. I mean, what Goldman Sachs is doing, you know, with AWS, what Walmart's doing with Azure, connecting their on-prem tools to those public clouds, you know, is that a supercloud? I mean, we're going to have to go back and really look at that definition. Or is it just kind of a SaaS that spans on-prem and cloud. So, as I said, the further you go up the stack, the business case seems to wane a little bit, but there's no question in my mind that from an infrastructure standpoint, to your point about operations, there's a real requirement for what we call Supercloud. >> Well, we're going to keep the conversation going, Dave. I want to put a shout out to our founding supporters of this initiative. Again, we put this together really fast, kind of like a pilot series, an inaugural event. We want to have a face-to-face event as an industry event. Want to thank the founding supporters. These are the people who donated their time and their resources to contribute content, ideas and some cash; not everyone has committed a financial contribution, but we want to recognize the names here. VMware, Intuit, Red Hat, Snowflake, Aisera, Alteryx, Confluent, Couchbase, Nutanix, Rafay Systems, Skyhigh Security, Aviatrix, Zscaler, Platform9, HashiCorp, F5 and all the media partners. Without their support, this wouldn't have happened. And there are more people that wanted to weigh in. There was more demand than we could pull off. We'll certainly continue the Supercloud conversation series here on "theCUBE" and we'll add more people in. And now, after this session, the Ecosystem Speaks session, we're going to run all the videos of the big name companies. We have the Nutanix CEO weighing in, Aviatrix, to name a few. >> Yeah. Let me, let me chime in. I mean, you got Couchbase talking about Edge, Platform9's going to be on, you know, everybody. You know, In Sik was poo-pooing Oracle, but you know, Oracle and Azure, what they did, two technical guys, developers, are coming on, we dig into what they did. Howie Xu from Zscaler, Paula Hansen is going to talk about going to market in the multi-cloud world. You mentioned Rajiv, the CEO of Nutanix, Ramesh is going to talk about multi-cloud infrastructure. So that's going to run now for, you know, quite some time here, and some of the pre-records, so super excited about that. And I just want to thank the crew. I hope, guys, I hope you have a list of credits, there's too many of you to mention, but you know, awesome jobs, really appreciate the work that you did in a very short amount of time. >> Well, I'm excited. I learned a lot, and my takeaway was that Supercloud's a thing, there's a kind of sense that people want to talk about it and have real conversations, not BS or FUD. 
They want to have real substantive conversations and we're going to enable that on "theCUBE". Dave, final thoughts for you. >> Well, I mean, as I say, we put this together very quickly. It was really a phenomenal, you know, enlightening experience. I think it confirmed a lot of the concepts and the premises that we've put forth, that David Floyer helped evolve, that a lot of these analysts have helped evolve, and that even Charles Fitzgerald with his antagonism helped to really sharpen our knives. So, you know, thank you, Charles. And- >> I like his blog, by the way, I'm a reader- >> Yeah, absolutely. And it was great to be back in Palo Alto. It was my first time back since pre-COVID, so, you know, great job. >> All right. I want to thank all the crew and everyone. Thanks for watching this first, inaugural Supercloud event. We are definitely going to be doing more of these, so stay tuned, maybe face-to-face in person. I'm John Furrier with Dave Vellante. Now the Ecosystem is chiming in, and they're going to speak and share their thoughts here with "theCUBE", our first live stage performance event in our studio. Thanks for watching. (gentle upbeat music)

Published Date : Aug 9 2022

SUMMARY :

John Furrier and Dave Vellante close out the inaugural Supercloud22 event, concluding that Supercloud is a real but still immature trend. They argue the business case is strongest at the infrastructure and data layers, that developers are driving the operating model while ops and security teams absorb the complexity, and that security and cross-cloud identity remain the hardest problems to solve. They recap comments from VMware, Snowflake, Databricks, Adrian Cockcroft, Maribel Lopez, Marianna Tessel and others, thank the event's founding supporters, and tee up the Ecosystem Speaks segments that follow.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Tristan | PERSON | 0.99+
George Gilbert | PERSON | 0.99+
John | PERSON | 0.99+
George | PERSON | 0.99+
Steve Mullaney | PERSON | 0.99+
Katie | PERSON | 0.99+
David Floyer | PERSON | 0.99+
Charles | PERSON | 0.99+
Mike Dooley | PERSON | 0.99+
Peter Burris | PERSON | 0.99+
Chris | PERSON | 0.99+
Tristan Handy | PERSON | 0.99+
Bob | PERSON | 0.99+
Maribel Lopez | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Mike Wolf | PERSON | 0.99+
VMware | ORGANIZATION | 0.99+
Merim | PERSON | 0.99+
Adrian Cockcroft | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Brian | PERSON | 0.99+
Brian Rossi | PERSON | 0.99+
Jeff Frick | PERSON | 0.99+
Chris Wegmann | PERSON | 0.99+
Whole Foods | ORGANIZATION | 0.99+
Eric | PERSON | 0.99+
Chris Hoff | PERSON | 0.99+
Jamak Dagani | PERSON | 0.99+
Jerry Chen | PERSON | 0.99+
Caterpillar | ORGANIZATION | 0.99+
John Walls | PERSON | 0.99+
Marianna Tessel | PERSON | 0.99+
Josh | PERSON | 0.99+
Europe | LOCATION | 0.99+
Jerome | PERSON | 0.99+
Google | ORGANIZATION | 0.99+
Lori MacVittie | PERSON | 0.99+
2007 | DATE | 0.99+
Seattle | LOCATION | 0.99+
10 | QUANTITY | 0.99+
five | QUANTITY | 0.99+
Ali Ghodsi | PERSON | 0.99+
Peter McKee | PERSON | 0.99+
Nutanix | ORGANIZATION | 0.99+
Eric Herzog | PERSON | 0.99+
India | LOCATION | 0.99+
Mike | PERSON | 0.99+
Walmart | ORGANIZATION | 0.99+
five years | QUANTITY | 0.99+
AWS | ORGANIZATION | 0.99+
Kit Colbert | PERSON | 0.99+
Peter | PERSON | 0.99+
Dave | PERSON | 0.99+
Tanuja Randery | PERSON | 0.99+

Breaking Analysis: How the cloud is changing security defenses in the 2020s


 

>> Announcer: From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is "Breaking Analysis" with Dave Vellante. >> The rapid pace of cloud adoption has changed the way organizations approach cybersecurity. Specifically, the cloud is increasingly becoming the first line of cyber defense. As such, along with communicating to the board and creating a security aware culture, the chief information security officer must ensure that the shared responsibility model is being applied properly. Meanwhile, the DevSecOps team has emerged as the critical link between strategy and execution, while audit becomes the free safety, if you will, in the equation, i.e., the last line of defense. Hello, and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this "Breaking Analysis", we'll share the latest data on hyperscale IaaS and PaaS market performance, along with some fresh ETR survey data. And we'll share some highlights and the puts and takes from the recent AWS re:Inforce event in Boston. But first, the macro. It's earnings season, and that's what many people want to talk about, including us. As we reported last week, the macro spending picture is very mixed and weird. Think back to a week ago when SNAP reported. A player like SNAP misses and the Nasdaq drops 300 points. Meanwhile, Intel, the great semiconductor hope for America, misses by a mile, cuts its revenue outlook by 15% for the year, and the Nasdaq was up nearly 250 points just ahead of the close, go figure. Earnings reports from Meta, Google, Microsoft, ServiceNow, and some others underscored cautious outlooks, especially those exposed to the advertising revenue sector. But at the same time, Apple, Microsoft, and Google were, let's say, less bad than expected. And that brought a sigh of relief. And then there's Amazon, which beat on revenue, it beat on cloud revenue, and it gave positive guidance. The Nasdaq this month has seen its best month since the isolation economy, which "Breaking Analysis" contributor, Chip Symington, attributes to what he calls an oversold rally. But there are many unknowns that remain. How bad will inflation be? Will the Fed really stop tightening after September? The Senate just approved a big spending bill along with corporate tax hikes, which generally don't favor the economy. And on Monday, August 1st, the market will likely realize that we are in the summer quarter, and there's some work to be done. Which is why it's not surprising that investors sold the Nasdaq at the close today on Friday. Are people ready to call the bottom? Hmm, some maybe, but there's still lots of uncertainty. However, the cloud continues its march, despite some very slight deceleration in growth rates from the two leaders. Here's an update of our big four IaaS quarterly revenue data. The big four hyperscalers will account for $165 billion in revenue this year, slightly lower than what we had last quarter. We expect AWS to surpass 83 billion this year in revenue. Azure will be more than 2/3rds the size of AWS, a milestone from Microsoft. Both AWS and Azure came in slightly below our expectations, but still very solid growth at 33% and 46% respectively. GCP, Google Cloud Platform, is the big concern. By our estimates GCP's growth rate decelerated from 47% in Q1, and was 38% this past quarter. The company is struggling to keep up with the two giants. 
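For readers who want to sanity-check those figures, here is a small back-of-the-envelope sketch in Python. The inputs are only the estimates quoted above, not vendor-reported numbers, and the Azure value is treated as a floor since the claim is "more than 2/3rds the size of AWS."

```python
# Back-of-the-envelope check of the hyperscaler estimates cited above (2022, $B).
# These are the segment's estimates, not vendor-reported figures.
big_four_total = 165.0       # AWS + Azure + GCP + Alibaba, estimated
aws = 83.0                   # AWS estimate
azure_floor = aws * 2 / 3    # "more than 2/3rds the size of AWS"
remainder = big_four_total - aws - azure_floor

print(f"Azure floor:            ~${azure_floor:.0f}B")   # ~55B
print(f"GCP + Alibaba at most:  ~${remainder:.0f}B")     # ~27B

# GCP growth deceleration called out above: 47% in Q1 versus 38% last quarter
gcp_growth_q1, gcp_growth_latest = 0.47, 0.38
print(f"GCP growth decelerated by {100 * (gcp_growth_q1 - gcp_growth_latest):.0f} points")
```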
Remember, both GCP and Azure, they play a shell game and hide the ball on their IaaS numbers, so we have to use a survey data and other means of estimating. But this is how we see the market shaping up in 2022. Now, before we leave the overall cloud discussion, here's some ETR data that shows the net score or spending momentum granularity for each of the hyperscalers. These bars show the breakdown for each company, with net score on the right and in parenthesis, net score from last quarter. lime green is new adoptions, forest green is spending up 6% or more, the gray is flat, pink is spending at 6% down or worse, and the bright red is replacement or churn. Subtract the reds from the greens and you get net score. One note is this is for each company's overall portfolio. So it's not just cloud. So it's a bit of a mixed bag, but there are a couple points worth noting. First, anything above 40% or 40, here as shown in the chart, is considered elevated. AWS, as you can see, is well above that 40% mark, as is Microsoft. And if you isolate Microsoft's Azure, only Azure, it jumps above AWS's momentum. Google is just barely hanging on to that 40 line, and Alibaba is well below, with both Google and Alibaba showing much higher replacements, that bright red. But here's the key point. AWS and Azure have virtually no churn, no replacements in that bright red. And all four companies are experiencing single-digit numbers in terms of decreased spending within customer accounts. People may be moving some workloads back on-prem selectively, but repatriation is definitely not a trend to bet the house on, in our view. Okay, let's get to the main subject of this "Breaking Analysis". TheCube was at AWS re:Inforce in Boston this week, and we have some observations to share. First, we had keynotes from Steven Schmidt who used to be the chief information security officer at Amazon on Web Services, now he's the CSO, the chief security officer of Amazon. Overall, he dropped the I in his title. CJ Moses is the CISO for AWS. Kurt Kufeld of AWS also spoke, as did Lena Smart, who's the MongoDB CISO, and she keynoted and also came on theCUBE. We'll go back to her in a moment. The key point Schmidt made, one of them anyway, was that Amazon sees more data points in a day than most organizations see in a lifetime. Actually, it adds up to quadrillions over a fairly short period of time, I think, it was within a month. That's quadrillion, it's 15 zeros, by the way. Now, there was drill down focus on data protection and privacy, governance, risk, and compliance, GRC, identity, big, big topic, both within AWS and the ecosystem, network security, and threat detection. Those are the five really highlighted areas. Re:Inforce is really about bringing a lot of best practice guidance to security practitioners, like how to get the most out of AWS tooling. Schmidt had a very strong statement saying, he said, "I can assure you with a 100% certainty that single controls and binary states will absolutely positively fail." Hence, the importance of course, of layered security. We heard a little bit of chat about getting ready for the future and skating to the security puck where quantum computing threatens to hack all of the existing cryptographic algorithms, and how AWS is trying to get in front of all that, and a new set of algorithms came out, AWS is testing. And, you know, we'll talk about that maybe in the future, but that's a ways off. 
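The net score calculation described above is simple enough to express directly. Here is a minimal sketch; the percentages are invented purely for illustration and are not ETR data.

```python
# Minimal sketch of the ETR net score methodology described above.
# Respondents fall into five buckets; net score = greens minus reds.
def net_score(new_adoption, spend_up_6pct, flat, spend_down_6pct, replacing):
    greens = new_adoption + spend_up_6pct          # lime green + forest green
    reds = spend_down_6pct + replacing             # pink + bright red
    return greens - reds

# Made-up breakdown for one hypothetical vendor (sums to 100%)
score = net_score(new_adoption=12, spend_up_6pct=40, flat=41,
                  spend_down_6pct=5, replacing=2)
print(f"Net score: {score}%  (anything above 40% is considered elevated)")
```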
And by its prominent presence, the ecosystem was there enforced, to talk about their role and filling the gaps and picking up where AWS leaves off. We heard a little bit about ransomware defense, but surprisingly, at least in the keynotes, no discussion about air gaps, which we've talked about in previous "Breaking Analysis", is a key factor. We heard a lot about services to help with threat detection and container security and DevOps, et cetera, but there really wasn't a lot of specific talk about how AWS is simplifying the life of the CISO. Now, maybe it's inherently assumed as AWS did a good job stressing that security is job number one, very credible and believable in that front. But you have to wonder if the world is getting simpler or more complex with cloud. And, you know, you might say, "Well, Dave, come on, of course it's better with cloud." But look, attacks are up, the threat surface is expanding, and new exfiltration records are being set every day. I think the hard truth is, the cloud is driving businesses forward and accelerating digital, and those businesses are now exposed more than ever. And that's why security has become such an important topic to boards and throughout the entire organization. Now, the other epiphany that we had at re:Inforce is that there are new layers and a new trust framework emerging in cyber. Roles are shifting, and as a direct result of the cloud, things are changing within organizations. And this first hit me in a conversation with long-time cyber practitioner and Wikibon colleague from our early Wikibon days, and friend, Mike Versace. And I spent two days testing the premise that Michael and I talked about. And here's an attempt to put that conversation into a graphic. The cloud is now the first line of defense. AWS specifically, but hyperscalers generally provide the services, the talent, the best practices, and automation tools to secure infrastructure and their physical data centers. And they're really good at it. The security inside of hyperscaler clouds is best of breed, it's world class. And that first line of defense does take some of the responsibility off of CISOs, but they have to understand and apply the shared responsibility model, where the cloud provider leaves it to the customer, of course, to make sure that the infrastructure they're deploying is properly configured. So in addition to creating a cyber aware culture and communicating up to the board, the CISO has to ensure compliance with and adherence to the model. That includes attracting and retaining the talent necessary to succeed. Now, on the subject of building a security culture, listen to this clip on one of the techniques that Lena Smart, remember, she's the CISO of MongoDB, one of the techniques she uses to foster awareness and build security cultures in her organization. Play the clip >> Having the Security Champion program, so that's just, it's like one of my babies. That and helping underrepresented groups in MongoDB kind of get on in the tech world are both really important to me. And so the Security Champion program is purely purely voluntary. We have over 100 members. And these are people, there's no bar to join, you don't have to be technical. If you're an executive assistant who wants to learn more about security, like my assistant does, you're more than welcome. Up to, we actually, people grade themselves when they join us. We give them a little tick box, like five is, I walk on security water, one is I can spell security, but I'd like to learn more. 
Mixing those groups together has been game-changing for us. >> Now, the next layer is really where it gets interesting. DevSecOps, you know, we hear about it all the time, shifting left. It implies designing security into the code at the dev level. Shift left and shield right is the kind of buzz phrase. But it's getting more and more complicated. So there are layers within the development cycle, i.e., securing the container. So the app code can't be threatened by backdoors or weaknesses in the containers. Then, securing the runtime to make sure the code is maintained and compliant. Then, the DevOps platform so that change management doesn't create gaps and exposures, and screw things up. And this is just for the application security side of the equation. What about the network and implementing zero trust principles, and securing endpoints, and machine to machine, and human to app communication? So there's a lot of burden being placed on the DevOps team, and they have to partner with the SecOps team to succeed. Those guys are not security experts. And finally, there's audit, which is the last line of defense or what I called at the open, the free safety, for you football fans. They have to do more than just tick the box for the board. That doesn't cut it anymore. They really have to know their stuff and make sure that what they sign off on is real. And then you throw ESG into the mix is becoming more important, making sure the supply chain is green and also secure. So you can see, while much of this stuff has been around for a long, long time, the cloud is accelerating innovation in the pace of delivery. And so much is changing as a result. Now, next, I want to share a graphic that we shared last week, but a little different twist. It's an XY graphic with net score or spending velocity in the vertical axis and overlap or presence in the dataset on the horizontal. With that magic 40% red line as shown. Okay, I won't dig into the data and draw conclusions 'cause we did that last week, but two points I want to make. First, look at Microsoft in the upper-right hand corner. They are big in security and they're attracting a lot of dollars in the space. We've reported on this for a while. They're a five-star security company. And every time, from a spending standpoint in ETR data, that little methodology we use, every time I've run this chart, I've wondered, where the heck is AWS? Why aren't they showing up there? If security is so important to AWS, which it is, and its customers, why aren't they spending money with Amazon on security? And I asked this very question to Merrit Baer, who resides in the office of the CISO at AWS. Listen to her answer. >> It doesn't mean don't spend on security. There is a lot of goodness that we have to offer in ESS, external security services. But I think one of the unique parts of AWS is that we don't believe that security is something you should buy, it's something that you get from us. It's something that we do for you a lot of the time. I mean, this is the definition of the shared responsibility model, right? >> Now, maybe that's good messaging to the market. Merritt, you know, didn't say it outright, but essentially, Microsoft they charge for security. At AWS, it comes with the package. But it does answer my question. And, of course, the fact is that AWS can subsidize all this with egress charges. Now, on the flip side of that, (chuckles) you got Microsoft, you know, they're both, they're competing now. We can take CrowdStrike for instance. 
Microsoft and CrowdStrike, they compete with each other head to head. So it's an interesting dynamic within the ecosystem. Okay, but I want to turn to a powerful example of how AWS designs in security. And that is the idea of confidential computing. Of course, AWS is not the only one, but we're coming off of re:Inforce, and I really want to dig into something that David Floyer and I have talked about in previous episodes. And we had an opportunity to sit down with Arvind Raghu and J.D. Bean, two security experts from AWS, to talk about this subject. And let's share what we learned and why we think it matters. First, what is confidential computing? That's what this slide is designed to convey. To AWS, they would describe it this way. It's the use of special hardware and the associated firmware that protects customer code and data from any unauthorized access while the data is in use, i.e., while it's being processed. That's oftentimes a security gap. And there are two dimensions here. One is protecting the data and the code from operators on the cloud provider, i.e, in this case, AWS, and protecting the data and code from the customers themselves. In other words, from admin level users are possible malicious actors on the customer side where the code and data is being processed. And there are three capabilities that enable this. First, the AWS Nitro System, which is the foundation for virtualization. The second is Nitro Enclaves, which isolate environments, and then third, the Nitro Trusted Platform Module, TPM, which enables cryptographic assurances of the integrity of the Nitro instances. Now, we've talked about Nitro in the past, and we think it's a revolutionary innovation, so let's dig into that a bit. This is an AWS slide that was shared about how they protect and isolate data and code. On the left-hand side is a classical view of a virtualized architecture. You have a single host or a single server, and those white boxes represent processes on the main board, X86, or could be Intel, or AMD, or alternative architectures. And you have the hypervisor at the bottom which translates instructions to the CPU, allowing direct execution from a virtual machine into the CPU. But notice, you also have blocks for networking, and storage, and security. And the hypervisor emulates or translates IOS between the physical resources and the virtual machines. And it creates some overhead. Now, companies like VMware have done a great job, and others, of stripping out some of that overhead, but there's still an overhead there. That's why people still like to run on bare metal. Now, and while it's not shown in the graphic, there's an operating system in there somewhere, which is privileged, so it's got access to these resources, and it provides the services to the VMs. Now, on the right-hand side, you have the Nitro system. And you can see immediately the differences between the left and right, because the networking, the storage, and the security, the management, et cetera, they've been separated from the hypervisor and that main board, which has the Intel, AMD, throw in Graviton and Trainium, you know, whatever XPUs are in use in the cloud. And you can see that orange Nitro hypervisor. That is a purpose-built lightweight component for this system. And all the other functions are separated in isolated domains. So very strong isolation between the cloud software and the physical hardware running workloads, i.e., those white boxes on the main board. 
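As a side note on what that enclave isolation looks like in practice: a Nitro Enclave has no persistent storage and no external network path, so the parent instance and the enclave exchange data over a local vsock socket. Below is a minimal sketch of that pattern, not a production design; the port, the payload, and the connection handling are illustrative only, and real deployments add attestation and encryption.

```python
import socket

ENCLAVE_PORT = 5005  # illustrative vsock port

# Enclave side: listen for requests coming from the parent instance.
# AF_VSOCK is available in Python on Linux hosts with vsock support.
def enclave_server():
    s = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
    s.bind((socket.VMADDR_CID_ANY, ENCLAVE_PORT))
    s.listen(1)
    conn, _ = s.accept()
    payload = conn.recv(4096)              # e.g. data handed in by the parent
    conn.sendall(b"processed:" + payload)  # work on sensitive data inside the enclave
    conn.close()
    s.close()

# Parent-instance side: connect to the enclave by the CID assigned at launch.
def parent_client(enclave_cid: int, data: bytes) -> bytes:
    s = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
    s.connect((enclave_cid, ENCLAVE_PORT))
    s.sendall(data)
    reply = s.recv(4096)
    s.close()
    return reply
```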
Now, this will run at practically bare metal speeds, and there are other benefits as well. One of the biggest is security. As we've previously reported, this came out of AWS's acquisition of Annapurna Labs, which we've estimated was picked up for a measly $350 million, which is a drop in the bucket for AWS to get such a strategic asset. And there are three enablers on this side. One is the Nitro cards, which are accelerators to offload that wasted work that's done in traditional architectures by typically the X86. We've estimated 25% to 30% of core capacity and cycles is wasted on those offloads. The second is the Nitro security chip, which is embedded and extends the root of trust to the main board hardware. And finally, the Nitro hypervisor, which allocates memory and CPU resources. So the Nitro cards communicate directly with the VMs without the hypervisors getting in the way, and they're not in the path. And all that data is encrypted while it's in motion, and of course, encryption at rest has been around for a while. We asked AWS, is this an, we presumed it was an Arm-based architecture. We wanted to confirm that. Or is it some other type of maybe hybrid using X86 and Arm? They told us the following, and quote, "The SoC, system on chips, for these hardware components are purpose-built and custom designed in-house by Amazon and Annapurna Labs. The same group responsible for other silicon innovations such as Graviton, Inferentia, Trainium, and AQUA. Now, the Nitro cards are Arm-based and do not use any X86 or X86/64 bit CPUs. Okay, so it confirms what we thought. So you may say, "Why should we even care about all this technical mumbo jumbo, Dave?" Well, a year ago, David Floyer and I published this piece explaining why Nitro and Graviton are secret weapons of Amazon that have been a decade in the making, and why everybody needs some type of Nitro to compete in the future. This is enabled, this Nitro innovations and the custom silicon enabled by the Annapurna acquisition. And AWS has the volume economics to make custom silicon. Not everybody can do it. And it's leveraging the Arm ecosystem, the standard software, and the fabrication volume, the manufacturing volume to revolutionize enterprise computing. Nitro, with the alternative processor, architectures like Graviton and others, enables AWS to be on a performance, cost, and power consumption curve that blows away anything we've ever seen from Intel. And Intel's disastrous earnings results that we saw this past week are a symptom of this mega trend that we've been talking about for years. In the same way that Intel and X86 destroyed the market for RISC chips, thanks to PC volumes, Arm is blowing away X86 with volume economics that cannot be matched by Intel. Thanks to, of course, to mobile and edge. Our prediction is that these innovations and the Arm ecosystem are migrating and will migrate further into enterprise computing, which is Intel's stronghold. Now, that stronghold is getting eaten away by the likes of AMD, Nvidia, and of course, Arm in the form of Graviton and other Arm-based alternatives. Apple, Tesla, Amazon, Google, Microsoft, Alibaba, and others are all designing custom silicon, and doing so much faster than Intel can go from design to tape out, roughly cutting that time in half. And the premise of this piece is that every company needs a Nitro to enable alternatives to the X86 in order to support emergent workloads that are data rich and AI-based, and to compete from an economic standpoint. 
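That offload estimate translates directly into usable capacity. Here is the simple arithmetic, using only the 25% to 30% figure cited above.

```python
# If an estimated 25% to 30% of X86 core cycles go to networking, storage and
# virtualization overhead (the figure cited above), offloading that work onto
# Nitro cards frees those cycles for application workloads.
for overhead in (0.25, 0.30):
    usable_before = 1 - overhead          # fraction left for applications today
    uplift = 1 / usable_before - 1        # extra application capacity once offloaded
    print(f"{overhead:.0%} offloaded -> ~{uplift:.0%} more usable capacity per host")
```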
So while at re:Inforce, we heard that the impetus for Nitro was security. Of course, the Arm ecosystem, and its ascendancy has enabled, in our view, AWS to create a platform that will set the enterprise computing market this decade and beyond. Okay, that's it for today. Thanks to Alex Morrison, who is on production. And he does the podcast. And Ken Schiffman, our newest member of our Boston Studio team is also on production. Kristen Martin and Cheryl Knight help spread the word on social media and in the community. And Rob Hof is our editor in chief over at SiliconANGLE. He does some great, great work for us. Remember, all these episodes are available as podcast. Wherever you listen, just search "Breaking Analysis" podcast. I publish each week on wikibon.com and siliconangle.com. Or you can email me directly at David.Vellante@siliconangle.com or DM me @dvellante, comment on my LinkedIn post. And please do check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching. Be well, and we'll see you next time on "Breaking Analysis." (upbeat theme music)

Published Date : Jul 30 2022

SUMMARY :

Dave Vellante reviews hyperscaler IaaS and PaaS estimates and fresh ETR spending data, then unpacks AWS re:Inforce, arguing that the cloud has become the first line of cyber defense, that the shared responsibility model, DevSecOps and audit form a new trust framework, and that AWS's Nitro architecture, Annapurna silicon and confidential computing give it a long-term security and economic advantage over traditional X86-based approaches.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Alex Morrison | PERSON | 0.99+
David Floyer | PERSON | 0.99+
Mike Versace | PERSON | 0.99+
Michael | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Steven Schmidt | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Kurt Kufeld | PERSON | 0.99+
Apple | ORGANIZATION | 0.99+
Dave Vellante | PERSON | 0.99+
Tesla | ORGANIZATION | 0.99+
Alibaba | ORGANIZATION | 0.99+
Google | ORGANIZATION | 0.99+
Microsoft | ORGANIZATION | 0.99+
J.D. Bean | PERSON | 0.99+
Ken Schiffman | PERSON | 0.99+
Arvind Raghu | PERSON | 0.99+
Lena Smart | PERSON | 0.99+
Kristen Martin | PERSON | 0.99+
Cheryl Knight | PERSON | 0.99+
40% | QUANTITY | 0.99+
Rob Hof | PERSON | 0.99+
Dave | PERSON | 0.99+
Schmidt | PERSON | 0.99+
Palo Alto | LOCATION | 0.99+
2022 | DATE | 0.99+
five | QUANTITY | 0.99+
Nvidia | ORGANIZATION | 0.99+
two days | QUANTITY | 0.99+
Annapurna Labs | ORGANIZATION | 0.99+
6% | QUANTITY | 0.99+
SNAP | ORGANIZATION | 0.99+
five-star | QUANTITY | 0.99+
Chip Symington | PERSON | 0.99+
47% | QUANTITY | 0.99+
Annapurna | ORGANIZATION | 0.99+
$350 million | QUANTITY | 0.99+
Boston | LOCATION | 0.99+
Merrit Baer | PERSON | 0.99+
CJ Moses | PERSON | 0.99+
40 | QUANTITY | 0.99+
Merritt | PERSON | 0.99+
15% | QUANTITY | 0.99+
25% | QUANTITY | 0.99+
AMD | ORGANIZATION | 0.99+

The Great Supercloud Debate | Supercloud22


 

[Music] >> Welcome to the Great Supercloud Debate, a power panel of three top technology industry analysts. Maribel Lopez is here, she's the founder and principal analyst at Lopez Research. Keith Townsend is CEO and founder of The CTO Advisor, and Sanjeev Mohan is principal at SanjMo. Supercloud is a term we've used to describe the future of cloud architectures. The idea is that superclouds are built on top of hyperscaler CapEx infrastructure and go beyond multi-cloud, the premise being that multi-cloud is primarily a symptom of multi-vendor or M&A, or both, and results in more stovepipes. We're going to talk about that. Supercloud is meant to connote a new architecture that leverages the underlying primitives of hyperscale clouds but hides and abstracts the complexity of each of their respective clouds, and adds new value on top of that with services and a continuous experience, a similar or identical experience across more than one cloud. People may say, "Hey, that's multi-cloud," and we're going to talk about that as well. So with that as brief background, I'd like to first welcome our panelists. Guys, thanks so much for coming on theCUBE, it's great to see you all again. >> Great to be here. >> Thank you for having me. >> So I'm going to start with Maribel. You know what I just described, what's your reaction to that? Is it just what cloud is supposed to be? Is that really what multi-cloud is? Do you agree with the premise that multi-cloud has really been, as Chuck Whitten from Dell calls it, multi-cloud by default, what I call a symptom of multi-vendor? What's your take on what this is? >> Oh wow, Dave, another term, here we go, more to define for people. But okay, the reality is, I agree that it's time for something new, something evolved, whether we call that supercloud or something else. I don't really want to debate the term, but we need to move beyond where we are today in multi-cloud and into, if we want to call it cloud 5, multi-cloud 2, whatever we want to call it, I believe we're at the next generation, and we have to define what that next generation is. If you think about it, we went from public to private to hybrid to multi, and every time you have a discussion with somebody about cloud you spend 10 minutes defining what you're talking about. So this doesn't seem any different to me. Let's just go with supercloud for the moment and see where we go, and if you're interested, after everybody else makes their comments, I've got a few thoughts about what supercloud might mean as well. >> Yeah, great, and I agree with you. Like I said in a recent post, you could call it multi-cloud 2.0, but something different is happening. And Sanjeev, I know you're not a big fan of buzzwords either, but I wonder if you could weigh in on this topic. By the way, Sanjeev is at the MIT CDOIQ conference, a great conference in Boston, so he's in a public place and we may have to mute his line when he's not speaking. Please, go ahead. >> Yeah, so I come from a pedigree of being an analyst at firms that love inventing new terms. I am not a big fan of inventing new terms. I feel that when we come up with a new term, I spend all my time standing on a stage trying to define what it is, and it takes me away from trying to solve the problem. So I find these terms to be words of convenience. For example, big data. Big data to me may not mean anything, but big data connotes some of this modern way of handling vast volumes of data that traditional systems could not handle. So from that point of view, I'm completely okay with supercloud. But just inventing a new term is what I have called in my previous sessions the tyranny of jargons, where we have just too many jargons, and they resonate with IT people, they do not resonate with the business people. Business people care about the problem, they don't care about what we in IT call it. >> Yeah, and I think this is a really important point that you make. And by the way, we're not trying to create a new industry category per se; we leave that to Gartner. That's actually why I like supercloud, because no vendor's going to use the term supercloud, it's just too buzzy. But it brings up the point about practitioners, so Keith, I want to bring you in. I'll share some thoughts on the problems that we see and get your practitioner view. Most companies use multiple clouds, I think we all kind of agree on that, and largely these clouds operate in silos. They have their own development environment, their own operating environment, different APIs, different primitives, and the functionality of a particular cloud doesn't necessarily extend to other clouds. So the problem is that increases friction for customers, increases cost, increases security risk. And so there's this promise, Maribel, multi-cloud 2.0, that's going to solve that problem. So Keith, my question to you: is that an accurate description of the problem that practitioners face today? What did I miss? And I wonder if you could elaborate. >> So I think we'll get into some of the detail later on about why this is a problem, specifically around technologies, but if we think about it in the abstract, most customers have their hands full dealing with one cloud. And through M&A and such, when you zoom in and look at companies that have multiple clouds, or multi-cloud as a result of M&A activity, you'll see that most of that is in silos. So organizationally the customer may have multiple clouds, but they're generally a single silo in a single cloud. So as you think about being able to take advantage of tooling across the multi-cloud, of what you guys are calling the supercloud, this becomes a serious problem. It's a skills problem, it's too much capability across too many things that look completely different from one another. >> Okay, so Dave, can I pick up on that, please? >> I was going to go to you anyway. Maribel, please chime in. >> Okay, so if we think about what we're talking about with supercloud, and what Keith just mentioned, remember when we went to TCP/IP, the whole idea was, how do we get computers to talk to each other in a more standardized way, how do we get data to move in a more standardized way? I think the problem we have with multi-cloud right now is that we don't have that. So I think that's sort of a ground level of getting us to your supercloud premise. Every hyperscaler has tried their write-once-run-anywhere; Google's tried it with Anthos. But that abstraction layer you talk about, whatever we want to call it, is super necessary, and it's sort of the foundation. If you really think about it, we've spent 15 years or so building out all the various components of cloud, and now's the time to take it so that cloud is actually more of an operating model versus a place. There's at least a base level of it that is vendor neutral, and then, to your point, the value is going to be built on top of that. People have been trying to commoditize the basic infrastructure for a while now, and I think that's what you're seeing in your supercloud, multi-cloud, whatever you want to call it. The infrastructure is the infrastructure, and then what would have been traditionally that PaaS layer and above is where we're going to start to see some real innovation. But we still haven't gotten to the point where you can do visibility, observability, manageability across that really complex cloud stack that we have. >> The reason I love that TCP/IP example is because it changed the industry and it had an ecosystem effect. And Sanjeev, the first example that I used was Snowflake, a company that you're very familiar with, that is sort of hiding all that complexity. We're not there yet, but please chime in on this topic; you've got to unmute again. >> Yeah, so building upon what Maribel said, to me this sounds like a multi-cloud operating system, where you need that kind of a common set of primitives and layers. Because if you go the typical multi-cloud route, you've got multiple identities, and you can't have that. How can I govern if I have multiple identities? I don't have observability, I don't know what's going on across my different stacks. So to me, supercloud is that, call it single pane of glass, or one way through which I'm unifying my experience, my technology interfaces, my integration, and I as an end user don't even care which cloud I'm in; it makes no difference to me. It makes a difference to the vendor; the vendor may say this is coming from AWS and this is coming from GCP or Azure, but to the end user it is a consistent experience with consistent identity and observability and governance. So that to me makes a big difference. >> And so one of David Floyer's contributions to this conversation was that in order to have a supercloud you've got to have a super PaaS. I'm like, oh boy, people are going to love that, but the point being that that allows a consistent developer experience, and to Maribel's earlier point about TCP/IP, it explodes the ecosystem, because the ecosystem can now write to that super PaaS, if you will, those APIs. So Keith, do you buy that, number one? And number two, do you see that industries, financial services and healthcare, are actually going to be building clouds, or what we call superclouds? >> So Sanjeev hit on a really key aspect of this, which is identity. Let's make this real. We love to talk about data collaboration, and I love Sanjeev's point that the business user kind of doesn't care if this is AWS versus supercloud versus et cetera. I was collaborating with a client and he wanted to send a video file, and his organization's access control policy didn't allow him to upload or share the file from their preferred platform. So he had to go out to another cloud provider and create yet another identity for that data on that cloud. Same data, different identity. A proper supercloud will enable me to simply say, as an end user, here's a set of data or data sets and I want to share them with a collaborator, and that requires cross identity across multiple clouds. So even before we get to the PaaS layer and the APIs, we have to solve the most basic problem, which is data. How do we stop data scientists from shipping Snowballs to a location because we can't figure out the identity? We're duplicating the same data within the same cloud because we can't share identity across customer accounts, et cetera. We have to solve these basic things before we get to supercloud, otherwise we get turtles all the way down. We'll get into Snowflake and what Snowflake can do, but that's what happens when I want to share my Snowflake data across multiple clouds with a different platform. >> Yeah, you have to go inside the Snowflake cloud. So I would say, to Keith's question, Sanjeev, Snowflake I think is solving that problem, but then he brings up the other problem, which is, what if I want to share data outside the Snowflake cloud? That gets to the point of, is it open, is it closed? So Sanjeev, chime in on the Snowflake example, and Maribel, I wonder if there are networking examples, because Keith is saying you've got to fix the plumbing before you get these higher level abstractions. But Sanjeev first. >> Yeah, so I actually want to talk a little bit about network, but from a data and analytics point of view, building upon what Keith said. I want to give an example. Let's say I am getting fantastic web logs; I know how much time people are spending on my web pages and which pages they're looking at, and all of that is going into cloud A. Now it turns out that I use Google Analytics, or maybe I use Adobe's analytics suite, and that is giving me the business view, and I'm trying to do customer journey analytics. Guess what, I now have two separate identities, two separate products, two separate clouds. As an IT person, no problem, I can solve any problem by writing tons of code, but why would I do that if I can have that super PaaS, or multi-cloud layer, where I've got a single way of looking at my network traffic and my customer metrics, and I can do my customer journey analytics? It solves a huge problem, and then I can share that data with my partners so they can see data about their products, which is a combination of data from different clouds.
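To make that "single way of looking at my data" idea concrete, here is a minimal sketch of what a thin cross-cloud view could look like from the developer's side. Every class, method and metric name here is hypothetical and invented for illustration; it is not any vendor's actual API, and real adapters would wrap each cloud's SDK and identity model.

```python
from dataclasses import dataclass
from typing import Dict, Protocol

class MetricsSource(Protocol):
    """Anything that can return a metric for a federated user identity."""
    def fetch(self, user_id: str, metric: str) -> float: ...

@dataclass
class CloudAWeblogs:
    """Stub for raw web-log metrics landing in cloud A."""
    def fetch(self, user_id: str, metric: str) -> float:
        return 42.0   # e.g. seconds on page, pulled from cloud A's store

@dataclass
class CloudBAnalytics:
    """Stub for a SaaS analytics suite running in cloud B."""
    def fetch(self, user_id: str, metric: str) -> float:
        return 3.0    # e.g. sessions, pulled from cloud B's API

class UnifiedJourneyView:
    """One logical view over sources in different clouds, keyed by a single
    federated identity instead of one identity per cloud."""
    def __init__(self, sources: Dict[str, MetricsSource]):
        self.sources = sources

    def customer_journey(self, user_id: str) -> Dict[str, float]:
        return {name: src.fetch(user_id, "engagement")
                for name, src in self.sources.items()}

view = UnifiedJourneyView({"weblogs": CloudAWeblogs(), "analytics": CloudBAnalytics()})
print(view.customer_journey("federated-user-123"))
```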
they created a platform and it took a while for anybody else to catch up to that or to have that kind of presence and i still feel that way when i talk to companies but having said that i talked to retail the other day and they were like hey we spent a long time building an abstraction layer on top of the clouds so that our developers could basically write once and run anywhere but they were a massive global presence retailer that's not something that everybody can do so i think that we are still missing a gap i don't know if that exactly answers your question but i i do feel like we're kind of in this chicken and egg thing which comes first and nobody wants to necessarily invest in like oh well you know amazon has built a way to do this so we're all just going to do it the amazon way right it seems like that's not going to work either but i think you bring up a really important point which there is going to be no one ring to rule them all you're going to have you know vmware is going to solve its multi-cloud problem snowflake's going to do a very has a very specific you know purpose-built system for it itself databricks is going to do its thing and it's going to be you know more open source i would companies like aviatrix i would say cisco even is going to go out and solve this problem dell showed at uh at dell tech world a thing called uh project alpine which is basically storage across clouds they're going to be many super clouds we're going to get maybe super cloud stove pipes but but the point is however for a specific problem in a set of use cases they will be addressing those and solving incremental value so keith maybe we won't have that single cloud operating you know system but we'll have multiple ones what are your thoughts on that yeah we're definitely going to have multiple ones uh the there is no um there is no community large enough or influential enough to push a design take maribel's example of the mega retailer they've solved it but they're not going to that's that's competitive that's their competitive advantage they're not going to share that with the rest of us and open source that and force that upon the industry via just agreement from everyone else so we're not going to get uh the level of collaboration either originated by the cloud provider originated from user groups that solves this problem big for us we will get silos in which this problem is solved we'll get groups working together inside of maybe uh industry or subgroups within the industry to say that hey we're going to share or federate identity across our three or four or five or a dozen organizations we'll be able to share data we're going to solve that data problem but in the same individual organizations in another part of the super cloud problem are going to again just be silos i can't uh i can't run machine learning against my web assets for the community group that i run because that's not part of the working group that solved a different data science problem so yes we're going to have these uh bifurcations and forks within the super cloud the question is where is the focus for each individual organization where do i point my smart people and what problems they solve okay i want to throw out a premise and get you guys reaction to it because i think this again i go back to the maribel's tcpip example it changed the industry it opened up an ecosystem and to me this is what digital transformation is all about you've got now industry participants marc andreessen says every company is a software company 
you've now got industry participants and here's some examples it's not i wouldn't call them true super clouds yet but walmart's doing their hybrid thing with azure you got goldman sachs announced at the last reinvent and it's going to take its tools its software its data and which is on-prem and connect that to the aws cloud and actually deliver a service capital one we saw sanjiv at the snowflake summit is is taking their tooling and doing it now granted just within snowflake and aws but i fully expect them to expand that across other clouds these are industry examples capital one software is the name of the division that are now it's to the re reason why i don't get so worried that we're not solving the lord of the rings problem that maribel mentioned is because it opens up tremendous opportunities for companies we got like just under five minutes left i want to throw that out there and see what you guys think yeah i would just i want to build upon what maribel said i love what she said you're not going to build a mouse driver so if multi-cloud supercloud is a multi-cloud os the mouse driver would be identity or maybe it's data quality and to teach point that data quality is not going to come from a single vendor that is going to come from a different vendor whose job is to to harmonize data because there might be data might be for the same identity but it may be a different granularity level so you cannot just mix and match so you need to have some sort of like resolution and that is is an example of a driver for multi-cloud interesting okay so you know octa might be the identity cloud or z scaler might be the security cloud or calibre has its cloud etc any thoughts on that keith or maribel yeah so let's talk about where the practical challenges run into this we did some really great research that was sponsored by one of the large cloud providers in which we took all we looked at all the vmware cloud solutions when i say vmware cloud vmware has a lot of products across multi-cloud now in the rock broadcloud portfolio but we're talking about the og solution vmware vsphere it would seem like on paper if i put vmware vsphere in each cloud that is therefore a super cloud i think we would all agree to that in principle what we found in our research was that when we put hands on keyboard the differences of the clouds show themselves in the training gap and that skills gap between the clouds show themselves if i needed to expose less our favorite friend a friend a tc pip address to the public internet that is a different process on each one of the clouds that needs to be done on each one of the clouds and not abstracted in vmware vsphere so as we look at the nuance yes we can give the big controls but where the capital ones the uh jp morgan chase just spent two billion dollars on this type of capability where the spin effort is done is taking it from that 80 percent to that 90 95 experience and that's where the effort and money is spent on that last mile maribel we're out of time but please you know bring us home give us your closing thoughts hey i think we're still going to be working on what the multi-cloud thing is for a while and you know super cloud i think is a direction of the future of cloud computing but we got some real problems to solve around authentication uh identity data lineage data security so i think those are going to be sort of the tactical things that we're working on for the next couple years right guys always a pleasure having you on the cube i hope we see you around 
>> Keith, I understand you're bringing your Airstream to VMworld, or VMware Explore, and putting it on the floor. I can't wait to see that, and Mrs. CTO Advisor I'm sure will be by your side, so looking forward to that. Hopefully, Sanjeev and Maribel, we'll see you on the circuit as well. >> Yes, hope to see you there. >> Right, looking forward to hopefully even doing some content with you guys at VMware Explore too. >> Awesome, looking forward. >> All right, keep it right there for more content from Supercloud 22, right back. (upbeat music)
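Maribel's retailer example above, an abstraction layer on top of the clouds so developers write once and run anywhere, is easier to picture with a small sketch. The following is a minimal illustration under stated assumptions, not the retailer's actual design: the class, bucket, and container names are hypothetical, only the object-storage slice of the problem is shown, and the AWS and Azure calls are the standard boto3 and azure-storage-blob ones.

```python
# Minimal "write once, run on any cloud" sketch: the application codes against one
# interface, and thin per-cloud adapters hide the provider-specific SDK calls.
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """Cloud-neutral interface the application developers code against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...


class S3Store(ObjectStore):
    def __init__(self, bucket: str):
        import boto3  # assumes AWS credentials are configured in the environment
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)


class AzureBlobStore(ObjectStore):
    def __init__(self, connection_string: str, container: str):
        from azure.storage.blob import BlobServiceClient
        self._svc = BlobServiceClient.from_connection_string(connection_string)
        self._container = container

    def put(self, key: str, data: bytes) -> None:
        blob = self._svc.get_blob_client(container=self._container, blob=key)
        blob.upload_blob(data, overwrite=True)


def save_order(store: ObjectStore, order_id: str, payload: bytes) -> None:
    # Application code never touches a provider SDK directly.
    store.put(f"orders/{order_id}.json", payload)
```

Keith's caution still applies: the moment the task is exposing a TCP/IP address or federating identity, each cloud's process differs, and a thin adapter like this stops being enough.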

Published Date : Jul 20 2022


Breaking Analysis: Broadcom, Taming the VMware Beast


 

>> From theCUBE studios in Palo Alto and Boston, bringing you data driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> In the words of my colleague CTO David Nicholson, Broadcom buys old cars, not to restore them to their original luster and beauty. Nope. They buy classic cars to extract the platinum that's inside the catalytic converter and monetize that. Broadcom's planned $61 billion acquisition of VMware will mark yet another new era and chapter for the virtualization pioneer, a mere seven months after finally getting spun out as an independent company by Dell. For VMware, this means a dramatically different operating model with financial performance and shareholder value creation as the dominant and perhaps the sole agenda item. For customers, it will mean a more focused portfolio, less aspirational vision pitches, and most certainly higher prices. Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, we'll share data, opinions and customer insights about this blockbuster deal and forecast the future of VMware, Broadcom and the broader ecosystem. Let's first look at the key deal points, it's been well covered in the press. But just for the record, $61 billion in a 50/50 cash and stock deal, resulting in a blended price of $138 per share, which is a 44% premium to the unaffected price, i.e. prior to the news breaking. Broadcom will assume 8 billion of VMware debt and promises that the acquisition will be immediately accretive and will generate 8.5 billion in EBITDA by year three. That's more than 4 billion in EBITDA relative to VMware's current performance today. In a classic Broadcom M&A approach, the company promises to delever debt and maintain investment grade ratings. They will rebrand their software business as VMware, which will now comprise about 50% of revenues. There's a 40 day go shop and importantly, Broadcom promises to continue to return 60% of its free cash flow to shareholders in the form of dividends and buybacks. Okay, with that out of the way, we're going to get to the money slide literally in a moment that Broadcom shared on its investor call. Broadcom has more than 20 business units. Its CEO Hock Tan makes it really easy for his business unit managers to understand. Rule number one, you agree to an operating plan with targets for revenue, growth, EBITDA, et cetera, hit your numbers consistently and we're good. You'll be very well compensated and life will be wonderful for you and your family. Miss the number, and we're going to have a frank and uncomfortable bottom line discussion. You'll have four, perhaps five quarters to turn your business around, if you don't, we'll kill it or sell it if we can. Rule number two, refer to rule number one. Hello, VMware, here's the money slide. I'll interpret the bullet points on the left for clarity. Your fiscal year 2022 EBITDA was 4.7 billion. By year three, it will be 8.5 billion. And we Broadcom have four knobs to turn with you, VMware, to help you get there. First knob, if it ain't recurring revenue with rubber stamp renewals, we're going to convert that revenue or kill it. Knob number two, we're going to focus R&D in the most profitable areas of the business. AKA expect the R&D budget to be cut. Number three, we're going to spend less on sales and marketing by focusing on existing customers. We're not going to lose money today and try to make it up many years down the road. And number four, we run Broadcom with 1% G&A. You will too. Any questions?
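For readers who want to sanity-check the deal arithmetic above, here is the back-of-envelope version; the figures are the ones quoted in this episode, and the calculation itself is ours.

```python
# Deal terms as stated above (Broadcom / VMware, announced 2022).
blended_price = 138.00   # dollars per share, 50/50 cash and stock
premium = 0.44           # 44% premium to the unaffected price

# Implied unaffected share price, i.e. the price before the news broke.
unaffected_price = blended_price / (1 + premium)
print(f"Implied unaffected price: ${unaffected_price:.2f} per share")  # roughly $96
```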
Good. Now, just to give you a little sense of how Broadcom runs its business and how well run a company it is, let's do a little simple comparison with this financial snapshot. All we're doing here is taking the most recent quarterly earnings reports from Broadcom and VMware respectively. We take the quarterly revenue and multiply by 4x to get the revenue run rate and then we calculate the ratios off of the most recent quarter's revenue. It's worth spending some time on this to get a sense of how profitable the Broadcom business actually is and what the spreadsheet gurus at Broadcom are seeing with respect to the possibilities for VMware. So combined, we're talking about a 40 plus billion dollar company. Broadcom is growing at more than 20% per year. Whereas VMware's latest quarter showed a very disappointing 3% growth. Broadcom is mostly a hardware company, but its gross margin is in the high seventies. As a software company of course VMware has higher gross margins, but FYI, Broadcom's software business, the remains of Symantec and what they purchased as CA, has 90% gross margin. But the eye-popper is operating margin. This is all non-GAAP. So it excludes things like stock based compensation, but Broadcom had 61% operating margin last quarter. This is insanely off the charts compared to VMware's 25%. Oracle's non-GAAP operating margin is 47% and Oracle is an incredibly profitable company. Now the red box is where the cuts are going to take place. Broadcom doesn't spend much on marketing. It doesn't have to. Its SG&A is 3% of revenue versus 18% for VMware and R&D spend is almost certainly going to get cut. The other eye-popper is free cash flow as a percentage of revenue at 51% for Broadcom and 29% for VMware. 51%. That's incredible. And that my dear friends is why Broadcom, a company with just under 30 billion in revenue, has a market cap of 230 billion. Let's dig into the VMware portfolio a bit more and identify the possible areas that will be placed under the microscope by Hock Tan and his managers. The data from ETR's latest survey shows the net score or spending momentum across VMware's portfolio in this chart, net score essentially measures the net percent of customers that are spending more on a specific product or vendor. The yellow bar is the most recent survey and compares the April '22 survey data to April '21 and January of '22. Everything is down in the yellow from January, not surprising given the economic outlook and the change in spending patterns that we've reported. VMware Cloud on AWS remains the product in the ETR survey with the most momentum. It's the only offering in the portfolio with spending momentum above the 40% line, a level that we consider highly elevated. Unified Endpoint Management looks more than respectable, but that business is a rock fight with Microsoft. VMware Cloud is things like VMware Cloud Foundation, VCF, and VMware's cross cloud offerings. NSX came from the Nicira acquisition. Tanzu is not yet pervasive and one wonders if VMware is making any money there. Server is ESX and vSphere and is the bread and butter. That is where Broadcom is going to focus. It's going to look at vSAN and NSX, which is software, probably profitable. And of course the other products and see if the investments are paying off, if they are Broadcom will keep them, if they are not, you can bet your socks, they will be sold off or killed. Carbon Black is at the far right. VMware paid $2.1 billion for Carbon Black.
And it's the lowest performer on this list in terms of net score or spending momentum. And that doesn't mean it's not profitable. It just doesn't have the momentum you'd like to see, so you can bet that is going to get scrutiny. Remember VMware's growth has been under pressure for the last several years. So it's been buying companies, dozens of them. It bought AirWatch, bought Heptio, Carbon Black, Nicira, SaltStack, Datrium, Versedo, Bitnami, and on and on and on. Many of these were to pick up engineering teams. Some of them were to drive new revenue. Now this is definitely going to be scrutinized by Broadcom. So that helps explain why Michael Dell would sell VMware. And where does VMware go from here? It's got great core product. It's an iconic name. It's got an awesome ecosystem, fantastic distribution channel, but its growth is slowing. It's got limited developer chops in a world that developers and cloud native is all the rage. It's got a far flung R&D agenda going at war with a lot of different places. And it's increasingly fighting this multi front war with cloud companies, companies like Cisco, IBM Red Hat, et cetera. VMware's kind of becoming a heavy lift. It's a perfect acquisition target for Broadcom and why the street loves this deal. And we titled this Breaking Analysis taming the VMware beast because VMware is a beast. It's ubiquitous. It's an epic software platform. EMC couldn't control it. Dell used it as a piggy bank, but really didn't change its operating model. Broadcom 100% will. Now one of the things that we get excited about is the future of systems architectures. We published a breaking analysis about a year ago, talking about AWS's secret weapon with Nitro and it's Annapurna custom Silicon efforts. Remember it acquired Annapurna for a measly $350 million. And we talked about how there's a new architecture and a new price performance curve emerging in the enterprise, driven by AWS and being followed by Microsoft, Google, Alibaba, a trend toward custom Silicon with the arm based Nitro and which is AWS's hypervisor and Nick strategy, enabling processor diversity with things like Graviton and Trainium and other diverse processors, really diversifying away from x86 and how this leads to much faster product cycles, faster tape out, lower costs. And our premise was that everyone in the data center is going to competes, is going to need a Nitro to be competitive long term. And customers are going to gravitate toward the most economically favorable platform. And as we describe the landscape with this chart, we've updated this for this Breaking Analysis and we'll come back to nitro in a moment. This is a two dimensional graphic with net score or spending momentum on the vertical axis and overlap formally known as market share or presence within the survey, pervasiveness that's on the horizontal axis. And we plot various companies and products and we've inserted VMware's net score breakdown. The granularity in those colored bars on the bottom right. Net score is essentially the green minus the red and a couple points on that. VMware in the latest survey has 6% new adoption. That's that lime green. It's interesting. The question Broadcom is going to ask is, how much does it cost you to acquire that 6% new. 32% of VMware customers in the survey are increasing spending, meaning they're increasing spending by 6% or more. That's the forest green. And the question Broadcom will dig into is what percent of that increased spend (chuckles) you're capturing is profitable spend? 
Whatever isn't profitable is going to be cut. Now that 52% gray area flat spending that is ripe for the Broadcom picking, that is the fat middle, and those customers are locked and loaded for future rent extraction via perpetual renewals and price increases. Only 8% of customers are spending less, that's the pinkish color and only 3% are defecting, that's the bright red. So very, very sticky profile. Perfect for Broadcom. Now the rest of the chart lays out some of the other competitor names and we've plotted many of the VMware products so you can see where they fit. They're all pretty respectable on the vertical axis, that's spending momentum. But what Broadcom wants is that core ESX vSphere base where we've superimposed the Broadcom logo. Broadcom doesn't care so much about spending momentum. It cares about profitability potential and then momentum. AWS and Azure, they're setting the pace in this business, in the upper right corner. Cisco very huge presence in the data center, as does Intel, they're not in the ETR survey, but we've superimposed them. Now, Intel of course, is in a dog fight within Nvidia, the Arm ecosystem, AMD, don't forget China. You see a Google cloud platform is in there. Oracle is also on the chart as well, somewhat lower on the vertical axis, but it doesn't have that spending momentum, but it has a big presence. And it owns a cloud as we've talked about many times and it's highly differentiated. It's got a strategy that allows it to differentiate from the pack. It's very financially driven. It knows how to extract lifetime value. Safra Catz operates in many ways, similar to what we're seeing from Hock Tan and company, different from a portfolio standpoint. Oracle's got the full stack, et cetera. So it's a different strategy. But very, very financially savvy. You could see IBM and IBM Red Hat in the mix and then Dell and HP. I want to come back to that momentarily to talk about where value is flowing. And then we plotted Nutanix, which with Acropolis could suck up some V tax avoidance business. Now notice Symantec and CA, relatively speaking in the ETR survey, they have horrible spending momentum. As we said, Broadcom doesn't care. Hock Tan is not going for growth at the expense of profitability. So we fully expect VMware to come down on the vertical axis over time and go up on the profit scale. Of course, ETR doesn't measure the profitability here. Now back to Nitro, VMware has this thing called Project Monterey. It's essentially their version of Nitro and will serve as their future architecture diversifying off x86 and accommodating alternative processors. And a much more efficient performance, price in energy consumption curve. Now, one of the things that we've advocated for, we said this about Dell and others, including VMware to take a page out of AWS and start developing custom Silicon to better integrate hardware and software and accelerate multi-cloud or what we call supercloud. That layer above the cloud, not just running on individual clouds. So this is all about efficiency and simplicity to own this space. And we've challenged organizations to do that because otherwise we feel like the cloud guys are just going to have consistently better costs, not necessarily price, but better cost structures, but it begs the question. What happens to Project Monterey? Hock Tan and Broadcom, they don't invest in something that is unproven and doesn't throw off free cash flow. If it's not going to pay off for years to come, they're probably not going to invest in it. 
And yet Project Monterey could help secure VMware's future in not only the data center, but at the edge and compete more effectively with cloud economics. So we think either Project Monterey is toast or the VMware team will knock on the door of one of Broadcom's 20 plus business units and say, guys, what if we work together with you to develop a version of Monterey that we can use and sell to everyone, it'd be the arms dealer to everyone and be competitive with the cloud and other players out there and create the de facto standard for data center performance and supercloud. I mean, it's not outrageously expensive to develop custom Silicon. Tesla is doing it for example. And Broadcom obviously is capable of doing it. It's got good relationships with semiconductor fabs. But I think this is going to be a tough sell to Broadcom, unless VMware can hide this in plain site and make it profitable fast, like AWS most likely has with Nitro and Graviton. Then Project Monterey and our pipe dream of alternatives to Nitro in the data center could happen but if it can't, it's going to be toast. Or maybe Intel or Nvidia will take it over or maybe the Monterey team will spin out a VMware and do a Pensando like deal and demonstrate the viability of this concept and then Broadcom will buy it back in 10 years. Here's a double click on that previous data that we put in tabular form. It's how the data on that previous slide was plotted. I just want to give you the background data here. So net score spending momentum is the sorted on the left. So it's sorted by net score in the left hand chart, that was the y-axis in the previous data set and then shared and or presence in the data set is the right hand chart. In other words, it's sorted on the right hand chart, right hand table. That right most column is shared and you can see it's sorted top to bottom, and that was the x-axis on the previous chart. The point is not many on the left hand side are above the 40% line. VMware Cloud on AWS is, it's expensive, so it's probably profitable and it's probably a keeper. We'll see about the rest of VMware's portfolio. Like what happens to Tanzu for example. On the right, we drew a red line, just arbitrarily at those companies and products with more than a hundred mentions in the survey, everything but Tanzu from VMware makes that cut. Again, this is no indication of profitability here, and that's what's going to matter to Broadcom. Now let's take a moment to address the question of Broadcom as a software company. What the heck do they know about software, right. Well, they're not dumb over there and they know how to run a business, but there is a strategic rationale to this move beyond just doing portfolios and extracting rents and cutting R&D, et cetera, et cetera. Why, for example, isn't Broadcom going after coming back to Dell or HPE, it could pick up for a lot less than VMware, and they got way more revenue than VMware. Well, it's obvious, software's more profitable of course, and Broadcom wants to move up the stack, but there's a trend going on, which Broadcom is very much in touch with. First, it sells to Dell and HPE and Cisco and all the OEM. so it's not going to disrupt that. But this chart shows that the value is flowing away from traditional servers and storage and networking to two places, merchant Silicon, which itself is morphing. Broadcom... We focus on the left hand side of this chart. 
Broadcom correctly believes that the world is shifting from a CPU centric center of gravity to a connectivity centric world. We've talked about this on theCUBE a lot. You should listen to Broadcom COO Charlie Kawwas speak about this. It's all that supporting infrastructure around the CPU where value is flowing, including of course, alternative GPUs and XPUs, and NPUs et cetera, that are sucking the value out of the traditional x86 architecture, offloading some of the security and networking and storage functions that traditionally have been done in x86 which are part of the waste right now in the data center. This is that shifting dynamic of Moore's law. Moore's law, not keeping pace. It's slowing down. It's slower relative to some of the combinatorial factors. When you add up in all the CPU and GPU and NPU and accelerators, et cetera. So we've talked about this a lot in Breaking Analysis episodes. So the value is shifting left within that middle circle. And it's shifting left within that left circle toward components, other than CPU, many of which Broadcom supplies. And then you go back to the middle, value is shifting from that middle section, that traditional data center up into hyperscale clouds, and then to the right toward infrastructure software to manage all that equipment in the data center and across clouds. And look Broadcom is an arms dealer. They simply sell to everyone, locking up key vectors of the value chain, cutting costs and raising prices. It's a pretty straightforward strategy, but not for the fate of heart. And Broadcom has become pretty good at it. Let's close with the customer feedback. I spoke with ETRs Eric Bradley this morning. He and I both reached out to VMware customers that we know and got their input. And here's a little snapshot of what they said. I'll just read this. Broadcom will be looking to invest in the core and divest of any underperforming assets, right on. It's just what we were saying. This doesn't bode well for future innovation, this is a CTO at a large travel company. Next comment, we're a Carbon Black customer. VMware didn't seem to interfere with Carbon Black, but now that we're concerned about short term disruption to their tech roadmap and long term, are they going to split and be sold off like Symantec was, this is a CISO at a large hospitality organization. Third comment, I got directly from a VMware practitioner, an IT director at a manufacturing firm. This individual said, moving off VMware would be very difficult for us. We have over 500 applications running on VMware, and it's really easy to manage. We're not going to move those into the cloud and we're worried Broadcom will raise prices and just extract rents. Last comment, we'll share as, Broadcom sees the cloud data center and IoT is their next revenue source. The VMware acquisition provides them immediate virtualization capabilities to support a lightweight IoT offering. Big concern for customers is what technology they will invest in and innovate, and which will be stripped off and sold. Interesting. I asked David Floyer to give me a back of napkin estimate for the following question. I said, David, if you're running mission critical applications on VMware, how much would it increase your operating cost moving those applications into the cloud? Or how much would it save? And he said, Dave, VMware's really easy to run. It can run any application pretty much anywhere, and you don't need an army of people to manage it. All your processes are tied to VMware, you're locked and loaded. 
Move that into the cloud and your operating cost would double by his estimates. Well, there you have it. Broadcom will pinpoint the optimal profit maximization strategy and raise prices to the point where customers say, you know what, we're still better off staying with VMware. And sadly, for many practitioners there aren't a lot of choices. You could move to the cloud and increase your cost for a lot of your applications. You could do it yourself with, say, Xen or OpenStack. Good luck with that. You could tap Nutanix. That will definitely work for some applications, but are you going to move your entire estate, your application portfolio to Nutanix? It's not likely. So you're going to pay more for VMware and that's the price you're going to pay for two decades of better IT. So our advice is get out ahead of this, do an application portfolio assessment. If you can move apps to the cloud for less, and you haven't yet, do it, start immediately. Definitely give Nutanix a call, but you're going to have to be selective as to what you actually can move. Forget porting to OpenStack, or a do-it-yourself hypervisor, don't even go there. And start building new cloud native apps where it makes sense and let the VMware stuff go into managed decline. Let certain apps just die through attrition, shift your development resources to innovation in the cloud and build a brick wall around the stable apps with VMware. As Paul Maritz, the former CEO of VMware said, "We are building the software mainframe". Now marketing guys got a hold of that and said, Paul, stop saying that, but it's true. And with Broadcom's help that day will soon be here. That's it for today. Thanks to Stephanie Chan who helps research our topics for Breaking Analysis. Alex Myerson does the production and he also manages the Breaking Analysis podcast. Kristen Martin and Cheryl Knight help get the word out on social and thanks to Rob Hof, who is our editor in chief at siliconangle.com. Remember, these episodes are all available as podcasts, wherever you listen, just search Breaking Analysis podcast. Check out ETR's website at etr.ai for all the survey action. We publish a full report every week on wikibon.com and siliconangle.com. You can email me directly at david.vellante@siliconangle.com. You can DM me at DVellante or comment on our LinkedIn posts. This is Dave Vellante for theCUBE Insights powered by ETR. Have a great week, stay safe, be well. And we'll see you next time. (upbeat music)
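The closing advice above, do an application portfolio assessment and get out ahead of the renewal cycle, reduces to a simple triage loop. The sketch below is illustrative only: the application names and cost figures are made up, the doubling factor comes from David Floyer's back-of-napkin estimate quoted earlier, and a real assessment would weigh far more than two variables.

```python
# Toy triage of a VMware estate: bucket each app by the cheapest reasonable landing spot.
from dataclasses import dataclass


@dataclass
class App:
    name: str
    annual_vmware_cost: float     # current run cost on VMware, dollars per year
    est_cloud_native_cost: float  # estimated cost if rebuilt on cloud-native services
    mission_critical: bool


def disposition(app: App) -> str:
    # Lift-and-shift of a VMware workload roughly doubles operating cost (Floyer's estimate).
    lift_and_shift_cost = app.annual_vmware_cost * 2.0
    if app.est_cloud_native_cost < app.annual_vmware_cost:
        return "move to cloud now"
    if app.mission_critical:
        return "stay on VMware, build the brick wall around it"
    if app.est_cloud_native_cost < lift_and_shift_cost:
        return "candidate to re-platform (e.g. Nutanix or cloud-native rebuild)"
    return "managed decline, let it age out"


portfolio = [
    App("order-entry", 400_000, 250_000, mission_critical=False),
    App("core-erp", 900_000, 1_500_000, mission_critical=True),
]
for a in portfolio:
    print(a.name, "->", disposition(a))
```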

Published Date : May 28 2022


Analyst Power Panel: Future of Database Platforms


 

(upbeat music) >> Once a staid and boring business dominated by IBM, Oracle, and at the time newcomer Microsoft, along with a handful of wannabes, the database business has exploded in the past decade and has become a staple of financial excellence, customer experience, analytic advantage, competitive strategy, growth initiatives, visualizations, not to mention compliance, security, privacy and dozens of other important use cases and initiatives. And on the vendor's side of the house, we've seen the rapid ascendancy of cloud databases. Most notably from Snowflake, whose massive raises leading up to its IPO in late 2020 sparked a spate of interest and VC investment in the separation of compute and storage and all that elastic resource stuff in the cloud. The company joined AWS, Azure and Google to popularize cloud databases, which have become a linchpin of competitive strategies for technology suppliers. And if I get you to put your data in my database and in my cloud, and I keep innovating, I'm going to build a moat and achieve a hugely attractive lifetime customer value in a really amazing marginal economics dynamic that is going to fund my future. And I'll be able to sell other adjacent services, not just compute and storage, but machine learning and inference and training and all kinds of stuff, dozens of lucrative cloud offerings. Meanwhile, the database leader, Oracle has invested massive amounts of money to maintain its lead. It's building on its position as the king of mission critical workloads and making typical Oracle like claims against the competition. Most were recently just yesterday with another announcement around MySQL HeatWave. An extension of MySQL that is compatible with on-premises MySQLs and is setting new standards in price performance. We're seeing a dramatic divergence in strategies across the database spectrum. On the far left, we see Amazon with more than a dozen database offerings each with its own API and primitives. AWS is taking a right tool for the right job approach, often building on open source platforms and creating services that it offers to customers to solve very specific problems for developers. And on the other side of the line, we see Oracle, which is taking the Swiss Army Knife approach, converging database functionality, enabling analytic and transactional workloads to run in the same data store, eliminating the need to ETL, at the same time adding capabilities into its platform like automation and machine learning. Welcome to this database Power Panel. My name is Dave Vellante, and I'm so excited to bring together some of the most respected industry analyst in the community. Today we're going to assess what's happening in the market. We're going to dig into the competitive landscape and explore the future of database and database platforms and decode what it means to customers. Let me take a moment to welcome our guest analyst today. Matt Kimball is a vice president and principal analysts at Moor Insights and Strategy, Matt. He knows products, he knows industry, he's got real world IT expertise, and he's got all the angles 25 plus years of experience in all kinds of great background. Matt, welcome. Thanks very much for coming on theCUBE. Holgar Mueller, friend of theCUBE, vice president and principal analyst at Constellation Research in depth knowledge on applications, application development, knows developers. He's worked at SAP and Oracle. 
And then Bob Evans is Chief Content Officer and co-founder of the Acceleration Economy, founder and principle of Cloud Wars. Covers all kinds of industry topics and great insights. He's got awesome videos, these three minute hits. If you haven't seen 'em, checking them out, knows cloud companies, his Cloud Wars minutes are fantastic. And then of course, Marc Staimer is the founder of Dragon Slayer Research. A frequent contributor and guest analyst at Wikibon. He's got a wide ranging knowledge across IT products, knows technology really well, can go deep. And then of course, Ron Westfall, Senior Analyst and Director Research Director at Futurum Research, great all around product trends knowledge. Can take, you know, technical dives and really understands competitive angles, knows Redshift, Snowflake, and many others. Gents, thanks so much for taking the time to join us in theCube today. It's great to have you on, good to see you. >> Good to be here, thanks for having us. >> Thanks, Dave. >> All right, let's start with an around the horn and briefly, if each of you would describe, you know, anything I missed in your areas of expertise and then you answer the following question, how would you describe the state of the database, state of platform market today? Matt Kimball, please start. >> Oh, I hate going first, but that it's okay. How would I describe the world today? I would just in one sentence, I would say, I'm glad I'm not in IT anymore, right? So, you know, it is a complex and dangerous world out there. And I don't envy IT folks I'd have to support, you know, these modernization and transformation efforts that are going on within the enterprise. It used to be, you mentioned it, Dave, you would argue about IBM versus Oracle versus this newcomer in the database space called Microsoft. And don't forget Sybase back in the day, but you know, now it's not just, which SQL vendor am I going to go with? It's all of these different, divergent data types that have to be taken, they have to be merged together, synthesized. And somehow I have to do that cleanly and use this to drive strategic decisions for my business. That is not easy. So, you know, you have to look at it from the perspective of the business user. It's great for them because as a DevOps person, or as an analyst, I have so much flexibility and I have this thing called the cloud now where I can go get services immediately. As an IT person or a DBA, I am calling up prevention hotlines 24 hours a day, because I don't know how I'm going to be able to support the business. And as an Oracle or as an Oracle or a Microsoft or some of the cloud providers and cloud databases out there, I'm licking my chops because, you know, my market is expanding and expanding every day. >> Great, thank you for that, Matt. Holgar, how do you see the world these days? You always have a good perspective on things, share with us. >> Well, I think it's the best time to be in IT, I'm not sure what Matt is talking about. (laughing) It's easier than ever, right? The direction is going to cloud. Kubernetes has won, Google has the best AI for now, right? So things are easier than ever before. You made commitments for five plus years on hardware, networking and so on premise, and I got gray hair about worrying it was the wrong decision. No, just kidding. But you kind of both sides, just to be controversial, make it interesting, right. So yeah, no, I think the interesting thing specifically with databases, right? We have this big suite versus best of breed, right? 
Obviously innovation, like you mentioned with Snowflake and others happening in the cloud, the cloud vendors server, where to save of their databases. And then we have one of the few survivors of the old guard as Evans likes to call them is Oracle who's doing well, both their traditional database. And now, which is really interesting, remarkable from that because Oracle it was always the power of one, have one database, add more to it, make it what I call the universal database. And now this new HeatWave offering is coming and MySQL open source side. So they're getting the second (indistinct) right? So it's interesting that older players, traditional players who still are in the market are diversifying their offerings. Something we don't see so much from the traditional tools from Oracle on the Microsoft side or the IBM side these days. >> Great, thank you Holgar. Bob Evans, you've covered this business for a while. You've worked at, you know, a number of different outlets and companies and you cover the competition, how do you see things? >> Dave, you know, the other angle to look at this from is from the customer side, right? You got now CEOs who are any sort of business across all sorts of industries, and they understand that their future success is going to be dependent on their ability to become a digital company, to understand data, to use it the right way. So as you outline Dave, I think in your intro there, it is a fantastic time to be in the database business. And I think we've got a lot of new buyers and influencers coming in. They don't know all this history about IBM and Microsoft and Oracle and you know, whoever else. So I think they're going to take a long, hard look, Dave, at some of these results and who is able to help these companies not serve up the best technology, but who's going to be able to help their business move into the digital future. So it's a fascinating time now from every perspective. >> Great points, Bob. I mean, digital transformation has gone from buzzword to imperative. Mr. Staimer, how do you see things? >> I see things a little bit differently than my peers here in that I see the database market being segmented. There's all the different kinds of databases that people are looking at for different kinds of data, and then there is databases in the cloud. And so database as cloud service, I view very differently than databases because the traditional way of implementing a database is changing and it's changing rapidly. So one of the premises that you stated earlier on was that you viewed Oracle as a database company. I don't view Oracle as a database company anymore. I view Oracle as a cloud company that happens to have a significant expertise and specialty in databases, and they still sell database software in the traditional way, but ultimately they're a cloud company. So database cloud services from my point of view is a very distinct market from databases. >> Okay, well, you gave us some good meat on the bone to talk about that. Last but not least-- >> Dave did Marc, just say Oracle's a cloud company? >> Yeah. (laughing) Take away the database, it would be interesting to have that discussion, but let's let Ron jump in here. Ron, give us your take. >> That's a great segue. I think it's truly the era of the cloud database, that's something that's rising. And the key trends that come with it include for example, elastic scaling. That is the ability to scale on demand, to right size workloads according to customer requirements. 
And also I think it's going to increase the prioritization for high availability. That is the player who can provide the highest availability is going to have, I think, a great deal of success in this emerging market. And also I anticipate that there will be more consolidation across platforms in order to enable cost savings for customers, and that's something that's always going to be important. And I think we'll see more of that over the horizon. And then finally security, security will be more important than ever. We've seen a spike (indistinct), we certainly have seen geopolitical originated cybersecurity concerns. And as a result, I see database security becoming all the more important. >> Great, thank you. Okay, let me share some data with you guys. I'm going to throw this at you and see what you think. We have this awesome data partner called Enterprise Technology Research, ETR. They do these quarterly surveys and each period with dozens of industry segments, they track clients spending, customer spending. And this is the database, data warehouse sector okay so it's taxonomy, so it's not perfect, but it's a big kind of chunk. They essentially ask customers within a category and buy a specific vendor, you're spending more or less on the platform? And then they subtract the lesses from the mores and they derive a metric called net score. It's like NPS, it's a measure of spending velocity. It's more complicated and granular than that, but that's the basis and that's the vertical axis. The horizontal axis is what they call market share, it's not like IDC market share, it's just pervasiveness in the data set. And so there are a couple of things that stand out here and that we can use as reference point. The first is the momentum of Snowflake. They've been off the charts for many, many, for over two years now, anything above that dotted red line, that 40%, is considered by ETR to be highly elevated and Snowflake's even way above that. And I think it's probably not sustainable. We're going to see in the next April survey, next month from those guys, when it comes out. And then you see AWS and Microsoft, they're really pervasive on the horizontal axis and highly elevated, Google falls behind them. And then you got a number of well funded players. You got Cockroach Labs, Mongo, Redis, MariaDB, which of course is a fork on MySQL started almost as protest at Oracle when they acquired Sun and they got MySQL and you can see the number of others. Now Oracle who's the leading database player, despite what Marc Staimer says, we know, (laughs) and they're a cloud player (laughing) who happens to be a leading database player. They dominate in the mission critical space, we know that they're the king of that sector, but you can see here that they're kind of legacy, right? They've been around a long time, they get a big install base. So they don't have the spending momentum on the vertical axis. Now remember this is, just really this doesn't capture spending levels, so that understates Oracle but nonetheless. So it's not a complete picture like SAP for instance is not in here, no Hana. I think people are actually buying it, but it doesn't show up here, (laughs) but it does give an indication of momentum and presence. So Bob Evans, I'm going to start with you. You've commented on many of these companies, you know, what does this data tell you? 
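The net score metric described above is, in its simplified form, straightforward to compute. ETR's actual methodology is more granular, so treat the following as an illustration of the idea rather than their formula; the survey percentages are made up.

```python
def net_score(new_pct: float, increasing_pct: float, flat_pct: float,
              decreasing_pct: float, defecting_pct: float) -> float:
    """Spending-velocity proxy: percent of customers spending more (including new
    adoptions) minus percent spending less (including defections). Flat spend washes out."""
    total = new_pct + increasing_pct + flat_pct + decreasing_pct + defecting_pct
    assert abs(total - 100.0) < 1e-6, "breakdown should sum to 100%"
    return (new_pct + increasing_pct) - (decreasing_pct + defecting_pct)

# Hypothetical vendor: 10% new, 30% increasing, 45% flat, 10% decreasing, 5% defecting.
print(net_score(10, 30, 45, 10, 5))  # 25 -> below the "highly elevated" 40% line
```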
>> Yeah, you know, Dave, I think all these compilations of things like that are interesting, and that folks at ETR do some good work, but I think as you said, it's a snapshot sort of a two-dimensional thing of a rapidly changing, three dimensional world. You know, the incidents at which some of these companies are mentioned versus the volume that happens. I think it's, you know, with Oracle and I'm not going to declare my religious affiliation, either as cloud company or database company, you know, they're all of those things and more, and I think some of our old language of how we classify companies is just not relevant anymore. But I want to ask too something in here, the autonomous database from Oracle, nobody else has done that. So either Oracle is crazy, they've tried out a technology that nobody other than them is interested in, or they're onto something that nobody else can match. So to me, Dave, within Oracle, trying to identify how they're doing there, I would watch autonomous database growth too, because right, it's either going to be a big plan and it breaks through, or it's going to be caught behind. And the Snowflake phenomenon as you mentioned, that is a rare, rare bird who comes up and can grow 100% at a billion dollar revenue level like that. So now they've had a chance to come in, scare the crap out of everybody, rock the market with something totally new, the data cloud. Will the bigger companies be able to catch up and offer a compelling alternative, or is Snowflake going to continue to be this outlier. It's a fascinating time. >> Really, interesting points there. Holgar, I want to ask you, I mean, I've talked to certainly I'm sure you guys have too, the founders of Snowflake that came out of Oracle and they actually, they don't apologize. They say, "Hey, we not going to do all that complicated stuff that Oracle does, we were trying to keep it real simple." But at the same time, you know, they don't do sophisticated workload management. They don't do complex joints. They're kind of relying on the ecosystems. So when you look at the data like this and the various momentums, and we talked about the diverging strategies, what does this say to you? >> Well, it is a great point. And I think Snowflake is an example how the cloud can turbo charge a well understood concept in this case, the data warehouse, right? You move that and you find steroids and you see like for some players who've been big in data warehouse, like Sentara Data, as an example, here in San Diego, what could have been for them right in that part. The interesting thing, the problem though is the cloud hides a lot of complexity too, which you can scale really well as you attract lots of customers to go there. And you don't have to build things like what Bob said, right? One of the fascinating things, right, nobody's answering Oracle on the autonomous database. I don't think is that they cannot, they just have different priorities or the database is not such a priority. I would dare to say that it's for IBM and Microsoft right now at the moment. And the cloud vendors, you just hide that right through scripts and through scale because you support thousands of customers and you can deal with a little more complexity, right? It's not against them. Whereas if you have to run it yourself, very different story, right? You want to have the autonomous parts, you want to have the powerful tools to do things. >> Thank you. 
And so Matt, I want to go to you, you said up front, you know, it's just complicated if you're in IT, it's a complicated situation and you've been on the customer side. And if you're a buyer, it's obviously, it's like Holgar said, "Cloud's supposed to make this stuff easier, but the simpler it gets the more complicated it gets." So where do you place your bets? Or I guess more importantly, how do you decide where to place your bets? >> Yeah, it's a good question. And to what Bob and Holgar said, you know, around the autonomous database, I think, you know, part of, as I, you know, play kind of armchair psychologist, if you will, corporate psychologist, I look at what Oracle is doing and, you know, databases where they've made their mark and it's kind of, that's their strong position, right? So it makes sense if you're making an entry into this cloud and you really want to kind of build momentum, you go with what you're good at, right? So that's kind of the strength of Oracle. Let's put a lot of focus on that. They do a lot more than database, don't get me wrong, but you know, I'm going to shore up my strength and then kind of pivot from there. With regards to, you know, what IT looks at and what I would look at, you know, as an IT director or somebody who is, you know, trying to consume services from these different cloud providers. First and foremost, I go with what I know, right? Let's not forget IT is a conservative group. And when we look at, you know, all the different permutations of database types out there, SQL, NoSQL, all the different types of NoSQL, those are largely being deployed by business users that are looking for agility or businesses that are looking for agility. You know, the reason why MongoDB is so popular is because of DevOps, right? It's a great platform to develop on and that's where it kind of gained its traction. But as an IT person, I want to go with what I know, where my muscle memory is, and that's my first position. And so as I evaluate different cloud service providers and cloud databases, I look for, you know, what I know and what I've invested in and where my muscle memory is. Is there enough there and do I have enough belief that that company or that service is going to be able to take me to, you know, where I see my organization in five years from a data management perspective, from a business perspective, are they going to be there? And if they are, then I'm a little bit more willing to make that investment, but it is, you know, if I'm kind of going in this blind or if I'm cloud native, you know, that's where the Snowflakes of the world become very attractive to me. >> Thank you. So Marc, I asked Andy Jassy in theCube one time, you have all these, you know, data stores and different APIs and primitives and you know, very granular, what's the strategy there? And he said, "Hey, that allows us as the market changes, it allows us to be more flexible. If we start building abstraction layers, it's harder for us." I think there was also a time to market advantage, but let me ask you, I described earlier on that spectrum from AWS to Oracle. We just saw yesterday, Oracle announced, I think the third major enhancement in like 15 months to MySQL HeatWave, what do you make of that announcement? How do you think it impacts the competitive landscape, particularly as it relates to, you know, converging transaction and analytics, eliminating ELT, I know you have some thoughts on this.
>> So let me back up for a second and defend my cloud statement about Oracle for a moment. (laughing) AWS did a great job in developing the cloud market in general and everything in the cloud market. I mean, I give them lots of kudos on that. And a lot of what they did is they took open source software and they rent it to people who use their cloud. So I give 'em lots of credit, they dominate the market. Oracle was late to the cloud market. In fact, they actually poo-pooed it initially, if you look at some of Larry Ellison's statements, they said, "Oh, it's never going to take off." And then they did 180 turn, and they said, "Oh, we're going to embrace the cloud." And they really have, but when you're late to a market, you've got to be compelling. And this ties into the announcement yesterday, but let's deal with this compelling. To be compelling from a user point of view, you got to be twice as fast, offer twice as much functionality, at half the cost. That's generally what compelling is that you're going to capture market share from the leaders who established the market. It's very difficult to capture market share in a new market for yourself. And you're right. I mean, Bob was correct on this and Holgar and Matt in which you look at Oracle, and they did a great job of leveraging their database to move into this market, give 'em lots of kudos for that too. But yesterday they announced, as you said, the third innovation release and the pace is just amazing of what they're doing on these releases on HeatWave that ties together initially MySQL with an integrated builtin analytics engine, so a data warehouse built in. And then they added automation with autopilot, and now they've added machine learning to it, and it's all in the same service. It's not something you can buy and put on your premise unless you buy their cloud customers stuff. But generally it's a cloud offering, so it's compellingly better as far as the integration. You don't buy multiple services, you buy one and it's lower cost than any of the other services, but more importantly, it's faster, which again, give 'em credit for, they have more integration of a product. They can tie things together in a way that nobody else does. There's no additional services, ETL services like Glue and AWS. So from that perspective, they're getting better performance, fewer services, lower cost. Hmm, they're aiming at the compelling side again. So from a customer point of view it's compelling. Matt, you wanted to say something there. >> Yeah, I want to kind of, on what you just said there Marc, and this is something I've found really interesting, you know. The traditional way that you look at software and, you know, purchasing software and IT is, you look at either best of breed solutions and you have to work on the backend to integrate them all and make them all work well. And generally, you know, the big hit against the, you know, we have one integrated offering is that, you lose capability or you lose depth of features, right. And to what you were saying, you know, that's the thing I found interesting about what Oracle is doing is they're building in depth as they kind of, you know, build that service. It's not like you're losing a lot of capabilities, because you're going to one integrated service versus having to use A versus B versus C, and I love that idea. >> You're right. Yeah, not only you're not losing, but you're gaining functionality that you can't get by integrating a lot of these. 
I mean, I can take Snowflake and integrate it in with machine learning, but I also have to integrate in with a transactional database. So I've got to have connectors between all of this, which means I'm adding time. And what it comes down to at the end of the day is expertise, effort, time, and cost. And so what I see the difference from the Oracle announcements is they're aiming at reducing all of that by increasing performance as well. Correct me if I'm wrong on that but that's what I saw at the announcement yesterday. >> You know, Marc, one thing though Marc, it's funny you say that because I started out saying, you know, I'm glad I'm not 19 anymore. And the reason is because of exactly what you said, it's almost like there's a pseudo level of witchcraft that's required to support the modern data environment right in the enterprise. And I need simpler faster, better. That's what I need, you know, I am no longer wearing pocket protectors. I have turned from, you know, break, fix kind of person, to you know, business consultant. And I need that point and click simplicity, but I can't sacrifice, you know, a depth of features of functionality on the backend as I play that consultancy role. >> So, Ron, I want to bring in Ron, you know, it's funny. So Matt, you mentioned Mongo, I often and say, if Oracle mentions you, you're on the map. We saw them yesterday Ron, (laughing) they hammered RedShifts auto ML, they took swipes at Snowflake, a little bit of BigQuery. What were your thoughts on that? Do you agree with what these guys are saying in terms of HeatWaves capabilities? >> Yes, Dave, I think that's an excellent question. And fundamentally I do agree. And the question is why, and I think it's important to know that all of the Oracle data is backed by the fact that they're using benchmarks. For example, all of the ML and all of the TPC benchmarks, including all the scripts, all the configs and all the detail are posted on GitHub. So anybody can look at these results and they're fully transparent and replicate themselves. If you don't agree with this data, then by all means challenge it. And we have not really seen that in all of the new updates in HeatWave over the last 15 months. And as a result, when it comes to these, you know, fundamentals in looking at the competitive landscape, which I think gives validity to outcomes such as Oracle being able to deliver 4.8 times better price performance than Redshift. As well as for example, 14.4 better price performance than Snowflake, and also 12.9 better price performance than BigQuery. And so that is, you know, looking at the quantitative side of things. But again, I think, you know, to Marc's point and to Matt's point, there are also qualitative aspects that clearly differentiate the Oracle proposition, from my perspective. For example now the MySQL HeatWave ML capabilities are native, they're built in, and they also support things such as completion criteria. And as a result, that enables them to show that hey, when you're using Redshift ML for example, you're having to also use their SageMaker tool and it's running on a meter. And so, you know, nobody really wants to be running on a meter when, you know, executing these incredibly complex tasks. And likewise, when it comes to Snowflake, they have to use a third party capability. They don't have the built in, it's not native. So the user, to the point that he's having to spend more time and it increases complexity to use auto ML capabilities across the Snowflake platform. 
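Marc's "no data movement" argument and Ron's point about native, built-in ML are easier to see in code. The sketch below is illustrative, not an official example: the host, schema, and table names are hypothetical, and while the `CALL sys.ML_TRAIN` pattern follows the published HeatWave AutoML interface, check the current MySQL HeatWave documentation before relying on the exact signature.

```python
# One MySQL endpoint serves the transactional write, the analytical query, and the
# in-database AutoML training call, with no ETL into a separate warehouse or ML service.
import mysql.connector

conn = mysql.connector.connect(
    host="heatwave.example.com", user="app", password="***", database="sales")
cur = conn.cursor()

# 1. OLTP write lands in the base MySQL tables.
cur.execute("INSERT INTO orders (customer_id, amount) VALUES (%s, %s)", (42, 199.90))
conn.commit()

# 2. Analytics run against the same data; no connectors, no copy job.
cur.execute("""
    SELECT customer_id, SUM(amount) AS lifetime_value
    FROM orders GROUP BY customer_id ORDER BY lifetime_value DESC LIMIT 10
""")
top_customers = cur.fetchall()

# 3. In-database AutoML: train a model where the data already lives (hypothetical table).
cur.execute("""
    CALL sys.ML_TRAIN('sales.orders_training', 'churned',
                      JSON_OBJECT('task', 'classification'), @churn_model)
""")
conn.close()
```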
And also, I think it also applies to other important features such as data sampling, for example, with the HeatWave ML, it's intelligent sampling that's being implemented. Whereas in contrast, we're seeing Redshift using random sampling. And again, Snowflake, you're having to use a third party library in order to achieve the same capabilities. So I think the differentiation is crystal clear. I think it definitely is refreshing. It's showing that this is where true value can be assigned. And if you don't agree with it, by all means challenge the data. >> Yeah, I want to come to the benchmarks in a minute. By the way, you know, the gentleman who's the Oracle's architect, he did a great job on the call yesterday explaining what you have to do. I thought that was quite impressive. But Bob, I know you follow the financials pretty closely and on the earnings call earlier this month, Ellison said that, "We're going to see HeatWave on AWS." And the skeptic in me said, oh, they must not be getting people to come to OCI. And then they, you remember this chart they showed yesterday that showed the growth of HeatWave on OCI. But of course there was no data on there, it was just sort of, you know, lines up and to the right. So what do you guys think of that? (Marc laughs) Does it signal Bob, desperation by Oracle that they can't get traction on OCI, or is it just really a smart tame expansion move? What do you think? >> Yeah, Dave, that's a great question. You know, along the way there, and you know, just inside of that was something that said Ellison said on earnings call that spoke to a different sort of philosophy or mindset, almost Marc, where he said, "We're going to make this multicloud," right? With a lot of their other cloud stuff, if you wanted to use any of Oracle's cloud software, you had to use Oracle's infrastructure, OCI, there was no other way out of it. But this one, but I thought it was a classic Ellison line. He said, "Well, we're making this available on AWS. We're making this available, you know, on Snowflake because we're going after those users. And once they see what can be done here." So he's looking at it, I guess you could say, it's a concession to customers because they want multi-cloud. The other way to look at it, it's a hunting expedition and it's one of those uniquely I think Oracle ways. He said up front, right, he doesn't say, "Well, there's a big market, there's a lot for everybody, we just want on our slice." Said, "No, we are going after Amazon, we're going after Redshift, we're going after Aurora. We're going after these users of Snowflake and so on." And I think it's really fairly refreshing these days to hear somebody say that, because now if I'm a buyer, I can look at that and say, you know, to Marc's point, "Do they measure up, do they crack that threshold ceiling? Or is this just going to be more pain than a few dollars savings is worth?" But you look at those numbers that Ron pointed out and that we all saw in that chart. I've never seen Dave, anything like that. In a substantive market, a new player coming in here, and being able to establish differences that are four, seven, eight, 10, 12 times better than competition. And as new buyers look at that, they're going to say, "What the hell are we doing paying, you know, five times more to get a poor result? What's going on here?" So I think this is going to rattle people and force a harder, closer look at what these alternatives are. >> I wonder if the guy, thank you. 
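On the price-performance multiples Ron cited a moment ago: the published benchmarks and configs are on GitHub as he notes, but the general shape of such a comparison is cost of the run divided by work done, compared across platforms. The numbers below are placeholders chosen only to show the arithmetic, not the published TPC results.

```python
def price_performance(cost_per_hour: float, runtime_hours: float, queries_run: int) -> float:
    """Dollars per query for a benchmark run: lower is better."""
    return (cost_per_hour * runtime_hours) / queries_run

# Placeholder inputs, not measured values.
heatwave = price_performance(cost_per_hour=16.0, runtime_hours=1.0, queries_run=1000)
redshift = price_performance(cost_per_hour=32.0, runtime_hours=2.4, queries_run=1000)

print(f"Multiple vs. HeatWave: {redshift / heatwave:.1f}x")  # 4.8x with these placeholder inputs
```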
Let's just skip ahead to the benchmarks, guys, bring up the next slide, let's skip ahead a little bit here, which talks to the benchmarks and the benchmarking, if we can. You know, David Floyer, the sort of semiretired, you know, Wikibon analyst said, "Dave, this is going to force Amazon and others, Snowflake," he said, "To rethink actually how they architect databases." And this is kind of a compilation of some of the data that they shared. They went after Redshift mostly, (laughs) but also, you know, as I say, Snowflake, BigQuery. And, like I said, you can always tell which companies are doing well, 'cause Oracle will come after you, but they're on the radar here. (laughing) Holgar, should we take this stuff seriously? I mean, or is it, you know, a grain of salt? What are your thoughts here? >> I think you have to take it seriously. I mean, that's a great question, great point on that. Because like Ron said, if there's a flaw in a benchmark, we know this database market traditionally, right? If anybody came up with that, everybody would be, "Oh, you put up the wrong benchmark, it wasn't audited right, let us do it again," and so on. We don't see this happening, right? So kudos to Oracle for being aggressive, differentiated, and seeming to have impeccable benchmarks. But what we really see, I think in my view, is the classic, and we could talk about this for 100 years, right? It's the suite versus best of breed, right? And that's the key question with the suite, because the suite's always slower, right? No matter at which level of the stack you have the suite, then the best of breed will come up with something new, use a cloud, put the data warehouse on steroids and so on. The important thing is that you have to assess as a buyer what is the speed of my suite vendor. And that's what you guys mentioned before as well, right? Marc said that and so on, like, this is the third release in one year from the HeatWave team, right? So everybody in the database open source world, Marc, and there are so many MySQL spinoffs, to a certain point, puts a shine on the speed of the (indistinct) team, putting out fundamental changes. And the beauty of that is, right, it's so inherent to the Oracle value proposition. Larry's vision of building the IBM of the 21st century, right from the silicon, from the chip, all the way across the seven stacks to the click of the user. And that's what makes the database, what Rob was saying, tied to the OCI infrastructure, because it's designed for that, it runs uniquely better on that, that's why we see the cross connect to Microsoft. HeatWave, though, is different, right? Because HeatWave runs on cheap hardware, right? Which is the bread and butter x86 scale of any cloud provider, right? So Oracle probably needs it to scale OCI in a different category, not the expensive side, but it also allows them to do what we said before, the multicloud capability, which ultimately CIOs really want, because data gravity is real, you want to operate where that is. If you have a fast, innovative offering, which gives you more functionality, and the R&D speed is really impressive for the space, puts away bad results, then it's a good bet to look at. >> Yeah, so you're saying that suite versus best of breed. I just want to sort of play back then, Marc, a comment. That suite versus best of breed, there's always been that trade off. If I understand you, Holgar, you're saying that somehow Oracle has magically cut through that trade off and they're giving you the best of both. >> It's the development velocity, right?
The provision of important features, which matter to buyers of the suite vendor, eclipses the best of breed vendor, then the best of breed vendor is in the hell of a potential job. >> Yeah, go ahead Marc. >> Yeah and I want to add on what Holgar just said there. I mean the worst job in the data center is data movement, moving the data sucks. I don't care who you are, nobody likes it. You never get any kudos for doing it well, and you always get the ah craps, when things go wrong. So it's in- >> In the data center Marc all the time across data centers, across cloud. That's where the bleeding comes. >> It's right, you get beat up all the time. So nobody likes to move data, ever. So what you're looking at with what they announce with HeatWave and what I love about HeatWave is it doesn't matter when you started with it, you get all the additional features they announce it's part of the service, all the time. But they don't have to move any of the data. You want to analyze the data that's in your transactional, MySQL database, it's there. You want to do machine learning models, it's there, there's no data movement. The data movement is the key thing, and they just eliminate that, in so many ways. And the other thing I wanted to talk about is on the benchmarks. As great as those benchmarks are, they're really conservative 'cause they're underestimating the cost of that data movement. The ETLs, the other services, everything's left out. It's just comparing HeatWave, MySQL cloud service with HeatWave versus Redshift, not Redshift and Aurora and Glue, Redshift and Redshift ML and SageMaker, it's just Redshift. >> Yeah, so what you're saying is what Oracle's doing is saying, "Okay, we're going to run MySQL HeatWave benchmarks on analytics against Redshift, and then we're going to run 'em in transaction against Aurora." >> Right. >> But if you really had to look at what you would have to do with the ETL, you'd have to buy two different data stores and all the infrastructure around that, and that goes away so. >> Due to the nature of the competition, they're running narrow best of breed benchmarks. There is no suite level benchmark (Dave laughs) because they created something new. >> Well that's you're the earlier point they're beating best of breed with a suite. So that's, I guess to Floyer's earlier point, "That's going to shake things up." But I want to come back to Bob Evans, 'cause I want to tap your Cloud Wars mojo before we wrap. And line up the horses, you got AWS, you got Microsoft, Google and Oracle. Now they all own their own cloud. Snowflake, Mongo, Couchbase, Redis, Cockroach by the way they're all doing very well. They run in the cloud as do many others. I think you guys all saw the Andreessen, you know, commentary from Sarah Wang and company, to talk about the cost of goods sold impact of cloud. So owning your own cloud has to be an advantage because other guys like Snowflake have to pay cloud vendors and negotiate down versus having the whole enchilada, Safra Catz's dream. Bob, how do you think this is going to impact the market long term? >> Well, Dave, that's a great question about, you know, how this is all going to play out. If I could mention three things, one, Frank Slootman has done a fantastic job with Snowflake. Really good company before he got there, but since he's been there, the growth mindset, the discipline, the rigor and the phenomenon of what Snowflake has done has forced all these bigger companies to really accelerate what they're doing. 
And again, it's an example of how this intense competition makes all the different cloud vendors better, and it provides enormous value to customers. Second thing I wanted to mention here was, look at the Adam Selipsky effect at AWS. He took over in the middle of May, and in Q2, Q3, Q4, AWS's growth rate accelerated. And in each of those three quarters, they grew faster than Microsoft's cloud, which has not happened in two or three years, so they're closing the gap on Microsoft. The third thing, Dave, in this, you know, incredibly intense competitive nature here, look at Larry Ellison, right? He's got his, you know, the product that for the last two or three years, he said, "It's going to help determine the future of the company, autonomous database." You would think he's the last person in the world who's going to bring in, you know, in some ways another database to think about there, but he has put, you know, his whole effort and energy behind this. The investments Oracle's made, he's riding this horse really hard. So it's not just a technology achievement, but it's also an investment priority for Oracle going forward. And I think it's going to form a lot of how they position themselves to this new breed of buyer with a new type of need and expectations from IT. So I just think the next two or three years are going to be fantastic for people who are lucky enough to get to do the sorts of things that we do. >> You know, it's a great point you made about AWS. Back in 2018 Q3, they were doing about 7.4 billion a quarter and they were growing in the mid forties. They dropped down to like 29% in Q4 2020, I'm looking at the data now. They popped back up last quarter, last reported quarter, to 40%, that is 17.8 billion, so they more than doubled and they accelerated their growth rate. (laughs) So maybe that portends something. People are concerned about Snowflake right now decelerating growth. You know, maybe that's going to be different. By the way, I think Snowflake has a different strategy, the whole data cloud thing, data sharing. They're not trying to necessarily take Oracle head on, which is going to make this next 10 years really interesting. All right, we got to go, last question. 30 seconds or less, what can we expect from the future of data platforms? Matt, please start. >> I have to go first again? You're killing me, Dave. (laughing) In the next few years, I think you're going to see the major players continue to meet customers where they are, right. Every organization, every environment is, you know, kind of, we use these words, bespoke and snowflake, pardon the pun, but they're all snowflakes, right. But you know, they're all opinionated and unique, and what's great as an IT person is, you know, there is a service for me regardless of where I am on my journey, in my data management journey. I think you're going to continue to see, with regard specifically to Oracle, I think you're going to see the company continue along this path of being all things to all people, if you will, or all organizations, without sacrificing, you know, kind of richness of features and sacrificing who they are, right. Look, they are the data kings, right? I mean, they've been a database leader for an awful long time. I don't see that going away any time soon, and I love the innovative spirit they've brought in with HeatWave. >> All right, great, thank you. Okay, 30 seconds, Holgar, go. >> Yeah, I mean, the interesting thing that we see is really that trend to autonomous, as Oracle calls it, or self-driving software, right?
So the database will have to do more things than just store the data and support the DBA. It will have to show it can provide insights, the whole upside it will be able to show with machine learning. We haven't really talked about that. It's just exciting what kind of use cases we can get out of machine learning running in real time on data as it changes, right? So, which is part of the E5 announcement, right? So we'll see more of that self-driving nature in the database space. And because you said we can promote it, right: check out my report about HeatWave's latest release, which I posted on oracle.com. >> Great, thank you for that. And Bob Evans, please. You're great at quick hits, hit us. >> Dave, thanks. I really enjoyed getting to hear everybody's opinion here today, and I think what's going to happen too. I think there's a new generation of buyers, a new set of CXO influencers in here. And I think what Oracle's done with this MySQL HeatWave, those benchmarks that Ron talked about so eloquently here, that is going to become something that forces other companies to not just try to get incrementally better. I think we're going to see a massive new wave of innovation to try to play catch up. So I really take my hat off to Oracle's achievement in going to push everybody to be better. >> Excellent. Marc Staimer, what do you say? >> Sure, I'm going to leverage off of something Matt said earlier. Those companies that are going to develop faster, cheaper, simpler products that are going to solve customer problems, IT problems, are the ones that are going to succeed, or the ones who are going to grow. The ones who are just focused on the technology are going to fall by the wayside. So those who can solve more problems, do it more elegantly, and do it for less money are going to do great. So Oracle's going down that path today, Snowflake's going down that path. They're trying to do more integration with third parties, but as a result, aiming at that simpler, faster, cheaper mentality is where you're going to continue to see this market go. >> Amen, brother Marc. >> Thank you, Ron Westfall, we'll give you the last word, bring us home. >> Well, thank you. And I'm loving it. I see a wave of innovation across the entire cloud database ecosystem, and Oracle is fueling it. We are seeing it with the native integration of AutoML capabilities, elastic scaling, lower entry price points, et cetera. And this is just going to be great news for buyers, but also developers, and increased use of open APIs. And so I think those are really the key takeaways. We're just going to see a lot of great innovation on the horizon here. >> Guys, fantastic insights, one of the best power panels I've ever done. Love to have you back. Thanks so much for coming on today. >> Great job, Dave, thank you. >> All right, and thank you for watching. This is Dave Vellante for theCube, and we'll see you next time. (soft music)
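Marc's data movement argument in this panel can be made concrete with a rough, hypothetical cost model. The service names below are generic stand-ins and the dollar figures are invented for illustration; the sketch only shows why a comparison that prices the warehouse alone, and leaves out the ETL pipeline and the metered ML service, will tend to understate the gap Marc is describing.

# Hypothetical cost model for Marc's data movement argument.
# All dollar figures are placeholders; nothing here comes from a real bill.

integrated_stack = {
    "transactional + analytics + ML (single service)": 1000.0,
}

separate_stack = {
    "transactional database": 400.0,
    "analytics warehouse": 500.0,
    "ETL / data movement pipeline": 250.0,   # the cost benchmarks tend to omit
    "managed ML service (metered)": 300.0,
}

def monthly_total(stack: dict) -> float:
    return sum(stack.values())

print(f"integrated: ${monthly_total(integrated_stack):,.0f}/month")
print(f"separate services: ${monthly_total(separate_stack):,.0f}/month")
# The gap widens further if you also price the engineering time spent
# writing and maintaining the connectors between each service.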

Published Date : Mar 31 2022

Does Intel need a Miracle?


 

(upbeat music) >> Welcome everyone, this is Stephanie Chan with theCUBE. Recently, analyst Dave Vellante published a Breaking Analysis entitled "Pat Gelsinger has a vision, it just needs the time, the cash and a miracle," where he highlights why he thinks Intel is years away from reversing its position in the semiconductor industry. Welcome Dave. >> Hey thanks, Stephanie. Good to see you. >> So, Dave, you've been following the company closely over the years. If you look at the Wall Street Journal, most analysts are saying to hold onto Intel. Can you tell us why you're so negative on it? >> Well, you know, I'm not a stock picker, Stephanie, but I've seen the data. There are a lot of... some buys, some sells, but most of the analysts are on a hold. I think they're, who knows, maybe they're just hedging their bets, they don't want to make a strong controversial call, they're kind of sitting on the fence. But look, Intel is still an amazing company, they've got tremendous resources. They're an icon and they pay a dividend. So, there's definitely an investment case to be made to hold onto the stock. But I would generally say that investors, they better be ready to hold on to Intel for a long, long time. I mean, Intel's just not the dominant player that it used to be. And the challenges have been mounting for a decade, and look, competitively Intel's fighting a five front war. They've got AMD in both PCs and the data center, the entire Arm ecosystem, Nvidia coming after them with the whole move toward AI and GPU, they're dominating there. Taiwan Semiconductor is by far the leading fab in the world in terms of output. And I would say even China is kind of the fifth leg of that stool, long term. So, a lot of hurdles to jump competitively. >> So what are the other sources of Intel's trouble here, besides what you just mentioned? >> Well, I think they started when PC volumes peaked, which was, our David Floyer at Wikibon wrote back in 2011, 2012 that Intel, if it doesn't make some moves, is going to face some trouble. So, even though PC volumes have bumped up with the pandemic recently, they pale in comparison to the wafer volumes that are coming out of the Arm ecosystem, and TSM and Samsung factories. The volumes of the Arm ecosystem, Stephanie, they dwarf the output of Intel by probably 10X in semiconductors. I mean, the volume in semiconductors is everything. And because that's what drives costs down, and Intel, they're just not the low cost manufacturer anymore. And in my view, they may never be again, not without a major change in the volume strategy, which of course Gelsinger is doing everything he can to effect that change, but they're years away, and they're going to have to spend north of a 100 billion dollars trying to get there, but it's all about volume in the semiconductor game. And Intel just doesn't have it right now.
The thing about Pat is he's known all along what's going on with Intel. I'm sure he's watched it from not so far away, because I think it's always been his dream to run the company. So, the fact is he's made a lot of moves. He's bringing in new management, he's clearing out some of the dead wood at Intel. He's launched, kind of relaunched if you will, the Foundry business. And I think they're serious about that this time around. You know, they're spinning out Mobileye to throw off some cash. Mobileye was an acquisition they made years ago, and it'll throw off some more cash to pay for the fabs. They have announced things like fabs in Ohio, in the heartland, Silicon Heartland, which strikes all the right chords with the various politicians. And so again, he's doing all the right things. He's trying to inject urgency. He's calling up his best Andrew Grove, I like to say, who was of course the iconic CEO of Intel for many, many years, but again, you can't change physics. He can't compress the cycle any faster than the cycle wants to go. And so he's doing all the right things. It's just going to take a long, long time. >> And you said that competition is better positioned. Could you elaborate on why you think that, and who are the main competitors at this moment? >> Well, it's this five front war that I talked about. I mean, you see what happened with Arm, it changed everything. Intel, remember, they passed on the iPhone, didn't think they could make enough money on smartphones. And that opened the door for Arm. It was eager to take Apple's business. And because of the consumer volumes, the semiconductor industry changed permanently, just like the PC volume changed the whole minicomputer business. Well, the smartphone changed the economics of semiconductors as well. Very few companies can afford the capital expense of building semiconductor fabrication facilities. And even fewer can make cutting edge chips like five nanometer, three nanometer and beyond. So companies like AMD and Nvidia, they don't make chips, they design them and then they ship them to foundries like TSM and Samsung to manufacture them. And because TSM has such huge volumes, thanks in large part to Apple, it's further down, or up I guess, the experience curve, and experience means everything in terms of cost. And they're leaving Intel behind. I mean, the best example I can give you is Apple. Look at the A-series chip, and now the M1 and the M1 Ultra. Think about the traditional Moore's Law curve that we all talk about, two X transistor density every two years, a doubling. Intel's lucky today if it can keep that pace up, let's assume it can. But meanwhile, look at Apple's Arm based M1 to M1 Ultra transition. It occurred in less than two years. It was more like 15 or 18 months. And it went from 16 billion transistors on a package to over 100 billion. And so we're talking about the competition, Apple in this case using Arm standards, improving at six to seven X inside of a two year period, while Intel's running at two X. And that says it all. So Intel is on a curve that's more expensive and slower than the competition. >> Well, recently Intel acquired Tower Semiconductor for 5.4 billion, so it can make more chips for other companies, last February, I think the middle of February. What do you think of that strategic move?
Again, I think it's an Israeli based company, but they're a global company, which is important. One of the things that Pat stresses is having a presence in Western countries. I think that's super important. He'd like to get the percentage of semiconductors coming out of Western countries back up, maybe not to where it was previously, but by the end of the decade, much more competitive. And so that's what that acquisition was designed to do. And it's a good move, but again, it doesn't change physics. >> So Dave, you've been putting a lot of content out there and been following Intel for years. What can Intel do to get back on track? >> Well, I think first it needs great leadership, and Pat Gelsinger is providing that. As we talked about, he's doing all the right things. He's manifesting his best Andrew Grove, as I said earlier. Splitting out the Foundry business is critical, because we all know Moore's Law, but it's Wright's Law that talks about volume, in any business, not just semiconductors, and it's crucial in semiconductors. So, splitting out a separate Foundry business to make chips is important. He's going to do that. Of course, he's going to ask Intel's competitors to allow Intel to manufacture their chips, which they very well may want to do, because there's such a shortage right now of supply and they need those types of manufacturers. So, the hope is that that's going to drive the volume necessary for Intel to compete cost effectively. And there's the CHIPS Act and its EU cousin, where governments are going to possibly put some money into semiconductor manufacturing to make the West more competitive. It's a key initiative that Pat has put forth, and a challenge. And it's a good one. And he's making a lot of moves on the design side and committing tons of CapEx to these new fabs, as we talked about, but maybe his best chance is, again, the fact that, well first of all, the market's enormous. It's a trillion dollar market, but secondly, there's a very long term shortage in play here in semiconductors. I don't think it's going to be cleared up in 2022 or 2023. Demand is just going to keep exploding, whether it's automobiles and factory devices and cameras. I mean, virtually every consumer device and edge device is going to use huge numbers of semiconductor chips. So, I think that's in Pat's favor, but honestly, Intel is so far behind, in my opinion, that I hope by the end of this decade, it's going to be in a position, maybe a stronger number two position in volume behind TSM, maybe number three behind Samsung. Maybe Apple is going to throw Intel some Foundry business over time, maybe under pressure from the US government. And they can maybe win that account back, but that's still years away from a design cycle standpoint. And so again, maybe in the 2030s, Intel can compete for top dog status, but that, in my view, is the best we can hope for this national treasure called Intel. >> Got it. So we've got to leave it right there. Thank you so much for your time, Dave. >> You're welcome, Stephanie. Good to talk to you. >> So you can check out Dave's Breaking Analysis on theCUBE.net each Friday. This is Stephanie Chan for theCUBE. We'll see you next time. (upbeat music)
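One way to make the M1 comparison in this conversation concrete is to annualize the two growth rates Dave quotes: roughly 16 billion to over 100 billion transistors per package in about 18 months for Apple, versus a traditional Moore's Law doubling every 24 months. The sketch below only reruns those quoted figures; it is not based on independent die measurements.

# Annualize the transistor growth rates quoted in the conversation above.
# Inputs are the figures cited by Dave, not independent measurements.

def annualized_growth(total_factor: float, months: float) -> float:
    """Convert an overall growth factor over `months` into a per-year factor."""
    return total_factor ** (12.0 / months)

# Apple: ~16B transistors (M1) to ~100B+ (M1 Ultra package) in ~18 months.
apple_per_year = annualized_growth(100 / 16, 18)

# Traditional Moore's Law pace: 2x transistor density every 24 months.
moore_per_year = annualized_growth(2.0, 24)

print(f"Apple package-level pace: ~{apple_per_year:.1f}x per year")
print(f"Moore's Law pace:         ~{moore_per_year:.2f}x per year")
# Roughly 3.4x per year versus about 1.4x per year, which is the
# six-to-seven-X versus two-X gap Dave describes.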

Published Date : Mar 22 2022

Breaking Analysis: Snowflake’s Wild Ride


 

from the cube studios in palo alto in boston bringing you data driven insights from the cube and etr this is breaking analysis with dave vellante snowflake they love the stock at 400 and hated at 165 that's the nature of the business i guess especially in this crazy cycle over the last two years of lockdowns free money exploding demand and now rising inflation and rates but with the fed providing some clarity on its actions the time has come to really dig into the fundamentals of companies and there's no tech company that's more fun to analyze than snowflake hello and welcome to this week's wikibon cube insights powered by etr in this breaking analysis we look at the action of snowflake stock since its ipo why it's behaved the way it has how some sharp traders are looking at the stock and most importantly what customer demand looks like the stock has really provided some great theater since its ipo i know people who got in at 120 before the open and i know lots of people who kind of held their noses and bought the stock on day one at over 300 a day when it closed at around 240 that first day of trading snowflake hit 164 this week it's all-time low as a public company as my college roommate chip simonton a long time trader told me when great companies trade at all times time lows because of panic it's worth taking a shot he did now of course the stock could go lower there's geopolitical risk and the stock with a 64 billion market cap is expensive for a company that's forecast to do around 2 billion in product revenue this year and remember i don't recommend stocks you shouldn't take my advice and my comments you got to do your own research but i have lots of data and i have opinions and i'm willing to share that with you stocks like snowflake crowdstrike z-scaler octa and companies like this are highly volatile when markets are moving up they're going to move up faster than the mean when they're declining they're going to drop more severely and that's clearly what's happened to snowflake so with a company like this you when you see panic selling you'll also see panic buying sometimes like we we've seen with this name it went from 220 to 320 in a very short period earlier snowflake put in a short-term bottom this week and many traders feel the issue was oversold so they bought okay but not everyone felt this way and you can see this in the headlines snowflake hits low but cloud stocks rise and we're going to come back to that is it a buy don't buy the dip buy the dip and what snowflake investors can learn from microsoft and from the street.com snow stock is sliding on the back of ill-conceived guidance and to that i would say that conservative guidance these days is anything but ill-conceived now let's unpack all this a bit and to do so i reached out to ivana delevska who has been on this program before she's with spear invest a female-led etf that goes deep into understanding supply chains she came on breaking analysis and laid out her thesis to buy the dip on snowflake this is a while ago she told me currently spear still likes snowflake and has doubled its position let me share her analysis she called out two drivers for the downside interest rates you know rising of course in snowflakes guidance which my own publication called weak in that previous chart that i just showed you so let's dig into that a bit snowflake guided for product revenues of 67 year on year which was below buy side expectations but i believe within sell side consensus regardless the guide was nuanced and driven by 
snowflake's decision to pass along price efficiencies to customers from optimizing processor price performance predominantly from aws's graviton too this is going to hit snowflakes revenue a net of about a hundred million dollars this year but the timing's not precise because it's going to hit 165 million but they're going to make up 65 million in increased demand frank slootman on the earnings call made this very clear he said quote this is not philanthropy this stimulates demand classic slootman the point is spear and other bulls believe that this will result in a gain for snowflake over the medium term and we would agree price goes down roi gets better you throw more projects at snowflakes customers going to buy more snowflake and when that happens and it gives the company an advantage as they continue to build their moat it's a longer term bet on cloud and data which are good bets now some of this could also be competitive pressures there have been you know studies that are out there from competitors attacking snowflakes pricing and price performance and they make comparisons oracle's been pretty aggressive as have others but so far the company's customers continue to consume now at a very fast rate now on on this front what can we learn from microsoft that applies to snowflake that's the headline here from benzinga so the article quoted a wealth manager named josh brown talking about what happened to microsoft after the dot-com bubble burst and how they quadrupled earnings over the next decade and the stock went sideways suggesting the same thing could happen to snowflake now i'd like to make a couple of comments here first at the time microsoft was a 23 billion dollar company and it had a monopoly and was already highly profitable steve ballmer became the ceo of microsoft right after the dot-com bubble burst and he hugged onto windows for dear life and lived off of microsoft's pc software monopoly microsoft became an extremely profitable and remarkably uninteresting caretaker of a pc in on-prem software estate during balmer's tenure so i just don't see the comparison as relevant snowflake you know they're going to make struggle for other reasons but that one didn't really resonate with me what's interesting is this chart it poses the question do cloud and data markets behave differently it's a chart that shows aws growth rates over time and superimposes the revenue in the red in q1 2018 aws generated 5.4 billion dollars in revenue and that was growing at the time at nearly a 50 rate now that rate as you can see decelerated quite significantly as aws grew to a 50 billion dollar run rate company that down below where you see it bottoms now it makes sense right law of large numbers you can't keep growing that fast when you get that big well oops look what happened in 2021 aws's growth rate bottoms in the high 20s and then rockets back up to 40 this past quarter as aws surpasses a 70 billion dollar run rate so you have to ask is cloud different is data different is cloud data different or data cloud different let's put it in the snowflake parlance can cloud because of its consumption model and the speed of innovation and ecosystem depth and breadth enable snowflake to exhibit lots of variability in its growth rates versus a say progressive and somewhat linear decline as the company grows revenue which is what you would expect historically and part of the answer relates to its market size here's a chart we've shared before with some additions it's our version of snowflake's total 
available market they're tam which snowflake's version that that blue data cloud thing superimposed on the right it shows the various layers of market opportunity that we came up with that that snowflake and others we think have in front of them emerging from the disruption of legacy data lakes and data warehouses to what snowflake refers to as its data cloud we think about the data mesh concept and decentralized data architectures with domain ownership and data product and service builders as consistent with snowflake's data cloud vision where snowflake data stores are nodes they're just simply discoverable nodes on the mesh you could have you know data bricks data lakes you know s3 buckets on that mesh it doesn't matter they can be discovered they can be shared and of course they're governed in a federated model now in snowflake's model it's all inside the snowflake data cloud that's fine then you'll go to the out years it gets a little fuzzy you know from edge locations and ai inference it becomes massive and decision making occurs in real time where machines and machine data take over the world instead of you know clicks and keystrokes sounds out there but it's real and how exactly snowflake plays there at this point is unclear but one thing's for sure there'll be a lot of data and it's going to find its way into snowflake you know snowflake's not a real-time engine it's an analytical system it's moving into the realm of data science and you know we've talked about the need for you know semantic layer between those those two worlds of analytics and data science but expanding the scope further out we think that snowflake is a big role to play in this future and the future is massive okay check you got the big tam now as someone that looks at companies through a fundamentals prism you've got to look obviously at the markets in the tan which we just did but you also want to understand customers and it's not hard to find snowflake customers capital one disney micron alliance sainsbury sonos and hundreds of other companies i've talked to snowflake customers who have also been customers of oracle teradata ibm neteza vertica serious database practitioners and they tell me it's consistent soulflake is different they say it's simpler it's more agile it's less complicated to secure and it's disruptive to their traditional ways of doing data management now of course there are naysayers i've spoken to a number of analysts that feel snowflake is deficient in areas like workload management and course complex joins and it's too specialized in a world where we're seeing the convergence of analytics and transactional workloads our own david floyer believes that what oracle is doing with mysql heatwave is radically disruptive to many of the database architectures and blows away anything out there and he believes that snowflake and the likes of aws are going to have to respond now this the other criticism here is that snowflake is not architected for real-time inference where a lot of that edge activity is is going to happen it's a multi-hundred billion dollar market and so look snowflake has a ton of competition that's the other thing all the major cloud players have very capable and competitive database platforms even though they all partner with snowflake except oracle of course but companies like databricks and have garnered tons of vc other vc funded companies have raised billions of dollars to do this kind of elastic consumption based separate compute from storage stuff so you have to always keep 
an open mind and be aware of potential blind spots for these companies but to the criticisms i would say look snowflake they got there first and watch their ecosystem it's a real key to its continued success snowflake's not going to go it alone and it's going to use its ecosystem partners to expand its reach and accelerate the network effects and fill those gaps and it will acquire its stock is valuable so it should be doing that just as it did with streamlit a zero revenue company that it bought for 800 million dollars in stock and cash just recently streamlit is an open source python library that gets snowflake further deeper into that data science space that data brick space and look watch what snowflake is doing with snowpark it's an api library for processing data and building data intensive applications we've talked about snowflake essentially being becoming the super cloud and building this sort of path-like layer across clouds rather than trying to do it all themselves it seems snowflake is really staring at the api economy and building its ecosystem to plug those holes so let's come back to the customers here's a chart that shows snowflakes customer spending momentum or net score on the the top line that's the vertical axis and pervasiveness in the data or market share and that bottom brown line snowflake has unprecedented net scores and held them up for many many quarters as you can see here going back you know a couple years all leading to its expanded market penetration and measured as pervasiveness of so-called market share within the etr survey it's not like idc market share it's pervasiveness in the data set now i'll say this i don't see how this is sustainable i've been waiting for this to moderate i wouldn't be surprised to see snowflake come back to earth a little bit i think they'll clearly still be highly elevated based on the data that i've seen but but i could see in in one or more of the etr surveys this year this starting to moderate as they get they get big it's just it has to happen um but i would again expect them to have a high spending velocity score but i think we're going to see snowflake you know maybe porpoise a bit here meaning you know it moderates it comes back up it's just really hard to sustain this piece of momentum and higher train retain and scale without absorbing some some friction and some head woods that's going to slow you down but back to the aws growth example it's entirely possible that we could see a similar dynamic with snowflake that you saw with aws and you kind of see it with salesforce and servicenow very successful large entrenched entrenched companies and it's very possible that snowflake could pull back moderate and then accelerate that growth even though people are concerned about the moderated guidance of 80 percent growth yeah that's that's the new definition of tepid i guess i look i like to look at other some other metrics the one that really called you know my my my attention was the remaining performance obligations this last quarter rpo snowflakes is up to something like 2.6 billion and that is a forward-looking indicator of of future revenues so i want to i'd like to see that growing and it's growing at a fast pace so you're going to see some ups and downs with snowflake i have no doubt but i think things are still looking pretty solid for the company growth companies like snowflake and octa and z scalar those other ones that i mentioned earlier have probably been repriced and refactored by investors while there's always 
going to be market and of course geopolitical risk especially in these times fundamentals matter you've got huge market well capitalized you got a leadership position great products and strong customer adoption you also have a great team team is something else that we look for we haven't touched on that but i'll leave you with this thought everyone knows about frank slootman mike scarpelli and what they've accomplished in their years of working together that's why the stock you know in ipo was was so overvalued they had seen these guys do it before slootman just documented in all this in his book amp it up which gives great insight into the history of of that though you know that pair and and the teams that they've built the companies that they've built how he thinks about building companies and markets and and how you know total available markets super important but the whole philosophy and culture that that he's building in his management style but you got to wonder right how long is this guy going to keep going what keeps him motivated you know i asked him that one time here's what he said why i mean are you in this for the sport what's the story here uh actually that that's not a bad way of characterizing it i think i am in it uh you know for the sport uh you know the only way to become the best version of yourself is to be uh to be under the gun and uh you know every single day and that's that's certainly uh what we are it sort of has its own rewards building great products building great companies uh you know regardless of you know uh what the spoils may be uh it has its own rewards and i i it's hard for people like us to get off the field and uh you know hang it up so here we are so there you have it he's in it for the sport how great is that he loves building companies and that my opinion that's how frank slootman thinks about success it's not about money money's the byproduct of success as earl nightingale would say success is the progressive realization of a worthy ideal i love that quote building great companies building products that change the world changing people's lives with data and insights creating jobs creating life-altering wealth opportunities not for himself but for thousands of employees and partners i'd say that's a pretty worthy ideal and i hope frank slootman sticks with it for a while okay that's it for today thanks to stephanie chan for the background research she does for breaking analysis alex meyerson on production kristen martin and cheryl knight on social with rob hoff on siliconangle and thanks to ivana delevska of spear invest and my friend chip symington for the angles from the money side of things remember all these episodes are available as podcasts just search breaking analysis podcast i publish weekly on wikibon.com and siliconangle.com and don't forget to check out etr.plus for all the survey data you can reach me at devolante or david.velante siliconangle.com and this is dave vellante for cube insights powered by etrbsafe stay well and we'll see you next time [Music] you
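The guidance math discussed in this episode is simple enough to restate in a few lines: roughly $165 million of product revenue passed back to customers this year through the price reduction, about $65 million expected to return through stimulated demand, against roughly $2 billion in forecast product revenue. The sketch below uses only the figures quoted in the episode and adds no new data.

# Restating the Snowflake guidance figures quoted in the episode above.
# All inputs come from the transcript; nothing here is new data.

price_pass_through = 165e6      # revenue given back to customers this year
demand_offset = 65e6            # expected to return via increased consumption
forecast_product_revenue = 2e9  # roughly $2B in product revenue this year

net_impact = price_pass_through - demand_offset
print(f"net revenue impact: ${net_impact / 1e6:.0f}M")
print(f"as a share of forecast product revenue: {net_impact / forecast_product_revenue:.1%}")
# ~$100M net, or about 5% of the year's forecast product revenue.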

Published Date : Mar 18 2022

Breaking Analysis: Pat Gelsinger has the Vision Intel Just Needs Time, Cash & a Miracle


 

>> From theCUBE Studios in Palo Alto in Boston, bringing you data-driven insights from theCUBE and ETR, this is "Breaking Analysis" with Dave Vellante. >> If it weren't for Pat Gelsinger, Intel's future would be a disaster. Even with his clear vision, fantastic leadership, deep technical and business acumen, and amazing positivity, the company's future is in serious jeopardy. It's the same story we've been telling for years. Volume is king in the semiconductor industry, and Intel no longer is the volume leader. Despite Intel's efforts to change that dynamic With several recent moves, including making another go at its Foundry business, the company is years away from reversing its lagging position relative to today's leading foundries and design shops. Intel's best chance to survive as a leader in our view, will come from a combination of a massive market, continued supply constraints, government money, and luck, perhaps in the form of a deal with apple in the midterm. Hello, and welcome to this week's "Wikibon CUBE Insights, Powered by ETR." In this "Breaking Analysis," we'll update you on our latest assessment of Intel's competitive position and unpack nuggets from the company's February investor conference. Let's go back in history a bit and review what we said in the early 2010s. If you've followed this program, you know that our David Floyer sounded the alarm for Intel as far back as 2012, the year after PC volumes peaked. Yes, they've ticked up a bit in the past couple of years but they pale in comparison to the volumes that the ARM ecosystem is producing. The world has changed from people entering data into machines, and now it's machines that are driving all the data. Data volumes in Web 1.0 were largely driven by keystrokes and clicks. Web 3.0 is going to be driven by machines entering data into sensors, cameras. Other edge devices are going to drive enormous data volumes and processing power to boot. Every windmill, every factory device, every consumer device, every car, will require processing at the edge to run AI, facial recognition, inference, and data intensive workloads. And the volume of this space compared to PCs and even the iPhone itself is about to be dwarfed with an explosion of devices. Intel is not well positioned for this new world in our view. Intel has to catch up on the process, Intel has to catch up on architecture, Intel has to play catch up on security, Intel has to play catch up on volume. The ARM ecosystem has cumulatively shipped 200 billion chips to date, and is shipping 10x Intel's wafer volume. Intel has to have an architecture that accommodates much more diversity. And while it's working on that, it's years behind. All that said, Pat Gelsinger is doing everything he can and more to close the gap. Here's a partial list of the moves that Pat is making. A year ago, he announced IDM 2.0, a new integrated device manufacturing strategy that opened up its world to partners for manufacturing and other innovation. Intel has restructured, reorganized, and many executives have boomeranged back in, many previous Intel execs. They understand the business and have a deep passion to help the company regain its prominence. As part of the IDM 2.0 announcement, Intel created, recreated if you will, a Foundry division and recently acquired Tower Semiconductor an Israeli firm, that is going to help it in that mission. It's opening up partnerships with alternative processor manufacturers and designers. 
And the company has announced major investments in CapEx to build out Foundry capacity. Intel is going to spin out Mobileye, a company it had acquired for 15 billion in 2017. Will it try and get a $50 billion valuation? Mobileye is about $1.4 billion in revenue, and is more likely going to be worth around 25 to 30 billion, we'll see. But Intel is going to maybe get $10 billion in cash from that spin out, that IPO, and it can use that to fund more fabs and more equipment. Intel is leveraging its 19,000 software engineers to move up the stack and sell more subscriptions and high margin software. He's got to sell what he's got. And finally, Pat is playing politics beautifully. Announcing, for example, fab investments in Ohio, which he dubbed Silicon Heartland. Brilliant! Again, there's no doubt that Pat is moving fast and doing the right things. Here's Pat at his investor event in a T-shirt that says, "torrid, bringing back the torrid pace and discipline that Intel is used to." And on the right is Pat at the State of the Union address, looking sharp in shirt and tie and suit. And he has said, "a bet on Intel is a hedge against geopolitical instability in the world." That's just so good. To that statement, he showed this chart at his investor meeting. Basically it shows that whereas Western semiconductor manufacturing capacity has gone from 80% of the world's volume to 20%, he wants to get it back to 50% by 2030, and reset supply chains in a market that has become as important as oil. Again, just brilliant positioning and pushing all the right hot buttons. And here's a slide underscoring that commitment, showing manufacturing facilities around the world with new capacity coming online in the next few years in Ohio and the EU. Mentioning the CHIPS Act in his presentation, in the US and in Europe, as part of a public private partnership; no doubt, he's going to need all the help he can get. Now, we couldn't resist. The chart on the left here shows wafer starts and transistor capacity growth for Intel over time, and speaks to its volume aspirations. But we couldn't help notice that the shape of the curve is somewhat misleading, because it shows a two-year (mumbles) and then widens the aperture to three years to make the curve look steeper. Fun with numbers. Okay, maybe a little nitpick, but these are some of the telling nuggets we pulled from the investor day, and they're important. Another nitpick is, in our view, wafers would be a better measure of volume than transistors. It's like a company saying we shipped 20% more exabytes or MIPS this year than last year. Of course you did, and your revenue shrank. Anyway, Pat went through a detailed analysis of the various Intel businesses and promised mid to high double digit growth by 2026, half of which will come from Intel's traditional PC, data center and network edge businesses, and the rest from advanced graphics, HPC, Mobileye and Foundry. Okay, that sounds pretty good. But it has to be taken in the context of the balance of the semiconductor industry; yeah, this would be a pretty competitive growth rate, in our view, especially for a 70 plus billion dollar company. So kudos to Pat for sticking his neck out on this one. But again, the promise is several years away, at least four years away. Now we want to focus on Foundry, because that's the only way Intel is going to get back into the volume game and the volume necessary for the company to compete.
Pat built this slide showing the baby blue for today's Foundry business just under a billion dollars and adding in another $1.5 billion for Tower Semiconductor, the Israeli firm that it just acquired. So a few billion dollars in the near term future for the Foundry business. And then by 2026, this really fuzzy blue bar. Now remember, TSM is the new volume leader, and is a $50 billion company growing. So there's definitely a market there that it can go after. And adding in ARM processors to the mix, and, you know, opening up and partnering with the ecosystems out there can only help volume if Intel can win that business, which you know, it should be able to, given the likelihood of long term supply constraints. But we remain skeptical. This is another chart Pat showed, which makes the case that Foundry and IDM 2.0 will allow expensive assets to have a longer useful life. Okay, that's cool. It will also solve the cumulative output problem highlighted in the bottom right. We've talked at length about Wright's Law. That is, for every cumulative doubling of units manufactured, cost will fall by a constant percentage. You know, let's say around 15% in semiconductor world, which is vitally important to accommodate next generation chips, which are always more expensive at the start of the cycle. So you need that 15% cost buffer to jump curves and make any money. So let's unpack this a bit. You know, does this chart at the bottom right address our Wright's Law concerns, i.e. that Intel can't take advantage of Wright's Law because it can't double cumulative output fast enough? Now note the decline in wafer starts and then the slight uptick, and then the flattening. It's hard to tell what years we're talking about here. Intel is not going to share the sausage making because it's probably not pretty, But you can see on the bottom left, the flattening of the cumulative output curve in IDM 1.0 otherwise known as the death spiral. Okay, back to the power of Wright's Law. Now, assume for a second that wafer density doesn't grow. It does, but just work with us for a second. Let's say you produce 50 million units per year, just making a number up. That gets you cumulative output to $100 million in, sorry, 100 million units in the second year to take you two years to get to that 100 million. So in other words, it takes two years to lower your manufacturing cost by, let's say, roughly 15%. Now, assuming you can get wafer volumes to be flat, which that chart showed, with good yields, you're at 150 now in year three, 200 in year four, 250 in year five, 300 in year six, now, that's four years before you can take advantage of Wright's Law. You keep going at that flat wafer start, and that simplifying assumption we made at the start and 50 million units a year, and well, you get to the point. You get the point, it's now eight years before you can get the Wright's Law to kick in, and you know, by then you're cooked. But now you can grow the density of transistors on a chip, right? Yes, of course. So let's come back to Moore's Law. The graphic on the left says that all the growth is in the new stuff. Totally agree with that. Huge term that Pat presented. Now he also said that until we exhaust the periodic table of elements, Moore's Law is alive and well, and Intel is the steward of Moore's Law. Okay, that's cool. The chart on the right shows Intel going from 100 billion transistors today to a trillion by 2030. Hold that thought. 
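Because the Wright's Law walkthrough above is easy to lose in prose, here is a minimal simulation that reruns the same simplifying assumptions stated in the text: flat output of 50 million units per year, no density growth, and roughly a 15% cost decline for every cumulative doubling of units. It is a sketch of the reasoning, not a model of Intel's actual cost structure.

import math

# Sketch of the Wright's Law argument above: flat output of 50M units/year,
# ~15% cost decline per cumulative doubling. Not a model of Intel's real costs.

ANNUAL_UNITS = 50_000_000
LEARNING_RATE = 0.15     # cost falls 15% per cumulative doubling
START_COST = 100.0       # arbitrary starting unit-cost index

cumulative = 0
for year in range(1, 9):
    cumulative += ANNUAL_UNITS
    doublings = math.log2(cumulative / ANNUAL_UNITS)  # doublings since year one
    unit_cost = START_COST * (1 - LEARNING_RATE) ** doublings
    print(f"year {year}: cumulative {cumulative / 1e6:.0f}M units, "
          f"{doublings:.1f} doublings, cost index {unit_cost:.1f}")
# With flat wafer starts, each successive doubling takes twice as long,
# so the 15% cost buffer needed to jump to the next node arrives later and later.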
So Intel is assuming that it'll keep up with Moore's Law, meaning a doubling of transistors every, let's say, two years, and I believe it. Bring that back to Wright's Law in the previous chart: it means that with IDM 2.0, Intel can get back to enjoying the benefits of Wright's Law every two years or so, versus IDM 1.0, where it was failing to keep up. Okay, so Intel is saved, yeah?

Well, let's bring into this discussion one of our favorite examples, Apple's M1 ARM-based chip. The M1 Ultra is a new architecture, and you can see the stats here: 114 billion transistors on a five-nanometer process, and all the other stats. The M1 Ultra has two chips bonded together, and Apple put an interposer between the two chips. An interposer is a pathway that allows electrical signals to pass through it onto another chip; it's a super fast connection, and you can see 2.5 terabytes per second. But the brilliance is that the two chips act as a single chip, so you don't have to change the software at all. The way Intel's architecture works is it takes two different chips on a substrate, and each has its own memory; the memory is not shared. Apple shares the memory across the CPU, the NPU and the GPU. All of it is shared, meaning it needs no change in software, unlike Intel. Now Intel is working on a new architecture, but Apple and others are way ahead.

Now let's make this really straightforward. The original Apple M1 had 16 billion transistors per chip, and as you can see in that diagram, the recently launched M1 Ultra has 114 billion per chip. If you take into account the size of the chips, which is increasing, and the increase in the number of transistors per chip, that's a factor of around 6x growth in transistor density per chip in 18 months. Remember, Intel, assuming the results in the two previous charts were achievable, is running at 2x every two years, versus 6x for the competition. And AMD and Nvidia are close to that as well, because they can take advantage of TSM's learning curve. So even in the previous chart, with Moore's Law alive and well, Intel gets to a trillion transistors by 2030; the Apple, ARM and Nvidia ecosystems will arrive at that point years ahead of Intel. That means lower costs and a significantly better competitive position.

Okay, so where does that leave Intel? The story is really not resonating with investors, and hasn't for a while. On February 18th, the day after its investor meeting, the stock was off. It's rebounded a little bit, but investors are probably prudent to wait unless they have a really long-term view. And you can see Intel's performance relative to some of the major competitors. Pat talked about five nodes in four years; he made a big deal out of that and shared proof points with Alder Lake and Meteor Lake and other nodes. But Intel just delayed Granite Rapids last month, pushing it out from 2023 to 2024, and it told investors it's going to have to boost spending to turn this ship around, which is absolutely the case. That chip delay, we feel, is the first disappointment, and it won't be the last. But as we've said many times, it's very difficult, actually it's impossible, to quickly catch up in semiconductors, and Intel will never catch up without volume. So we'll leave you by reiterating our scenario that could save Intel, and that's if its Foundry business can eventually win back Apple to supercharge its volume story.
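Here's a rough way to see the timing gap implied by those growth rates (our extrapolation from the figures quoted above, a naive compounding exercise rather than a forecast): Intel at roughly 2x transistors every two years from about 100 billion today, and the Apple/TSM ecosystem at roughly 6x every 18 months from the M1 Ultra's 114 billion.

```python
# An illustrative extrapolation (ours, not the author's model) using the growth
# rates quoted above. Naive constant compounding only -- meant to show relative
# timing, not to predict actual products.

import math

def years_to_reach(target: float, start: float, factor: float, period_years: float) -> float:
    """Years of constant compounding needed for `start` to reach `target`."""
    periods = math.log(target / start) / math.log(factor)
    return periods * period_years

TARGET = 1e12  # one trillion transistors

intel_years = years_to_reach(TARGET, start=100e9, factor=2, period_years=2)
apple_years = years_to_reach(TARGET, start=114e9, factor=6, period_years=1.5)

print(f"Intel (2x every 2 years from 100B):      ~{intel_years:.1f} years to 1T")
print(f"Apple/TSM (6x every 18 months from 114B): ~{apple_years:.1f} years to 1T")
# At these assumed rates the TSM-based ecosystem reaches a trillion transistors
# several years before Intel -- the crux of the volume and cost argument above.
```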
It's going to be tough to wrestle that business away from TSM, especially as TSM is setting up shop in Arizona with US manufacturing that's going to placate the US government. But look, maybe the government cuts a deal with Apple: hey, we'll back off with the DOJ and FTC, and as part of the CHIPS Act you'll have to throw some business at Intel. Would that be enough, combined with the other Foundry opportunities Intel could theoretically win? Maybe. But from this vantage point, it's very unlikely Intel will gain back its true number one leadership position. If Intel had been really paranoid back when David Floyer sounded the alarm 10 years ago, that might have made a pretty big difference. But honestly, the best we can hope for is that Intel's strategy and execution allow it to reach competitive volumes by the end of the decade, and that this national treasure survives to fight for its leadership position in the 2030s, because it would take a miracle for that to happen in the 2020s.

Okay, that's it for today. Thanks to David Floyer for his contributions to this research; always a pleasure working with David. Stephanie Chan helps me do much of the background research for "Breaking Analysis" and works with our CUBE editorial team, and Kristen Martin and Cheryl Knight help get the word out. Thanks to SiliconANGLE's editor in chief Rob Hof, who comes up with a lot of the great titles we have for "Breaking Analysis" and gets the word out to the SiliconANGLE audience. Thanks, guys. Great teamwork. Remember, these episodes are all available as podcasts wherever you listen; just search "Breaking Analysis Podcast." You'll want to check out ETR's website at etr.ai. We also publish a full report every week on wikibon.com and siliconangle.com. You can always get in touch with me by email at david.vellante@siliconangle.com, DM me @dvellante, or comment on my LinkedIn posts. This is Dave Vellante for "theCUBE Insights, Powered by ETR." Have a great week. Stay safe, be well, and we'll see you next time. (upbeat music)

Published Date : Mar 12 2022
