Dec 10th Keynote Analysis with Dave Vellante & Dave Floyer | AWS re:Invent 2020


 

>> From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel, AWS, and our community partners. >> Hi, this is Dave Vellante. Welcome back to theCUBE's continuous coverage of AWS re:Invent 2020, the virtual version of theCUBE and re:Invent. I'm here with David Floyer, who's the CTO of Wikibon, and we're going to break down today's infrastructure keynote, which was headlined by Peter DeSantis. David, good to see you. >> Good to see you. >> So David, we have a very tight timeframe and I just want to cover a couple of things. Something I've learned over many, many years of working with you is the statement: it's all about recovery. And that really was the first part of Peter's discussion today. He laid out the operational practices of AWS, and he actually had some really interesting things up there. He used the "there's no compression algorithm for experience" line, but he talked a lot about availability, and he compared AWS's availability philosophy with some of its competitors'. And he talked about generators being concurrent and maintainable. He took it down to the batteries and the UPS, and the other thing that impressed me most, which you've taught me over the years, is systems thinking. You've got to look at the entire system; one little component, as Peter emphasized, can have a huge blast radius. So what AWS tries to do is constrict that blast radius so he can sleep at night: non-disruptive replacement of things like batteries. He talked a lot about synchronous versus asynchronous trade-offs, and it was kind of async versus sync 101: with synchronous you've got latency; with asynchronous you've got data-loss exposure. So there was a lot of discussion around that. But what was most interesting is that he compared and contrasted AWS's philosophy on availability zones with the competition.
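That synchronous-versus-asynchronous trade-off can be sketched in a few lines of Python. This is a hypothetical illustration of the 101-level point above, not AWS code: a synchronous write blocks on the remote acknowledgment and pays the replication latency on every call, while an asynchronous write returns immediately but leaves a window of unreplicated data.

```python
import time

# Hypothetical sketch of the sync-vs-async replication trade-off
# discussed in the keynote -- illustrative only, not AWS internals.

REMOTE_LATENCY_S = 0.002  # assumed 2 ms round trip to the replica


def replicate(record: str) -> None:
    """Simulate shipping a record to a remote availability zone."""
    time.sleep(REMOTE_LATENCY_S)


def write_sync(log: list, record: str) -> float:
    """Synchronous: wait for the replica ack -- zero loss window,
    but every write pays the replication latency. Returns elapsed time."""
    start = time.perf_counter()
    log.append(record)
    replicate(record)          # block until the replica has it
    return time.perf_counter() - start


def write_async(log: list, pending: list, record: str) -> float:
    """Asynchronous: return immediately -- low latency, but anything
    still in `pending` is lost if the primary fails now."""
    start = time.perf_counter()
    log.append(record)
    pending.append(record)     # replicated later; this is the exposure
    return time.perf_counter() - start
```

The `pending` list is the exposure: if the primary fails before it drains, those records are lost, which is exactly the latency-versus-durability choice described above.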
And he didn't specifically call out Microsoft and Google, but he showed some screenshots of their websites, and the competition uses terms like "usually available" and "generally available," meaning that a given region or availability zone may not be available. That's not the case with AWS. Your thoughts on that? >> They have a very impressive track record, despite the, uh, outage the other day. But they've got a very impressive track record. I think there is a big difference, however, between general-purpose computing and mission-critical computing. When you've got to bring up databases and everything else like that, then I think there are other platforms, which in the long term, in my view, AWS should be embracing, that do a better job in mission-critical areas, in terms of bringing things back up and not losing data in recovery. So that's an area where I think AWS will need to partner going forward. >> Yeah. So the other area of the keynote that was critical was custom silicon, and you and I have talked about this a lot. Of course, AWS and Intel are huge partners, but we know that Intel owns its own fabs, while its competitors outsource to other manufacturers. So Intel is motivated to put as much function on the real estate as possible, to create general-purpose processors and get as much out of that real estate as they possibly can. So that's the backdrop to what AWS has been doing. They certainly didn't throw Intel under the bus; they were very complimentary and friendly. But they also laid out that they're developing a number of components in custom silicon. They talked about the Nitro controllers, and Inferentia, which is, you know, a specialized chip for inference, to run things like PyTorch and TensorFlow. They talked about Trainium, the new training chip for AI and ML models.
They spent a lot of time on Graviton, which is 64-bit (like you say, everything's 64-bit these days), but it's the Arm processor. And so, you know, they didn't specifically mention Moore's law, but they gave a microprocessor 101 overview, which I really enjoyed. They talked about the need to put on more cores, and about running multithreaded apps and the whole new programming model that brings. And they basically laid out the case that these specialized processors they're developing are more efficient. They talked about all these cores, the overhead those cores bring, and the difficulty of keeping those cores busy. And so they talked about simultaneous multithreading and sharing cores, which was like going back to the old days of microprocessor development. But the point is that as you add more cores and take on that overhead, you get non-linear performance improvements, and that defeats the notion of scale-out, right? And so what I want to get to is your take on this, because you've been talking for a long, long time about Arm in the data center. It reminds me of object storage: we talked for years about object storage, and it never went anywhere until Amazon brought forth the Simple Storage Service, and then object storage obviously became mainstream. Now I see the same thing happening with Arm in the data center; of course, alternative processors are taking off, but what's your take on all this? You listened to the keynote; give us your takeaways. >> Well, let's go back to first principles for a second. Why is this happening? It's happening because of volume. Volume is incredibly important, obviously, in terms of cost.
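The non-linear improvement from adding cores that gets described here is, at bottom, Amdahl's law: the serial fraction of a workload caps the speedup no matter how many cores you pile on. A quick sketch, using an assumed 95%-parallel workload purely for illustration:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: overall speedup on `cores` cores when only
    `parallel_fraction` of the work can run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)


# Even at 95% parallel, speedup is capped at 1 / 0.05 = 20x, so adding
# ever more cores yields the diminishing returns described above.
for cores in (2, 8, 64, 1024):
    print(cores, round(amdahl_speedup(0.95, cores), 2))
```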
>> And if you look at volume, Arm is based on the volumes that came from handhelds and all of the mobile business that's been generating. So there are billions of chips being made on that. >> Can I interrupt you for a second, David? So we're showing a slide here, and it relates to volume. We talk a lot about the volume that flash, for instance, gained from the consumer market, and now we're talking about these emerging workloads, which you call matrix workloads. These are things like AI inferencing and edge work, and this gray area shows these alternative workloads. That's really what Amazon is going after. So you show in this chart, you know, basically very small today, in 2020, but a very large and growing position by the end of this decade, really eating into the traditional space. >> That's absolutely correct, and that's being led by what's happening in the mobile market. If you look at all of the work going on on your Apple iPhone, a huge amount of modern matrix workloads are in there to help you with your photography and everything like that. And that's going to come into the data center within two years. That's what AWS is focusing on: the capability to do this type of new workload in real time. And it's hundreds of times more processing to do these workloads, and it's got to be done in real time. >> Yeah. So we have a bar chart on that which you've produced. I don't know if you can see the bars here. I can't see them, but maybe we can editorialize. So on the left-hand side you basically have traditional workloads in blue, and you have matrix workloads.
What you're calling these emerging workloads is shown in red. So you show a performance of 0.95 versus 50, and then price performance: 3.6 for traditional, and more than 150 times greater for the ARM-based workloads. >> Yeah, and that's an analysis of the previous generation of Arm. If you take the new ones, the M1, for example, which has come into the PC area, that's going to be even higher. So Arm is producing hybrid computers, heterogeneous computers with multiple different things inside, and that is making life a lot more efficient. Especially in the inference world, they're using NPUs instead of GPUs, and you can fit about four times more NPUs than GPUs. It's just a different world, and Arm is ahead because it's done all the work in the volume area, and that's now going to go into PCs, and it's going to go into the data center. >> Okay, great. Now, if we could, guys, bring up the other chart, titled "Workloads moving to ARM-based servers." This one is just amazing to me, David. You'll see that... for some reason the slides aren't translating, so forget the slides. But basically you have the revenue coming from Arm being substantially higher in the out years, or certainly growing substantially more than the traditional workload revenue. Now, that's going to take a decade, but maybe you could explain why you see that.
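For reference, the ratios implied by the chart's own figures work out as follows; a quick check, assuming the slide's relative scores are directly comparable:

```python
# Figures quoted from the Wikibon chart discussed above; the units are
# the slide's own relative scores, assumed directly comparable.
perf_traditional = 0.95
perf_matrix = 50.0
perf_ratio = perf_matrix / perf_traditional
print(f"raw performance advantage: {perf_ratio:.1f}x")  # roughly 52.6x

# Traditional price performance is quoted at 3.6, with the ARM-based
# figure described as "more than 150 times greater":
pp_traditional = 3.6
pp_arm_floor = 150 * pp_traditional
print(f"implied ARM-based price performance: above {pp_arm_floor:.0f}")
```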
>> Yeah, the reason is these matrix workloads, and also the offload, like Nitro is doing, of storage and networking from the main CPUs, the disaggregation of computing, plus the traditional workloads, which can move over or are moving over. And where AWS, and Microsoft and Apple on the PC, are leading us is that they are doing the hard work of making sure their software and their APIs can utilize the capabilities of Arm. The advantage AWS has, of course, is those enormous economies of scale across many, many users. It's going to take longer, much longer, for this to go into the enterprise data center, but Microsoft, Google, and AWS are going to lead the charge of this movement of Arm into the data center. It was amazing what some of the Arm customers on AWS were seeing today: much faster performance at a much lower price. They were affirming it. And the fundamental reason is that Arm is two generations ahead in production: they're at five nanometers at the moment, whereas Intel is still at 10. So that's a big, big issue that Intel has to address. >> Yeah. And so you've been getting this core creep, I'll call it, which brings a lot of overhead, and now you're seeing these very efficient, specialized processors. In your premise, we're going to see these explode for these new workloads, and in particular the edge is such an enormous opportunity. I think you've pointed out that you see a big market for these emergent edge workloads: they kind of start in the data center and then push out to the edge. Andy Jassy says we're going to bring AWS to the edge, and the data center is just another edge node. I liked that vision. Your thoughts?
>> I think that is a compelling vision. At the edge you have many different form factors. You will need an edge node in a car, for example, that's cheap enough to fit into a car but has a hundred times more processing than the computers in a car at the moment; that's a big leap, and it's needed to get to automated driving, but it's going to happen. And it's going to happen on ARM-based systems. The amount of work that's going to go out to the edge is enormous, and the amount of data generated at the edge is enormous. That data is not going to come back to the center; it's going to be processed at the edge, and the edge is going to become the center, if you like, of where computing is done. That doesn't mean you won't have a lot of inference work inside the data center, but a great deal of work, in terms of data and processing, is going to move to the edge over the next decade. >> Yeah. Well, many of AWS's edge offerings today, you know, assume data is going to be sent back, although of course you see Outposts, and now smaller versions of Outposts. To me, that's a clue of what's coming: again, bringing AWS to the edge. I also want to touch on Amazon's comments on renewables. Peter talked a lot about what they're doing to reduce carbon. One of the interesting things was that they're actually reusing their cooling water: I think he said they clean it and reuse it three or more times, and then they purify it before putting it back out. So that's a really great sustainability story. There was much more to it, but I think companies like Amazon, especially large companies, really have a responsibility, so it's great to see Amazon stepping up.
Anyway, we're out of time. David, thanks so much for coming on and sharing your insights; we really appreciate it. By the way, wikibon.com has a lot of David's work on there, including those slides. Apologies for some of the data not showing through; we're working in real time here. This is Dave Vellante, for David Floyer. You're watching theCUBE's continuous coverage of AWS re:Invent 2020. We'll be right back.

Published Date: Dec 18, 2020

