

Pradeep Sindhu, Fungible | theCUBE on Cloud 2021


 

>> From around the globe, it's theCUBE, presenting theCUBE on Cloud, brought to you by SiliconANGLE. As I've said many times on theCUBE, for years, decades even, we've marched to the cadence of Moore's Law, relying on the doubling of performance every 18 months or so. But no longer is this the mainspring of innovation for technology. Rather, it's the combination of data, applying machine intelligence, and the cloud, supported by the relentless reduction of the cost of compute and storage and the build-out of a massively distributed computer network. Very importantly, in the last several years, alternative processors have emerged to support offloading work and performing specific tasks. GPUs are the most widely known example of this trend, with the ascendancy of Nvidia for certain applications like gaming and crypto mining and, more recently, machine learning. But in the middle of the last decade, we saw early development focused on the DPU, the data processing unit, which is projected to make a huge impact on data centers in the coming years as we move into the next era of cloud. And with me is Pradeep Sindhu, who's the co-founder and CEO of Fungible, a company specializing in the design and development of DPUs. Pradeep, welcome to theCUBE. Great to see you.
>> Thank you, Dave. And thank you for having me.
>> You're very welcome. So okay, my first question is, don't CPUs and GPUs process data already? Why do we need a DPU?
>> You know, that is a natural question to ask. CPUs have been around in one form or another for almost 55, maybe 60 years. This is when general-purpose computing was invented, and essentially all CPUs went to the x86 architecture. By and large, Arm, of course, is used very heavily in mobile computing, but x86 is primarily used in the data center, which is our focus. Now, you can understand that that architecture of general-purpose CPUs has been refined heavily by some of the smartest people on the planet. And for the longest time, improvements, you referred to Moore's Law, which is really the improvement of the price performance of silicon over time, that, combined with architectural improvements, was the thing that was pushing us forward. Well, what has happened is that the architectural refinements are more or less done. You're not going to get very much; you're not going to squeeze more blood out of that stone from general-purpose computer architectures. What has also happened over the last decade is that Moore's Law, which is essentially the doubling of the number of transistors on a chip, has slowed down considerably, to the point where you're only getting maybe 10-20% improvements every generation in the speed of the transistor, if that. And what's happening also is that the spacing between successive generations of technology is actually increasing, from 2, 2.5 years to now three, maybe even four years. And this is because we are reaching some physical limits in CMOS. These limits are well recognized, and we have to understand that these limits apply not just to general-purpose CPUs, but they also apply to GPUs. Now, general-purpose CPUs do one kind of computation. They're really general, and they can do lots and lots of different things. It is actually a very, very powerful engine. The problem is it's not powerful enough to handle all computations.
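(To put the slowdown Sindhu describes in perspective, it helps to annualize the two growth rates he quotes. A minimal Python sketch follows; it treats 15% every 3.5 years as a midpoint of the 10-20% per-generation and three-to-four-year figures above, so the numbers are the interview's approximations, not measurements:)

    # Annualized single-thread improvement implied by the figures quoted above.
    # Classic Moore's-law era: roughly 2x performance every 18 months.
    classic_per_year = 2 ** (12 / 18) - 1        # ~0.59 -> ~59% per year

    # Slowed era: ~10-20% per generation, one generation every 3-4 years;
    # 15% every 3.5 years is used here as an illustrative midpoint.
    slowed_per_year = 1.15 ** (1 / 3.5) - 1      # ~0.04 -> ~4% per year

    print(f"{classic_per_year:.0%} vs {slowed_per_year:.0%} per year")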
So this is why you ended up having a different kind of processor called the GPU, which specializes in executing vector floating-point arithmetic operations much, much better than CPUs, maybe 20, 30, 40 times better. Well, GPUs have now been around for probably 15, 20 years, mostly addressing graphics computations, but recently, in the last decade or so, they have been used heavily for AI and analytics computations. So now the question is, why do you need another specialized engine called the DPU? Well, I started down this journey almost eight years ago. I was still at Juniper Networks, which is another company that I founded. I recognized that in the data center, as the workload changes due to addressing more and more, larger and larger corpuses of data, number one, and as people use scale-out as the standard technique for building applications, what happens is that the amount of east-west traffic increases greatly. And what happens is that you now have a new type of workload which is coming, and today probably 30% of the workload in a data center is what we call data-centric. I want to give you some examples of what data-centric means.
>> Well, I wonder if I could interrupt you for a second, because I want those examples, and I want you to tie it into the cloud, because that's kind of the topic that we're talking about today and how you see that evolving. It's a key question that we're trying to answer in this program. Of course, early cloud was about infrastructure, a little compute, storage, networking. And now we have, to your point, all this data in the cloud, and we're seeing, by the way, the definition of cloud expand into this distributed, or I think the term you use is disaggregated, network of computers. So you're a technology visionary, and I wonder how you see that evolving, and then please work in your examples of that critical workload, that data-centric workload.
>> Absolutely, happy to do that. So if you look at the architecture of cloud data centers, the single most important invention was scale-out: scale-out of identical or near-identical servers, all connected to a standard IP Ethernet network. That's the architecture. Now, the building blocks of this architecture are Ethernet switches, which make up the network, IP Ethernet switches, and then the servers, all built using general-purpose x86 CPUs, with DRAM, with SSDs, with hard drives, all connected to the CPU. Now, the fact that you scale these server nodes, as they're called, out was very, very important in addressing the problem of how you build very large-scale infrastructure using general-purpose computers. But this architecture, Dave, is a compute-centric architecture. And the reason it's a compute-centric architecture is that if you open up a server node, what you see is a connection to the network, typically with a simple network interface card, and then you have CPUs, which are in the middle of the action. Not only are the CPUs processing the application workload, but they're processing all of the I/O workload, what we call the data-centric workload. And so when you connect SSDs and hard drives and GPUs and everything to the CPU, as well as to the network, you can now imagine that the CPU is doing two functions: it's running the applications, but it's also playing traffic cop for the I/O.
So every I/O has to go to the CPU, you're executing instructions, typically in the operating system, and you're interrupting the CPU many, many millions of times a second. Now, general-purpose CPUs and the architecture of CPUs were never designed to play traffic cop, because the traffic cop function is a function that requires you to be interrupted very, very frequently. So it's critical that in this new architecture, where there's a lot of data, a lot of east-west traffic, the percentage of workloads which are data-centric has gone from maybe 1 to 2% to 30 to 40%. I'll give you some numbers, which are absolutely stunning. If you go back to, say, 1987, which is the year in which I bought my first personal computer, the network was some 30 times slower than the CPU. The CPU was running at 50 megahertz; the network was running at three megabits per second. Well, today the network runs at 100 gigabits per second, and the CPU clock speed of a single core is about 2 to 3 gigahertz. So you've seen that there is a 600x change in the ratio of I/O to compute, just in the raw clock speeds. Now, you can tell me, hey, typical CPUs have lots and lots of cores, but even when you factor that in, there's been close to two orders of magnitude change in the amount of I/O to compute. There is no way to address that without changing the architecture, and this is where the DPU comes in. The DPU actually solves two fundamental problems in cloud data centers, and these are fundamental: there's no escaping it, and no amount of clever marketing is going to get around these problems. Problem number one is that in a compute-centric cloud architecture, the interactions between server nodes are very inefficient. Okay, that's problem number one. Problem number two is that these data-centric computations, and I'll give you those four examples, the network stack, the storage stack, the virtualization stack, and the security stack, those four examples are executed very inefficiently by CPUs. Needless to say, if you try to execute these on GPUs, you'll run into the same problem, probably even worse, because GPUs are not good at executing these data-centric computations. So what we were looking to do at Fungible is to solve these two basic problems, and you don't solve them by just taking older architectures off the shelf and applying them to these problems, because this is what people have been doing for the last 40 years. So what we did was we created this new microprocessor that we call the DPU from the ground up. It is a clean-sheet design, and it solves those two problems fundamentally.
>> So I want to get into that, but I just want to stop you for a second and ask you a basic question, which is, if I understand it correctly, if I just took the traditional scale-out, if I scale out compute and storage, you're saying I'm going to hit diminishing returns. Not only is it not going to scale linearly, I'm going to get inefficiencies. And that's really the problem that you're solving. Is that correct?
>> That is correct. And you know, the workloads that we have today are very data-heavy. You take AI, for example; you take analytics, for example. It's well known that for AI training, the larger the corpus of relevant data that you're training on, the better the result.
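(A quick back-of-the-envelope check of the ratio shift Sindhu just described, using the approximate figures quoted in the interview; Python here is used purely as a calculator, and the 2.5 GHz figure is a midpoint assumption:)

    # I/O-to-compute ratio, 1987 vs. today, from the figures quoted above.
    cpu_1987_hz = 50e6       # ~50 MHz CPU clock
    net_1987_bps = 3e6       # ~3 Mb/s network

    cpu_now_hz = 2.5e9       # ~2-3 GHz single core (midpoint)
    net_now_bps = 100e9      # ~100 Gb/s network

    ratio_1987 = net_1987_bps / cpu_1987_hz   # ~0.06: network much slower than CPU
    ratio_now = net_now_bps / cpu_now_hz      # ~40: network much faster than a core

    print(f"shift: ~{ratio_now / ratio_1987:.0f}x")   # ~667x, i.e. the "600x" change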
So you can imagine where this is going to go, especially when people have figured out a formula that, hey, the more data I collect, the more I can use those insights to make money.
>> Yeah, this is why I wanted to talk to you, because for the last 10 years we've been collecting all this data. Now I want to bring in some other data that you actually shared with me beforehand, some market trends that you guys cited in your research. The first thing people said is they want to improve their infrastructure, and they want to do that by moving to the cloud; there was also a security angle there, and that's a whole other topic we could discuss. The other stat that jumped out at me: 80% of the customers that you surveyed said they'll be augmenting their x86 CPUs with alternative processing technology. So that's sort of, you know, I know it's self-serving, but it's right on the conversation we're having. So I want to understand the architecture and how you've approached this. You've clearly laid out that x86 is not going to solve this problem, and even GPUs are not going to solve this problem. So help us understand the architecture and how you do solve this problem.
>> I'll be very happy to. Remember, I used this term traffic cop, and I used this term very specifically. First, let me define what I mean by a data-centric computation, because that's the essence of the problem we solve. Remember, I said two problems: one is that we execute data-centric workloads at least an order of magnitude more efficiently than CPUs or GPUs, probably 30 times more efficiently; and the second thing is that we allow nodes to interact with each other over the network much, much more efficiently. Okay, so let's keep those two things in mind. So first, let's look at the data-centric piece. For a workload to qualify as being data-centric, four things have to be true. First of all, it needs to come over the network in the form of packets. Well, this is all workloads, so I'm not saying anything new. Secondly, this workload is heavily multiplexed, in that there are many, many computations happening concurrently, thousands of them. That's number two: a lot of multiplexing. Number three is that this workload is stateful. In other words, you can't process packets out of order; you have to do them in order, because you're terminating network sessions. And the last one is that when you look at the actual computation, the ratio of I/O to arithmetic is medium to high. When you put all four of them together, you actually have a data-centric workload, right? And this workload is terrible for general-purpose CPUs. Not only does the general-purpose CPU not execute it properly, the application that is running on the CPU also suffers, because data-centric workloads are interfering workloads. So unless you design specifically for them, you're going to be in trouble. So what did we do? Well, what we did was, our architecture consists of very, very heavily multi-threaded general-purpose CPUs combined with very heavily threaded specific accelerators. I'll give you examples of some of those accelerators: DMA accelerators, erasure coding accelerators, compression accelerators, crypto accelerators, and then lookup accelerators; those are just some of them. These are functions that, if you do not specialize, you're not going to execute efficiently.
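(To make the four criteria concrete, here is a toy Python encoding of them as a checklist. The field names and numeric thresholds are invented for illustration; they are not Fungible's definitions.)

    from dataclasses import dataclass

    @dataclass
    class Workload:
        arrives_as_packets: bool  # 1. work comes over the network as packets
        concurrent_ops: int       # 2. degree of multiplexing
        stateful: bool            # 3. packets must be processed in order
        io_per_arithmetic: float  # 4. ratio of I/O to arithmetic

    def is_data_centric(w: Workload) -> bool:
        # All four criteria from the interview must hold; the thresholds
        # below are illustrative guesses, not Fungible's numbers.
        return (w.arrives_as_packets
                and w.concurrent_ops >= 1000     # "thousands of them"
                and w.stateful
                and w.io_per_arithmetic >= 0.5)  # "medium to high"

    print(is_data_centric(Workload(True, 5000, True, 0.8)))  # True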
But you cannot just put accelerators in there; these accelerators have to be multi-threaded to handle the load. We have something like 1,000 different threads inside our DPU to address these many, many computations that are happening concurrently, and to handle them efficiently. Now, the thing that is very important to understand is the paucity of transistors. I know that we have hundreds of billions of transistors on a chip, but the problem is that those transistors are used very inefficiently today by the architecture of the CPU or the GPU. What we have done is we've improved the efficiency of those transistors by 30 times.
>> So you can use the real estate more effectively.
>> Much more effectively, because we were not trying to solve a general-purpose computing problem. If you do that, you're going to end up in the same bucket where general-purpose CPUs are today. We were trying to solve the specific problem of data-centric computations and of improving the node-to-node efficiency. So let me go to point number two, because that's equally important. In a scale-out architecture, the whole idea is that I have many, many nodes, and they're connected over a high-performance network. It might be shocking for your listeners to hear that these networks today run at a utilization of no more than 20 to 25%. The question is why. Well, the reason is that if I try to run them faster than that, you start to get packet drops, because there are some fundamental problems caused by congestion on the network which are unsolved as we speak today. There is only one solution, which is to use TCP. Well, TCP is well known; it's part of the TCP/IP suite. TCP was never designed to handle the latencies and speeds inside a data center. It's a wonderful protocol, but it was invented 43 years ago now.
>> Very reliable and tested and proven. It's got a good track record.
>> A very good track record. Unfortunately, it eats a lot of CPU cycles. So if you take the idea behind TCP and you say, okay, what's the essence of TCP, how would you apply it to the data center? That's what we've done with what we call FCP, a fabric control protocol, which we intend to open; we intend to publish standards and make it open. And when you do that, and you embed FCP in hardware on top of a standard IP Ethernet network, you end up with the ability to run very large-scale networks where the utilization of the network is 90 to 95%, not 20 to 25%, and you end up solving the problems of congestion at the same time. Now, why is this important? That's all geek speak so far. But the reason this stuff is important is that such a network allows you to disaggregate, pool, and then virtualize the most important and expensive resources in the data center. What are those? It's compute on one side, storage on the other side, and increasingly even things like DRAM want to be disaggregated and pooled. Well, if I put everything inside a general-purpose server, the problem is that those resources get stranded, because they're stuck behind the CPU. Well, once you disaggregate those resources, and we're saying hyper-disaggregate, where hyper-disaggregate simply means that you can disaggregate almost all the resources--
>> And then you're going to re-aggregate them, right? I mean, that's--
>> Exactly, and the network is the key enabler of this.
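(Another quick worked check, this time of what those utilization numbers imply for usable fabric bandwidth. The percentages are the ones quoted above, taken at their midpoints and applied to a nominal 100 Gb/s link:)

    # Usable bandwidth of a 100 Gb/s link at the quoted utilizations.
    link_gbps = 100

    tcp_util = 0.225   # TCP-based fabric: "no more than 20 to 25%" (midpoint)
    fcp_util = 0.925   # FCP-based fabric: "90 to 95%" (midpoint)

    print(link_gbps * tcp_util)   # ~22.5 Gb/s usable
    print(link_gbps * fcp_util)   # ~92.5 Gb/s usable
    print(fcp_util / tcp_util)    # ~4.1x more usable bandwidth per link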
So the reason the company is called Fungible is because we are able to disaggregate, virtualize, and then pool those resources. The scale-out companies, the large ones, AWS, Google, etcetera, have been doing this disaggregation and pooling for some time, but because they've been using a compute-centric architecture, this disaggregation is not nearly as efficient as we could make it; they're off by about a factor of three. When you look at enterprise companies, they're off by another factor of four, because the utilization of enterprise infrastructure is typically around 8% of overall infrastructure, while the utilization in the cloud, for AWS and GCP and Microsoft, is closer to 35 to 40%. So there is a factor of almost 4 to 8 which you can gain by disaggregating and pooling.
>> Okay, so I want to interrupt again. These hyperscalers are smart; they have a lot of engineers, and we've seen them, you're right, using a lot of general-purpose compute. But we've also seen them make moves toward GPUs and embrace things like Arm. So, I know you can't name names, but you would think that, with all the data that's in the cloud, again, our topic today, the hyperscalers are all over this.
>> All the hyperscalers recognize that the problems we have articulated are important ones, and they're trying to solve them with the resources that they have and all the clever people that they have. So these are recognized problems. However, please note that each of these hyperscalers has their own legacy now. They've been around for 10, 15 years, and so they're not in a position to all of a sudden turn on a dime. This is what happens to all companies at some point.
>> They have technical debt, you mean.
>> I'm not going to say they have technical debt, but they have a certain way of doing things, and they are in love with the compute-centric way of doing things. Eventually it will be understood that you need a third element called the DPU to address these problems. Now, of course, you've heard the term SmartNIC, and all your listeners must have heard that term. Well, a SmartNIC is not a DPU. What a SmartNIC is, is simply taking general-purpose Arm cores, putting in the network interface and a PCI interface, integrating them all in the same chip, and separating them from the CPU. So this does solve a problem: it solves the problem of the data-centric workload interfering with the application workload. Good job. But it does not address the architectural problem of how to execute data-centric workloads efficiently.
>> Yeah, I understand what you're saying. I was going to ask you about SmartNICs; it's almost like a bridge or a Band-Aid. It reminds me of throwing flash storage on a disk system that was designed for spinning disk: it gave you something, but it doesn't solve the fundamental problem. I don't know if it's a valid analogy, but we've seen this in computing for a long time.
>> Yeah, this analogy is close. Okay, so let's take hyperscaler X, not to name names. You find that, you know, half my CPUs are twiddling their thumbs because they're executing this data-centric workload. Well, what are you going to do? All your code is written in C, C++, on x86. Well, the easiest thing to do is to separate the cores that run this workload.
Put it on a different processor; let's say we use Arm, simply because x86 licenses are not available for people to build their own CPUs, so Arm was available. So they put in a bunch of Arm cores, stick a PCI Express and a network interface on it, and port that code from x86 to Arm. Not difficult to do, but it does yield you results. By the way, if, for example, this hyperscaler X, shall we call them, is able to remove 20% of the workload from general-purpose CPUs, that's worth billions of dollars. So of course you're going to do that. It requires relatively little innovation other than porting code from one place to another place.
>> But that's what I'm saying. I would think, again, the hyperscalers, why can't they just do some work and do some engineering and then give you a call and say, okay, we're going to attack these workloads together? That's similar to how they brought in GPUs. And you're right, it's worth billions of dollars. You could see when the hyperscalers, Microsoft with Azure, and AWS, both announced, I think, that they now depreciate servers over five years instead of four years, and it dropped like a billion dollars to their bottom line. But why not just work directly with you guys? I mean, it's the logical play.
>> Some of them are working with us. So it's not to say that they're not working with us. All of the hyperscalers recognize that the technology that we're building is fundamental, that we have something really special, and moreover, it's fully programmable. So you know, the whole trick is, you can actually build a lump of hardware that is fixed-function. But the difficulty is that in the place where the DPU would sit, which is on the boundary of a server and the network, literally on that boundary, the functionality needs to be programmable. And so the whole trick is how you come up with an architecture where the functionality is programmable but is also very high speed for this particular set of applications. So the analogy with GPUs is nearly perfect, because with GPUs, and particularly Nvidia, they implemented, or rather invented, CUDA, which is a programming language for GPUs, and it made them easy to use, made them fully programmable without compromising performance. Well, this is what we're doing with DPUs. We've invented a new architecture; we've made them very easy to program. And these workloads, the computations that I talked about, which are security, virtualization, storage, and then networking, those four are quintessential examples of data-centric workloads, and they're not going away. In fact, they're becoming more and more important over time.
>> I'm very excited for you guys, Pradeep, and I really appreciate it. We're going to have you back, because I really want to get into some of the secret sauce. You talked about these accelerators, erasure coding, crypto accelerators; I want to understand that. I know there's NVMe in here; there's a lot of hardware and software and intellectual property. But we're seeing this notion of programmable infrastructure extending now into this domain, this build-out of this, I like this term, massive disaggregated network. Hyper-disaggregated, even better. And I would say this on the way out, I've got to go: what got us here the last decade is not the same as what's going to take us through the next decade. Pradeep, thanks.
Thanks so much for coming on theCUBE. It's a great company.
>> Thank you. It's really a pleasure to speak with you and get the message of Fungible out there.
>> And I promise we'll have you back. Keep it right there, everybody; we've got more great content coming your way on theCUBE on Cloud. This is Dave Vellante. Stay right there.

Published Date : Jan 22 2021

Exclusive: Pradeep Sindhu, Introduces Fungible | Mayfield50


 

(futuristic electronic music)
>> From Sand Hill Road in the heart of Silicon Valley, it's theCUBE, presenting the People First Network, insights from entrepreneurs and tech leaders.
>> All right, I'm John Furrier with theCUBE. We are here on Sand Hill Road at Mayfield for their 50th anniversary content program called the People First Network, co-created with theCUBE and with Mayfield and their network. I'm John Furrier; our next guest is Pradeep Sindhu, who is the co-founder of Juniper Networks and now the co-founder and CEO of Fungible, a start-up with some super interesting technology we're going to get into. But first, Pradeep, great to see you.
>> It's great to see you again, John.
>> For a 50th anniversary, there's a lot of history. And just before we get started: almost 10 years ago, you and I did a podcast on the future of the iPhone, only about a year in, maybe half a year. You had the vision, you saw the flywheel of apps, you saw the flywheel of data, you saw mobile. That's actually extended to the IoT that we're seeing today; that world is playing out. So, obviously, you're a visionary and an amazing entrepreneur. That's actually happening, so: you saw it, and how did you adjust to that? What were some of the things that you did after seeing that vision?
>> Well, some of the things that I did, if you recall our conversation, a big piece of that was data centers and the fact that the ideal computer is centralized. There are other things I want to make distributed, but it was obvious back then that people would build very large data centers. And the same problem that happened with the internet, which is how do you connect billions of people and machines to each other, was going to come to data centers themselves. So that is the problem that I prepared myself for, and that's the problem that we're trying to solve at Fungible as well.
>> And one of the things we've been having great conversations about, as part of this 50th anniversary People First program, is the role of entrepreneurship. What motivated you to do another start-up? You had that itch you were scratching? You were also at Juniper Networks, huge success; everyone knows the history there and your role there. But this is a wave that we've never seen before. What got you motivated? Was it an itch you were scratching? Was it the vision around the data? What was the motivator?
>> It wasn't necessarily an itch I was scratching. I'm a restless person, and if I'm not creating new things, I'm not happy. That's just the way I'm built. And I also saw, simultaneously, the ability, or this potential, to do something special a second time for the industry. So I saw a big problem to which I could contribute.
>> And what was that problem?
>> So that problem really was, back then, I would say 2012, 2013, it was obvious that Moore's Law was going to flatten out. That this technology called CMOS, on which we've been riding now for 35, 40 years, was not giving us the gains that it once was. And that, as a result of that, transistors that people once thought were plentiful are going to become precious again. And one result of that would be that general-purpose CPUs, which had been doubling in performance every couple of years, would stop doing that. And the question I asked myself is, when that happens, what next? And so it's in the pursuit of what next that I started my second company, Fungible.
So what's interesting: we've been seeing a lot of posts out there, some criticizing Intel, some saying Intel has a good strategy. You see Nvidia out there doing some great things; the earnings are doing fantastic. The graphics: my kids want the new GPU for their games. They're even being bought by the people who are doing cryptocurrency mining, so the power of the processor has been a big part of that. Is that a symptom or a bridge to a solution, or is that just kind of the bloated nature of how hardware's going?
>> It's not so much the bloated nature of hardware as it is the fact that, see, general-purpose microprocessors, or general-purpose computing, was invented by John von Neumann in the late 1940s. This was just a concept, that you could conceive and build something which is Turing-equivalent, which is completely general. In other words, any program for any computer you could conceive could be run by this one general-purpose thing. This notion was new: the notion of a programmable computer. This notion is incredibly powerful, and it has gone on to take over the world. And Intel today is the best proponent of that idea, and they're taking it to the limit. I admire Intel hugely. But so many people have worked on the problem of building general-purpose processors, faster and faster, better and better, that I think there's not a lot left in that tank. That is, the architecture is now played out. We've gone to multi-core. Further, the base technology on which microprocessors are built, which is CMOS, is beginning to reach its limits. The general consensus in the industry, and I particularly also think, is that five nanometers is probably the last CMOS technology, because technology is getting more and more expensive with every generation, but the gains that you were getting previously are not there anymore. So, to give you an example, from 16 nanometers to seven, you get about a 40% improvement in power but only about a 5% improvement in performance and clock speed, and, in fact, probably even less than that. And even the increase in the number of transistors, generation to generation, is not what it used to be. It used to be doubling every couple of years; now it's maybe a 40%-50% improvement every two to three years. So with that trend and the difficulty of improving the performance of general-purpose CPUs, the world has to come up with some other way to provide improved performance, power performance, and so on. And so those are the fundamental kinds of problems that I am interested in. Prior to Juniper, my interest in computing goes back a long way; I've been interested in computing and networking for a very long time. So one of the things that I concluded back in 2012, 2013, is that because of the scarcity of silicon performance, one of the things that's going to happen is people are going to start to specialize computing engines to solve particular problems. So, what the world always wants is agility, which is the ability to solve problems quickly, but they also want the ability to go fast, in other words, to do lots of work per unit time, right? Well, those things are typically in conflict. So, to give you an example, if I built a specialized hardware engine to solve one and only one problem, like solving cryptocurrency problems, I can build it to be very fast. But then tomorrow, if I want to turn around and use that same engine to do something different, I cannot do it. So it's not agile, but it's very fast.
>> It's like a tailor-made suit.
It's like a tailor-made suit.
>> You're only wearing one--
>> It does one thing.
>> You put on a little weight, you've got to (chuckles), you get a new one.
>> Exactly. So this trade-off between agility and performance is fundamental. And so, general-purpose processors can do any computation you can imagine, but if you give me a particular problem, I can design something much better. Now, as long as silicon was improving the performance every couple of years, there was no incentive to come up with new architectures; general-purpose CPUs were perfect. Well, what you are seeing recently is the specialization of the engines of computing. First was GPUs. GPUs were invented for graphics. The main computation of graphics is lots and lots of floating-point numbers, where the same arithmetic applies to an array of numbers. Well, people then figured out that I can also do problems in AI, particularly learning and inferencing, using that same machinery. This is why Nvidia is in a very good place today: because they have not only an engine, called a GPU, which does these computations very well, but also a language that makes it easy to program, called CUDA. Now, it turns out that in addition to these two major types of computing engines, one of which is general-purpose compute, which was invented a long time ago, and the other of which is called a single instruction, multiple data, or SIMD, engine, invented maybe 30 years ago in mainframes, there's a third type of engine that will become extraordinarily useful in the coming world. And this engine we call the DPU, for data processing unit. And this is the engine that specializes in workloads that we call data-heavy, data-intensive. And, in fact, in a world which is going from being compute-centric to data-centric, this kind of engine is fundamental.
>> I mean, the use cases are pretty broad, but specific. AI uses a lot of data; IoT needs data at the edge. Like what the GPU did for graphics, you're thinking for data?
>> That is correct. So let's talk about what the DPU can and cannot do, and maybe I can define what makes a workload data-centric. There are actually four characteristics that make a workload data-centric. One is that the work always comes in the form of packets. Everybody's familiar with packets; the internet is built using packets. So that one is no surprise. The second one is that a given server typically serves many, many hundreds, maybe thousands, of computations concurrently, so there's a lot of multiplexing of work going on. So that's the second characteristic. The third characteristic is that the computations are stateful. In other words, you don't just read memory, you read and write memory, and the computations are dependent, so you can't handle these packets independently of one another.
>> I think that's interesting, because stateful applications are the ones that need the most horsepower and have the most inadequacy right now. APIs, we love the APIs, RESTful APIs, no problem. Stateless.
>> Stateless. Stateful, by the way, is hard; it's hard to make stateful computations reliable. So the world has made a lot of progress. Well, the fourth characteristic, which is maybe even a defining one, though the other ones are very important also, is that if you look at the ratio of input/output to arithmetic, it's high for data-centric calculations. Now, to give you--
>> Which is high, is I higher, is O higher, both?
>> I/O, input/output.
>> I/O, input and output? But not just output?
Not just input, not just output. Input/output is high compared to the number of instructions you execute for doing arithmetic. Now, traditionally it was very little I/O, lots of computation. Now we live in a world which is very, very richly connected, thanks to the internet. And if you look inside data centers, you see the same thing; it's a sort of Russian-dolls kind of thing, the same structure inside, in which you have hundreds of thousands to maybe millions of servers that are connected to each other, that are talking to each other. The data centers are talking to each other. So the value of networks, as we know, is maximized at large scale. The same thing is happening inside data centers also. So the fact that things are connected east-west in an any-to-any way is what leads to the computations becoming more data-centric.
>> Pradeep, I love this conversation, because I've been banging my head on all my CUBE interviews for the past eight years saying that cloud is horizontally scalable. The data world has not been horizontally scalable. We've had data warehouses: put it into a database, park it over there. Yeah, we got Hadoop, I got a data lake, and then what happens? Now you've got GDPR and all these other things out there; you've got a regulatory framework where people don't even know where their data is. But when you think about data in the way you're talking about it, you're talking about making data addressable, making it horizontally scalable, and then applying the DPU to solve the problem, rather than trying to solve it here in the path, or the bus if you will, I don't know what to call it, but--
>> The thing to call it is, it's the backplane of a data center. In the same way that a server, a mainframe, has a backplane where all the communications go through, inside a data center you have this notion of a network which is called the fabric of the data center. It's the backplane of the data center.
>> So this is a game changer, no doubt. I can see it, and I can't wait to see the product announcements. But what is the impact to the industry? Because now you're talking about smaller, faster, cheaper, which has been kind of the Moore's Law story; okay, the performance hasn't been there, but we've had general-purpose agility. Now you have specialism around the processor; you now have more flexibility in the architecture. How does that blend in with cloud architectures? How does that blend into the intelligent edge? How does that fit into the overall architecture?
>> Great question. Well, the way it blends into cloud architecture is that there's one and only one thing that distinguishes cloud architectures from previous architectures, and that's the notion of scale-out. So let me just maybe define scale-out for the audience. Scale-out essentially means having a small number of component types, like storage servers and compute servers, identical. Put in lots of them, because I can't make an individual one faster, so the next best thing is to put lots of them together, connect them by a very fast network that we call a fabric, and then have the collection of these things provide you more computing and faster computing. That's scale-out. Now, scale-out is magical for lots of reasons. One is that you deliver much more reliable services, because individual things failing don't have an effect anymore, right?
The other thing is that the cost is as good as it can get, because instead of building very, very specialized things, a few of them, you're building many, many things which are more or less identical. So the economics is good, the agility is great, and also the reliability is great. Those three things are what drive cloud architecture. Now add the thing that we talked about, which is specialization of the engines inside the cloud. Up until now, the cloud architecture was homogeneous scale-out servers, all x86-based. What we're entering is a phase that I would call heterogeneous, specialized scale-out engines. You are seeing this already: x86, GPUs, TPUs, which are TensorFlow processors, FPGAs. And then you're going to have DPUs coming, and in this ecosystem, DPUs are going to play two roles. One is to offload from x86 and GPUs those computations that they don't do very well, the data-centric computations. But the second one is to implement a fabric that allows these things to be connected very well. Now, you had asked about the edge. Specialization of computing engines is not going to be sufficient; we have to do scale-out more broadly, in a grander sense. So in addition to these massively scalable data centers, we're going to have tens of thousands of smaller data centers closer to where the data is born. We talked about IoT; there's no reason to drag data thousands of miles away if you don't have to.
>> Latency kills.
>> Latency kills; for some applications, it's in fact deadly. So putting those data centers, where both computing and storage are near the source of data, is actually very good. It's also good from the standpoint of security. At least it makes people feel good that, hey, the data is located maybe 10, 20 kilometers away from me, not 10,000 kilometers away where maybe it's a different government, maybe I won't have access to my data, or whatever. So we're going to see this notion of scale-out play out in a very general way: not just inside data centers, but also in the sense that the number of data centers is going to increase dramatically. And so now you're left with a networking problem that connects all these data centers together. (John chuckles) So some people think--
>> And you know networking?
>> I know a little bit about networking. So some people say that, hey, networking is all played out, and so on. My take is that there is pressure on networking and network equipment vendors to deliver better and better cost per bit per second. However, networking is not going out of style; let's be very clear about that. It is the lifeblood of the industry today. If I take away the internet, or TCP/IP for example, everything falls apart, everything that you know.
>> Well, this often finds--
>> So, the audience should know that.
>> Yeah, well, we didn't really bang on that drum. We've seen a real resurgence in networking; in fact, I covered some of Cisco's events and also Juniper's as well, and you just go back a few years: all these network engineers, they used to be the kings of the castle, they ran the show. Now the cloud-natives are kind of taking it over, and you mentioned serverless. I mean, in a heterogeneous environment, it's essentially serverless; Lambda and other cool things are happening. But what we're seeing now, and again, this ties back to your apps conversation 10 years ago and your mention of the DPU and edge, is that the paradigm at the state level is a network construct.
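(A quick illustration of Sindhu's latency point above: even before switching, queuing, and protocol overhead, physics alone separates a 20 km edge data center from a 10,000 km remote one. The ~200,000 km/s figure for light in fiber is a standard approximation, not a number from the interview.)

    # Speed-of-light-in-fiber round-trip times for the distances mentioned above.
    fiber_km_per_s = 200_000   # ~2/3 of c; a standard approximation for fiber

    for km in (20, 10_000):
        rtt_ms = 2 * km / fiber_km_per_s * 1000
        print(f"{km} km -> {rtt_ms:g} ms round trip")
    # 20 km     -> 0.2 ms  (nearby edge data center)
    # 10,000 km -> 100 ms  (far-away data center)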
You have a concept of provisioning services, you have concepts of connectionless, you have concepts of state and stateless, and that right now is a big problem with things like Kubernetes. Although Kubernetes is amazing, enabling a lot of workloads to be containerized, they now don't talk to each other. Sounds like a network problem.
>> Well, it is--
>> These are network problems. Your thoughts?
>> So networking is really fundamental, at one level. As I've said, there are three horsemen of infrastructure. There is compute, which is essentially transforming information in some way by doing some form of arithmetic; I don't mean one plus one gets two, I mean generalized manipulation of data: you have some input, you do some computation, you get some output. That's one entity. Another entity is storage, which is general-purpose storage: I put something in there, I want to come back later and retrieve it, and it needs to be resilient, i.e., resistant to failures. The third piece of the puzzle is networking, and the kind of networking that is the most useful is any-to-any networking, which is what TCP/IP gives you. So, essentially these three things are three sides of the same coin, and they work together. It's not as if one is more important than the other; the industry may have placed different values on them, but if you look down at the fundamentals, these three things go hand in hand.
>> What's interesting to me, in my observations: we have an internal slide that we use in our company, it's our content pillars, if you will, and they're concentric circles: data center, cloud, AI, data, and blockchain and crypto, data being like big data now, and AI. Right in the middle is IoT, security, and data. You're inventing a new category of data. Not classic data, data warehousing--
>> This is agile data. At the end of the day, what we want to build is engines and a platform for data processing, taken to its limit. So, to give you an example, with the engines that we have, we should be able to store data with arbitrary levels of reliability. I really mean that.
>> Stateful data.
>> Stateful data. That is, I put data in one place, I can keep it securely; in other words, it's cryptographically protected, it's encrypted. It is resilient, and it's distributed over distance, so that I could come back a hundred years later and find it still there, and nobody can hack it. So these are the things that are absolutely necessary in this new world, and the DPU is going to be a key enabler of providing--
>> So just to tie it all together: it's the DPU, the data processing unit, that you're inventing. Is it the glue layer in the heterogeneous world of cloud architecture? Because if you're offloading and you have a fabric--
>> That's one role. The glue layer, enabling the fabric to be built, is one of the roles of the DPU. The second role, which is really, really important, is to perform data-centric calculations that CPUs and GPUs do not do very well. On data-centric calculations, the four things that I told you about, we're about 30 times better in price performance and power performance compared to either a GPU or a TPU on those calculations. And to the extent those calculations are really important, and I think they are, the DPU will be a necessary component.
>> Pradeep, I've been getting a lot of heat on Twitter; well, I'm on social media, I know you're not. But I've been saying GDPR has been a train wreck.
I love the idea; we want to protect our privacy. But anyone who knows anything about storage and networking knows that storage guys don't know where their databases are. And the use cases that they're trying to solve are multi-database. So, for instance, if you do a retail transaction, you're in a database. If you're doing an IoT transaction in your self-driving car that needs data from what you just bought, the idea of getting that data is almost impossible; they would have to know that you want the data. Now, that's just two databases; imagine bringing--
>> Hundreds.
>> Hundreds of databases. Everything signaling in. It's a signaling process problem. Part of the problem.
>> Part of the problem is that data is kept in many, many different formats. I don't think one can try to come up with a universal format for data; it won't work. So generally what you need to do is be able to ingest data in multiple formats, and do it in real time, store it reliably, and then process it very quickly. So this is really the analytics problem.
>> Well, congratulations, the future of Silicon Valley is coming back as a chip, a chip that you're making?
>> We are making a chip. What's very important for me to say is that this chip, or rather this series of chips, is programmable. They're fully programmable. But they're extraordinarily powerful.
>> Software-defined chipsets coming online. Pradeep, thanks for spending the time.
>> You're welcome.
>> I'm John Furrier, here at Sand Hill Road for the People First Network, theCUBE Presents. I'm John Furrier, thanks for watching. (futuristic electronic music)

Published Date : Oct 29 2018

Pradeep Sindhu, Cofounder and CEO, Fungible | Mayfield50


 

>> From Sand Hill Road, in the heart of Silicon Valley, it's theCUBE! Presenting the People First Network, insights from entrepreneurs and tech leaders.
>> Hello everyone, I'm John Furrier with theCUBE. We are here on Sand Hill Road at Mayfield's venture capital headquarters for the People First Network. I'm here with Pradeep Sindhu, who's the co-founder of Juniper Networks and now the co-founder and CEO of Fungible. Thanks for joining me on this special conversation for the People First program.
>> Thank you, John.
>> So I want to talk to you about entrepreneurship. You're doing a new startup; you've been so successful as an entrepreneur over the years. You built a great company at Juniper Networks; everyone kind of knows the success there, great success. We've interviewed you before on that, but now you've got a new startup!
>> I do.
>> You're building a company! I thought startups were for young people. (Pradeep laughs) Come on! We're nine years into our startup; we're still a startup.
>> Well, I'm not quite over the hill yet. (John laughs) One of the reasons I jumped back into the startup world was that I saw an opportunity to solve a very important industry problem, and to do it rapidly, and so I took the step.
>> Well, we're super excited that you shared your vision with us, and folks can check that video out on theCUBE and deep-dive on the future of that startup. So, it's exciting; check it out. Entrepreneurship has changed, and one of the things that we're talking about here is how things have changed just since the last time you've done a round. I mean, you're now a couple of years in; you've been stealth for a while, building out this amazing chip, the Data Processing Unit, the DPU. What's different about building companies now? I mean, are you a unicorn? Do you have a billion-dollar valuation yet? I mean, that's the new bar; it's different. What are some of the differences now in building a company?
>> You know, one thing, John, that I saw as a clear difference between when I started Juniper and when I started Fungible is that the amount of bureaucracy and paperwork that one has to go through is tremendously larger. And this was disappointing, because one of the things that the US does very well is to keep it light and keep it fast, so that it's easy for people to create new companies. That was one difference. The other difference that I saw was actually a reluctance on the part of venture to take big bets, because people had gotten used to the idea of a quick turnaround, with maybe a social media company or something. Now, you know, my tendency is to work on fundamental problems that take time, but where the outcome is potentially large. So, I'm attracted to that kind of problem. And so, the number of VCs that were willing to look at those kinds of problems was far fewer this time around than last time.
>> So you got some no's then?
>> Of course I got no's. Even from people that--
>> You're the founder of Juniper Networks; you've done amazing things, like you created billions of dollars of value. You should be gold-plated.
>> What you did 20 years ago only goes so far. I think people were reluctant, and remember, I started Fungible in 2015. At that time, silicon was still a dirty word. I think now there are several people who have said, no, we're regretting it, because they see that it's kind of the second coming of silicon, and it's for reasons that we have talked about in the other discussion: you know, Moore's Law is coming to a close.
And the largesse that it was distributing over the last 30, 40 years is going away, so what we have to do is innovate on silicon. You know, as we discussed, the world has only seen a few architectures for computing engines on silicon. One of the things that makes me very happy is that now people are going to apply their creativity to painting on this canvas. >> So, silicon's got some new lifeblood. What's your angle with your silicon strategy? >> So, our silicon strategy is really to focus on one aspect of computations in the data center, and this aspect we call Data Centric Computing. Data Centric Computing is really computing where there's a lot more movement of data and a lot less arithmetic on data. And today, given scale-out architectures, data has to move and be stored and retrieved and so on as much as it has to be computed on. So, existing engines are not very good at doing these data-centric computations, so we are building a programmable DPU to actually do those computations much, much better than any engine can today. >> And that's great. And just a reminder, we've got a deep dive on that topic, so check out the video on that. So, I've got to ask you the question, why are people resistant to the silicon trend? Was it trendy? Was it the lack of information? You almost see people less informed on computer architecture these days as people blitzscale for SaaS-based businesses. Cloud certainly is great for that, but there's now this renaissance. Why was it, what was the problem? >> I think the problem is very easy to identify. Building silicon is expensive. It takes a very specialized set of skills. It takes a lot of money, and it takes time. Well, anything that takes a long time is risky. And venture capital, while it likes risk, tries to minimize it. So, it's completely understandable to me that, you know, people don't want to put money in ventures that might take two, three years. Actually, you know, going back to the Juniper era, there were venture folks, I won't name them, who said, well, if you could do this thing in six months, we're in, but otherwise no. >> How long did it take? >> 2 1/2 years. >> And then the rest is history. >> Yeah. >> So, there's a lot of naysayers, it's just categorical, kind of like, you know, horses for courses, as they say, that expression. All right, so now with your experience, okay, you got some no's, how did that make you feel? You're like, damn, I've got to get out and do the rounds? >> Actually-- >> You just kind of moved on or? >> I just moved on because, you know, the fact that I did Juniper should not give me any special treatment. It should be the quality of the idea that I've come up with. And so, my response was to make the idea more compelling, sharpen it further, and try to convince people that, hey, there was value here. I think that I've not been often wrong about predicting things maybe two, three years out, so on the basis of that people were willing to give me that credibility, and so there were enough people who were interested in investing. >> What did you learn in the process? What was the one thing that you sharpened pretty quickly? Was it the story, was it the architecture message? What was the main thing that you just had to sharpen really fast? >> The thing I had to sharpen really fast was that while the technology we were developing is disruptive, customers really, really care, they don't want to be disrupted.
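Pradeep's definition of Data Centric Computing above, more movement of data than arithmetic on it, can be made concrete with a little arithmetic. Below is a minimal, illustrative sketch; the workload names and every number are hypothetical, not Fungible measurements:

```python
# Illustrative sketch of the "movement of data vs. arithmetic on data" idea.
# Workload names and all numbers are hypothetical, not Fungible measurements.

def io_to_compute_ratio(bytes_moved: float, arithmetic_ops: float) -> float:
    """Ratio of data movement to computation; higher means more data-centric."""
    return bytes_moved / arithmetic_ops

workloads = {
    "dense matrix multiply":  (1e9,  2e12),  # little data moved per operation
    "storage/erasure coding": (1e12, 5e11),  # mostly moving and encoding data
    "network stack":          (8e11, 1e11),
}

for name, (moved, ops) in workloads.items():
    ratio = io_to_compute_ratio(moved, ops)
    kind = "data-centric" if ratio > 0.5 else "compute-centric"
    print(f"{name:24s} ratio={ratio:8.3f} -> {kind}")
```

The 0.5 threshold is arbitrary; the point is simply that engines optimized for arithmetic throughput are a poor match once the ratio tips toward data movement.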
They actually want the insertion to be smooth. And so, this is the piece that we had to sharpen. Anytime you have a new technology, you have to think about, well, how can I make it easy for people to use? This is very, very important. >> So you look at the impact on the architecture itself, if it was deployed in the use case, and then at the ripple effect. >> For example, you cannot require people to change their applications. That's a no-no. Nobody's going to rewrite their software. You also probably don't want to ask people to change their network architecture. You don't want to ask people to change their deployment model. So, there are certain things that need to be held constant. So, that was a very quick learning. >> So, one of the other things that we've been talking about with other entrepreneurs is, okay, the durability of the company. You're going down, playing the long game, but also innovation and attracting people. And you've built companies before, as with Juniper, and you've worked with a great team of people in your network. How did you attract people for this? Obviously, they probably were attracted on the merit of the idea, but how do you pick people? What's the algorithm? What's the method that you use to choose team members or partners? Because that's also super important. If you've got a gestation period where you're building out, you've got to have high-quality DNA. How do you make that choice? What's the thought process? >> So John, the only algorithm that I know works is to look for people that are either known to you directly or known to somebody that you trust, because in an interview, it's hit or miss. At least, I'm not so good at interviewing that I can have a 70, 80% success rate. Because people can fake it in an interview, but you cannot fake it once you've worked with somebody, so that's one very important test. The other one was, it was very important for me to have people who were collaborative. It is possible to find lots of people who are very smart but not collaborative. And in an endeavor like the one we're doing, collaboration is very important, and of course the base skill set is very important, so, you know, almost half of our team is software because we are-- >> It's a programmable chip. >> It's a programmable chip. We're writing our own operating system, very lightweight. So, you need that combination of hardware and software skills, which is getting more and more scarce, regrettably.
It won't do the magic that it does without the silicon team that they have. They don't talk about it a lot, on purpose, because-- >> 'Cause they don't want a China chip in there. >> Well, they don't want a China chip, but not only that, they don't want to advertise it. It's part of their core value. >> Yeah. >> And so, as long as people keep believing that everything can be done in software, that's good for Apple. >> So, this is the trend, and this is why Larry also brought this up years ago when he was talking about Oracle. He tried to make the play that Oracle would be the iPhone of the data center. >> Mm-hmm. >> Which people poo-pooed, and they're still struggling with that idea, but he was pointing out the benefit of the iPhone, how they integrate the hardware and manage what Steve Jobs always wanted, which was security, number one >> Absolutely. >> for the customer. >> And seamlessness of use. And the reason the iPhone actually works as well as it does is because the hardware and the software are co-designed. And the reason it delivers the value that it does to the company is because of those things. >> So you see this as a big trend, now you see that hardware and software will work together. You see cloud-native, heterogeneous, almost serverless environments abstracted away with software and other components, fabric and specialized processors? >> Yes. >> And application developers just programming at will? >> Correct, and edge data centers. So computing, I would say that maybe in a decade we will see roughly half of the computing and storage being done closer to the edge and the remaining half being done in these massively scaled data centers.
So, once you have this idea that you also have small scale-out data centers close to the edge, all these arguments about whether it's a hybrid cloud or this cloud or that cloud kind of vanish because-- >> So, you agree then, it's kind of like an edge? >> It is. >> Because it's an operational philosophy. If you're running it that way, then it's just what it is, it's a scale-out entity. >> Correct. >> It could be a small sensor network or it could be a data center. >> Correct. So, the key is actually the operational model and the idea of using scale-out design principles, which is, don't try to build 50,000 different types of widgets which are then hard to manage. Try to build a small set of things, tinker toys that you can connect together in different ways. Make it easy to manage, and manage it using software, which is itself centralized. >> That's a great point. You jumped the gun on me on this one. I was going to ask you that next question. As an entrepreneur who's looking at this new architecture you just mentioned, what advice would you give them? How should they attack this market? 'Cause the old way was you get a PowerPoint, you show a presentation to the VCs, they give you some money, you provision some hardware, you go to the next generation, get a prototype, it's up and running, you get some users. You build it, then you get some cash, you scale it (laughs). Now with this new architecture, what's the strategy of the eager entrepreneur who wants to create a valuable opportunity with this new architecture? What would you advise them? >> So, you know, I think it really depends on what the underlying technology is that you have for your startup. There's going to be lots and lots of opportunities. >> Oh, don't fight the trend, which is, the headwind would be, don't compete against the scale-out. Ride that wave, right? >> Yeah, people who are competing against scale-out by building large-scale monolithic machines, I think they're going to have difficulty; there are fundamental difficulties there. So, don't fight the trend. There's plenty of opportunity for software. Plenty of opportunity for software. But it's not the vertical software stack where you have to go through five or six different levels before you get to doing the real work. It's more a horizontal stack, it's a more agile stack. So, if it's a software company, you can actually build prototypes very quickly today. Maybe on AWS, maybe on Google Cloud, maybe on Microsoft.
How should entrepreneurs leverage their advisors, their board, their investors? >> I think it's very important for an entrepreneur to look for complementarity. It's very easy to want to find people that think like you do. But if you just find people that think like you do, they're not going to find weaknesses in your arguments. It's more difficult, but if you as an entrepreneur look for complementarity, look to advisors to provide complementarity, and look to customers to give you feedback, that's how you build value. >> Pradeep, thanks so much for sharing the insight, a lot of opportunities. Thanks for sharing here on-- >> Thank you, John. >> The People First Network. I'm John Furrier at Mayfield on Sand Hill Road for theCUBE's coverage of the People First Network series, part of Mayfield's 50th Anniversary. Thanks for watching. (upbeat music)

Published Date : Oct 29 2018


Announcing Cube on Cloud


 

>> Hi, everyone; I am thrilled to personally invite you to a special event created and hosted by "theCUBE." On January 21st, we're holding "theCUBE on Cloud," our first editorial event of the year. We have lined up a fantastic guest list of experts in their respective fields, including CIOs, COOs, CEOs, technologists, analysts, and practitioners, who are going to share their vision of Cloud in the coming decade. Of course, we also have guests from the big three Cloud companies, who are going to sit down with our hosts and have the unscripted conversations that "theCUBE" is known for. For example, Mai-Lan Tomsen Bukovec is the head of AWS's storage business, and she'll talk about the future of infrastructure in the Cloud. Amit Zavery is one of Thomas Kurian's lieutenants at Google, and he'll share a vision of the future of application development and how Google plans to compete in Cloud. And J.G. Chirapurath leads Microsoft's data and analytics business. He's going to address our questions about how Microsoft plans to simplify the complexity of tools in the Azure ecosystem and compete broadly with the other Cloud players. But this event, it's not just about the big three Cloud players. It's about how to take advantage of the biggest trends in Cloud, and, of course, data in the coming decade, because those two superpowers, along with AI, are going to create trillions of dollars in value, and not just for sellers, but for practitioners who apply technology to their businesses. For example, one of our guests, Zhamak Dehghani, lays out her vision of a new data architecture that breaks the decade-long failures of so-called big data architectures and data warehouses and data lakes. And she puts forth a model of a data mesh, not a centralized, monolithic data architecture, but a distributed data model. Now that dovetails into an interview we do with the CEO of Fungible, who will talk about the emergence of the DPU, the data processing unit, a new class of alternative processor that's going to support these massively distributed systems. We also have a number of CXOs who are going to bring practical knowledge and experience to the program. Alan Nance led technology transformation for Philips. Dan Sheehan is a CIO, COO, and CTO who has led teams at Dunkin' Brands, Modell's Sporting Goods and other firms. Cathy Southwick has been a CIO at a large firm like AT&T and now is moving at the pace of Silicon Valley at Pure Storage. Automation in the Cloud is another theme we'll hit on with Daniel Dines, who founded and heads the top RPA company. And of course, we'll have a focus on developers in the Cloud with Rachel Stevens of RedMonk, a leading-edge analyst firm focused exclusively on the developer community. And there's much more that I just don't have time to go into here, but rest assured, John Furrier and I will be bringing our thoughts, our hard-hitting opinions, along with some special guests that you don't want to miss. So click on the link below and register for this free event, "theCUBE on Cloud." Join us and join the conversation. We'll see you there.

Published Date : Jan 8 2021


Pradeep Sindhu CLEAN


 

>> As I've said many times on theCUBE, for years, decades even, we've marched to the cadence of Moore's law, relying on the doubling of performance every 18 months or so. But no longer is this the mainspring of innovation for technology. Rather, it's the combination of data, applying machine intelligence, and the cloud, supported by the relentless reduction of the cost of compute and storage and the build-out of a massively distributed computer network. Very importantly, in the last several years alternative processors have emerged to support offloading work and performing specific tasks. GPUs are the most widely known example of this trend, with the ascendancy of Nvidia for certain applications like gaming and crypto mining and, more recently, machine learning. But in the middle of last decade we saw the early development focused on the DPU, the data processing unit, which is projected to make a huge impact on data centers in the coming years as we move into the next era of cloud. And with me is Pradeep Sindhu, who's the co-founder and CEO of Fungible, a company specializing in the design and development of DPUs. Pradeep, welcome to theCUBE. Great to see you. >> Thank-you, Dave, and thank-you for having me. >> You're very welcome. So okay, my first question is, don't CPUs and GPUs process data already? Why do we need a DPU? >> That is a natural question to ask. And CPUs have been around in one form or another for almost 55, maybe 60 years. And this is when general purpose computing was invented. Essentially all CPUs went to the x86 architecture by and large; Arm, of course, is used very heavily in mobile computing, but x86 is primarily used in the data center, which is our focus. Now, you can understand that the architecture of general purpose CPUs has been refined heavily by some of the smartest people on the planet. And for the longest time, improvements, you refer to Moore's law, which is really the improvement of the price-performance of silicon over time, that combined with architectural improvements was the thing that was pushing us forward. Well, what has happened is that the architectural refinements are more or less done. You're not going to get very much, you're not going to squeeze more blood out of that stone from the general purpose computer architecture. What has also happened over the last decade is that Moore's law, which is essentially the doubling of the number of transistors on a chip, has slowed down considerably, to the point where you're only getting maybe 10, 20% improvements every generation in the speed of the transistor, if that. And what's happening also is that the spacing between successive generations of technology is actually increasing, from two, two and a half years to now three, maybe even four years. And this is because we are reaching some physical limits in CMOS. These limits are well-recognized. And we have to understand that these limits apply not just to general purpose CPUs but also to GPUs. Now, general purpose CPUs do one kind of computation. They're really general and they can do lots and lots of different things. It is actually a very, very powerful engine. The problem is it's not powerful enough to handle all computations. So this is why you ended up having a different kind of processor called the GPU, which specializes in executing vector floating-point arithmetic operations much, much better than CPUs, maybe 20, 30, 40 times better.
Well, GPUs have now been around for probably 15, 20 years, mostly addressing graphics computations, but recently, in the last decade or so, they have been used heavily for AI and analytics computations. So now the question is, well, why do you need another specialized engine called the DPU? Well, I started down this journey almost eight years ago, when I was still at Juniper Networks, which is another company that I founded. I recognized that in the data center, as the workload changes to addressing more and more, larger and larger corpuses of data, number one, and as people use scale-out as the standard technique for building applications, what happens is that the amount of east-west traffic increases greatly. And what happens is that you now have a new type of workload which is coming, and today probably 30% of the workload in a data center is what we call data-centric. I want to give you some examples of what is a data-centric workload. >> Well, I wonder if I could interrupt you for a second. >> Of course. >> Because I want those examples, and I want you to tie it into the cloud, 'cause that's kind of the topic that we're talking about today, and how you see that evolving. I mean, it's a key question that we're trying to answer in this program. Of course, early cloud was about infrastructure, a little compute, a little storage, a little networking, and now we have, to your point, all this data in the cloud. And we're seeing, by the way, the definition of cloud expand into this distributed, or I think a term you use is disaggregated, network of computers. So you're a technology visionary, and I wonder how you see that evolving, and then please work in your examples of that critical workload, that data-centric workload. >> Absolutely happy to do that. So if you look at the architecture of our cloud data centers, the single most important invention was scale-out of identical or near-identical servers, all connected to a standard IP ethernet network. That's the architecture. Now, the building blocks of this architecture are ethernet switches, which make up the network, IP ethernet switches, and then the servers, all built using general purpose x86 CPUs, with DRAM, with SSDs, with hard drives, all connected to the CPU. Now, the fact that you scale these server nodes, as they're called, out was very, very important in addressing the problem of how do you build very large scale infrastructure using general purpose compute. But this architecture is a compute-centric architecture, and the reason it's a compute-centric architecture is that if you open this server node, what you see is a connection to the network, typically with a simple network interface card, and then you have CPUs which are in the middle of the action. Not only are the CPUs processing the application workload, but they're processing all of the IO workload, what we call data-centric workload. And so when you connect SSDs, and hard drives, and GPUs, and everything to the CPU, as well as to the network, you can now imagine that the CPU is doing two functions. It's running the applications, but it's also playing traffic cop for the IO. So every IO has to go through the CPU, and you're executing instructions, typically in the operating system, and you're interrupting the CPU many, many millions of times a second. Now, general purpose CPUs and the architecture of CPUs were never designed to play traffic cop, because the traffic cop function is a function that requires you to be interrupted very, very frequently.
So it's critical that in this new architecture, where there's a lot of data, a lot of east-west traffic, the percentage of workload which is data-centric has gone from maybe one to 2% to 30 to 40%. I'll give you some numbers which are absolutely stunning. If you go back to say 1987, which is the year in which I bought my first personal computer, the network was some 30 times slower than the CPU. The CPU was running at 15 megahertz, the network was running at three megabits per second. Today the network runs at 100 gigabits per second, and the CPU clock speed of a single core is about 2.3 gigahertz. So you've seen that there's something like a 600X change in the ratio of IO to compute, just in raw clock speed. Now, you can tell me that, hey, typical CPUs have lots and lots of cores, but even when you factor that in, there's been close to two orders of magnitude change in the amount of IO to compute. There is no way to address that without changing the architecture, and this is where the DPU comes in. And the DPU actually solves two fundamental problems in cloud data centers. And these are fundamental, there's no escaping it. No amount of clever marketing is going to get around these problems. Problem number one is that in a compute-centric cloud architecture, the interactions between server nodes are very inefficient. That's number one, problem number one. Problem number two is that these data-centric computations, and I'll give you those four examples, the network stack, the storage stack, the virtualization stack, and the security stack, those four examples are executed very inefficiently by CPUs. Needless to say, if you try to execute these on GPUs, you will run into the same problem, probably even worse, because GPUs are not good at executing these data-centric computations. So what we were looking to do at Fungible is to solve these two basic problems. And you don't solve them by just taking older architectures off the shelf and applying them to these problems, because this is what people have been doing for the last 40 years. So what we did was we created this new microprocessor that we call the DPU from the ground up. It's a clean-sheet design, and it solves those two problems fundamentally. >> So I want to get into that. And I just want to stop you for a second and just ask you a basic question, which is, if I understand it correctly, if I just took the traditional scale-out, if I scale out compute and storage, you're saying I'm going to hit diminishing returns. Not only is it not going to scale linearly, I'm going to get inefficiencies. And that's really the problem that you're solving. Is that correct? >> That is correct. And the workloads that we have today are very data-heavy. You take AI for example, you take analytics for example, it's well known that for AI training, the larger the corpus of relevant data that you're training on, the better the result. So you can imagine where this is going to go. >> Right. >> Especially when people have figured out a formula that, hey, the more data I collect, I can use those insights to make money- >> Yeah, this is why I wanted to talk to you, because the last 10 years we've been collecting all this data. Now, I want to bring in some other data that you actually shared with me beforehand, some market trends that you guys cited in your research. And the first thing people said is they want to improve their infrastructure, and they want to do that by moving to the cloud. And there was also a security angle there as well.
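As an aside, Pradeep's 1987-versus-today numbers are easy to check. A minimal sketch, using only the figures quoted in the conversation; the exact multiplier depends on how you count bits per clock, core counts, and work per instruction, so treat the output as an order-of-magnitude illustration:

```python
# Back-of-the-envelope check of the 1987-vs-today comparison above. The inputs
# are the figures quoted in the conversation; the exact multiplier depends on
# how you count (bits per clock, core counts, work per instruction), so treat
# the output as an order-of-magnitude illustration only.

cpu_1987_hz  = 15e6     # 15 MHz personal computer
net_1987_bps = 3e6      # ~3 Mb/s network
cpu_now_hz   = 2.3e9    # ~2.3 GHz single core
net_now_bps  = 100e9    # 100 Gb/s network

ratio_1987 = net_1987_bps / cpu_1987_hz   # network bits arriving per CPU cycle
ratio_now  = net_now_bps / cpu_now_hz

print(f"1987:  {ratio_1987:.2f} bits/cycle")
print(f"today: {ratio_now:.1f} bits/cycle")
print(f"shift in IO-to-compute ratio: ~{ratio_now / ratio_1987:.0f}x")
# Raw clock speed alone gives roughly 200x; weighting for multi-core CPUs and
# per-instruction work moves the figure toward the several-hundred-x cited.
```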
That's a whole other topic we could discuss. The other stat that jumped out at me: 80% of the customers that you surveyed said they'll be augmenting their x86 CPU with alternative processing technology. So that's sort of, I know it's self-serving, but it's right on the conversation we're having. So I want to understand the architecture. >> Sure. >> And how you've approached this. You've clearly laid out that x86 is not going to solve this problem. And even GPUs are not going to solve the problem. >> They're not going to solve the problem. >> So help us understand the architecture and how you do solve this problem. >> I'll be very happy to. Remember I used this term traffic cop. I used this term very specifically because, first, let me define what I mean by a data-centric computation, because that's the essence of the problem we're solving. Remember I said two problems. One is we execute data-centric workloads at least an order of magnitude more efficiently than CPUs or GPUs, probably 30 times more efficiently. And the second thing is that we allow nodes to interact with each other over the network much, much more efficiently. Okay, so let's keep those two things in mind. So first let's look at the data-centric piece. For a workload to qualify as being data-centric, four things have to be true. First of all, it needs to come over the network in the form of packets. Well, this is all workloads, so I'm not saying anything yet. Secondly, this workload is heavily multiplexed, in that there are many, many, many computations that are happening concurrently, thousands of them, okay? That's number two. Number three is that this workload is stateful. In other words, you can't process packets out of order. You have to do them in order because you're terminating network sessions. And the last one is that when you look at the actual computation, the ratio of IO to arithmetic is medium to high. When you put all four of them together, you actually have a data-centric workload, right? And this workload is terrible for general purpose CPUs. Not only does the general purpose CPU not execute it properly, the application that is running on the CPU also suffers, because data-centric workloads are interfering workloads. So unless you design specifically for them, you're going to be in trouble. So what did we do? Well, our architecture consists of very, very heavily multi-threaded general purpose CPUs combined with very heavily threaded specific accelerators. I'll give you examples of some of those accelerators: DMA accelerators, erasure coding accelerators, compression accelerators, crypto accelerators, and lookup accelerators. These are just some. These are functions that if you do not specialize, you're not going to execute efficiently. But you cannot just put accelerators in there; these accelerators have to be multi-threaded. We have something like 1,000 different threads inside our DPU to address these many, many, many computations that are happening concurrently, and handle them efficiently. Now, the thing that is very important to understand is that even though we have hundreds of billions of transistors on a chip, those transistors are used very inefficiently today in the architecture of a CPU or a GPU. What we have done is we've improved the efficiency of those transistors by 30 times, okay?
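To make the accelerator idea concrete, here is a hedged toy model, not Fungible's design: the accelerator names come from Pradeep's list above, but the dispatch logic and every speedup number are invented for illustration.

```python
# Toy model of the DPU structure described above: general-purpose cores plus
# specialized accelerators. The accelerator names come from the conversation;
# the dispatch logic and every speedup number are invented for illustration.

ACCELERATOR_SPEEDUP = {
    "dma": 30.0,             # hypothetical speedup vs. a general-purpose core
    "erasure_coding": 25.0,
    "compression": 20.0,
    "crypto": 15.0,
    "lookup": 10.0,
}

def offloaded_cycles(op: str, cycles_on_cpu: float) -> float:
    """Estimated cycles when a matching accelerator handles the operation."""
    speedup = ACCELERATOR_SPEEDUP.get(op, 1.0)  # unknown ops stay on the cores
    return cycles_on_cpu / speedup

ops = [("crypto", 9e6), ("compression", 4e6), ("app_logic", 2e6)]
baseline = sum(c for _, c in ops)
offload = sum(offloaded_cycles(op, c) for op, c in ops)
print(f"baseline: {baseline:,.0f} cycles, with offload: {offload:,.0f} cycles")
```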
>> So you can use the real estate much more effectively? >> Much more effectively, because we were not trying to solve a general purpose computing problem. Because if you do that, we're going to end up in the same bucket where general purpose CPUs are today. We were trying to solve a specific problem of data-centric computations and of improving the node-to-node efficiency. So let me go to point number two, because that's equally important. Because in a scale-out architecture, the whole idea is that I have many, many nodes and they're connected over a high-performance network. It might be shocking for your listeners to hear that these networks today run at a utilization of no more than 20 to 25%. Question is why? Well, the reason is that if I tried to run them faster than that, you start to get packet drops, because there are some fundamental problems caused by congestion on the network which are unsolved as we speak today. There is only one solution, which is to use TCP. Well, TCP is a well-known protocol, part of the TCP/IP suite. TCP was never designed to handle the latencies and speeds inside the data center. It's a wonderful protocol, but it was invented 43 years ago now. >> Yeah, very reliable and tested and proven. It's got a good track record, but you're right. >> Very good track record; unfortunately, it eats a lot of CPU cycles. So if you take the idea behind TCP and you say, okay, what's the essence of TCP? How would you apply it to the data center? That's what we've done with what we call FCP, which is a fabric control protocol, which we intend to open. We intend to publish the standards and make it open. And when you do that and you embed FCP in hardware on top of this standard IP ethernet network, you end up with the ability to run very large-scale networks where the utilization of the network is 90 to 95%, not 20 to 25%. >> Wow, okay. >> And you end up solving problems of congestion at the same time. Now, why is this important today? That's all geek speak so far. The reason this stuff is important is that such a network allows you to disaggregate, pool and then virtualize the most important and expensive resources in the data center. What are those? It's compute on one side, storage on the other side. And increasingly even things like DRAM want to be disaggregated. Well, if I put everything inside a general purpose server, the problem is that those resources get stranded because they're stuck behind a CPU. Once you disaggregate those resources, and we're saying hyper-disaggregate, meaning that you can disaggregate almost all the resources. >> And then you're going to reaggregate them, right? I mean, that's obviously- >> Exactly, and the network is the key in helping. >> Okay. >> So the reason the company is called Fungible is because we are able to disaggregate, virtualize and then pool those resources. And the scale-out companies, the large AWS, Google, et cetera, they have been doing this disaggregation and pooling for some time, but because they've been using a compute-centric architecture, their disaggregation is not nearly as efficient as we can make it. And they're off by about a factor of three. When you look at enterprise companies, they are off by another factor of four, because the utilization of enterprise is typically around 8% of overall infrastructure. The utilization in the cloud for AWS, and GCP, and Microsoft is closer to 35 to 40%.
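The utilization claims in this exchange translate directly into capacity arithmetic. A small, hedged sketch follows; the percentages are the ones quoted above, the unit count is arbitrary, and the speakers' "factor of three / factor of four" figures are rougher than this math, though the direction is the same.

```python
# Rough arithmetic behind the utilization figures in this exchange. The
# percentages are the ones quoted; the capacity units are arbitrary, and the
# quoted "factor of three / factor of four" gains are rougher than this math.

def effective_capacity(raw_units: float, utilization: float) -> float:
    """Usable work actually extracted from `raw_units` of infrastructure."""
    return raw_units * utilization

raw = 1000.0  # arbitrary units of network or server capacity

tcp_fabric = effective_capacity(raw, 0.25)  # ~20-25% utilization with TCP
fcp_fabric = effective_capacity(raw, 0.95)  # ~90-95% claimed with FCP
print(f"fabric gain: {fcp_fabric / tcp_fabric:.1f}x")            # ~3.8x

enterprise = effective_capacity(raw, 0.08)  # ~8% enterprise utilization
cloud      = effective_capacity(raw, 0.40)  # ~35-40% at the hyperscalers
print(f"pooling gain vs. enterprise: {cloud / enterprise:.1f}x") # ~5x
```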
So there is a factor of almost four to eight which you can gain by disaggregating and pooling. >> Okay, so I want to interrupt you again. So these hyperscalers are smart. They have a lot of engineers, and we've seen them, yeah, you're right, using a lot of general purpose, but we've seen them make moves toward GPUs and embrace things like Arm. So I know you can't name names, but you would think that, with all the data that's in the cloud, again, our topic today, you would think the hyperscalers are all over this. >> Well, the hyperscalers recognize that the problems that we have articulated are important ones, and they're trying to solve them with the resources that they have and all the clever people that they have. So these are recognized problems. However, please note that each of these hyperscalers has their own legacy now. They've been around for 10, 15 years. And so they're not in a position to all of a sudden turn on a dime. This is what happens to all companies at some point. >> They have technical debt, you mean? (laughs) >> I'm not going to say they have technical debt, but they have a certain way of doing things, and they are in love with the compute-centric way of doing things. And eventually it will be understood that you need a third element called the DPU to address these problems. Now, of course, you've heard the term SmartNIC. >> Yeah, right. >> Or your listeners must've heard that term. Well, a SmartNIC is not a DPU. What a SmartNIC is, is simply taking general purpose Arm cores, putting the network interface and a PCI interface, and integrating them all on the same chip, and separating them from the CPU. So this does solve a problem. It solves the problem of the data-centric workload interfering with the application workload, good job, but it does not address the architectural problem of how to execute data-centric workloads efficiently. >> Yeah, so it reminds me of, I understand what you're saying, I was going to ask you about SmartNICs. It's almost like a bridge or a band-aid. >> Band-aid? >> It almost reminds me of throwing high-performance flash storage on a disk system that was designed for spinning disk. It gave you something, but it doesn't solve the fundamental problem. I don't know if it's a valid analogy, but we've seen this in computing for a long time. >> Yeah, this analogy is close, because okay, so let's take a hyperscaler X, okay? We won't name names. You find that half my CPUs are twiddling their thumbs because they're executing this data-centric workload. Well, what are you going to do? All your code is written in C++ on x86. Well, the easiest thing to do is to separate the cores that run this workload. Put it on a different processor. Let's say we use Arm, simply because x86 licenses are not available for people to build their own CPUs, so Arm was available. So they put a bunch of Arm cores, they stick a PCI express and a network interface, and you port that code from x86 to Arm. Not difficult to do, and it gets you results. And by the way, if, for example, this hyperscaler X, shall we call them, is able to remove 20% of the workload from general purpose CPUs, that's worth billions of dollars. So of course, you're going to do that. It requires relatively little innovation other than to port code from one place to another place. >> Pradeep, that's what I'm saying. I mean, I would think, again, the hyperscalers, why can't they just do some work and do some engineering and then give you a call and say, okay, we're going to attack these workloads together.
That's similar to how they brought in GPUs. And you're right, it's worth billions of dollars. You could see when the hyperscalers, Microsoft Azure and AWS, both announced, I think, that they depreciate servers now over five years instead of four years, and it dropped like a billion dollars to their bottom line. But why not just work directly with you guys? I mean, it seems like the logical play. >> Some of them are working with us. So that's not to say that they're not working with us. So all of the hyperscalers, they recognize that the technology that we're building is fundamental, that we have something really special, and moreover it's fully programmable. So the whole trick is you can actually build a lump of hardware that is fixed function. But the difficulty is that in the place where the DPU would sit, which is on the boundary of a server and the network, literally on that boundary, the functionality needs to be programmable. And so the whole trick is how do you come up with an architecture where the functionality is programmable but it is also very high speed for this particular set of applications. So the analogy with GPUs is nearly perfect, because GPUs, and particularly Nvidia, invented CUDA, which is the programming language for GPUs. And it made them easy to use, made them fully programmable without compromising performance. Well, this is what we're doing with DPUs. We've invented a new architecture, and we've made them very easy to program. And there are these computations that I talked about, security, virtualization, storage and then network. Those four are quintessential examples of data-centric workloads, and they're not going away. In fact, they're becoming more, and more, and more important over time. >> I'm very excited for you guys, and I really appreciate it, Pradeep. We'll have you back, because I really want to get into some of the secret sauce. You talked about these accelerators, erasure coding and crypto accelerators, and I want to understand that. I know there's NVMe in here, there's a lot of hardware and software and intellectual property, but we're seeing this notion of programmable infrastructure extending now into this domain, this build-out of, I like this term, disaggregated, massive disaggregated network. >> Hyper-disaggregated. >> Hyper-disaggregated, even better. And I would say this, and then I've got to go, but what got us here the last decade is not the same as what's going to take us through the next decade. >> That's correct. >> Pradeep, thanks so much for coming on theCUBE. It's a great conversation. >> Thank-you for having me, it's really a pleasure to speak with you and get the message of Fungible out there. >> Yeah, I promise we'll have you back. And keep it right there, everybody, we've got more great content coming your way on theCUBE on Cloud. This is Dave Vellante. Stay right there. >> Thank-you, Dave.

Published Date : Jan 4 2021


Breaking Analysis: Cloud 2030 From IT, to Business Transformation


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. >> Cloud computing has been the single most transformative force in IT over the last decade. As we enter the 2020s, we believe that cloud will become the underpinning of a ubiquitous, intelligent and autonomous resource that will disrupt the operational stacks of virtually every company in every industry. Welcome to this week's special edition of Wikibon's CUBE Insights Powered by ETR. In this breaking analysis, and as part of theCUBE365's coverage of AWS re:Invent 2020, we're going to put forth our scenario for the next decade of cloud evolution. We'll also drill into the most recent data on AWS from ETR's October 2020 survey of more than 1,400 CIOs and IT professionals. So let's get right into it and take a look at how we see the cloud of yesterday, today and tomorrow. This graphic shows our view of the critical inflection points that catalyzed cloud adoption. In the middle of the 2000s, the IT industry was recovering from the shock of the dot-com bubble and of course 9/11. CIOs, they were still licking their wounds from the narrative, does IT even matter? AWS launched its Simple Storage Service and later EC2 with little fanfare in 2006, but developers at startups and small businesses, they noticed that overnight AWS turned the data center into an API. Analysts like myself saw the writing on the wall, but CEO after CEO poo-pooed Amazon's entrance into their territory and promised a cloud strategy that would allow them to easily defend their respective turfs. We'd seen the industry in denial before, and this was no different. The financial crisis was a boon for the cloud. CFOs saw a way to conserve cash, shift CAPEX to OPEX and avoid getting locked into long-term capital depreciation schedules or constrictive leases. We also saw shadow IT take hold, and then bleed into the 2010s in a big way. This of course created problems for organizations rightly concerned about security and rogue tech projects. CIOs were asked to come in and clean up the crime scene, and in doing so, realized the inevitable, i.e., that they could transform their IT operational models, shift infrastructure management to more strategic initiatives, and drop money to the bottom lines of their businesses. The 2010s saw an era of rapid innovation and a level of data explosion that we'd not seen before. AWS led the charge with a torrid pace of innovation via frequent feature rollouts. Virtually every industry, including the all-important public sector, got into the act, again led by AWS with the seminal CIA deal. Google got in the game early, but they never really took the enterprise business seriously until 2015, when they hired Diane Greene. But Microsoft saw the opportunity and leaned in heavily and made remarkable strides in the second half of the decade, leveraging its massive software estate. The 2010s also saw the rapid adoption of containers and an exit from the long AI winter, which, along with the data explosion, created new workloads that began to go mainstream. Now, during this decade, we saw hybrid investments begin to take shape and show some promise, as the ecosystem realized broadly that it had to play in the AWS sandbox or it would lose customers. And we also saw edge and IoT use cases, like, for example, AWS Ground Station, emerge. Okay, so that's a quick history of cloud from our vantage point.
The question is, what's coming next? What should we expect over the next decade? Whereas the last 10 years were largely about shifting the heavy burden of IT infrastructure management to the cloud, in the coming decade we see the emergence of a true digital revolution. And most people agree that COVID has accelerated this shift by at least two to three years. We see all industries as ripe for disruption as they create a 360 degree view across their operational stacks. Meaning, for example, sales, marketing, customer service, logistics, etc. are unified such that the customer experience is also unified. We see data flows coming together as well, where domain-specific knowledge workers are first-class citizens in the data pipeline, i.e. not subservient to hyper-specialized technology experts. No industry is safe from this disruption. And the pandemic has given us a glimpse of what this is going to look like. Healthcare is going increasingly remote and becoming personalized. Machines are making more accurate diagnoses than humans, in some cases. Manufacturing will see new levels of automation. Digital cash, blockchain and new payment systems will challenge traditional banking norms. Retail has been completely disrupted in the last nine months, as has education. And we're seeing the rise of Tesla as a possible harbinger of a day where owning and driving your own vehicle could become the exception rather than the norm. Farming, insurance, on and on and on. Virtually every industry will be transformed as this intelligent, responsive, autonomous, hyper-distributed system provides services that are ubiquitous and largely invisible. How's that for some buzzwords? But I'm here to tell you, it's coming. Now, a lot of questions remain. First, you may even ask, is this cloud that you're talking about? And I can understand why some people would ask that question. And I would say this: the definition of cloud is expanding. Cloud has defined the consumption model for technology. You're seeing cloud-like pricing models moving on-prem with initiatives like HPE's GreenLake and now Dell's APEX. SaaS pricing is evolving. You're seeing companies like Snowflake and Datadog challenging traditional SaaS models with a true cloud consumption pricing option. Not an option, that's the way they price. And this, we think, is going to become the norm. Now, as hybrid cloud emerges and pushes to the edge, the cloud becomes what we call, again, a hyper-distributed system, with a deployment and programming model that becomes much more uniform and ubiquitous. So maybe this s-curve that we've drawn here needs an adjacent s-curve with a steeper vertical, this decade jumping s-curves, if you will, into this new era. And perhaps the nomenclature evolves, but we believe that cloud will still be the underpinning of whatever we call this future platform. We also point out on this chart that public policy is going to evolve to address the privacy and concentrated industry power concerns that will vary by region and geography. So we don't expect the big-tech backlash to abate in the coming years. And finally, we definitely see alternative hardware and software models emerging, as witnessed by Nvidia and Arm and DPUs from companies like Fungible, and AWS and others designing their own silicon for specific workloads to control their costs and reduce their reliance on Intel.
So the bottom line is that we see programming models evolving from infrastructure as code to programmable digital businesses, where ecosystems power the next wave of data creation, data sharing and innovation. Okay, let's bring it back to the current state and take a look at how we see the market for cloud today. This chart shows a just-released update of our IaaS and PaaS revenue estimates for the big three cloud players, AWS, Azure and Google. And you can see we've estimated Q4 revenues for each player and the full year 2020. Now please remember our normal caveats on this data. AWS reports clean numbers, whereas Azure and GCP are estimates based on the little tidbits and breadcrumbs each company tosses our way, plus our own surveys and our own information from theCUBE Network. Now the following points are worth noting. First, while AWS's growth is lower than the other two, note what happens with the law of large numbers. Yes, growth slows down, but the absolute dollars are substantial. Let me give an example. Comparing Q4 2020 with Q4 2019, we project year-over-year growth rates of 25% for AWS, 46% for Azure and 58% for Google Cloud Platform. So meaningfully lower growth for AWS compared to the other two. Yet AWS's revenue in absolute terms grows sequentially, from $11.6 billion in Q3 to $12.4 billion in Q4, whereas the others are flat to down sequentially. Azure and GCP would have to post substantially higher annual growth to achieve the kind of Q3-to-Q4 sequential increase that AWS can deliver with lower year-over-year growth, simply because AWS is so large. Second, having said that, on an annual basis you can see both Azure and GCP are showing impressive growth in both percentage and absolute terms. AWS is going to add more than $10 billion to its revenue this year, with Azure adding nearly $9 billion and GCP adding just over $3 billion. So there's no denying that Azure is gaining ground, as we've been reporting. GCP still has a long way to go. Third, we also want to point out that these three companies alone now account for nearly $80 billion in infrastructure services revenue annually, and the IaaS and PaaS business for these three companies combined is growing at around 40% per year. So much for repatriation. Now, let's take a deeper look at AWS specifically and bring in some of the ETR survey data. This wheel chart shows the granularity of how ETR calculates net score, or spending momentum. Each quarter, ETR gets responses from thousands of CIOs and IT buyers and asks them, are you spending more or less on a particular platform or vendor? Net score is derived by taking adoption plus increase and subtracting out decrease plus replacement. So, subtracting the reds from the greens. Now remember, AWS is a $45 billion company, and it has a net score of 51%. So despite its exposure to virtually every industry, including hospitality and airlines and other hard-hit sectors, far more customers are spending more with AWS than are spending less. Now let's take a look inside the AWS portfolio and really try to understand where that spending goes. This chart shows the net score across the AWS portfolio for three survey dates going back to last October, that's the gray. The summer survey is the blue. And October 2020, the most recent survey, is the yellow.
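To make the net score arithmetic just described concrete, here is a minimal Python sketch. The bucket percentages are illustrative stand-ins, not actual ETR survey figures, though they are chosen to reproduce the 51% net score cited for AWS above; the block also reproduces the sequential growth arithmetic from the revenue discussion.

```python
# Minimal sketch of the ETR-style net score arithmetic described above.
# Bucket percentages are illustrative stand-ins, not actual ETR data.

def net_score(adoption: float, increase: float, flat: float,
              decrease: float, replacing: float) -> float:
    """Net score = (adoption + increase) - (decrease + replacing).

    Each argument is the share of survey respondents (in %) in that
    bucket; the five buckets should cover all respondents.
    """
    assert abs((adoption + increase + flat + decrease + replacing) - 100) < 1e-9
    return (adoption + increase) - (decrease + replacing)

# Hypothetical response mix that yields the 51% net score cited above.
print(net_score(adoption=10, increase=45, flat=41, decrease=2, replacing=2))  # 51.0

# Sequential growth from the revenue discussion: $11.6B in Q3 to a
# projected $12.4B in Q4 is roughly a 6.9% quarter-over-quarter increase.
q3, q4 = 11.6, 12.4
print(f"{(q4 - q3) / q3:.1%}")  # 6.9%
```

Note that flat spenders drop out of the calculation entirely, which is what makes net score a measure of spending velocity rather than of overall satisfaction or share.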
Now remember, net score is an indicator of spending velocity, and despite the deceleration shown in the yellow bars, these are very elevated net scores for AWS. Only Chime, video conferencing, is showing notable weakness in the AWS data set from the ETR survey, with an anemic 7% net score. But every other sector has elevated spending scores. Let's start with Lambda on the left-hand side. You can see that Lambda has a 65% net score. Now for context, very few companies have net scores that high. Snowflake and Kubernetes spend are two examples with higher net scores. But this is rarefied air for AWS Lambda, i.e. functions. Similarly, you can see AI, containers, cloud overall and analytics, all with over 50% net scores. Now, while database is still elevated with a 46% net score, it has come down from its highs of late. Perhaps that's because AWS has so many database options in its own portfolio and its ecosystem and the survey doesn't have enough granularity there, or perhaps it's the competition, I don't really know, but it's something that we're watching. Overall, though, this is a very strong portfolio from a spending momentum standpoint. Now let's flip the view and look at defections off of the AWS platform. Okay, look at this chart. We find this mind-boggling. The chart shows the same portfolio view, but isolates on the bright red portion of the wheel that I showed you earlier, the replacements. And basically you're seeing very few defections show up for AWS in the ETR survey. Again, only Chime is the sore spot. But everywhere else in the portfolio, we're seeing low single-digit replacements. That's very, very impressive. Now, one more data chart, and then I want to go to some direct customer feedback, and then we'll wrap. We've shown this chart before. It plots net score, or spending velocity, on the vertical axis and market share, which measures pervasiveness in the dataset, on the horizontal axis. And in the table portion in the upper-right corner, you can see the actual numbers that drive the plotting positions. And you can see the data confirms what we know. This is a two-horse race right now between AWS and Microsoft. Google is kind of hanging out with the on-prem crowd, vying for relevance in the data center. We've talked extensively about how we would like to see Google evolve its business, rely less on appropriating our data to serve ads and focus more on cloud. There's so much opportunity there. But nonetheless, you can see the so-called hybrid zone emerging. Hybrid is becoming real. Customers want hybrid, and AWS is going to have to learn how to support hybrid deployments with offerings like Outposts and others. But the data doesn't lie. The foundation has been set for the 2020s, and AWS is extremely well-positioned to maintain its leadership, in our view. Now, the last chart we'll show takes some verbatim comments from customers that sum up the situation. These quotes were pulled from several ETR event roundtables that occurred in 2020. The first one talks to the cloud compute bill. It spikes and sometimes can be unpredictable. The second comment is from a CIO at an IT/Telco company. Let me paraphrase what he or she is saying. AWS is leading the pack and is number one, and this individual believes that AWS will continue to be number one by a wide margin. The third quote is from a CTO at an S&P 500 organization who talks to the cloud independence of the architecture they're setting up and the strategy they're pursuing.
The central concern of this person is the software engineering pipeline, the CI/CD pipeline. The strategy is to clearly go multicloud, avoid getting locked in, and ensure that developers can be productive independent of the cloud platform. Essentially, separating the underlying infrastructure from the software development process. All right, let's wrap. So we talked about how the cloud will evolve to become an even more hyper-distributed system that can sense, act and serve, and that provides sets of intelligent services on which digital businesses will be constructed and transformed. We expect AWS to continue to lead in this build-out with its heritage of delivering innovations and features at a torrid pace. We believe that ecosystems will become the mainspring of innovation in the coming decade. And we feel that AWS has to embrace not only hybrid, but cross-cloud services. And it has to be careful not to push its ecosystem partners to competitors. It has to walk a fine line between competing with and nurturing its ecosystem. To date, its success has been key to that balance, as AWS has been able to, for the most part, call the shots. However, we shall see if competition and public policy attenuate its dominant position in this regard. What will be fascinating to watch is how AWS behaves, given its famed customer obsession, and how it decodes the customer's needs. As Steve Jobs famously said, "Some people say, give the customers what they want. That's not my approach. Our job is to figure out what they're going to want before they do." And I think it was Henry Ford who said, "If I'd asked customers what they wanted, they would've told me a faster horse." Okay, that's it for now. It was great having you for this special report from theCUBE Insights Powered by ETR. Keep it right there for more great content on theCUBE from re:Invent 2020 virtual. (cheerful music)

Published Date : Nov 25 2020
