ML & AI Keynote Analysis | AWS re:Invent 2022
>>Hey, welcome back everyone. Day three of AWS re:Invent 2022. I'm John Furrier with Dave Vellante, co-host of theCUBE. Dave, ten years for us — "the leader in high tech coverage" is our slogan. Ten years of re:Invent. We've been to every single one except the original, which we would've come to if Amazon had actually marketed the event, but they didn't. It's more of a customer event. This is day three — the machine learning and AI keynote, Swami's up there. A lot of announcements. We're gonna break this down. We've got Andy Thurai here, Vice President and Principal Analyst at Constellation Research. Andy, great to see you. You've been on theCUBE before, one of our analysts bringing the analysis and commentary to the keynote. This is your wheelhouse — AI. What do you think about Swami up there? I mean, he's awesome. We love him. Big fan. >>Oh yeah, and we're fans of him too. But he had 13 announcements. >>A lot. A lot. >>A lot. >>So, well, some of them are — first of all, thanks for having me here, and I'm glad to have both of you on the same show attacking me. I'm just kidding. But some of the announcements really are game-changer announcements, and some of them are like, meh, you know, just plugging the holes in what they have — a lot of golf claps at the keynote today. And you could also notice that when he was making the announcements — the clapping volume difference, you could say, which was better, right? But some of the announcements are really, really good. You know, particularly — we talked about one of them — Microsoft got out ahead by having OpenAI in there doing the large language models, and then going after that, you know, having the transformers available to them. And Amazon was a little bit weak in that area — they don't have a large language model. So they are taking a different route, saying, you know what, I'll help you train the large language model by yourself, customized models. I can provide the necessary instances, the instance volume, memory, the whole thing. So you can train the model by yourself without depending on them, kind of thing. >>So Dave and Andy, I wanna get your thoughts, 'cause first of all, we've been following Amazon's deep bench on the infrastructure side. They've been doing a lot of machine learning and AI, a lot of data. It just seems the sentiment is that there are other competitors doing a good job too — like Google, Dave. And I've heard folks in the hallway, even here, ex-Amazonians, saying, hey, they train their models on Google, then they bring them to SageMaker 'cause it's a better interface. So you've got Google making a play for being that data cloud, and Microsoft obviously putting in a great kind of package to make it turnkey. How do they really stand versus the competition, guys? >>Good question. So they each have their own uniqueness and their own variation that they take to the field, right? For example, if you were to look at it, Microsoft is known for the industry-oriented things they've been going after — you know, industry verticals and whatnot. So that's one of the things I looked at here: they had this Omics announcement, particularly toward that healthcare genomics space. That's a huge space for HPC-related AI/ML applications.
And they have put a lot of things together here in SageMaker and in their models, saying, you know, how do you use this to do things like drug discovery, genomics analysis, cancer treatment — the whole thing, right? That's huge volumes of data to work with. So they're going into that healthcare area. Google has taken a different route — they want to make everything simple: all I have to do is call an API, give it what I need, and get it done. But Amazon wants to go at a much deeper level, saying, you know what, I wanna provide everything you need, and you can customize the whole thing for what you need. >>So to me, the big picture here is — and Swami referenced it — hey, we are a data company. Where they started — he talked about books and how that informed them as to, you know, what books to place front and center. Here's the big picture, in my view: companies need to put data at the core of their business, and they haven't — they've generally put humans at the core of their business, and data, and now machine learning, are at the outside, on the periphery. Amazon, Google, Microsoft, Facebook have put data at their core. So the question is, how do incumbent companies — and you mentioned some: Toyota, Capital One, Bristol Myers Squibb — I don't know, are those data companies? You know, we'll see. But the challenge is most companies don't have the resources, as you well know, Andy, to actually implement what Google and Facebook and others have. >>So how are they gonna do that? Well, they're gonna buy it, right? So are they gonna build it with tools — that's kind of, like you said, the Amazon approach — or are they gonna buy it from Microsoft and Google? I pulled some ETR data to say, okay, who are the top companies that are showing up in terms of spending? Who's spending with whom? AWS number one, Microsoft number two, Google number three, Databricks number four, just in terms of, you know, presence. And then it falls down to DataRobot, Anaconda, Dataiku; Oracle popped up actually, 'cause they're embedding a lot of AI into their products; and of course IBM, and then a lot of smaller companies. But do companies — customers — generally have the resources to do what it takes to implement AI into applications and into workflows? >>So a couple of things on that. One is, it's no surprise that the top three are the hyperscalers, because they all want to bring your business to them to run the specific workloads. And the next biggest workloads, as he was saying in his keynote, are two things: one is the AI/ML workloads, and the other one is the heavy unstructured workloads he was talking about — 80%, 90% of the data that's coming off is unstructured. So how do you analyze that, such as the geospatial data he was talking about? The volumes of data you need to analyze, the deep neural nets you'd have to use — only hyperscalers can do it, right? So it's no wonder all of them are on top. On the data side, one of the things they announced, which not many people paid attention to, was the zero-ETL that they talked about. >>What that does is a little bit of a game-changing moment, in the sense that you don't have to — for example, if you were to train on the data and the data is distributed everywhere, if you have to bring it all together and integrate it to do that, it's a lot of work doing the ETL.
So by taking Amazon Aurora and then Redshift and combining them with zero ETL — no ETL — and then having Apache Spark applications run on top for analytical applications and ML workloads — that's huge. You don't have to move the data around; you use the data where it is. >>I think you said it: they're basically filling holes, right? Yeah. They created this, you know, suite of tools, let's call it. You might say it's a mess. It's not a mess, because they're really powerful, but they're not well integrated, and now they're starting to stitch the seams, as I say. >>Well yeah, it's a great point. And I would double down and say, look, I think that boring is good. You know, we had that phase in the Kubernetes hype cycle where it got boring, and that was kind of like, boring is good. Boring means we're getting better, we're invisible. That's infrastructure that's in the weeds, that's in-between-the-toes details. It's the stuff that, you know, people have to get done. So you look at their 40 new data sources with Data Wrangler, 50 new AppFlow connectors, Redshift auto-copy — this is boring. Good, important shit, Dave. The governance — you gotta get it, and the governance is gonna be key. So to me, this may not jump off the page. Adam's keynote also felt a little bit of, we gotta get these gaps done in a good way. So I think that's a very positive sign. >>Now, going back to the bigger picture, I think the real question is, can there be another independent data cloud? And that's, to me, what I tried to get at in my story — and your Breaking Analysis kind of hit a home run on this — there's an interesting opportunity for an independent data cloud, meaning something that isn't AWS, that isn't Google, that isn't one of the big three, that could sit in there. And so let me give you an example. I had a conversation last night with a bunch of ex-Amazonian engineering folks who left, and the conversation was interesting, Dave. They were talking: well, Databricks and Snowflake are basically batch, okay, not transactional. And you look at Aerospike — I can see their booth here — transactional databases are hot right now. Streaming data is different. Confluent is different than Databricks. Is Databricks good at hosting? >>No, Amazon's better. So you start to see these kinds of questions come up where, you know, Databricks is great, but maybe not good for this, that and the other thing. So you start to see the formation of swim lanes, or visibility into where people might sit in the ecosystem. But what came out was transactional — yep — and batch, the relationship there, and streaming, real time, versus, you know, the transactional data. So you're starting to see these new things emerge. Andy, what's your take on this? You're following this closely. This seems to be the alpha-nerd conversation, and it all points to who's gonna have the best data cloud — data superclouds, I call it. What's your take? >>Yes, the data cloud is important, but also the computation that goes on top of it, right? Because when the data is unstructured data, and it's that much of a huge dataset, it's going to be hard to do that with low, you know, compute power. But going back to your data point: the training of the AI/ML models requires the batch data, right? That's when you need all the historical data to train your models. And then after that, when you do inference on it, that's where you need the streaming, real-time data that's available to you, so you can make an inference.
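To make Andy's zero-ETL point a bit more concrete, here is a minimal sketch of the pattern he describes: transactional writes keep landing in Aurora, the managed integration surfaces them in Redshift, and an Apache Spark job runs the analytics without anyone building a pipeline in between. This is an editor's illustration, not something shown in the keynote — it assumes the Aurora-to-Redshift zero-ETL integration has already been enabled, and the cluster endpoint, credentials, schema, table and column names are all placeholders. The only real APIs used are Spark's standard JDBC reader and the Redshift JDBC driver.

```python
from pyspark.sql import SparkSession

# Hypothetical endpoint -- replace with your own cluster details.
REDSHIFT_URL = "jdbc:redshift://example-cluster.abc123.us-east-1.redshift.amazonaws.com:5439/dev"

spark = (
    SparkSession.builder
    .appName("analytics-on-zero-etl-data")
    .getOrCreate()
)

# Read the Redshift copy of an Aurora "orders" table over plain JDBC.
# No extract/transform/load job sits in between; replication is handled
# by the managed Aurora-to-Redshift integration.
orders = (
    spark.read.format("jdbc")
    .option("url", REDSHIFT_URL)
    .option("dbtable", "aurora_zeroetl.orders")              # hypothetical schema.table
    .option("user", "analyst")
    .option("password", "REPLACE_ME")                        # use Secrets Manager in practice
    .option("driver", "com.amazon.redshift.jdbc42.Driver")   # Redshift JDBC driver on the classpath
    .load()
)

# The analytical workload (aggregation, ML feature prep, etc.) runs in Spark,
# against the data where it already lives.
daily_revenue = orders.groupBy("order_date").sum("amount")
daily_revenue.show()
```

The point mirrors what Andy says above: the interesting part is not the query itself but the pipeline you no longer have to write and operate.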
One of the things they also announced, which is somewhat interesting, is that they have like 700 different instances geared towards every single workload. And some of them very specifically run on Amazon's new chips — the Inferentia2 and Trainium Trn1 chips — so they not only have specific instances, they also run on a high-powered chip. And then if you have the data to support that, both for the training as well as toward the inference, the efficiency — again, those numbers have to be proven — they claim it could be anywhere between 40 to 60% faster. >>Well, so a couple things. You're definitely right. I mean, Snowflake started out as a data warehouse that was simpler, and it's not architected, you know, in its first wave to do real-time inference — how could they? The other, second point is Snowflake's two or three years ahead when it comes to governance and data sharing. I mean, Amazon's doing what it always does. It's copying — you know, it's customer-driven. 'Cause they probably walk into an account and they say, hey look, what's Snowflake doing for us? This stuff's kicking ass. And they go, oh, that's a good idea, let's do that too. You saw that with separating compute from storage, which is their tiering. You saw it today with extending data sharing — Redshift data sharing. So how do Snowflake and Databricks approach this? They deal with ecosystem. They bring in ecosystem partners, they bring in open source tooling, and that's how they compete. I think there's unquestionably an opportunity for a data cloud. >>Yeah, I think the supercloud conversation — and then, you know, Sky Computing with the Berkeley paper, and other folks talking about this kind of pre-multi-cloud era — I mean, that's what I would call us right now. We're kind of in the pre era of multi-cloud, which by the way is not even yet defined. I think people use that term, Dave, to say, you know, some sort of magical thing that's happening. People have multiple clouds. They end up there by default, not by design, as Dell likes to say, right? And they gotta deal with it. So it's more that they're inheriting multiple cloud environments; it's not necessarily what they want in the situation. So to me that is a big, big issue. >>Yeah, I mean, again, going back to your Snowflake and Databricks point — they're data companies. That's how they made their mark in the market, saying, you know, I do all those things, therefore you have to have — I had to have — your data, because it's seamless data. And Amazon is catching up with that with a lot of the announcements they made; how far it's gonna get traction, you know — too early to say. >>Yeah, I mean, to me there's no doubt about it, Dave. I think what Swami is doing — if Amazon can corner the market on out-of-the-box ML and AI capabilities so that people can do it more easily — that's gonna be, at the end of the day, the telltale sign: can they fill in the gaps. Again, boring is good. The competition — I don't know, I mean, I'm not following the competition. Andy, this is a real question mark for me. I don't know where they stand. Are they more comprehensive? Are they deeper? Do they have deeper services? I mean, obviously he showed all the different, you know, capabilities. Where does Amazon stand? What's the process? >>So, particularly when it comes to the models —
they're going at a different angle: you know, I will help you create the models — we talked about the zero-ETL and the whole data story. We'll get the data sources in, we'll create the model, we'll move the whole model — we're talking about the MLOps teams here, right? And they have the whole functionality that they've built in over the years. So essentially they want to become the platform where, when you come in, I'm the only platform you would use — from model training to deployment to inference, to model versioning to management, the whole thing — and that's the angle they're trying to take. So it's a one-source platform. >>What about this idea of technical debt? Adrian Cockcroft was on yesterday. John, I know you talked to him as well. He said, look, Amazon's Legos: you wanna buy a toy for Christmas, you can go out and buy a toy, or do you wanna build one? If you buy a toy, in a couple years it could break, and what are you gonna do? You're gonna throw it out. But if part of your Lego needs to be extended, you extend it. So, you know, George Gilbert was saying, well, there's a lot of technical debt. Adrian was countering that. Does Amazon have technical debt, or is that Lego blocks analogy the right one? >>Well, I talked to him about the debt, and one of the things we talked about was: what do you optimize for, EC2 APIs or Kubernetes APIs? It depends on what team you're on. If you're on the runtime team, you're gonna optimize for Kubernetes, but EC2 is the resources you want to use. So I think the idea of 15 years of technical debt — I don't believe that. I think the APIs are still hardened. The issue he brings up that I think is relevant is that it's an "and" situation, not an "or." You can have the bag of Legos, which is the primitives, and build a durable application platform — monitor it, customize it, work with it, build it. It's harder, but the outcome is durability and sustainability. Building a toy — having a toy with those Legos glued together for you — you get to play with it, but it'll break over time, and then you gotta replace it. So there's gonna be a toy business and there's gonna be a Legos business — make your own. >>So who are the toys in AI? >>Well, out of — >>the box, and who's out of Legos? >>So you're asking what toys Amazon is building? >>Or — yeah, I mean, Amazon clearly is Lego blocks. >>If people are gonna want out of the box — >>What about Google? What about Microsoft? Are they basically more building toys, more solutions? >>So Google is more of, you know, a building-solutions angle — like, you know, I give you an API kind of thing. But if it comes to vertical industry solutions, Microsoft is ahead, right? Because they have had years of industry experience. I mean, there are other smaller clouds trying to do that too, IBM being an example. But now they are starting to go after the specific industry use cases. They think that through — for example, you know, the medical one we talked about, right? So they want to build the health lake — the secure health lake they're trying to build — which will be HIPAA-compliant, and it'll cover all the European regulations, the whole nine yards, and it'll help you, you know, personalize things as you need as well. For example, if you go for a certain treatment, it could analyze you based on your genome profile, saying the treatment for this particular person has to be individualized this way. But doing that requires enormous power, right?
So if you do applications like that, you could bring in a lot of the industries — whether healthcare, finance or what have you — and make it easy for them to use. >>What's the biggest mistake customers make when it comes to machine intelligence, AI, machine learning? >>So many things, right? I could start out with even the model. Basically, when you build a model, you should be able to figure out how long that model is effective. Because as good as creating a model and going to the business and doing things the right way is, there are people who leave the model in place much longer than it's needed, and it's hurting the business more than it's helping. It could be things like that. Or you're not building it responsibly, or related things — you have a bias in your model — and there are so many issues. I don't know if I can pinpoint one, but there are many, many issues: responsible AI, ethical AI. >>All right, well, we'll leave it there. You're watching theCUBE, the leader in high tech coverage, here on day three at re:Invent. I'm John Furrier with Dave Vellante, and Andy joining us here for the critical analysis and breaking down the commentary. We'll be right back with more coverage after this short break.
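As a footnote to the instance discussion above — Andy's point that Amazon's answer to not having its own large language model is to sell the training plumbing — here is a hedged sketch of what that looks like from the customer side, using the SageMaker Python SDK's PyTorch estimator and one of the Trainium (Trn1) instance types he mentions. The training script, S3 paths, IAM role and version numbers are placeholders, and running on Trn1 in practice also involves the Neuron SDK and a compatible container image, which this sketch glosses over.

```python
from sagemaker.pytorch import PyTorch

# Hypothetical fine-tuning job: you bring the model code and the data,
# SageMaker supplies the instances and tears them down afterwards.
estimator = PyTorch(
    entry_point="train_llm.py",          # your own training/fine-tuning script
    source_dir="src",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder role ARN
    instance_type="ml.trn1.32xlarge",    # Trainium-backed training instance
    instance_count=2,                    # scale out for larger models
    framework_version="1.13",            # placeholder; match a Neuron-compatible image in practice
    py_version="py39",
    hyperparameters={"epochs": 3, "model_name": "my-custom-llm"},
)

# Training data stays in the customer's own bucket.
estimator.fit({"train": "s3://my-bucket/training-corpus/"})
```

Whether that pitch wins against a ready-made hosted model is exactly the open question the panel leaves hanging.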
SUMMARY :
John Furrier and Dave Vellante break down Swami Sivasubramanian's machine learning and AI keynote at AWS re:Invent 2022 with Andy Thurai of Constellation Research. They discuss the 13 announcements — including zero-ETL between Aurora and Redshift, Apache Spark integration, new Trainium and Inferentia instances, and healthcare-focused services — and weigh them against Microsoft's OpenAI partnership, Google's API-first approach, and the data-cloud plays of Snowflake and Databricks. The conversation covers whether incumbents have the resources to put data at the core of their business, the prospect of an independent data cloud or "supercloud," the Lego-blocks-versus-toys debate about technical debt, and the most common mistakes customers make with AI, from stale models to bias and responsible AI.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jeff | PERSON | 0.99+ |
George Gilbert | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Adrian | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Andy | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
IBM | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Adrian Cockcroft | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Andy Thurai | PERSON | 0.99+ |
90% | QUANTITY | 0.99+ |
15 years | QUANTITY | 0.99+ |
John | PERSON | 0.99+ |
Adam | PERSON | 0.99+ |
13 announcements | QUANTITY | 0.99+ |
Lego | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
10 years | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Legos | ORGANIZATION | 0.99+ |
Bristol Myers Squibb | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Constellation Research | ORGANIZATION | 0.99+ |
One | QUANTITY | 0.99+ |
Christmas | EVENT | 0.99+ |
second point | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
Anaconda | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
Berkeley Paper | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
eight | QUANTITY | 0.98+ |
700 different instances | QUANTITY | 0.98+ |
three years | QUANTITY | 0.98+ |
Swami | PERSON | 0.98+ |
Aerospike | ORGANIZATION | 0.98+ |
both | QUANTITY | 0.98+ |
Snowflake | ORGANIZATION | 0.98+ |
two things | QUANTITY | 0.98+ |
60% | QUANTITY | 0.98+ |
Oracle & AMD Partner to Power Exadata X9M
[Music] The history of Exadata and the platform is really unique, and from my vantage point it started earlier this century as a skunkworks inside of Oracle called Project Sage, back when grid computing was the next big thing. Oracle saw that betting on standard hardware would put it on an industry curve that would rapidly evolve, and I remember the Oracle HP Database Machine, which was announced at Oracle OpenWorld almost 15 years ago. And then Exadata kept evolving; after the Sun acquisition it became a platform that had tightly integrated hardware and software, and today Exadata keeps evolving, almost like a chameleon, to address more workloads and reach new performance levels. Last April, for example, Oracle announced the availability of Exadata X9M in OCI, Oracle Cloud Infrastructure, and introduced the ability to run the Autonomous Database service or the Exadata Database service. You know, Oracle often talks about what they call stock-exchange performance levels — kind of no description needed — and related capabilities. The company, as we know, is fond of putting out benchmarks and comparisons with previous generations of product, and sometimes competitive products, that underscore the progress being made with Exadata, such as 87 percent more IOPS, with metrics for latency measured in microseconds — mics — instead of milliseconds, and many other numbers that are industry-leading and compelling, especially for mission-critical workloads. One thing that hasn't been as well publicized is that Exadata on OCI is using AMD's EPYC processors in the database service. EPYC is not the Eastern Pacific Yacht Club, for all you sailing buffs; rather it stands for extreme performance yield computing, the enterprise-grade version of AMD's Zen architecture, which has been a linchpin of AMD's success in terms of penetrating enterprise markets. And to focus on the innovations that AMD and Oracle are bringing to market, we have with us today Juan Loaiza, who's executive vice president of mission-critical technologies at Oracle, and Mark Papermaster, who's the CTO and EVP of technology and engineering at AMD. Juan, welcome back to the show; Mark, great to have you on theCUBE in your first appearance. Thanks for coming on.

Yep, happy to be here. Thank you.

All right, Juan, let's start with you. You've been on theCUBE a number of times, as I said, and you've talked about how Exadata is a top platform for Oracle Database — we've covered that extensively. What's different and unique, from your point of view, about Exadata Cloud Infrastructure X9M on OCI?

Yeah, so as you know, Exadata is designed top down to be the best possible platform for database. It has a lot of unique capabilities — we make extensive use of RDMA, smart storage; we take advantage of, you know, everything we can in the leading hardware platforms — and X9M is our next-generation platform, and it does exactly that. We're always wanting to get all the best that we can from the available hardware that our partners like AMD produce, and that's what X9M is: it's faster, more capacity, lower latency, more IOPS, pushing the limits of the hardware technology. We don't want to be the limit — the database software should not be the limit; it should be the actual physical limits of the hardware — and that's what X9M is all about.

Why AMD chips in X9M?

Yeah, so we're introducing AMD chips. We think they provide outstanding performance, both for OLTP and for analytic workloads, and it's really that simple — we just think the performance is outstanding in the product.

Yeah. Mark, your career is quite amazing. I've been around long enough to remember the transition to CMOS from emitter-coupled logic in the mainframe era, back when you were at IBM — that was an epic technology call at the time. I was of course steeped as an analyst at IDC in the PC era and, like many, witnessed the tectonic shift that Apple's iPod and iPhone caused. And the timing of you joining AMD is quite important in my view, because it coincided with the year that PC volumes peaked and marked the beginning of what I call a stagflation period for x86. I could riff on history for hours, but let's focus on the Oracle relationship. Mark, what are the relevant capabilities and key specs of the AMD chips that are used in Exadata X9M on Oracle's cloud?

Well, thanks, and it's really the basis of, I think, the great partnership that we have with Oracle on Exadata X9M: the AMD technology uses our third generation of Zen processors. Zen was architected to really bring high performance back to x86 — a very, very strong roadmap that we've executed on schedule, to our commitments — and this third generation does all of that. It uses a seven-nanometer CPU, a core that was designed to really bring throughput, bring really high efficiency to computing, and just deliver raw capabilities. And so for Exadata X9M, it's really leveraging all of that. It's implemented in up to 64 cores per socket; it's got anywhere from 128 to 168 PCIe Gen 4 I/O connections, so you can really attach all of the necessary infrastructure and storage that's needed for Exadata performance. And also memory — you have to feed the beast for those analytics and for the OLTP that Juan was talking about — so it does have eight lanes of memory for high-performance DDR4. So it's really a balanced processor, and it's implemented in a way to really optimize high performance. That is the whole focus of AMD; it's where we reset the company focus years ago. And again, it's great to see the super smart database team at Oracle really partner with us and understand those capabilities, and it's been just great to partner with them to enable Oracle to really leverage the capabilities of the Zen processor.

Yeah, it's been a pretty amazing 10 or 11 years for both companies. But Mark, how specifically are you working with Oracle at the engineering and product level, and what does that mean for your joint customers in terms of what they can expect from the collaboration?

Well, here's where the collaboration really comes to play. You think about a processor, and, I'll say, when Juan's team first looked at it, there are general benchmarks, and the benchmarks are impressive, but they're general benchmarks, and they show, I'll say, the base processing capability. The partnership comes to bear when it means optimizing for the workloads that Exadata X9M is really delivering to the end customers. That's where we dive down, and as we learn from the Oracle team, we learn to understand where bottlenecks could be, where is there tuning that we could do to really boost the performance above, I'll say, that baseline that you get in the generic benchmarks. And that's what the teams have done. So for instance, you look at optimizing latency to RDMA; you look at throughput — optimizing throughput on OLTP and database processing. When you go through the workloads and you take the traces and you break it down and you find the areas that are bottlenecking, then you can adjust. We have thousands of parameters that can be adjusted for a given workload, and again, that's the beauty of the partnership. We have the expertise on the CPU engineering; the Oracle Exadata team knows innately what the customers need to get the most out of their platform. And when the teams came together, we actually achieved anywhere from 20 percent to 50 percent gains on specific workloads — it's really exciting to see.

So, okay, I want to follow up on that. Is that different from the competition? How are you driving customer value — you mentioned some percentage improvements — are you measuring primarily with latency? How do you look at that?

Well, you know, we are differentiated in a number of factors. We bring a higher core density — we bring the highest core density, certainly in x86 — and moreover, where we've led the industry is how to scale those cores. We have a very high-performance fabric that connects those cores together, so as a customer needs more cores — again, we scale anywhere from 8 to 64 cores — the trick is, as you add more cores, you want the scaling to be as close to linear as possible, and that's a differentiation we have. We enable that, again, with that balanced compute of CPU, I/O and memory that we design. But the key is, we pride ourselves at AMD on being able to partner in a very deep fashion with our customers — we listen very well — and I think that's what we've had the opportunity to do with Juan and his team. We appreciate that, and that is how we got the kind of performance benefits that I described earlier: it's working together almost like one team, and bringing that best possible capability to the end customers.

Great, thank you for that. Juan, I want to come back to you. Can both the Exadata Database service and the Autonomous Database service take advantage of the Exadata Cloud X9M capabilities that are in that platform?

Yeah, absolutely. You know, Autonomous is basically our self-driving version of the Oracle Database, but fundamentally it is the same database, of course, so both of them will take advantage of the tremendous performance that we're getting. Now, when Mark talks about 64 cores, that's per chip — we have two chips, you know, it's a two-socket server — so it's 128, a 128-way processor. And then from our point of view there are two threads, so from the database point of view it's a 256-way processor, and there's a lot of raw performance there. And we've done a lot of work with the AMD team to make sure that we deliver that to our customers for all the different kinds of workloads, including OLTP and analytics, but also including our Autonomous Database. So yes, absolutely, it all takes advantage of it.

Now, Juan, you know I can't let you go without asking about the competition. I've written extensively about the big four hyperscale clouds — specifically AWS, Azure, Google and Alibaba — and I know, don't hate me, sometimes it angers some of my friends at Oracle, IBM too, that I don't include you in that list. But I see Oracle specifically as different, and really the cloud for the most demanding applications and top-performance databases, and not the commodity cloud — which of course angers all my friends at those four companies, so I'm ticking everybody off. So how does Exadata Cloud Infrastructure X9M compare to the likes of AWS, Azure, Google and other database cloud services in terms of OLTP and analytics — value, performance, cost, however you want to frame it?

Yeah, so our architecture is fundamentally different. We've architected our database for the scale-out environment, so for example we've moved intelligence into the storage, we've put in remote direct memory access, we put persistent memory into our product. So we've done a lot of architectural changes that they haven't, and you're starting to see a little bit of that: if you look at some of the things that Amazon and Google are doing, they're starting to realize that, hey, if you're gonna achieve good results, you really need to push some database processing into the storage. So they're taking baby steps toward that, you know, roughly 15 years after we've had a product, and at some point they're gonna realize you really need RDMA, you really need more direct access to those capabilities. So they're slowly getting there, but we're well ahead. And the way this is delivered is better availability, better performance, lower latency, higher IOPS, and this is why our customers love our product. If you look at the global Fortune 100, over 90 percent of them are running Exadata today, and even in our cloud, over 60 of the global 100 are running Exadata in the Oracle cloud, because of all the differentiated benefits they get from the product. So yeah, we're well ahead in the database space.

Mark, last question for you: how do you see this relationship evolving in the future? Can you share a little roadmap for the audience?

You bet. Well, first off, given the deep partnership that we've had on Exadata X9M, it's really allowed us to inform our future design. Our current third-generation EPYC — that is really what we call our EPYC server offerings — is a 7003, third gen, in Exadata X9M. So what about fourth gen? Well, fourth gen is well underway and ready for the future, and it incorporates learning that we've done in partnership with Oracle. It's gonna have even more throughput capabilities; it's gonna have expanded memory capabilities, because there's CXL — Compute Express Link — that will open up even more memory opportunities, and I could go on. So that's the beauty of a deep partnership: it enables us to really take that learning going forward — it pays forward — and we're very excited to fold all of that into our future generations and provide even better capabilities to Juan and his team moving forward.

Yeah, you guys have been obviously very forthcoming — you have to be, with Zen and EPYC. Juan, anything you'd like to add as closing comments?

Yeah, I would say that in the processor market there's been a real acceleration in innovation in the last few years. There was, you know, a big move 10, 15 years ago when multi-core processors came out, and then we were on that for a while and things started stagnating. But in the last two or three years — and AMD has been leading this — there's been a dramatic acceleration in innovation in this space. So it's very exciting to be part of this, and customers are getting a big benefit from it.

All right — hey, thanks for coming back in theCUBE today, really appreciate your time.

Thanks. Glad to be here.

All right, thank you for watching this exclusive CUBE conversation. This is Dave Vellante from theCUBE, and we'll see you next time. [Music]
**Summary and sentiment analysis are not shown because of an improper transcript**
ENTITIES
Entity | Category | Confidence |
---|---|---|
20 percent | QUANTITY | 0.99+ |
Juan Loaiza | PERSON | 0.99+ |
amd | ORGANIZATION | 0.99+ |
amazon | ORGANIZATION | 0.99+ |
8 | QUANTITY | 0.99+ |
256-way | QUANTITY | 0.99+ |
10 | QUANTITY | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
alibaba | ORGANIZATION | 0.99+ |
87 percent | QUANTITY | 0.99+ |
128 | QUANTITY | 0.99+ |
oracle | ORGANIZATION | 0.99+ |
two threads | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
11 years | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
50 | QUANTITY | 0.99+ |
200 | QUANTITY | 0.99+ |
ipod | COMMERCIAL_ITEM | 0.99+ |
both | QUANTITY | 0.99+ |
two chips | QUANTITY | 0.99+ |
both companies | QUANTITY | 0.99+ |
10 | DATE | 0.98+ |
iphone | COMMERCIAL_ITEM | 0.98+ |
earlier this century | DATE | 0.98+ |
last april | DATE | 0.98+ |
third generation | QUANTITY | 0.98+ |
juan | PERSON | 0.98+ |
64 cores | QUANTITY | 0.98+ |
128-way | QUANTITY | 0.98+ |
two socket | QUANTITY | 0.98+ |
eight lanes | QUANTITY | 0.98+ |
aws | ORGANIZATION | 0.97+ |
AMD | ORGANIZATION | 0.97+ |
ios | TITLE | 0.97+ |
fourth gen | QUANTITY | 0.96+ |
168 pcie | QUANTITY | 0.96+ |
dave vellante | PERSON | 0.95+ |
third gen | QUANTITY | 0.94+ |
aws azure | ORGANIZATION | 0.94+ |
apple | ORGANIZATION | 0.94+ |
thousands of parameters | QUANTITY | 0.92+ |
years | DATE | 0.91+ |
15 years | QUANTITY | 0.9+ |
Power Exadata | ORGANIZATION | 0.9+ |
over 90 percent | QUANTITY | 0.89+ |
four companies | QUANTITY | 0.89+ |
first | QUANTITY | 0.88+ |
oci | ORGANIZATION | 0.87+ |
first appearance | QUANTITY | 0.85+ |
one team | QUANTITY | 0.84+ |
almost 15 years ago | DATE | 0.83+ |
seven nanometer | QUANTITY | 0.83+ |
last few years | DATE | 0.82+ |
one thing | QUANTITY | 0.82+ |
15 years ago | DATE | 0.82+ |
epyc | TITLE | 0.8+ |
over 60 | QUANTITY | 0.79+ |
amd produce | ORGANIZATION | 0.79+ |
Antonio Neri, HPE | HPE Discover 2022
>>The cube presents HPE discover 2022 brought to you by HPE. >>Hey everyone. Welcome back to the Cube's continuing coverage of HPE. Discover 22 live from Las Vegas, the Venetian expo center at Lisa Martin and Dave Velante have a very special guest. Next one of our esteemed alumni here on the cube, Antonio Neri, the president and CEO of HPE, Antonio. Thank you so much for joining us this morning. >>Well, thanks for free with us today. >>Great to be back here after three years away. Yeah. Sit on stage yesterday in front of a massive sea of people. The energy here is electric. Yeah. Must have felt great yesterday, but you, you stood on stage three years ago and said buy 20, 22. And here it is. Yeah. We're gonna deliver our entire portfolio as a service. What was it like to be on stage and say we've done that. Here's where we are. We are a new company. >>Well, first of all, as always, I love the cube to cover HP discover, as you said, has been many, many years, and I hope you saw a different company yesterday. I'm really proud of what happened yesterday, because it was a pivotable moment in our journey. If I reflect back in my four years as a CEO, we said the enterprise of the future will be edge centric, cloud enable and data driven in 2018. And I pledged to invest 4 billion over four years. And you see the momentum we have at the edge with our business. And then in 19, to your point, Lisa, we said, by the end of 2022, we will offer everything as a service. When you look at the floor behind us, everything is a as a service experience from the moment you log through IHP GreenLake platform to all the cloud services we offer. So for me, it is a proud moment because our team worked really hard to deliver on that province on the face of a lot of challenges, >>Tremendous challenges, the last couple years that nobody could have predicted or even forecast, how can we tolerate this? Talk to me about your customer conversations and how they have changed and evolved as every company today has to be a data company. >>Well, even this morning, up to this interview, I already met four customers in, in less than an hour and a half. And I will say all of them, first of all, really appreciated bringing HP discover back. And what they really appreciated was the fact that they had the opportunity to meet and greet and talk to people. The energy that comes from that engagement is second to none. And I think says something right about the moment we are at this time, where the return to work and everything else. I think this is a wake up call in many ways, but customers are telling us is that they want to engage with a partner that has a vision that can take them to their journey, whatever that journey is. And we know digital transformation is core to everything, but ultimately they are now more focused on delivering outcomes for the organization they're running in it. And that's why HP GreenLake is incredible well positioned to do so, you >>Know, just picking up on that. I, I, I counted Antonio. I think I've been to 14 HP and HPE discovers when you include Europe. Yeah. I mean, Frankford, London, Barcelona Madrid, of course, you know the us, and I've never seen why I've tweeted this out. I've never seen this type of energy. Right. People are excited to get back. That's part of it. The other big part of it is course the focus. Yeah. So that focus on as a service was a burn, the boat moment for HPV. >>I don't think it was a burn the boat moment. 
It was a moment that we have to decide how we think about the future and how we become even more relevant for customers. And we are very important to all the customers they buy from us. Right. But I think about the next 3, 5, 10 years, how we position the company, enter the future to be relevant to whatever they need to do. >>Well, what I mean by that is you're not turning back. No, the bridge is gone. You go, you're going forward. And so my question is, did the pandemic accelerate that move or did it, did it hinder it? And, and, and how so >>Actually it was an, a moment for us to think about how we go further and faster to what we call this journey to one, one platform, one experience. And, and we felt as a team, as an organization, this was a unique moment in time to go further, faster. So to us, it was a catalyst to accelerate that transformation. >>Yeah. Now I, I want to ask you a question in your keynote. I love this, cuz you say I'm often asked by customers, what workload should we move to the public cloud and what should stay on prem? I'm like, yeah, I get that question all the time. And I was waiting for the answer. You said, that's the wrong question. And I was like, wait, but that's the question everybody's asking. So it was really interesting that you said that. And I wonder if you could, you could comment. And I think you said basically the world's hybrid is your challenge with, with the customers in this initiative to actually get people to stop asking that question. Right. And not think about that. >>No, I think the challenge we all collectively have is that how we think about data and how we drive what I call a data first modernization, you know, strategy for our customers in an age to cloud architecture, which basically says you are living a hybrid world is not a question which workloads are put in the public cloud, which workloads are put OnPrem. You know, the, all the issues around data gravity and whatnot is a question of how I bring the cloud experience to all your workloads of data, wherever they live. And that's where, you know, the opportunity really exists. And as customers understand more and more about the new environments, how they work, how they enable these new experiences is all driven by that data. And that data has enormous value. So it's not about which cloud use is about how you bring the cloud experience to your data in workloads. >>When you're talking to CIOs, especially transformational CIOs, what's the value pro to those CIOs that wanna transform and need to transform with the power of HPE. >>More and more of them are becoming conscious about the fact that they need to go faster in everything they do. We have done some interesting analysis with the brands that have done a better job or have become way more proficient on extracting insight from the data. They are actually the brands that winning the marketplace, not just with customers driving the preference, but also in the market capitalization because they become where more sophisticated in driving better efficiency, which is a necessity today. Second is the fact that also they need to improve their security aspect of it, but they are creating new experiences and new revenue streams. And those transformational CIOs are transforming their business in the way they run it into more an innovation engine. And so that's why, you know, we love working with them because they are advanced and the push has to think differently in the way we think about the innovation. 
How >>Do you help customers go from data, rich insight, port to data, rich insight, rich actions, new revenue, streams, new services. >>Well, first of all, you have to deploy the right architecture, which starts with a network, obviously because digital transformation requires an on-ramp and the connectivity is the first step. Second is to make sure you have a true end to end visibility of that data. And that's why we announced yesterday with the data fabric, right? A, a revolutionary way to think about that age to cloud architecture from a data driven perspective. And then the third piece of this is, is the aspect of how we bring that intelligence to that data. And that's where, you know, we are enabling all these amazing services with AI machine learning with, with, you know, HP GreenLake, which is ultimately the way we are gonna enable them. >>What's your favorite announcement from this week? >>I think HP GreenLake, you know, I think I >>Mentioned a lot of GreenLake, >>36 times HP GreenLake. And to me, you know, as I think about what comes next, right, is about how we innovate now on the platform at the pace that customers are demanding. And so for me, there is a lot of things there, but obviously the private cloud enterprise edition was a big moment for us because that's the way we bring the cloud operating experience on-prem and at the edge, but also all the hybrid capabilities that Brian showed during the demo is something that I think customers now say, wow, I didn't know. We can do that. >>And thinking about your business, you know, despite some macro headwinds and, and like you, you reaffirmed your guidance on the, on the last earnings call. Does GreenLake give you better visibility or is it harder to predict? >>No, I think the more we get engaged with customers in running their workloads and data, the more visibility we get, you know, I said, you know, customers voted with the workloads and data. And in, in that context, you know, we already have 65,000 customers more than 120,000 users. And the one interesting stat, which I hope it didn't go lost during that transition was the, the fact that we now have under the GreenLake management over an next bite of data. And so to me, right, that's a unique, a unique opportunity for us to learn and improve the whole cycle. >>So obviously a big pillar of your strategy is the data. And I wanted, if you could talk more about that because I, I would observe, you know, we, the cube started in sort of as big data, you know, started to take off and you saw that had ecosystem and, and that ecosystem has dispersed now. Yeah. So gone into the cloud, it's got snowflakes pulling and some in Mongo. Now you have the opportunity with this ecosystem yeah. To have a data ecosystem. How do you look at the ecosystem and the value that your partners can build on top of GreenLake and specifically monetize? Well, >>If you walk through the floor, one of the things we changed this time is that the partners are actually in the flow of all our solutions, not sitting on a corner of the show floor, right? And, and, and that's because what we have done in the last three years has been together with our partners, but we conceive HP GreenLake with the partners in mind, at the core of everything we do in the platform. 
And that's why on Monday we announced the new partner one ready vantage program that actually opens the platform through our APIs for allowing them to add their own value on the platform, whether in their own services to the marketplace or the other way around they to use our capabilities in their own solutions. Because some of these cloud operating capabilities are hard to develop, whether it is, you know, metering and billing and all the other services, sometimes you don't don't have to build yourself. So that's why, what we love about our strategies, the partners can decide where to participate in this broad ecosystem and then grow with us and we can grow through them as well. >>So GreenLake as a service, the focus is, is very clear. Hybrid is very clear. What's less clear to me is, is that I'll and I'll ask you to comment, is this, we go a term called super cloud and super cloud is different than multi-cloud multi-cloud oh, I run in AWS or, and, or I run in Azure. I run in, in, in GCP, Supercloud builds a layer above that hides the underlying complexity of the primitives and the APIs, and then builds new value on top of that, out to the edge as well. You guys talk about the edge all the time. You have Aruba a as an asset, you got space space born. You're doing some pretty edge. Like, well, >>We have it here. Yeah. Yeah. We are connected to the ISS. So if you were to that show floor, you can actually see what's being processed today. >>I mean, that's, you don't get more edge than that. So my question is, is, is that part of the vision to actually build that I call super cloud layer? Or is it more to be focused on hybrid and connecting on-prem to the cloud? >>No, I, I don't like to call it super cloud because that means, unless you are a superpower, you can't do what you need to do. I, I think I call it a super straight okay. Right. That we are enabling to our H to cloud architecture. So the customers can build their own experiences and consume the services that they need to compete and win in today's market. So our H to cloud approach is to create that substrate with connectivity, obviously the cloud and the data capability that you need to operate in today's >>Environment. Okay. So they're fair enough. I would say that your customers are gonna build then the super cloud on top of that software. >>Well, actually we want to give it to them. They don't have to build anything. They just need to run the business. Well, they don't have the time to really build stuff. They just need to innovate that's our, our value proposition. So they don't have to waste cycles in doing so if it comes ready to go, why you want to build it? >>Well, when I say build it, I'm talking about building their business on top of it things you're not gonna, I agree with that, bringing their tools, financial services companies with their data, their tools, their ecosystem, connecting OnPrem to the clouds. Yeah. That above that substrate that's their as a digital. >>Yeah. And that's why I said yesterday with our approach, we're actually enabling customers to power the next generation business models that they need. We enable the substrate, they can innovate on the platform, these next gen business models, >>Tap your engineering mind. And I'd like you to talk about how architectures are changing you along with many, many other CEOs signed a letter to, to the us government, you know, urging them to, to, to pass the chips act. 
As I posted on LinkedIn, there were, there were a few notables missing apple wasn't on there, meta wasn't on there, Tesla wasn't on there. I'd like to see them step up and sign that. Yeah. And so why did you, you know, sign that? Why did you post that? And, and, and why is that important? >>Well, first of all, it's important to customers because obviously they need to get access to technologies in a more ubiquitous way. And second it's important for the United States. We live in a, in a global economy that today is going to a refactoring of sorts where supply chain disruption has caused a lot of consternation and disruption across many industries. And I think, you know, as we think about the next generation supply chains, which are built for resiliency and obviously inclusion, we need to make sure that the United States address this problem. Because once you fall behind, it takes a long time to catch up. Even if we sign the chips act, it's gonna take many years for us, but we need to start now. Otherwise we never get what we need to >>Get. I, I agree. We're late. I think pat Gelsinger has done a very good job laying out the mission, you know, to bring, I mean, to me, it's modest, bring 20% of the manufacturing back to the us by the end of the decade. I mean that that's not gonna be easy, but even so that's, >>That's, we need stuff somewhere. Absolutely. You know, we are great partners with Intel. I really support the vision that path has laid out. And its not just about Intel again, it's about our customers in the United States, >>HP and HPE now cuz H HP labs is part of, of HPE. I believe that's correct state. Well, >>We refocus HP labs as a part of our high performance. Yeah. And AI business. Yes. >>But H HP and, and now HPE possess custom Silicon expertise. We may, we always >>Had. >>Yeah, exactly. And, and you know, with the fabulous world, do you see, I mean, you probably do in some custom Silicon today that I don't really, you know, have visibility on, but do you see getting more into that? Is there a need for >>That? Yeah. Well we already design more than 60 different silicons that are included in our solution. More and more of that. Silicon is actually in support of our other service experience. That's truly programmable for this new way to deploy a cloud or a data fabric or a network in fabric of sorts. When you look our, our age portfolio as a part of green lake through our Aruba set of offerings, we actually have a lot of the Silicon building. Our switching portfolio that's program. Normally give us the ability to drive intelligent routing in the network at the application layer. But also as you know, many years ago, we introduced our own ILO, the lights out technology, the BMC type of support that allows us to provide security to the root of our systems. But now more implement a cloud operating security environment if you will, but there is many more in the analog space for AI at scale. And even the latest introduction with frontier. When you look at frontier that wonderful high performance exit scale system, the, the magic of that is in the Silicon we developed, which is the interconnect fabric. Plus the smart mix at massive massive scale for parallel computing. And then ultimately it's the software environment that we put on top of it. So we can process billion, billion, square transactions per second. >>And when you think about a lot of the AI today is modeling, that's done in the cloud. 
When you think about the edge actual real time in, you're not gonna send all that back to the cloud. When you have to make a left turn or a right turn, >>Stop sign. I think, you know, people need to realize that 70% of the data today is outside the public cloud and 50% is at the edge. And when you think about the real time use cases, actually 30% of that data will need to be processed real time. So which means you need to establish inference the rate at the edge and at the same time run, you know, the analytics at the edge, whether it's machine learnings or some sort of simulation they need to do at the edge. And so that's why, you know, we can provide inference. We can provide machine learning at the edge on top of the connectivity and the edge compute or cloud computing at the edge. But also we can provide on the other side, AI at scale for massive amount of data analytics. And >>Will that be part of the GreenLake? >>We already offered that experience. We already offered that as a HPC, as a service is one of the key services we provide at scale. And then you also have machine learning operations as a service. So we have both and with the data fabric, now we're gonna take it to one step forward so we can connect the data. And I think one of the most exciting services, I actually, I'm a true believer. That is the capability we develop through HP labs. Since you asked for that early on, which is called the swarm learning technology. Of >>Course. Yeah. I've talked to Dr. GU about there you >>Go. >>So, so he >>Will do a better job than me explaining, >>Hey, I don't know. You're pretty, pretty good at it, but he's awesome. I mean, I have to admit on your keynote, you specifically took the time to mention your support for women's rights. Yes. Will HPE pay for women to leave the state to have a medical procedure? >>Yeah. So what happened last week was a sad moment in a history. I believe we, as a company felt compelled to stand up and take a position on the rights of women to choose. And as a part of that, we already offer as a part of our benefits, the ability to travel and pay all the medical expenses related to their choice. >>Yeah. Well thank you for doing that. I appreciate it. As a, as a father of two daughters who have less rights than, than my wife did when she was their age, I applaud you for your bravery and standing up and, and thank you for doing that. How excited are you for Janet Jackson? >>I think is gonna be a phenomenal rap of the HP discover, I think is gonna be a great moment for people to celebrate the coming together. One of the feedback I got from the meetings early on from customers is that put aside the vision, the strategy, the solutions that they actually can experience themselves is the fact that the, the, the one thing that really appreciated it is that they can be together. They can talk to people, they can learn with each other from each other. That energy is obviously very palpable when you go through it. And I think, you know, the celebration tonight and I want to thank the sponsor for allowing us to do so, is, is the fact that, you know, it's gonna be a moment of reuniting ourselves and look at the Fu at the future with optimism, but have some fun. >>Well, that's great, Antonio, as I said, I've been to a lot of HP and HPE discovers. You've brought a new focus clearly to the company, outstanding job of, of getting people aligned. I mean, it's not easy. It's 60,000, you know, professionals a around the globe and the energy is like I've never seen before. 
So congratulations. Thank you so much for coming back on theCUBE. >>Thank you, Dave. And as always, we appreciate you covering the event. You share the news with all the audiences around the globe here, and that means a lot to us. Thank you. Thank you. >>And thank you for watching. This is Dave Vellante for Lisa Martin and John Furrier. We'll be right back with our next guest, live from HPE Discover 2022 in Las Vegas.
Does Intel need a Miracle?
(upbeat music) >> Welcome everyone, this is Stephanie Chan with theCUBE. Recently, analyst Dave Vellante wrote a Breaking Analysis entitled "Pat Gelsinger has a vision, it just needs time, cash and a miracle," where he highlights why he thinks Intel is years away from reversing its position in the semiconductor industry. Welcome Dave. >> Hey thanks, Stephanie. Good to see you. >> So, Dave, you've been following the company closely over the years. If you look at the Wall Street Journal, most analysts are saying to hold onto Intel. Can you tell us why you're so negative on it? >> Well, you know, I'm not a stock picker Stephanie, but I've seen the data. There are a lot of... some buys, some sells, but most of the analysts are on a hold. I think they're, who knows, maybe they're just hedging their bets, they don't want to make a strong controversial call, they're kind of sitting on the fence. But look, Intel is still an amazing company, they've got tremendous resources. They're an icon and they pay a dividend. So, there's definitely an investment case to be made to hold onto the stock. But I would generally say that investors better be ready to hold on to Intel for a long, long time. I mean, Intel's just not the dominant player that it used to be. And the challenges have been mounting for a decade, and look, competitively Intel's fighting a five-front war. They've got AMD in both PCs and the data center, the entire Arm ecosystem, Nvidia coming after them with the whole move toward AI and GPU, they're dominating there. Taiwan Semiconductor is by far the leading fab in the world in terms of output. And I would say even China is kind of the fifth leg of that stool, long term. So, a lot of hurdles to jump competitively. >> So what are other sources of Intel's trouble, besides what you just mentioned? >> Well, I think they started when PC volumes peaked, which was, our David Floyer at Wikibon wrote back in 2011, 2012 that Intel, if it doesn't make some moves, is going to face some trouble. So, even though PC volumes have bumped up with the pandemic recently, they pale in comparison to the wafer volumes that are coming out of the Arm ecosystem and the TSM and Samsung factories. The volumes of the Arm ecosystem, Stephanie, they dwarf the output of Intel by probably 10x in semiconductors. I mean, the volume in semiconductors is everything, because that's what drives costs down, and Intel is just not the low-cost manufacturer anymore. And in my view, they may never be again, not without a major change in the volume strategy, which of course Gelsinger is doing everything he can to affect that change, but they're years away and they're going to have to spend north of a 100 billion dollars trying to get there, but it's all about volume in the semiconductor game. And Intel just doesn't have it right now. >> So you mentioned Pat Gelsinger, he was named the new CEO last January. He's a highly respected CEO, in tech for more than four decades, I think he has knowledge and experience, including 30 years at Intel where he began his career. What's your opinion on his performance thus far, besides the volume and semiconductor industry position of Intel? >> Well, I think Gelsinger is an amazing executive. He's a technical visionary, he's an execution machine, he's doing all the right things. I mean, he's working, he was at the State of the Union address and looking good in a suit, he's saying all the right things. He's spending time with EU leaders. And he's just a very clear thinker and a super strong strategist, but you can't change physics.
The thing about Pat is he's known all along what's going on with Intel. I'm sure he's watched it from not so far away, because I think it's always been his dream to run the company. So, the fact is he's made a lot of moves. He's bringing in new management, he's clearing out some of the dead wood at Intel. He's launched, kind of relaunched if you will, the Foundry business, and I think they're serious about that this time around. They're spinning out Mobileye, an acquisition they made years ago, to throw off some cash to pay for the fabs. They have announced things like fabs in Ohio, in the Heartland, "Silicon Heartland," which strikes all the right chords with the various politicians. And so again, he's doing all the right things. He's trying to inject energy, he's channeling his best Andrew Grove, I like to say, who's of course the iconic CEO of Intel for many, many years, but again you can't change physics. He can't compress the cycle any faster than the cycle wants to go. And so he's doing all the right things. It's just going to take a long, long time. >> And you said that competition is better positioned. Could you elaborate on why you think that, and who are the main competitors at this moment? >> Well, it's this five-front war that I talked about. I mean, you see what's happened with Arm, it changed everything. Intel, remember, they passed on the iPhone, didn't think they could make enough money on smartphones. And that opened the door for Arm. It was eager to take Apple's business. And because of the consumer volumes, the semiconductor industry changed permanently, just like the PC volume changed the whole minicomputer business. Well, the smartphone changed the economics of semiconductors as well. Very few companies can afford the capital expense of building semiconductor fabrication facilities. And even fewer can make cutting edge chips like five nanometer, three nanometer and beyond. So companies like AMD and Nvidia, they don't make chips, they design them and then they ship them to foundries like TSM and Samsung to manufacture them. And because TSM has such huge volumes, thanks in large part to Apple, it's further down, or up I guess, the experience curve, and experience means everything in terms of cost. And they're leaving Intel behind. I mean, the best example I can give you is Apple. Look at the A-series chip, and now the M1 and the M1 Ultra. Think about the traditional Moore's Law curve that we all talk about, a doubling of transistor density every two years. Intel's lucky today if it can keep that pace up, but let's assume it can. Meanwhile, look at Apple's Arm-based M1 to M1 Ultra transition. It occurred in less than two years. It was more like 15 or 18 months. And it went from 16 billion transistors on a package to over a 100 billion. And so we're talking about the competition, Apple in this case using Arm standards, improving it six to seven x inside of a two-year period, while Intel's running at 2x. And that says it all. So Intel is on a curve that's more expensive and slower than the competition. >> Well recently, Intel acquired Tower Semiconductor for 5.4 billion so it can make more chips for other companies, last February I think, the middle of February. What do you think of that strategic move? >> Well, it was designed to help with Foundry. And again, I left that out of my list of things Intel's doing, as Pat's doing, there's a long list actually, and many more.
Again I think, it's an Israeli-based company, but they're a global company, which is important. One of the things that Pat stresses is having a presence in Western countries. I think that's super important. He'd like to get the percentage of semiconductors coming out of Western countries back up, maybe not to where it was previously, but by the end of the decade, much more competitive. And so that's what that acquisition was designed to do. And it's a good move, but again, it doesn't change physics. >> So Dave, you've been putting a lot of content out there and been following Intel for years. What can Intel do to get back on track? >> Well, I think first it needs great leadership, and Pat Gelsinger is providing that. As we talked about, he's doing all the right things. He's manifesting his best Andrew Grove, as I said earlier. Splitting out the Foundry business is critical because we all know Moore's Law, but this is Wright's Law, which talks about volume in any business, not just semiconductors, but it's crucial in semiconductors. So, splitting out a separate Foundry business to make chips is important. He's going to do that. Of course, he's going to ask Intel's competitors to allow Intel to manufacture their chips, which they very well may want to do because there's such a shortage right now of supply and they need those types of manufacturers. So, the hope is that that's going to drive the volume necessary for Intel to compete cost effectively. And there's the CHIPS Act, and its EU cousin, where governments are possibly going to put some money into semiconductor manufacturing to make the West more competitive. It's a key initiative that Pat has put forth, and a challenge. And it's a good one. And he's making a lot of moves on the design side and committing tons of CapEx to these new fabs, as we talked about. But maybe his best chance is, again, the fact that, well, first of all, the market's enormous. It's a trillion dollar market. But secondly, there's a very long term shortage in play here in semiconductors. I don't think it's going to be cleared up in 2022 or 2023. Demand is just going to keep exploding, whether it's automobiles and factory devices and cameras. I mean, virtually every consumer device and edge device is going to use huge numbers of semiconductor chips. So, I think that's in Pat's favor. But honestly, Intel is so far behind, in my opinion, that I hope by the end of this decade it's going to be in a stronger number two position in volume behind TSM, maybe number three behind Samsung. Maybe Apple is going to throw Intel some Foundry business over time, maybe under pressure from the US government, and they can maybe win that account back, but that's still years away from a design cycle standpoint. And so again, maybe in the 2030s Intel can compete for top dog status, but that in my view is the best we can hope for this national treasure called Intel. >> Got it. So we've got to leave it right there. Thank you so much for your time, Dave. >> You're welcome Stephanie. Good to talk to you. >> So you can check out Dave's Breaking Analysis on theCUBE.net each Friday. This is Stephanie Chan for theCUBE. We'll see you next time. (upbeat music)
Breaking Analysis: Pat Gelsinger has the Vision Intel Just Needs Time, Cash & a Miracle
>> From theCUBE Studios in Palo Alto in Boston, bringing you data-driven insights from theCUBE and ETR, this is "Breaking Analysis" with Dave Vellante. >> If it weren't for Pat Gelsinger, Intel's future would be a disaster. Even with his clear vision, fantastic leadership, deep technical and business acumen, and amazing positivity, the company's future is in serious jeopardy. It's the same story we've been telling for years. Volume is king in the semiconductor industry, and Intel no longer is the volume leader. Despite Intel's efforts to change that dynamic With several recent moves, including making another go at its Foundry business, the company is years away from reversing its lagging position relative to today's leading foundries and design shops. Intel's best chance to survive as a leader in our view, will come from a combination of a massive market, continued supply constraints, government money, and luck, perhaps in the form of a deal with apple in the midterm. Hello, and welcome to this week's "Wikibon CUBE Insights, Powered by ETR." In this "Breaking Analysis," we'll update you on our latest assessment of Intel's competitive position and unpack nuggets from the company's February investor conference. Let's go back in history a bit and review what we said in the early 2010s. If you've followed this program, you know that our David Floyer sounded the alarm for Intel as far back as 2012, the year after PC volumes peaked. Yes, they've ticked up a bit in the past couple of years but they pale in comparison to the volumes that the ARM ecosystem is producing. The world has changed from people entering data into machines, and now it's machines that are driving all the data. Data volumes in Web 1.0 were largely driven by keystrokes and clicks. Web 3.0 is going to be driven by machines entering data into sensors, cameras. Other edge devices are going to drive enormous data volumes and processing power to boot. Every windmill, every factory device, every consumer device, every car, will require processing at the edge to run AI, facial recognition, inference, and data intensive workloads. And the volume of this space compared to PCs and even the iPhone itself is about to be dwarfed with an explosion of devices. Intel is not well positioned for this new world in our view. Intel has to catch up on the process, Intel has to catch up on architecture, Intel has to play catch up on security, Intel has to play catch up on volume. The ARM ecosystem has cumulatively shipped 200 billion chips to date, and is shipping 10x Intel's wafer volume. Intel has to have an architecture that accommodates much more diversity. And while it's working on that, it's years behind. All that said, Pat Gelsinger is doing everything he can and more to close the gap. Here's a partial list of the moves that Pat is making. A year ago, he announced IDM 2.0, a new integrated device manufacturing strategy that opened up its world to partners for manufacturing and other innovation. Intel has restructured, reorganized, and many executives have boomeranged back in, many previous Intel execs. They understand the business and have a deep passion to help the company regain its prominence. As part of the IDM 2.0 announcement, Intel created, recreated if you will, a Foundry division and recently acquired Tower Semiconductor an Israeli firm, that is going to help it in that mission. It's opening up partnerships with alternative processor manufacturers and designers. 
And the company has announced major investments in CAPEX to build out Foundry capacity. Intel is going to spin out Mobileye, a company it had acquired for 15 billion in 2017. Or does it try and get a $50 billion valuation? Mobileye is about $1.4 billion in revenue, and is likely going to be worth more around 25 to 30 billion, we'll see. But Intel is going to maybe get $10 billion in cash from that, that spin out that IPO and it can use that to fund more FABS and more equipment. Intel is leveraging its 19,000 software engineers to move up the stack and sell more subscriptions and high margin software. He got to sell what he got. And finally Pat is playing politics beautifully. Announcing for example, FAB investments in Ohio, which he dubbed Silicon Heartland. Brilliant! Again, there's no doubt that Pat is moving fast and doing the right things. Here's Pat at his investor event in a T-shirt that says, "torrid, bringing back the torrid pace and discipline that Intel is used to." And on the right is Pat at the State of the Union address, looking sharp in shirt and tie and suit. And he has said, "a bet on Intel is a hedge against geopolitical instability in the world." That's just so good. To that statement, he showed this chart at his investor meeting. Basically it shows that whereas semiconductor manufacturing capacity has gone from 80% of the world's volume to 20%, he wants to get it back to 50% by 2030, and reset supply chains in a market that has become important as oil. Again, just brilliant positioning and pushing all the right hot buttons. And here's a slide underscoring that commitment, showing manufacturing facilities around the world with new capacity coming online in the next few years in Ohio and the EU. Mentioning the CHIPS Act in his presentation in The US and Europe as part of a public private partnership, no doubt, he's going to need all the help he can get. Now, we couldn't resist the chart on the left here shows wafer starts and transistor capacity growth. For Intel, overtime speaks to its volume aspirations. But we couldn't help notice that the shape of the curve is somewhat misleading because it shows a two-year (mumbles) and then widens the aperture to three years to make the curve look steeper. Fun with numbers. Okay, maybe a little nitpick, but these are some of the telling nuggets we pulled from the investor day, and they're important. Another nitpick is in our view, wafers would be a better measure of volume than transistors. It's like a company saying we shipped 20% more exabytes or MIPS this year than last year. Of course you did, and your revenue shrank. Anyway, Pat went through a detailed analysis of the various Intel businesses and promised mid to high double digit growth by 2026, half of which will come from Intel's traditional PC they center in network edge businesses and the rest from advanced graphics HPC, Mobileye and Foundry. Okay, that sounds pretty good. But it has to be taken into context that the balance of the semiconductor industry, yeah, this would be a pretty competitive growth rate, in our view, especially for a 70 plus billion dollar company. So kudos to Pat for sticking his neck out on this one. But again, the promise is several years away, at least four years away. Now we want to focus on Foundry because that's the only way Intel is going to get back into the volume game and the volume necessary for the company to compete. 
Pat built this slide showing the baby blue for today's Foundry business just under a billion dollars and adding in another $1.5 billion for Tower Semiconductor, the Israeli firm that it just acquired. So a few billion dollars in the near term future for the Foundry business. And then by 2026, this really fuzzy blue bar. Now remember, TSM is the new volume leader, and is a $50 billion company growing. So there's definitely a market there that it can go after. And adding in ARM processors to the mix, and, you know, opening up and partnering with the ecosystems out there can only help volume if Intel can win that business, which you know, it should be able to, given the likelihood of long term supply constraints. But we remain skeptical. This is another chart Pat showed, which makes the case that Foundry and IDM 2.0 will allow expensive assets to have a longer useful life. Okay, that's cool. It will also solve the cumulative output problem highlighted in the bottom right. We've talked at length about Wright's Law. That is, for every cumulative doubling of units manufactured, cost will fall by a constant percentage. You know, let's say around 15% in semiconductor world, which is vitally important to accommodate next generation chips, which are always more expensive at the start of the cycle. So you need that 15% cost buffer to jump curves and make any money. So let's unpack this a bit. You know, does this chart at the bottom right address our Wright's Law concerns, i.e. that Intel can't take advantage of Wright's Law because it can't double cumulative output fast enough? Now note the decline in wafer starts and then the slight uptick, and then the flattening. It's hard to tell what years we're talking about here. Intel is not going to share the sausage making because it's probably not pretty, But you can see on the bottom left, the flattening of the cumulative output curve in IDM 1.0 otherwise known as the death spiral. Okay, back to the power of Wright's Law. Now, assume for a second that wafer density doesn't grow. It does, but just work with us for a second. Let's say you produce 50 million units per year, just making a number up. That gets you cumulative output to $100 million in, sorry, 100 million units in the second year to take you two years to get to that 100 million. So in other words, it takes two years to lower your manufacturing cost by, let's say, roughly 15%. Now, assuming you can get wafer volumes to be flat, which that chart showed, with good yields, you're at 150 now in year three, 200 in year four, 250 in year five, 300 in year six, now, that's four years before you can take advantage of Wright's Law. You keep going at that flat wafer start, and that simplifying assumption we made at the start and 50 million units a year, and well, you get to the point. You get the point, it's now eight years before you can get the Wright's Law to kick in, and you know, by then you're cooked. But now you can grow the density of transistors on a chip, right? Yes, of course. So let's come back to Moore's Law. The graphic on the left says that all the growth is in the new stuff. Totally agree with that. Huge term that Pat presented. Now he also said that until we exhaust the periodic table of elements, Moore's Law is alive and well, and Intel is the steward of Moore's Law. Okay, that's cool. The chart on the right shows Intel going from 100 billion transistors today to a trillion by 2030. Hold that thought. 
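To make the cumulative-output arithmetic walked through above concrete, here is a minimal sketch in Python. It mirrors the passage's own simplifying assumptions, flat production of 50 million units per year and a roughly 15% unit-cost decline per cumulative doubling; those are illustrative numbers from the discussion, not Intel data, and the helper `years_to_double` is just an illustration of the math, not anything Intel publishes.

```python
# Hedged illustration of the Wright's Law math discussed above.
# Assumptions (not actual Intel data): flat output of 50M units/year,
# ~15% unit-cost decline for every cumulative doubling of output.

ANNUAL_UNITS = 50_000_000        # assumed flat unit starts per year
LEARNING_RATE = 0.15             # assumed cost decline per cumulative doubling

def years_to_double(cumulative_start, annual_units=ANNUAL_UNITS):
    """Years of flat production needed to double cumulative output."""
    target, years, total = 2 * cumulative_start, 0, cumulative_start
    while total < target:
        total += annual_units
        years += 1
    return years

# Year one ends at 50M cumulative; each later doubling takes longer at flat output.
cumulative = ANNUAL_UNITS
unit_cost = 1.0                  # normalized starting cost
for doubling in range(1, 4):
    years = years_to_double(cumulative)
    cumulative *= 2
    unit_cost *= (1 - LEARNING_RATE)
    print(f"Doubling {doubling}: {years} more year(s) of flat output, "
          f"unit cost now {unit_cost:.2f}x the original")
```

Running it shows the doublings arriving after one, then two, then four additional years of flat output, which is exactly the "death spiral" dynamic described above: without volume growth, each cost-reducing doubling takes twice as long to reach.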
So Intel is assuming that we'll keep up with Moore's Law, meaning a doubling of transistors every let's say two years, and I believe it. So bring that back to Wright's Law, in the previous chart, it means with IDM 2.0, Intel can get back to enjoying the benefits of Wright's Law every two years, let's say, versus IDM 1.0 where they were failing to keep up. Okay, so Intel is saved, yeah? Well, let's bring into this discussion one of our favorite examples, Apple's M1 ARM-based chip. The M1 Ultra is a new architecture. And you can see the stats here, 114 billion transistors on a five nanometer process and all the other stats. The M1 Ultra has two chips. They're bonded together. And Apple put an interposer between the two chips. An interposer is a pathway that allows electrical signals to pass through it onto another chip. It's a super fast connection. You can see 2.5 terabytes per second. But the brilliance is the two chips act as a single chip. So you don't have to change the software at all. The way Intel's architecture works is it takes two different chips on a substrate, and then each has its own memory. The memory is not shared. Apple shares the memory for the CPU, the NPU, the GPU. All of it is shared, meaning it needs no change in software, unlike Intel. Now Intel is working on a new architecture, but Apple and others are way ahead. Now let's make this really straightforward. The original Apple M1 had 16 billion transistors per chip. And you could see in that diagram, the recently launched M1 Ultra has 114 billion per chip. Now if you take into account the size of the chips, which are increasing, and the increase in the number of transistors per chip, that transistor density, that's a factor of around 6x growth in transistor density per chip in 18 months. Remember Intel, assuming the results in the two previous charts that we showed, assuming they were achievable, is running at 2x every two years, versus 6x for the competition. And AMD and Nvidia are close to that as well because they can take advantage of TSM's learning curve. So in the previous chart with Moore's Law, alive and well, Intel gets to a trillion transistors by 2030. The Apple ARM and Nvidia ecosystems will arrive at that point years ahead of Intel. That means lower costs and significantly better competitive advantage. Okay, so where does that leave Intel? The story is really not resonating with investors and hasn't for a while. On February 18th, the day after its investor meeting, the stock was off. It's rebounded a little bit, but investors are, you know, they're probably prudent to wait unless they have a really long term view. And you can see Intel's performance relative to some of the major competitors. You know, Pat talked about five nodes in four years. He made a big deal out of that, and he shared proof points with Alder Lake and Meteor Lake and other nodes, but Intel just delayed Granite Rapids last month, pushing it out from 2023 to 2024. And it told investors that we're going to have to boost spending to turn this ship around, which is absolutely the case. And that delay in chips, I feel like, is the first disappointment, and it won't be the last. But as we've said many times, it's very difficult, actually, it's impossible to quickly catch up in semiconductors, and Intel will never catch up without volume. So we'll leave you by reiterating our scenario that could save Intel, and that's if its Foundry business can eventually win back Apple to supercharge its volume story.
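As a quick check on the growth rates cited above, the sketch below annualizes the per-package transistor counts quoted in the passage (M1 at 16 billion, M1 Ultra at roughly 114 billion, about 18 months apart) and compares them with a classic 2x-every-two-years pace. The raw per-package ratio works out to about 7.1x; the ~6x density figure in the text nets out the larger die area. The numbers are the ones stated in the passage; the compounding math is just illustration.

```python
# Back-of-envelope comparison of the transistor-growth rates discussed above.
# Figures are the ones quoted in the passage; annualization is simple compounding.

m1_transistors = 16e9            # Apple M1, per package (as cited)
m1_ultra_transistors = 114e9     # Apple M1 Ultra, per package (as cited)
months_elapsed = 18              # rough gap between the two launches

apple_factor = m1_ultra_transistors / m1_transistors       # ~7.1x per package
apple_annualized = apple_factor ** (12 / months_elapsed)    # ~3.7x per year

intel_factor = 2.0               # assumed Moore's Law pace: 2x per 24 months
intel_annualized = intel_factor ** (12 / 24)                 # ~1.41x per year

print(f"Apple package growth: {apple_factor:.1f}x in {months_elapsed} months "
      f"(~{apple_annualized:.1f}x per year)")
print(f"Intel assumed pace:   2x in 24 months (~{intel_annualized:.2f}x per year)")
```

Even granting Intel the full 2x-per-two-years cadence, the annualized gap (roughly 1.4x versus 3.7x) is what drives the conclusion that the Arm ecosystem reaches a trillion transistors per package well ahead of Intel's 2030 target.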
It's going to be tough to wrestle that business away from TSM especially as TSM is setting up shop in Arizona, with US manufacturing that's going to placate The US government. But look, maybe the government cuts a deal with Apple, says, hey, maybe we'll back off with the DOJ and FTC and as part of the CHIPS Act, you'll have to throw some business at Intel. Would that be enough when combined with other Foundry opportunities Intel could theoretically produce? Maybe. But from this vantage point, it's very unlikely Intel will gain back its true number one leadership position. If it were really paranoid back when David Floyer sounded the alarm 10 years ago, yeah, that might have made a pretty big difference. But honestly, the best we can hope for is Intel's strategy and execution allows it to get competitive volumes by the end of the decade, and this national treasure survives to fight for its leadership position in the 2030s. Because it would take a miracle for that to happen in the 2020s. Okay, that's it for today. Thanks to David Floyer for his contributions to this research. Always a pleasure working with David. Stephanie Chan helps me do much of the background research for "Breaking Analysis," and works with our CUBE editorial team. Kristen Martin and Cheryl Knight to get the word out. And thanks to SiliconANGLE's editor in chief Rob Hof, who comes up with a lot of the great titles that we have for "Breaking Analysis" and gets the word out to the SiliconANGLE audience. Thanks, guys. Great teamwork. Remember, these episodes are all available as podcast wherever you listen. Just search "Breaking Analysis Podcast." You'll want to check out ETR's website @etr.ai. We also publish a full report every week on wikibon.com and siliconangle.com. You could always get in touch with me on email, david.vellante@siliconangle.com or DM me @dvellante, and comment on my LinkedIn posts. This is Dave Vellante for "theCUBE Insights, Powered by ETR." Have a great week. Stay safe, be well, and we'll see you next time. (upbeat music)
Breaking Analysis The Future of the Semiconductor Industry
from the cube studios in palo alto in boston bringing you data driven insights from the cube and etr this is breaking analysis with dave vellante semiconductors are the heart of technology innovation for decades technology improvements have marched the cadence of silicon advancements in performance cost power and packaging in the past 10 years the dynamics of the semiconductor industry have changed dramatically soaring factory costs device volume explosions fabulous chip companies greater programmability compressed time to tape out a lot more software content the looming presence of china these and other factors have changed the power structure of the semiconductor business chips today power every aspect of our lives and have led to a global semiconductor shortage that's been well covered but we've never seen anything like it before we believe silicon's success in the next 20 years will be determined by volume manufacturing capabilities design innovation public policy geopolitical dynamics visionary leadership and innovative business models that can survive the intense competition in one of the most challenging businesses in the world hello and welcome to this week's wikibon cube insights powered by etr in this breaking analysis it's our pleasure to welcome daniel newman in one of the leading analysts in the technology business and founder of futurum research daniel welcome to the program thanks so much dave great to see you thanks for having me big topic yeah i'll say i'm really looking forward to this and so here's some of the topics that we want to cover today if we have time changes in the semiconductor industry i've said they've been dramatic the shift to nofap companies we're going to talk about volume manufacturing those shifts that have occurred largely due to the arm model we want to cover intel and dig into that and what it has to do to to survive and thrive these changes and then we want to take a look at how alternative processors are impacting the world people talk about is moore's law dead is it alive and well daniel you have strong perspectives on all of this including nvidia love to get your thoughts on on that plus talk about the looming china threat as i mentioned in in the intro but daniel before we get into it do these topics they sound okay how do you see the state of the semiconductor industry today where have we come from where are we and where are we going at the macro level there are a lot of different narratives that are streaming alongside and they're not running in parallel so much as they're running and converging towards one another but it gradually different uh you know degrees so the last two years has welcomed a semiconductor conversation that we really hadn't had and that was supply chain driven the covid19 pandemic brought pretty much unprecedented desire demand thirst or products that are powered by semiconductors and it wasn't until we started running out of laptops of vehicles of servers that the whole world kind of put the semiconductor in focus again like it was just one of those things dave that we as a society it's sort of taken for granted like if you need a laptop you go buy a laptop if you needed a vehicle there'd always be one on the lot um but as we've seen kind of this exponentialism that's taken place throughout the pandemic what we ended up realizing is that semiconductors are eating the world and in fact the next industrial the entire industrial itself the complex is powered by semiconductor technology so everything we we do and we want to 
do right you went from a vehicle that might have had 50 or 100 worth of semiconductors on a few different parts to one that might have 700 800 different chips in it thousands of dollars worth of semi of semiconductors so you know across the board though yes you're dealing with the dynamics of the shortage you're dealing with the dynamics of innovation you're dealing with moore's law and sort of coming to the end which is leading to new process we're dealing with the foundry versus fab versus invention and product development uh situation so there's so many different concurrent semiconductor narratives that are going on dave and we can talk about any of them and all of them and i'm sure as we do we'll overlap all these different themes you know maybe you can solve this mystery for me there's this this this chip shortage and you can't invent vehicle inventory is so tight but yet when you listen to uh the the ads if the the auto manufacturers are pounding the advertising maybe they're afraid of tesla they don't want to lose their brand awareness but anyway so listen it's by the way a background i want to get a little bit academic here but but bear with me i want to introduce actually reintroduce the concept of wright's law to our audience we know we all know about moore's law but the earlier instantiation actually comes from theodore wright t.p wright he was this engineer in the airplane industry and the math is a little bit abstract to apply but roughly translated says as the cumulative number of units produced doubles your cost per unit declines by a fixed percentage now in airplanes that was around 15 percent in semiconductors we think that numbers more like 20 25 when you add the performance improvements you get from silicon advancements it translates into something like 33 percent cost cost declines when you can double your cumulative volume so that's very important because it confers strategic advantage to the company with the largest volume so it's a learning curve dynamic and it's like andy jassy says daniel there's no compression algorithm for experience and it definitely applies here so if you apply wright's law to what's happening in the industry today we think we can get a better understanding of for instance why tsmc is dominating and why intel is struggling any quick thoughts on that well you have to take every formula like that in any sort of standard mathematics and kind of throw it out the window when you're dealing with the economic situation we are right now i'm not i'm not actually throwing it out the window but what i'm saying is that when supply and demand get out of whack some of those laws become a little bit um more difficult to sustain over the long term what i will say about that is we have certainly seen this found um this fabulous model explode over the last few years you're seeing companies that can focus on software frameworks and innovation that aren't necessarily getting caught up in dealing with the large capital expenditures and overhead the ability to as you suggested in the topics here partner with a company like arm that's developing innovation and then and then um you know offering it uh to everybody right and for a licensee and then they can quickly build we're seeing what that's doing with companies like aws that are saying we're going to just build it alibaba we're just going to build it these aren't chip makers these aren't companies that were even considered chip makers they are now today competing as chip makers so there's a lot of different 
dynamics going back to your comment about wright's law like i said as we normalize and we figure out this situation on a global scale um i do believe that the who can manufacture the most will certainly continue to have significant competitive advantages yeah no so that's a really interesting point that you're bringing up because one of the things that it leads me to think is that the chip shortage could actually benefit intel i think will benefit intel so i want to introduce this some other data and then get your thoughts on this very simply the chart on the left shows pc shipments which peaked in in 2011 and then began at steady decline until covid and they've the pcs as we know have popped up in terms of volume in the past year and looks like they'll be up again this year the chart on the right is cumulative arm shipments and so as we've reported we think arm wafer volumes are 10x those of x86 volumes and and as such the arm ecosystem has far better cost structure than intel and that's why pat gelsinger was called in to sort of save the day so so daniel i just kind of again opened up this this can of worms but i think you're saying long term volume is going to be critical that's going to confer low cost advantages but in the in in the near to mid-term intel could actually benefit from uh from this chip shortage well intel is the opportunity to position itself as a leader in solving the repatriation crisis uh this will kind of carry over when we talk more about china and taiwan and that relationship and what's going on there we've really identified a massive gap in our uh in america supply chain in the global supply chain because we went from i don't have the stat off hand but i have a rough number dave and we can validate this later but i think it was in like the 30-ish high 30ish percentile of manufacturing of chips were done here in the united states around 1990 and now we're sub 10 as of 2020. 
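Dave's point that volume confers a structural cost advantage can be made concrete with the learning rates he cites above (roughly 15% for aircraft, 20-25% for semiconductors, and around 33% once silicon performance gains are folded in) and the roughly 10x Arm-versus-x86 wafer-volume gap mentioned in this discussion. The sketch below plugs those figures into the standard Wright's Law form; the function name `relative_unit_cost` and the exact outputs are illustrative, not measured costs for any company.

```python
# Hedged sketch of the Wright's Law relationship discussed above:
# each cumulative doubling of output cuts unit cost by a fixed learning rate,
# i.e. cost(x) = c1 * x**(-b), where b = -log2(1 - learning_rate).
import math

def relative_unit_cost(volume_ratio, learning_rate):
    """Unit cost of the high-volume producer relative to the low-volume one."""
    b = -math.log2(1.0 - learning_rate)
    return volume_ratio ** (-b)

volume_ratio = 10                      # ~10x cumulative volume gap cited in the discussion
for lr in (0.15, 0.25, 0.33):          # learning rates mentioned above
    cost = relative_unit_cost(volume_ratio, lr)
    print(f"learning rate {lr:.0%}: high-volume producer's unit cost "
          f"~{cost:.2f}x, i.e. about {1 - cost:.0%} lower")
```

With those assumed learning rates, a 10x cumulative-volume lead translates into unit costs roughly 40 to 75 percent lower, which is the economic moat the conversation keeps returning to when it says volume is everything.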
so we we offshored almost all of our production and so when we hit this crisis and we needed more manufacturing volume we didn't have it ready part of the problem is you get people like elon musk that come out and make comments to the media like oh it'll be fixed later this year well you can't build a fab in a year you can't build a fab and start producing volume and the other problem is not all chips are the same so not every fab can produce every chip and when you do have fabs that are capable of producing multiple chips it costs millions of dollars to change the hardware and to actually change the process so it's not like oh we're going to build 28 today because that's what ford needs to get all those f-150s out of the lot and tomorrow we're going to pump out more sevens for you know a bunch of hp pcs it's a major overhaul every time you want to retool so there's a lot of complexity here but intel is the one domestic company us-based that has basically raised its hand and said we're going to put major dollars into this and by the way dave the arm chart you showed me could have a very big implication as to why intel wants to do that yeah so right because that's that's a big part of of foundry right is is get those volumes up so i want to hold that thought because i just want to introduce one more data point because one of the things we often talk about is the way in which alternative processors have exploded onto the scene and this chart here if you could bring that up patrick thank you shows the way in which i think you're pointing out intel is responding uh by leveraging alternative fat but once again you know kind of getting getting serious about manufacturing chips what the chart shows is the performance curve it's on a log scale for in the blue line is x86 and the orange line is apple's a series and we're using that as a proxy for sort of the curve that arm is on and it's in its performance over time culminating in the a15 and it measures trillions of operations per second so if you take the traditional x86 curve of doubling every 18 to 24 months that comes out roughly to about 40 percent improvement per year in performance and that's diminishing as we all know to around 30 percent a year because the moore's law is waning the orange line is powered by arm and it's growing at over a hundred percent really 110 per year when you do the math and that's when you combine the cpu the the the neural processing unit the the the xpu the dsps the accelerators et cetera so we're seeing apple use arm aws to you to your point is building chips on on graviton and and and tesla's using our list is long and this is one reason why so daniel this curve is it feels like it's the new performance curve in the industry yeah we are certainly in an era where companies are able to take control of the innovation curve using the development using the open ecosystem of arm having more direct control and price control and of course part of that massive arm number has to do with you know mobile devices and iot and devices that have huge scale but at the same time a lot of companies have made the decision either to move some portion of their product development on arm or to move entirely on arm part of why it was so attractive to nvidia part of the reason that it's under so much scrutiny that that deal um whether that deal will end up getting completed dave but we are seeing an era where we want we i said lust for power i talked about lust for semiconductors our lust for our technology to do more uh whether that's 
software-defined vehicles whether that's the smartphones we keep in our pocket or the desktop computer we use we want these machines to be as powerful and fast and responsive and scalable as possible if you can get 100 where you can get 30 improvement with each year and generation what is the consumer going to want so i think companies are as normal following the demand of consumers and what's available and at the same time there's some economic benefits they're they're able to realize as well i i don't want to i don't want to go too deep into nvidia arm but what do you handicap that that the chances that that acquisition actually happens oh boy um right now there's a lot of reasons it should happen but there are some reasons that it shouldn't i still kind of consider it a coin toss at this point because fundamentally speaking um you know it should create more competition but there are some people out there that believe it could cause less and so i think this is going to be hung up with regulators a little bit longer than we thought we've already sort of had some previews into that dave with the extensions and some of the timelines that have already been given um i know that was a safe answer and i will take credit for being safe this one's going to be a hard one to call but it certainly makes nvidia an amazing uh it gives amazing prospects to nvidia if they're able to get this deal done yeah i i agree with you i think it's 50 50. okay my i want to pose the question is intel too strategic to fail in march of this year we published this article where we posed that question uh you and i both know pat pretty well we talked about at the time the multi-front war intel is waging in a war with amd the arm ecosystem tsmc the design firms china and we looked at the company's moves which seemed to be right from a strategy standpoint the looking at the potential impact of the u.s government intel's partnership with ibm and what that might portend the us government has a huge incentive to make sure intel wins with onshore manufacturing and that looming threat from china but daniel is intel too strategic to fail and is pat gelsinger making the right moves well first of all i do believe at this current juncture where the semiconductor and supply chain shortage and crisis still looms that intel is too strategic to fail i also believe that intel's demise is somewhat overstated not to say intel doesn't have a slate of challenges that it's going to need to address long term just with the technology adoption curve that you showed being one of them dave but you have to remember the company still has nearly 90 of the server cpu market it still has a significant market share in client and pc it is seeing market share erosion but it's not happened nearly as fast as some people had suggested it would happen with right now with the demand in place and as high as it is intel is selling chips just about as quickly as it can make them and so we right now are sort of seeing the tam as a whole the demand as a whole continue to expand and so intel is fulfilling that need but where are they really too strategic to fail i mean we've seen in certain markets in certain uh process in um you know client for instance where amd has gained of course that's still x86 we've seen uh where the m1 was kind of initially thought to be potentially a pro product that would take some time it didn't take nearly as long for them to get that product in good shape um but the foundry and fab side is where i think intel really has a chance to 
flourish right now one it can play in the arm space it can build these facilities to be able to produce and help support the production of volumes of chips using arm designs so that actually gives intel and inroads two is it's the company that has made the most outspoken commitment to invest in the manufacturing needs of the united states both here in the united states and in other places across the world where we have friendly ally relationships and need more production capabilities if not in intel b and there is no other logical company that's us-based that's going to meet the regulator and policymakers requirements right now that is also raising their hand and saying we have the know-how we've been doing this we can do more of this and so i think pat is leaning into the right area and i think what will happen is very likely intel will support manufacturing of chips by companies like qualcomm companies like nvidia and if they're able to do that some of the market share losses that they're potentially facing with innovation challenges um and engineering challenges could be offset with growth in their fab and foundry businesses and i think i think pat identified it i think he's going to market with it and you know convincing the street that's going to be a whole nother thing that this is exciting um but i think as the street sees the opportunity here this is an area that intel can really lean into so i think i i think people generally would recognize at least the folks i talk to and it'll be interested in your thoughts who really know this business that intel you know had the best manufacturing process in in the world obviously that's coming to question but but but but for instance people say well intel's 10 nanometer you know is comparable to tsm seven nanometer and that's sort of overstated their their nanometer you know loss but but so so they they were able to point as they were able to sort of hide some of the issues maybe in design with great process and and i i believe that comes down to volume so the question i have then is and i think so i think patrick's pat is doing the right thing because he's going after volume and that's what foundry brings but can he get enough volume or does he need for inst for instance i mean one of the theories i've put out there is that apple could could save the day for intel if the if the us government gets apple in a headlock and says hey we'll back off on break up big tech but you got to give pat some of your foundry volume that puts him on a steeper learning curve do you do you worry sometimes though daniel that intel just even with like qualcomm and broadcom who by the way are competitors of theirs and don't necessarily love them but even even so if they could get that those wins that they still won't have the volume to compete on a cost basis or do you feel like even if they're numbered a number three even behind samsung it's good enough what are your thoughts on that well i don't believe a company like intel goes into a business full steam and they're not new to this business but the obvious volume and expansion that they're looking at with the intention of being number two or three these great companies and you know that's same thing i always say with google cloud google's not out to be the third cloud they're out to be one well that's intel will want to to be stronger if the us government and these investments that it's looking at making this 50 plus billion dollars is looking to pour into this particular space which i don't think is actually 
enough but if if the government makes these commitments and intel being likely one of the recipients of at least some of these dollars to help expedite this process move forward with building these facilities to make increased manufacturing very likely there's going to be some precedent of law a policy that is going to be put in place to make sure that a certain amount of the volume is done here stateside with companies this is a strategic imperative this is a government strategic imperative this is a putting the country at risk of losing its technology leadership if we cannot manufacture and control this process of innovation so i think intel is going to have that as a benefit that the government is going to most likely require some of this manufacturing to take place here um especially if this investment is made the last thing they're going to want to do is build a bunch of foundries and build a bunch of fabs and end up having them not at capacity especially when the world has seen how much of the manufacturing is now being done in taiwan so i think we're concluding and i i i correctly if i'm wrong but intel is too strategic to fail and and i i sometimes worry they can go bankrupt you know trying to compete with the likes of tsmc and that's why the the the public policy and the in the in the partnership with the u.s government and the eu is i think so important yeah i don't think bankruptcy is an immediate issue i think um but while i follow your train of thought dave i think what you're really looking at more is can the company grow and continue to get support where i worry about is shareholders getting exhausted with intel's the merry-go-round of not growing fast enough not gaining market share not being clearly identified as a leader in any particular process or technology and sort of just playing the role of the incumbent and they the company needs to whether it's in ai whether it's at the edge whether it's in the communications and service provider space intel is doing well you look at their quarterly numbers they're making money but if you had to say where are they leading right now what what which thing is intel really winning uh consistently at you know you look at like ai and ml and people will point to nvidia you look at you know innovation for um client you know and even amd has been super disruptive and difficult for intel uh of course you we've already talked about in like mobile um how impactful arm has been and arm is also playing a pretty big role in servers so like i said the market share and the technology leadership are a little out of skew right now and i think that's where pat's really working hard is identifying the opportunities for for intel to play market leader and technology leader again and for the market to clearly say yes um fab and foundry you know could this be an area where intel becomes the clear leader domestically and i think that the answer is definitely yes because none of the big chipmakers in the us are are doing fabrication you know they're they're all outsourcing it to overseas so if intel can really lead that here grow that large here then it takes some of the pressure off of the process and the innovation side and that's not to say that intel won't have to keep moving there but it does augment the revenue creates a new profit center and makes the company even more strategic here domestically yeah and global foundry tapped out of of sub 10 nanometer and that's why ibm's pseudonym hey wait a minute you had a commitment there the concern i have 
and this is where, again, your point is I think really important with the chip shortage: to go from initial design to tape out took Tesla and Apple sub-24 months, probably 18 months. With Intel, we're on a three-year design-to-tape-out cycle, maybe even four years. So they've got to compress that, and as you well know, that's a really hard thing to do, but the chip shortage is buying them time. I think that's a really important point that you brought out early in this segment. But the other big question, Daniel, I want to test with you is this. You mentioned seeing Arm in the enterprise; not a lot of people talk about that or have visibility into it, but I think you're right on. So will Arm and Nvidia be able to seriously penetrate the enterprise, the server business in particular? Clearly Jensen wants to be there. Now, this data from ETR lays out many of the enterprise players, and we've superimposed the semiconductor giants in logos. The data is an XY chart: it shows Net Score, that's ETR's measure of spending momentum, on the vertical axis, and market share on the horizontal axis. Market share is not like IDC market share; it's presence in the data set. And as we've reported before, AWS is leading the charge in enterprise architecture. As Daniel mentioned, they're designing their own chips, Nitro and Graviton. Microsoft is following suit, as is Google. VMware has Project Monterey. Cisco is on the chart; Dell, HP, and IBM with Red Hat are also shown. And we've superimposed Intel, Nvidia, China, and Arm. Now, we can debate the position of the logos, but we know that, one, Intel has a dominant position in the data center; it's got to protect that business and cannot lose ground as it has in PCs, because of the margin pressure it would face. Two, we know AWS, with its Annapurna acquisition, is trying to control its own destiny. Three, we know VMware has Project Monterey and is following AWS's lead to support these new workloads beyond x86 general purpose, with partnerships with Pensando and Arm and others. And four, we know Cisco has chip design chops, as does HPE, maybe to a lesser extent, and of course IBM has excellent semiconductor design expertise, especially when it comes to things like memory disaggregation. As I said, Jensen's going hard after the data center; you know him well, Daniel. We know China wants to control its own destiny. And then there's Arm: it dominates mobile and, as you pointed out, IoT. Can it make a play for the data center? Daniel, how do you see this picture, and what are your thoughts on the future of the enterprise in the context of semiconductor competition?
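For readers who want to see how an XY chart like the one Dave describes is put together, here is a minimal matplotlib sketch with Net Score on the vertical axis and presence in the data set on the horizontal axis. The vendor coordinates are placeholder values chosen only to illustrate the layout; they are not actual ETR survey results.

```python
# Minimal sketch of an ETR-style XY chart: Net Score (spending momentum) on the
# y-axis versus presence in the survey data set on the x-axis.
# NOTE: all coordinates below are illustrative placeholders, NOT real ETR data.
import matplotlib.pyplot as plt

vendors = {
    # name: (presence_in_dataset_pct, net_score_pct) -- hypothetical values
    "AWS": (60, 55),
    "Microsoft": (65, 50),
    "Google": (35, 45),
    "VMware": (45, 30),
    "Cisco": (40, 20),
    "Dell": (42, 18),
    "IBM/Red Hat": (30, 15),
}

fig, ax = plt.subplots(figsize=(8, 6))
for name, (presence, net_score) in vendors.items():
    ax.scatter(presence, net_score)
    ax.annotate(name, (presence, net_score),
                textcoords="offset points", xytext=(5, 5))

ax.set_xlabel("Presence in data set (%)")
ax.set_ylabel("Net Score - spending momentum (%)")
ax.set_title("ETR-style enterprise XY chart (illustrative values only)")
plt.show()
```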
>> It's going to take some time, I believe. But look at some of the investments and products that have been brought to market, and you mentioned that shorter tape-out period, that shorter period for innovation: whether it's Graviton on AWS or the AI/ML chips with Trainium and Inferentia, how quickly AWS was able to develop, build, and deploy to market an Arm-based solution that is being well received and is becoming an increasing component of the services and products offered from AWS. At this point it's still pretty small, and I would suggest that Nvidia and Arm, in the spirit of trying to get this deal done, probably don't want the enterprise opportunity to be overly inflated as to how quickly the company will be able to play in that space, because that could slow things down or raise caution flags with the regulators that are monitoring this. At the same time, you could argue that Arm offering additional options and competition, much like it's doing in client, will offer new form factors, new designs, new SKUs. The OEMs will be able to create more customized hardware offerings that might be unique for certain enterprises and industries, and put more focus there. We're seeing the disaggregation with DPUs, and how that technology uses Arm, with what AWS is doing with Nitro, and with what these different companies are doing to use semiconductor technology to split out security, networking, and storage. So you start to see that design innovation could become very interesting on the foundation of Arm. So in time, I certainly see momentum. Right now, the thing is, most companies in the enterprise are looking for something fairly well baked, off the shelf, that can meet their needs, whether it's SAP or running the different custom applications and commerce solutions the business is built on top of, and Intel meets most of those needs. So Arm has made a lot of sense, for instance, with these cloud-scale providers, but not necessarily as much sense for enterprises, especially those that don't want to look at refactoring all their workloads. But as software becomes simpler, as refactoring becomes easier to do between different technologies and processes, you start to say, well, Arm could be compelling. Because the bottom line is, and we know this from mobile devices, most of us don't care what the processor is; the average person, the average user, looks at many of these companies the same. In the enterprise it has always mattered, kind of like in the PC world it used to really matter; that's where Intel Inside was born. But as we continue to grow up, you see these different companies, Nvidia, AMD, Intel, all seen as very worthy companies with very capable technologies in the data center. If they can offer economics, if they can offer performance, if they can offer faster time to value, people will look at them. So I'd say in time, Dave, the answer is Arm will certainly become more and more competitive in the data center, like it was able to do at the edge and in mobile.
>> Yeah, one of the things that we've talked about is that the software-defined data center is awesome, but it also created a lot of wasted overhead in terms of offloading storage and networking and security, and much of that is being done with general-purpose x86 processors, which are more expensive than, for instance, what you summarized well: what AWS is doing with Graviton and Trainium and other tooling, and what Ampere is doing in Oracle. And you're seeing both of those companies, particularly AWS, get ISVs to write so they can run general-purpose applications on Arm-based processors as well. It sets up well for AI inferencing at the edge, and we know Arm's dominating the edge. We see all these new types of workloads coming into the data center; if you look at what companies like Nebulon and Pensando and others are doing, a lot of their offloads are going to Arm. They're putting Arm in, even though they're still using x86 in a lot of cases, but they're offloading to Arm, so it seems like they're coming in through the back door. I understand your point, actually, about them not wanting to overplay their hand, especially during these negotiations, but we think that,
long term, it bears watching. But Intel, they have such a strong presence, they've got a super strong ecosystem, and they really have great relationships with a lot of the enterprise players and influence over them, so they're going to use that. The chip shortage benefits them, and as for the relationship with the U.S. government, Pat is spending a lot of time working that. So it's really going to be interesting to see how this plays out. Daniel, I want to give you the last word, your final thoughts on what we talked about today and where you see this all headed.
>> I think the world benefits as a whole with more competition and more innovation pressure. I like to see more players coming into the fray. I think we've seen Intel react over the last year under Pat Gelsinger's leadership; we've seen the technology innovation, the Angstrom era, the 20A, and we're starting to see what that roadmap is going to look like. We've certainly seen how companies like Nvidia can disrupt, come into market, and play a major role not just with hardware but with software. But as a whole, as innovation continues to take form at scale, we all benefit. It means more intelligent, software-defined vehicles; it puts phones in our hands that are more powerful; it gives power to cities, governments, and enterprises that can build applications and tools that give us social networks and data-driven experiences. So I'm very bullish and optimistic on it as a whole. I've said this before and I'll say it again: I believe semiconductors will eat the world. And then you look at the companies we didn't even really talk about. Whether it's in AI, like Groq or Graphcore, there are some very cool companies building things. You've got Qualcomm, which bought Nuvia, another company that could come out of the blue and offer us new innovations in mobile and personal computing. There are so many cool companies, Dave. With the scale of data and the growth in demand and desire for connectivity in the world, it's never been a more interesting time to be a fan of technology. The only thing I will say, as a whole, as a society, is I hope we can fix this manufacturing problem, because it does create risks. The supply chain, inflation, the economics, all of that ties together, and a lot of people don't see it. But if we can't get this manufacturing issue under control... We didn't really talk about China, Dave, and I'll just say Taiwan and China are very physically close together, and the way China sees Taiwan and the way we see Taiwan are completely different. We have very little control over what can happen; we've all seen what's happened with Hong Kong. As I said when I started this conversation, we've got all these trains on the track. They're all moving, but they're not in parallel; these tracks are all converging, but the convergence isn't perpendicular, so sometimes we don't see how all these things interrelate. But as a whole, it's a very exciting time. I love being in technology, and I love having the chance to come out here and talk with you.
>> I love the optimism, and you're right, that competition is going to come from China as well. Xi has made it a part of his legacy, I think, to reincorporate Taiwan. That's going to be interesting to see. I mean, Taiwan ebbs and flows with regard to its leadership; sometimes they're more pro-, I guess I should say less anti-China, maybe that's the better way to say it. And China's putting in big fab capacity for NAND.
Maybe people look at that and say some of it is the low end of the market, but Clay Christensen would say, well, go take a look at the steel industry and see what happened there. So we didn't talk much about China, and that was my oversight, but they're after self-sufficiency. It's not like they haven't tried before, kind of like Intel has tried foundry before, but I think they're really going for it this time. So do you believe that China will be able to get to self-sufficiency, let's say within the next 10 to 15 years, with semiconductors?
>> Yes. I would never count China out of anything if they put their mind to it, if it's something they want to put absolute focus on. Right now China vacillates between wanting to be a good player and a good steward to the world and wanting to completely run its own show. Look at the politicization of what's going on over there: we all saw what happened in the real estate market this past week, we saw what happened with ed tech over the last few months, and we've seen what's happened with innovation and entrepreneurship. It is not entirely clear whether China wants to give the more capitalistic innovation ecosystem a full try, but it has certainly shown that it wants to be seen as a world leader, and over the last few decades it has accomplished that in almost any area it wants to compete in. So, Dave, I would say if this is one of Xi Jinping's primary focuses, it would be very irresponsible to rule it out as a possibility.
>> Daniel, I've got to tell you, I love collaborating with you. We met face to face just recently, and I hope we can do this again. I'd love to have you back on the program. Thanks so much for your time and insights today.
>> Thanks for having me, Dave.
>> So, Daniel's website is Futurum Research, that's three U's in Futurum, check that out at futurumresearch.com. He is really plugged in, forward thinking, and a great resource; @danielnewmanUV is his Twitter, so go follow him for some great stuff. And remember, these episodes are all available as podcasts wherever you listen; all you do is search "Breaking Analysis podcast." We publish each week on wikibon.com and siliconangle.com, and by the way, Daniel, thank you for contributing your quotes to SiliconANGLE; the writers there love you. You can always connect on Twitter, I'm @dvellante, or you can email me at david.vellante@siliconangle.com. I appreciate the comments on LinkedIn, and don't forget to check out etr.plus for all the survey data. This is Dave Vellante for theCUBE Insights powered by ETR. Be well, and we'll see you next time.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
50 | QUANTITY | 0.99+ |
2011 | DATE | 0.99+ |
patrick | PERSON | 0.99+ |
three-year | QUANTITY | 0.99+ |
four years | QUANTITY | 0.99+ |
2020 | DATE | 0.99+ |
33 percent | QUANTITY | 0.99+ |
nvidia | ORGANIZATION | 0.99+ |
100 | QUANTITY | 0.99+ |
daniel | PERSON | 0.99+ |
taiwan | LOCATION | 0.99+ |
700 | QUANTITY | 0.99+ |
millions of dollars | QUANTITY | 0.99+ |
apple | ORGANIZATION | 0.99+ |
alibaba | ORGANIZATION | 0.99+ |
boston | LOCATION | 0.99+ |
18 months | QUANTITY | 0.99+ |
samsung | ORGANIZATION | 0.99+ |
daniel newman | PERSON | 0.99+ |
thousands of dollars | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
america | LOCATION | 0.99+ |
dave vellante | PERSON | 0.99+ |
tomorrow | DATE | 0.99+ |
one reason | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
10x | QUANTITY | 0.99+ |
microsoft | ORGANIZATION | 0.99+ |
siliconangle.com | OTHER | 0.99+ |
each week | QUANTITY | 0.99+ |
amd | ORGANIZATION | 0.98+ |
aws | ORGANIZATION | 0.98+ |
dave | PERSON | 0.98+ |
10 nanometer | QUANTITY | 0.98+ |
ibm | ORGANIZATION | 0.98+ |
intel | ORGANIZATION | 0.98+ |
pansando | ORGANIZATION | 0.98+ |
palo alto | ORGANIZATION | 0.98+ |
each year | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
pandemic | EVENT | 0.98+ |
u.s government | ORGANIZATION | 0.98+ |
united states | LOCATION | 0.98+ |
china | LOCATION | 0.98+ |
24 months | QUANTITY | 0.97+ |
andy jassy | PERSON | 0.97+ |
this year | DATE | 0.97+ |
50 plus billion dollars | QUANTITY | 0.97+ |
f-150s | COMMERCIAL_ITEM | 0.97+ |
last year | DATE | 0.97+ |
march of this year | DATE | 0.97+ |
termresearch.com | OTHER | 0.97+ |
around 15 percent | QUANTITY | 0.96+ |
vmware | ORGANIZATION | 0.96+ |
The Future of the Semiconductor Industry | TITLE | 0.96+ |
cisco | ORGANIZATION | 0.96+ |
nuvia | ORGANIZATION | 0.96+ |
one | QUANTITY | 0.96+ |
broadcom | ORGANIZATION | 0.96+ |
clay christensen | PERSON | 0.96+ |
tesla | PERSON | 0.96+ |
china | ORGANIZATION | 0.95+ |
around 30 percent a year | QUANTITY | 0.95+ |
Raj Pai, AWS | AWS EC2 Day 2021
(upbeat rhythmic music) >> Everyone, I'm John Furrier with theCUBE here at Palo Alto on a remote interview for a special video interview. The EC2 15th birthday party celebration event. Raj Pai, who's the Vice President of EC2 Product Management AWS is here with me. Congratulations on Amazon Web Services, EC2 with the compute. What a journey. 15 years old. Soon we got the keys to the car at a couple more years. So Raj, great to see you. You guys have been doing great work. Congratulations. >> Thank you. It's great being here. It's super exciting for me too. I can't believe it's 15 years and you know that big, we're still at the very beginning as you know, that we often say. >> The building blocks that have been there from the beginning really set the table, and it's just been fun to watch the innovation on behalf of customers that you guys have done at AWS and more, and for entrepreneurs and for developers, it just continues to be great and the edge is right on the corner. Wavelength, all the great stuff. But let's talk about the specific topic here that I really want to drill into is that as you look at the 15th year and birthday for EC2, okay? You're looking at the future as well. You're looking at the past, present and future. And one of the things that's most compelling about recent re-invent was the Graviton performance numbers are amazing. You guys have been building custom silicon for a while. You also worked with Intel and AMD. What is it about? What's the huge investment for you guys? Where do you started to see some returns? Are you seeing returns? And then why did AWS decide to build its own processors? >> Yeah, now, that's a really good question. And I mean, like with everything else we do in AWS, it's all about innovating on behalf of our customers. And one of the things our customers are telling us, that they continue to tell us is they want to see better performance at lower prices. And we've been able to deliver that with our hardware partners for the last 15 years. But as we've understood the workloads that run on EC2 and AWS, we saw an opportunity. Like, what if we were going to go and design our own processor that was really optimized for the sort of workload that customers run on the Cloud? And make design decisions when designing the CPU and the system and the chip around the CPU that does things like bring a lot more core local cache and speed up the parts of the operations that really benefit real-world workload. So, this isn't about benchmarks. It's about how do real world workloads perform and how do we build systems that optimize that performance? And with Graviton, we were able to hit the nail on the head. We were also very pleasantly surprised when we got our first chips off the line. And we're seeing that a customer, like about 40% performance improvement at significantly lower cost. And that's super exciting. And that's one of the reasons we're getting so much interest from our customers. >> I got to say as a geek and a tech nerd, I love the silicon development. And there's benefits there, also the performance is there. The thing that also is pretty obvious that's happening is and the world seeing it is the shift towards ARM-based computing. What kinds of customers and use cases are you seeing adopt to Graviton? And what kind of workloads were they running on? What are the things that surprise you guys, that didn't surprise me. Did you guys always talk about the upcheck and how everyone's leveraging it? What are some of the examples? 
Take us through some of the customers, use cases, workloads. What's surprising you and what's going on with Graviton? >> Yes, so I think that the biggest surprise for us is how broadly applicable it's been. So one of the things we did, we launched with reinvent is like we have different form factors of compute. We have memory-optimized instances that are good for databases and in memory caches. We have compute optimized for HPC and workloads that really take advantage of the performance of the chip and then we have general purpose workloads. And we we introduced Graviton variants of all those instance families And we're actually seeing the same sort of performance benefits across workload. So, and it's one of the reasons why companies like Metrol, and Snap and SmugMug, they move one workload over, they see the performance benefit and before you know it, they're starting to move workloads and mass over across kind of that spectrum. So, I think that's one of the biggest surprises is that Graviton seems to do well across a wide range and we're going to keep on introducing it more and more of instance families, because we've seen this uptick well. >> You're seeing a lot of people move to the Graviton. You mentioned a few of those early adopters who were pushing the envelope, and they're always kind of trotted out there as examples at reinvent, which is always fun to see what they're working on next. And now is that people see the Graviton2 instances, okay, the architecture's different, higher performance. How much effort do our customers typically need to move to Graviton2 instances? And what are some of the benefits they're seeing on performance and price performance? Can you talk about that transition? Because that's natural evolution for them. >> Yeah. It's actually a lot less than they originally think. So, some of the hardest effort is just getting them over the line to try it. So, one of the things that we tell our customers who are considering Graviton is it just takes one or two developers take one workload and go off for a couple of weeks and just try reporting it to Graviton. And more often than not, they come back to us in four or five days. They're like, it works. And we just had to do some testing and verification, but we were able to afford it because, you know, the operating system support was there, the ISP support was there and the tools that they use, and they're using most cases, modern programming languages like Python or Go or Java or PhD where, you know, interpret the language and it just run. And so there's very little lift in comparison to what people think it's going to be. And that's one of the reasons that, you know, one of the big announcements we made in the last few weeks is what we're calling the Graviton challenge, right? So it's a set of blueprints for customers to essentially have best practices on how to in four days take, you know, a piece of code and piece of that workload and execute it and run it and migrate it to the Graviton. And we're seeing a lot of interest in that as people in the community realize how easy that actually is. >> What are some of the cool price performance things that are emerging? Obviously it makes sense if you don't really need it, don't pay for it, but you have that option. A lot of people are going there. Is there a wave you see coming that Graviton2 is going to be really set up for that you kind of see some early signals coming in, Raj? Because, I can see the 64 bit. I can see where Graviton fits today. 
Obviously, performance is key. Is it certain things that are emerging? What's the main problems that it solves? >> Well, I think anything that's a multi-threaded architecture is going to do really well in Graviton because of the, we have really densely packed 64 course. And so you're going to see things like microservices and containers and workloads that are more, that are able to take advantage of that parallel execution do really, really well. And so, we say 40% performance improvement, but like, when our customers have gone and tried this, they've seen upwards of 50% depending on the workload. So yeah, it's going to be more your multi-threaded application. There's some applications that may not be a fit, like it can give a legacy, you know, for example, like, there's some software that hasn't yet been moved over and we're going to continue to invest super heavily in our whole ecosystem of hardware, for the longterm. So I think that because there's a great option and we just encourage them to try it. And then they'll learn from their experience what works and what doesn't. >> Wow. 15th birthday. Still growing up and it's starting to get more mature. You're the VP of Product Management. You have the keys to the kingdom. So, you have wide-ranging responsibilities. Share with us if you can. I know that you really can't say much, but try to give a little bit of teaser. You got Wavelength. I can see the dots connecting at the edge. You got Outposts, so we see all that emerging. I can almost imagine that this is going to get stronger. What should people think about? Where's the headroom for EC2 with Graviton and Graviton2? >> Yeah, I know. I think like, a new architect (mumbles) yourself. But like, our goal is to have AWS kind of everywhere our customers are. And that means the full power of AWS. So, I think you're going to see more and more of us having EC2 in compute capacity, wherever customers need it. That could be in an Outpost. That could be on their 5G network. That could be in a city right next to them, right? And you're going to see us continue to offer the variety, the selection of instances and platforms in all those locations as well. So, I think the key for us is to be ubiquitous and have compute power everywhere our customers need it, in the form factors they need it. >> That's awesome. Congratulations. I love the power. You can't go wrong with sending computers where the data is, where the customers are. AWS, Amazon Web Services. Building their own custom silicon with Graviton2 processors. This is EC2 continuing to grow up. Raj Pai, Vice President of EC2 Product Management. Thank you for coming on and sharing the update and congratulations on the 15th birthday to EC2. >> Yeah, thanks for having me. It's been great. Have a great Friday. >> All right. Great. I'm Jeffrey with theCUBE. You're watching theCUBE coverage of EC2's 15th birthday event. Thanks for watching. (soft rhythmic music)
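To give a concrete sense of how small the first step Raj describes can be, here is a minimal boto3 sketch that confirms an instance type is Arm-based and launches a single Graviton2 (m6g) instance to trial one workload. The AMI ID, key pair name, and region below are placeholders to substitute with your own values; this is an illustrative sketch, not an official AWS example.

```python
# Minimal sketch: verify an EC2 instance type reports the arm64 architecture,
# then launch one Graviton2 instance to trial a single workload.
# The AMI ID and key name are placeholders -- substitute an arm64 AMI and key for your account.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region chosen only for illustration

# Check the processor architecture advertised for the instance type.
resp = ec2.describe_instance_types(InstanceTypes=["m6g.large"])
arch = resp["InstanceTypes"][0]["ProcessorInfo"]["SupportedArchitectures"]
print("m6g.large supports:", arch)  # expected to include 'arm64'

# Launch a single Graviton2 instance for the trial port.
if "arm64" in arch:
    run = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder: use an arm64 AMI for your region
        InstanceType="m6g.large",
        MinCount=1,
        MaxCount=1,
        KeyName="my-key-pair",            # placeholder key pair name
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "purpose", "Value": "graviton-trial"}],
        }],
    )
    print("Launched:", run["Instances"][0]["InstanceId"])
```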
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
AWS | ORGANIZATION | 0.99+ |
Raj Pai | PERSON | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
AMD | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
40% | QUANTITY | 0.99+ |
Raj | PERSON | 0.99+ |
Python | TITLE | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
50% | QUANTITY | 0.99+ |
15 years | QUANTITY | 0.99+ |
Metrol | ORGANIZATION | 0.99+ |
Java | TITLE | 0.99+ |
SmugMug | ORGANIZATION | 0.99+ |
Jeffrey | PERSON | 0.99+ |
Snap | ORGANIZATION | 0.99+ |
five days | QUANTITY | 0.99+ |
four | QUANTITY | 0.99+ |
first chips | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
Friday | DATE | 0.99+ |
two developers | QUANTITY | 0.99+ |
EC2 | ORGANIZATION | 0.98+ |
15th year | QUANTITY | 0.98+ |
64 bit | QUANTITY | 0.98+ |
15th birthday | QUANTITY | 0.97+ |
Graviton | ORGANIZATION | 0.97+ |
Go | TITLE | 0.97+ |
about 40% | QUANTITY | 0.97+ |
four days | QUANTITY | 0.95+ |
EC2 | TITLE | 0.93+ |
15 years old | QUANTITY | 0.9+ |
64 course | QUANTITY | 0.89+ |
today | DATE | 0.87+ |
Graviton2 | COMMERCIAL_ITEM | 0.86+ |
Graviton2 | TITLE | 0.85+ |
Wavelength | TITLE | 0.85+ |
15th birthday event | QUANTITY | 0.83+ |
theCUBE | ORGANIZATION | 0.83+ |
couple more years | QUANTITY | 0.82+ |
last 15 years | DATE | 0.82+ |
Graviton | TITLE | 0.81+ |
Vice President | PERSON | 0.8+ |
EC2 | EVENT | 0.79+ |
last | DATE | 0.65+ |
EC2 Day 2021 | EVENT | 0.64+ |
Graviton | COMMERCIAL_ITEM | 0.53+ |
weeks | DATE | 0.49+ |
Breaking Analysis: Why Apple Could be the Key to Intel's Future
>> From theCUBE studios in Palo Alto, in Boston bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante >> The latest Arm Neoverse announcement further cements our opinion that it's architecture business model and ecosystem execution are defining a new era of computing and leaving Intel in it's dust. We believe the company and its partners have at least a two year lead on Intel and are currently in a far better position to capitalize on a major waves that are driving the technology industry and its innovation. To compete our view is that Intel needs a new strategy. Now, Pat Gelsinger is bringing that but they also need financial support from the US and the EU governments. Pat Gelsinger was just noted as asking or requesting from the EU government $9 billion, sorry, 8 billion euros in financial support. And very importantly, Intel needs a volume for its new Foundry business. And that is where Apple could be a key. Hello, everyone. And welcome to this week's weekly bond Cube insights powered by ETR. In this breaking analysis will explain why Apple could be the key to saving Intel and America's semiconductor industry leadership. We'll also further explore our scenario of the evolution of computing and what will happen to Intel if it can't catch up. Here's a hint it's not pretty. Let's start by looking at some of the key assumptions that we've made that are informing our scenarios. We've pointed out many times that we believe Arm wafer volumes are approaching 10 times those of x86 wafers. This means that manufacturers of Arm chips have a significant cost advantage over Intel. We've covered that extensively, but we repeat it because when we see news reports and analysis and print it's not a factor that anybody's highlighting. And this is probably the most important issue that Intel faces. And it's why we feel that Apple could be Intel's savior. We'll come back to that. We've projected that the chip shortage will last no less than three years, perhaps even longer. As we reported in a recent breaking analysis. Well, Moore's law is waning. The result of Moore's law, I.e the doubling of processor performance every 18 to 24 months is actually accelerating. We've observed and continue to project a quadrupling of performance every two years, breaking historical norms. Arm is attacking the enterprise and the data center. We see hyperscalers as the tip of their entry spear. AWS's graviton chip is the best example. Amazon and other cloud vendors that have engineering and software capabilities are making Arm-based chips capable of running general purpose applications. This is a huge threat to x86. And if Intel doesn't quickly we believe Arm will gain a 50% share of an enterprise semiconductor spend by 2030. We see the definition of Cloud expanding. Cloud is no longer a remote set of services, in the cloud, rather it's expanding to the edge where the edge could be a data center, a data closet, or a true edge device or system. And Arm is by far in our view in the best position to support the new workloads and computing models that are emerging as a result. Finally geopolitical forces are at play here. We believe the U S government will do, or at least should do everything possible to ensure that Intel and the U S chip industry regain its leadership position in the semiconductor business. If they don't the U S and Intel could fade to irrelevance. Let's look at this last point and make some comments on that. 
Here's a map of the South China sea in a way off in the Pacific we've superimposed a little pie chart. And we asked ourselves if you had a hundred points of strategic value to allocate, how much would you put in the semiconductor manufacturing bucket and how much would go to design? And our conclusion was 50, 50. Now it used to be because of Intel's dominance with x86 and its volume that the United States was number one in both strategic areas. But today that orange slice of the pie is dominated by TSMC. Thanks to Arm volumes. Now we've reported extensively on this and we don't want to dwell on it for too long but on all accounts cost, technology, volume. TSMC is the clear leader here. China's president Xi has a stated goal of unifying Taiwan by China's Centennial in 2049, will this tiny Island nation which dominates a critical part of the strategic semiconductor pie, go the way of Hong Kong and be subsumed into China. Well, military experts say it was very hard for China to take Taiwan by force, without heavy losses and some serious international repercussions. The US's military presence in the Philippines and Okinawa and Guam combined with support from Japan and South Korea would make it even more difficult. And certainly the Taiwanese people you would think would prefer their independence. But Taiwanese leadership, it ebbs and flows between those hardliners who really want to separate and want independence and those that are more sympathetic to China. Could China for example, use cyber warfare to over time control the narrative in Taiwan. Remember if you control the narrative you can control the meme. If you can crawl the meme you control the idea. If you control the idea, you control the belief system. And if you control the belief system you control the population without firing a shot. So is it possible that over the next 25 years China could weaponize propaganda and social media to reach its objectives with Taiwan? Maybe it's a long shot but if you're a senior strategist in the U S government would you want to leave that to chance? We don't think so. Let's park that for now and double click on one of our key findings. And that is the pace of semiconductor performance gains. As we first reported a few weeks ago. Well, Moore's law is moderating the outlook for cheap dense and efficient processing power has never been better. This slideshows two simple log lines. One is the traditional Moore's law curve. That's the one at the bottom. And the other is the current pace of system performance improvement that we're seeing measured in trillions of operations per second. Now, if you calculate the historical annual rate of processor performance improvement that we saw with x86, the math comes out to around 40% improvement per year. Now that rate is slowing. It's now down to around 30% annually. So we're not quite doubling every 24 months anymore with x86 and that's why people say Moore's law is dead. But if you look at the (indistinct) effects of packaging CPU's, GPU's, NPUs accelerators, DSPs and all the alternative processing power you can find in SOC system on chip and eventually system on package it's growing at more than a hundred percent per annum. And this means that the processing power is now quadrupling every 24 months. That's impressive. And the reason we're here is Arm. Arm has redefined the core process of model for a new era of computing. 
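A quick back-of-the-envelope check makes the doubling-time arithmetic above concrete. The rates are the ones quoted in the analysis, and the only assumption is that "X% improvement per year" compounds annually; this is a sketch of the reasoning, not a new data source.

```python
# Doubling times implied by annual performance-improvement rates, assuming
# compound growth: doubling_time = ln(2) / ln(1 + annual_rate).
import math

def doubling_time_years(annual_rate: float) -> float:
    return math.log(2) / math.log(1 + annual_rate)

for label, rate in [
    ("historical x86 (~40%/yr)", 0.40),
    ("recent x86 (~30%/yr)", 0.30),
    ("combined SoC/accelerator (~100%/yr)", 1.00),
]:
    t = doubling_time_years(rate)
    print(f"{label}: doubles every ~{t:.1f} years, quadruples every ~{2 * t:.1f} years")

# ~40%/yr  -> doubles roughly every 2.1 years (the classic Moore's law cadence)
# ~30%/yr  -> doubles roughly every 2.6 years (slower than the 18-24 month norm)
# ~100%/yr -> doubles every year, i.e. quadruples every ~2 years, as argued above
```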
Arm made an announcement last week which really recycle some old content from last September, but it also put forth new proof points on adoption and performance. Arm laid out three components and its announcement. The first was Neoverse version one which is all about extending vector performance. This is critical for high performance computing HPC which at one point you thought that was a niche but it is the AI platform. AI workloads are not a niche. Second Arm announced the Neoverse and two platform based on the recently introduced Arm V9. We talked about that a lot in one of our earlier Breaking Analysis. This is going to performance boost of around 40%. Now the third was, it was called CMN-700 Arm maybe needs to work on some of its names, but Arm said this is the industry's most advanced mesh interconnect. This is the glue for the V1 and the N2 platforms. The importance is it allows for more efficient use and sharing of memory resources across components of the system package. We talked about this extensively in previous episodes the importance of that capability. Now let's share with you this wheel diagram underscores the completeness of the Arm platform. Arms approach is to enable flexibility across an open ecosystem, allowing for value add at many levels. Arm has built the architecture in design and allows an open ecosystem to provide the value added software. Now, very importantly, Arm has created the standards and specifications by which they can with certainty, certify that the Foundry can make the chips to a high quality standard, and importantly that all the applications are going to run properly. In other words, if you design an application, it will work across the ecosystem and maintain backwards compatibility with previous generations, like Intel has done for years but Arm as we'll see next is positioning not only for existing workloads but also the emerging high growth applications. To (indistinct) here's the Arm total available market as we see it, we think the end market spending value of just the chips going into these areas is $600 billion today. And it's going to grow to 1 trillion by 2030. In other words, we're allocating the value of the end market spend in these sectors to the marked up value of the Silicon as a percentage of the total spend. It's enormous. So the big areas are Hyperscale Clouds which we think is around 20% of this TAM and the HPC and AI workloads, which account for about 35% and the Edge will ultimately be the largest of all probably capturing 45%. And these are rough estimates and they'll ebb and flow and there's obviously some overlap but the bottom line is the market is huge and growing very rapidly. And you see that little red highlighted area that's enterprise IT. Traditional IT and that's the x86 market in context. So it's relatively small. What's happening is we're seeing a number of traditional IT vendors, packaging x86 boxes throwing them over the fence and saying, we're going after the Edge. And what they're doing is saying, okay the edge is this aggregation point for all these end point devices. We think the real opportunity at the Edge is for AI inferencing. That, that is where most of the activity and most of the spending is going to be. And we think Arm is going to dominate that market. And this brings up another challenge for Intel. So we've made the point a zillion times that PC volumes peaked in 2011. And we saw that as problematic for Intel for the cost reasons that we've beat into your head. 
And lo and behold PC volumes, they actually grew last year thanks to COVID and we'll continue to grow it seems for a year or so. Here's some ETR data that underscores that fact. This chart shows the net score. Remember that's spending momentum it's the breakdown for Dell's laptop business. The green means spending is accelerating and the red is decelerating. And the blue line is net score that spending momentum. And the trend is up and to the right now, as we've said this is great news for Dell and HP and Lenovo and Apple for its laptops, all the laptops sellers but it's not necessarily great news for Intel. Why? I mean, it's okay. But what it does is it shifts Intel's product mix toward lower margin, PC chips and it squeezes Intel's gross margins. So the CFO has to explain that margin contraction to wall street. Imagine that the business that got Intel to its monopoly status is growing faster than the high margin server business. And that's pulling margins down. So as we said, Intel is fighting a war on multiple fronts. It's battling AMD in the core x86 business both PCs and servers. It's watching Arm mop up in mobile. It's trying to figure out how to reinvent itself and change its culture to allow more flexibility into its designs. And it's spinning up a Foundry business to compete with TSMC. So it's got to fund all this while at the same time propping up at stock with buybacks Intel last summer announced that it was accelerating it's $10 billion stock buyback program, $10 billion. Buy stock back, or build a Foundry which do you think is more important for the future of Intel and the us semiconductor industry? So Intel, it's got to protect its past while building his future and placating wall street all at the same time. And here's where it gets even more dicey. Intel's got to protect its high-end x86 business. It is the cash cow and funds their operation. Who's Intel's biggest customer Dell, HP, Facebook, Google Amazon? Well, let's just say Amazon is a big customer. Can we agree on that? And we know AWS is biggest revenue generator is EC2. And EC2 was powered by microprocessors made from Intel and others. We found this slide in the Arm Neoverse deck and it caught our attention. The data comes from a data platform called lifter insights. The charts show, the rapid growth of AWS is graviton chips which are they're custom designed chips based on Arm of course. The blue is that graviton and the black vendor A presumably is Intel and the gray is assumed to be AMD. The eye popper is the 2020 pie chart. The instant deployments, nearly 50% are graviton. So if you're Pat Gelsinger, you better be all over AWS. You don't want to lose this customer and you're going to do everything in your power to keep them. But the trend is not your friend in this account. Now the story gets even gnarlier and here's the killer chart. It shows the ISV ecosystem platforms that run on graviton too, because AWS has such good engineering and controls its own stack. It can build Arm-based chips that run software designed to run on general purpose x86 systems. Yes, it's true. The ISV, they got to do some work, but large ISV they have a huge incentives because they want to ride the AWS wave. Certainly the user doesn't know or care but AWS cares because it's driving costs and energy consumption down and performance up. Lower cost, higher performance. Sounds like something Amazon wants to consistently deliver, right? And the ISV portfolio that runs on our base graviton and it's just going to continue to grow. 
And by the way, it's not just Amazon. It's Alibaba, it's Oracle, it's Marvell. It's 10 cents. The list keeps growing Arm, trotted out a number of names. And I would expect over time it's going to be Facebook and Google and Microsoft. If they're not, are you there? Now the last piece of the Arm architecture story that we want to share is the progress that they're making and compare that to x86. This chart shows how Arm is innovating and let's start with the first line under platform capabilities. Number of cores supported per die or, or system. Now die is what ends up as a chip on a small piece of Silicon. Think of the die as circuit diagram of the chip if you will, and these circuits they're fabricated on wafers using photo lithography. The wafers then cut up into many pieces each one, having a chip. Each of these pieces is the chip. And two chips make up a system. The key here is that Arm is quadrupling the number of cores instead of increasing thread counts. It's giving you cores. Cores are better than threads because threads are shared and cores are independent and much easier to virtualize. This is particularly important in situations where you want to be as efficient as possible sharing massive resources like the Cloud. Now, as you can see in the right hand side of the chart under the orange Arm is dramatically increasing the amount of capabilities compared to previous generations. And one of the other highlights to us is that last line that CCIX and CXL support again Arm maybe needs to name these better. These refer to Arms and memory sharing capabilities within and between processors. This allows CPU's GPU's NPS, et cetera to share resources very often efficiently especially compared to the way x86 works where everything is currently controlled by the x86 processor. CCIX and CXL support on the other hand will allow designers to program the system and share memory wherever they want within the system directly and not have to go through the overhead of a central processor, which owns the memory. So for example, if there's a CPU, GPU, NPU the CPU can say to the GPU, give me your results at a specified location and signal me when you're done. So when the GPU is finished calculating and sending the results, the GPU just signals the operation is complete. Versus having to ping the CPU constantly, which is overhead intensive. Now composability in that chart means the system it's a fixed. Rather you can programmatically change the characteristics of the system on the fly. For example, if the NPU is idle you can allocate more resources to other parts of the system. Now, Intel is doing this too in the future but we think Arm is way ahead. At least by two years this is also huge for Nvidia, which today relies on x86. A major problem for Nvidia has been coherent memory management because the utilization of its GPU is appallingly low and it can't be easily optimized. Last week, Nvidia announced it's intent to provide an AI capability for the data center without x86 I.e using Arm-based processors. So Nvidia another big Intel customer is also moving to Arm. And if it's successful acquiring Arm which is still a long shot this trend is only going to accelerate. But the bottom line is if Intel can't move fast enough to stem the momentum of Arm we believe Arm will capture 50% of the enterprise semiconductor spending by 2030. So how does Intel continue to lead? Well, it's not going to be easy. Remember we said, Intel, can't go it alone. 
And we posited that the company would have to initiate a joint venture structure. We propose a triumvirate of Intel, IBM with its power of 10 and memory aggregation and memory architecture And Samsung with its volume manufacturing expertise on the premise that it coveted in on US soil presence. Now upon further review we're not sure the Samsung is willing to give up and contribute its IP to this venture. It's put a lot of money and a lot of emphasis on infrastructure in South Korea. And furthermore, we're not convinced that Arvind Krishna who we believe ultimately made the call to Jettisons. Jettison IBM's micro electronics business wants to put his efforts back into manufacturing semi-conductors. So we have this conundrum. Intel is fighting AMD, which is already at seven nanometer. Intel has a fall behind in process manufacturing which is strategically important to the United States it's military and the nation's competitiveness. Intel's behind the curve on cost and architecture and is losing key customers in the most important market segments. And it's way behind on volume. The critical piece of the pie that nobody ever talks about. Intel must become more price and performance competitive with x86 and bring in new composable designs that maintain x86 competitive. And give the ability to allow customers and designers to add and customize GPU's, NPUs, accelerators et cetera. All while launching a successful Foundry business. So we think there's another possibility to this thought exercise. Apple is currently reliant on TSMC and is pushing them hard toward five nanometer, in fact sucking up a lot of that volume and TSMC is maybe not servicing some other customers as well as it's servicing Apple because it's a bit destructive, it is distracted and you have this chip shortage. So Apple because of its size gets the lion's share of the attention but Apple needs a trusted onshore supplier. Sure TSMC is adding manufacturing capacity in the US and Arizona. But back to our precarious scenario in the South China sea. Will the U S government and Apple sit back and hope for the best or will they hope for the best and plan for the worst? Let's face it. If China gains control of TSMC, it could block access to the latest and greatest process technology. Apple just announced that it's investing billions of dollars in semiconductor technology across the US. The US government is pressuring big tech. What about an Apple Intel joint venture? Apple brings the volume, it's Cloud, it's Cloud, sorry. It's money it's design leadership, all that to the table. And they could partner with Intel. It gives Intel the Foundry business and a guaranteed volume stream. And maybe the U S government gives Apple a little bit of breathing room and the whole big up big breakup, big tech narrative. And even though it's not necessarily specifically targeting Apple but maybe the US government needs to think twice before it attacks big tech and thinks about the long-term strategic ramifications. Wouldn't that be ironic? Apple dumps Intel in favor of Arm for the M1 and then incubates, and essentially saves Intel with a pipeline of Foundry business. Now back to IBM in this scenario, we've put a question mark on the slide because maybe IBM just gets in the way and why not? A nice clean partnership between Intel and Apple? Who knows? Maybe Gelsinger can even negotiate this without giving up any equity to Apple, but Apple could be a key ingredient to a cocktail of a new strategy under Pat Gelsinger leadership. 
Gobs of cash from the US and EU governments and volume from Apple. Wow, still a long shot, but one worth pursuing because as we've written, Intel is too strategic to fail. Okay, well, what do you think? You can DM me @dvellante or email me at david.vellante@siliconangle.com or comment on my LinkedIn post. Remember, these episodes are all available as podcasts so please subscribe wherever you listen. I publish weekly on wikibon.com and siliconangle.com. And don't forget to check out etr.plus for all the survey analysis. And I want to thank my colleague, David Floyer for his collaboration on this and other related episodes. This is Dave Vellante for theCUBE insights powered by ETR. Thanks for watching, be well, and we'll see you next time. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David Floyer | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
HP | ORGANIZATION | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Samsung | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
TSMC | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
2011 | DATE | 0.99+ |
Lenovo | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Pat Gelsinger | PERSON | 0.99+ |
$10 billion | QUANTITY | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
ORGANIZATION | 0.99+ | |
50% | QUANTITY | 0.99+ |
Alibaba | ORGANIZATION | 0.99+ |
$600 | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
45% | QUANTITY | 0.99+ |
two chips | QUANTITY | 0.99+ |
10 times | QUANTITY | 0.99+ |
10 cents | QUANTITY | 0.99+ |
South Korea | LOCATION | 0.99+ |
US | LOCATION | 0.99+ |
Last week | DATE | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Arizona | LOCATION | 0.99+ |
U S | ORGANIZATION | 0.99+ |
Boston | LOCATION | 0.99+ |
david.vellante@siliconangle.com | OTHER | 0.99+ |
1 trillion | QUANTITY | 0.99+ |
2030 | DATE | 0.99+ |
Marvell | ORGANIZATION | 0.99+ |
China | ORGANIZATION | 0.99+ |
Arvind Krishna | PERSON | 0.99+ |
two years | QUANTITY | 0.99+ |
Moore | PERSON | 0.99+ |
$9 billion | QUANTITY | 0.99+ |
10 | QUANTITY | 0.99+ |
EU | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
last week | DATE | 0.99+ |
twice | QUANTITY | 0.99+ |
first line | QUANTITY | 0.99+ |
Okinawa | LOCATION | 0.99+ |
last September | DATE | 0.99+ |
Hong Kong | LOCATION | 0.99+ |
Breaking Analysis: Arm Lays Down the Gauntlet at Intel's Feet
>> Announcer: From the Cube's studios in Palo Alto in Boston, bringing you data-driven insights from The Cube and ETR. This is "Breaking Analysis" with Dave Vellante. >> Exactly one week after Pat Gelsinger's announcement of his plans to reinvent Intel. Arm announced version nine of its architecture and laid out its vision for the next decade. We believe this vision is extremely strong as it combines an end-to-end capability from Edge to Cloud, to the data center, to the home and everything in between. Arms aspirations are ambitious and powerful. Leveraging its business model, ecosystem and software compatibility with previous generations. Hello every one and welcome to this week's Wikibon Cube Insights powered by ETR. And this breaking analysis will explain why we think this announcement is so important and what it means for Intel and the broader technology landscape. We'll also share with you some feedback that we received from the Cube Community on last week's episode and a little inside baseball on how Intel, IBM, Samsung, TSMC and the U.S. government might be thinking about the shifting landscape of semiconductor technology. Now, there were two notable announcements this week that were directly related to Intel's announcement of March 23rd. The Armv9 news and TSMC's plans to invest a $100 billion in chip manufacturing and development over the next three years. That is a big number. It appears to tramp Intel's plan $20 billion investment to launch two new fabs in the U.S. starting in 2024. You may remember back in 2019, Samsung pledged to invest a $116 billion to diversify its production beyond memory trip, memory chips. Why are all these companies getting so aggressive? And won't this cause a glut in chips? Well, first, China looms large and aims to dominate its local markets, which in turn is going to confer advantages globally. The second, there's a huge chip shortage right now. And the belief is that it's going to continue through the decade and possibly beyond. We are seeing a new inflection point in the demand as we discussed last week. Stemming from digital, IOT, cloud, autos in new use cases in the home as so well presented by Sarjeet Johal in our community. As to the glut, these manufacturers believe that demand will outstrip supply indefinitely. And I understand that a lack of manufacturing capacity is actually more deadly than an oversupply. Look, if there's a glut, manufacturers can cut production and take the financial hit. Whereas capacity constraints mean you can miss entire cycles of growth and really miss out on the demand and the cost reductions. So, all these manufacturers are going for it. Now let's talk about Arm, its approach and the announcements that it made this week. Now last week, we talked about how Pat Gelsinger his vision of a system on package was an attempt to leapfrog system on chip SOC, while Arm is taking a similar system approach. But in our view, it's even broader than the vision laid out by Pat at Intel. Arm is targeting a wide variety of use cases that are shown here. Arm's fundamental philosophy is that the future will require highly specialized chips and Intel as you recall from Pat's announcement, would agree. But Arm historically takes an ecosystem approach that is different from Intel's model. Arm is all about enabling the production of specialized chips to really fit a specific application. For example, think about the amount of AI going on iPhones. They move if I remember from fingerprint to face recognition. 
This requires specialized neural processing units, NPUs that are designed by Apple for that particular use case. Arm is facilitating the creation of these specialized chips to be designed and produced by the ecosystem. Intel on the other hand has historically taken a one size fits all approach. Built around the x86. The Intel's design has always been about improving the processor. For example, in terms of speed, density, adding vector processing to accommodate AI, et cetera. And Intel does all the design and the manufacturing in any specialization for the ecosystem is done by Intel. Much of the value, that's added from the ecosystem is frankly been bending metal or adding displays or other features at the margin. But, the advantage is that the x86 architecture is well understood. It's consistent, reliable, and let's face it. Most enterprise software runs on x86. So, but very, very different models historically, which we heard from Gelsinger last week they're going to change with a new trusted foundry strategy. Now let's go through an example that might help explain the power of Arm's model. Let's say, your AWS and you're doing graviton. Designing graviton and graviton2. Or Apple, designing the M1 chip, or Tesla designing its own chip, or any other company in in any one of these use cases that are shown here. Tesla is a really good example. In order to optimize for video processing, Tesla needed to add specialized code firmware in the NPU for it's specific use case within autos. It was happy to take off the shelf CPU or GPU or whatever, and leverage Arm's standards there. And then it added its own value in the NPU. So the advantage of this model is Tesla could go from tape out in less or, or, or or in less than a year versus get the tape out in less than a year versus what would normally take many years. Arm is, think of Arm is like customize a Lego blocks that enable unique value add by the ecosystem with a much faster time to market. So like I say, the Tesla goes from logical tape out if you will, to Samsung and then says, okay run this against your manufacturing process. And it should all work as advertised by Arm. Tesla, interestingly, just as an aside chose the 14 nanometer process to keep its costs down. It didn't need the latest and greatest density. Okay, so you can see big difference in philosophies historically between Arm and Intel. And you can see Intel vectoring toward the Arm model based on what Gelsinger said last week for its foundry business. Essentially it has to. Now, Arm announced a new Arm architecture, Armv9. v9 is backwards compatible with previous generations. Perhaps Arm learned from Intel's failed, Itanium effort for those remember that word. Had no backward compatibility and it really floundered. As well, Arm adds some additional capabilities. And today we're going to focus on the two areas that have highlighted, machine learning piece and security. I'll take note of the call out, 300 billion chips. That's Arm's vision. That's a lot. And we've said, before, Arm's way for volumes are 10X those of x86. Volume, we sound like a broken record. Volume equals cost reduction. We'll come back to that a little bit later. Now let's have a word on AI and machine learning. Arm is betting on AI and ML. Big as are many others. And this chart really shows why, it's a graphic that shows ETR data and spending momentum and pervasiveness in the dataset across all the different sectors that ETR tracks within its taxonomy. 
Note that ML/AI gets the top spot on the vertical axis, which represents net score. That's a measure of spending momentum or spending velocity. The horizontal axis is market share presence in the dataset. And we give this sector four stars to signify it's consistent lead in the data. So pretty reasonable bet by Arm. But the other area that we're going to talk about is security. And its vision day, Arm talked about confidential compute architecture and these things called realms. Note in the left-hand side, showing data traveling all over the different use cases and around the world and the call-out from the CISO below, it's a large public airline CISO that spoke at an ETR Venn round table. And this individual noted that the shifting end points increase the threat vectors. We all know that. Arm said something that really resonated. Specifically, they said today, there's far too much trust on the OS and the hypervisor that are running these applications. And their broad access to data is a weakness. Arm's concept of realms as shown in the right-hand side, underscores the company strategy to remove the assumption that privileged software. Like the hypervisor needs to be able to see the data. So by creating realms, in a virtualized multi-tenant environment, data can be more protected from memory leaks which of course is a major opportunity for hackers that they exploit. So it's a nice concept in a way for the system to isolate attendance data from other users. Okay, we want, we want to share some feedback that we got last week from the community on our analysis of Intel. A tech exec from city pointed out that, Intel really didn't miss a mobile, as we said, it really missed smartphones. In fact, whell, this is a kind of a minor distinction, it's important to recognize we think. Because Intel facilitated WIFI with Centrino, under the direction of Paul Alini. Who by the way, was not an engineer. I think he was the first non-engineer to be the CEO of Intel. He was a marketing person by background. Ironically, Intel's work in wifi connectivity enabled, actually enabled the smartphone revolution. And maybe that makes the smartphone missed by Intel all that more egregious, I don't know. Now the other piece of feedback we received related to our IBM scenario and our three-way joint venture prediction bringing together Intel, IBM, and Samsung in a triumvirate where Intel brings the foundry and it's process manufacturing. IBM brings its dis-aggregated memory technology and Samsung brings its its volume and its knowledge of of volume down the learning curve. Let's start with IBM. Remember we said that IBM with power 10 has the best technology in terms of this notion of dis-aggregating compute from memory and sharing memory in a pool across different processor types. So a few things in this regard, IBM when it restructured its micro electronics business under Ginni Rometty, catalyzed the partnership with global foundries and you know, this picture in the upper right it shows the global foundries facility outside of Albany, New York in Malta. And the partnership included AMD and Samsung. But we believe that global foundries is backed away from some of its contractual commitments with IBM causing a bit of a rift between the companies and leaving a hole in your original strategy. And evidently AMD hasn't really leaned in to move the needle in any way and so the New York foundry, is it a bit of a state of limbo with respect to its original vision. 
Now, well, Arvind Krishna was the face of the Intel announcement. It clearly has deep knowledge of IBM semiconductor strategy. Dario Gill, we think is a key player in the mix. He's the senior vice president director of IBM research. And it is in a position to affect some knowledge sharing and maybe even knowledge transfer with Intel possibly as it relates to disaggregated architecture. His questions remain as to how open IBM will be. And how protected it will be with its IP. It's got, as we said, last week, it's got to have an incentive to do so. Now why would IBM do that? Well, it wants to compete more effectively with VMware who has done a great job leveraging x86 and that's the biggest competitor in threat to open shift. So Arvind needs Intel chips to really execute on IBM's cloud strategy. Because almost all of IBM's customers are running apps on x86. So IBM's cloud and hybrid cloud. Strategy really need to leverage that Intel partnership. Now Intel for its part has great FinFET technology. FinFET is a tactic goes beyond CMOs. You all mainframes might remember when IBM burned the boat on ECL, Emitter-coupled Logic. And then moved to CMOs for its mainframes. Well, this is the next gen beyond, and it could give Intel a leg up on AMD's chiplet intellectual properties. Especially as it relates to latency. And there could be some benefits there for IBM. So maybe there's a quid pro quo going on. Now, where it really gets interesting is New York Senator, Chuck Schumer, is keen on building up an alternative to Silicon Valley in New York now it is Silicon Alley. So it's possible that Intel, who by the way has really good process technology. This is an aside, it really allowed TSMC to run the table with the whole seven nanometers versus 10 minute nanometer narrative. TSMC was at seven nanometer. Intel was at 10 nanometer. And really, we've said in the past that Intel's 10 nanometer tech is pretty close to TSMC seven. So Intel's ahead in that regard, even though in terms of, you know, the intervener thickness density, it's it's not, you know. These are sort of games that the semiconductor companies play, but you know it's possible that Intel with the U.S. government and IBM and Samsung could make a play for that New York foundry as part of Intel's trusted foundry strategy and kind of reshuffle that deck in Albany. Sounds like a "Game of Thrones," doesn't it? By the way, TSMC has been so consumed servicing Apple for five nanometer and eventually four nanometer that it's dropped the ball on some of its other's customers, namely Nvidia. And remember, a long-term competitiveness and cost reductions, they all come down to volume. And we think that Intel can't get to volume without an Arm strategy. Okay, so maybe the JV, the Joint Venture that we talked about, maybe we're out on a limb there and that's a stretch. And perhaps Samsung's not willing to play ball, given it's made huge investments in fabs and infrastructure and other resources, locally, but we think it's still viable scenario because we think Samsung definitely would covet a presence in the United States. No good to do that directly but maybe a partnership makes more sense in terms of gaining ground on TSMC. But anyway, let's say Intel can become a trusted foundry with the help of IBM and the U.S. government. Maybe then it could compete on volume. Well, how would that work? Well, let's say Nvidia, let's say they're not too happy with TSMC. Maybe with entertain Intel as a second source. Would that do it? In and of itself, no. 
But what about AWS and Google and Facebook? Maybe this is a way to placate the U.S. government and call off the antitrust dogs. Hey, we'll give Intel Foundry our business to secure America's semiconductor leadership and future and pay U.S. government. Why don't you chill out, back off a little bit. Microsoft even though, you know, it's not getting as much scrutiny from the U.S. government, it's anti trustee is maybe perhaps are behind it, who knows. But I think Microsoft would be happy to play ball as well. Now, would this give Intel a competitive volume posture? Yes, we think it would, for sure. If it can gain the trust of these companies and the volume we think would be there. But as we've said, currently, this is a very, very long shot because of the, the, the new strategy, the distance the difference in the Foundry business all those challenges that we laid out last week, it's going to take years to play out. But the dots are starting to connect in this scenario and the stakes are exceedingly high hence the importance of the U.S. government. Okay, that's it for now. Thanks to the community for your comments and insights. And thanks again to David Floyer whose analysis around Arm and semiconductors. And this work that he's done for the past decade is of tremendous help. Remember I publish each week on wikibon.com and siliconangle.com. And these episodes are all available as podcasts, just search for braking analysis podcast and you can always connect on Twitter. You can hit the chat right here or this live event or email me at david.vellante@siliconangle.com. Look, I always appreciate the comments on LinkedIn and Clubhouse. You can follow me so you're notified when we start a room and riff on these topics as well as others. And don't forget to check out etr.plus where all the survey data. This is Dave Vellante for the Cube Insights powered by ETR. Be well, and we'll see you next time. (cheerful music) (cheerful music)
SUMMARY :
Announcer: From the Cube's studios And maybe that makes the
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
IBM | ORGANIZATION | 0.99+ |
Samsung | ORGANIZATION | 0.99+ |
David Floyer | PERSON | 0.99+ |
Dario Gill | PERSON | 0.99+ |
AMD | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
TSMC | ORGANIZATION | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
March 23rd | DATE | 0.99+ |
Pat | PERSON | 0.99+ |
Albany | LOCATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Paul Alini | PERSON | 0.99+ |
New York | LOCATION | 0.99+ |
$116 billion | QUANTITY | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
2019 | DATE | 0.99+ |
Tesla | ORGANIZATION | 0.99+ |
10 nanometer | QUANTITY | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
last week | DATE | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
Arvind | PERSON | 0.99+ |
less than a year | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
Arvind Krishna | PERSON | 0.99+ |
$100 billion | QUANTITY | 0.99+ |
Game of Thrones | TITLE | 0.99+ |
Ginni Rometty | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
10 nanometer | QUANTITY | 0.99+ |
10X | QUANTITY | 0.99+ |
iPhones | COMMERCIAL_ITEM | 0.99+ |
david.vellante@siliconangle.com | OTHER | 0.99+ |
seven nanometers | QUANTITY | 0.99+ |
United States | LOCATION | 0.99+ |
ORGANIZATION | 0.99+ | |
2024 | DATE | 0.99+ |
14 nanometer | QUANTITY | 0.99+ |
this week | DATE | 0.99+ |
last week | DATE | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
$20 billion | QUANTITY | 0.99+ |
second | QUANTITY | 0.99+ |
Sarjeet Johal | PERSON | 0.99+ |
New York | LOCATION | 0.99+ |
U.S. | LOCATION | 0.99+ |
Breaking Analysis: Arm Lays Down The Gauntlet at Intel's Feet
>> From the Cube's studios in Palo Alto in Boston, bringing you data-driven insights from The Cube and ETR. This is "Breaking Analysis" with Dave Vellante. >> Exactly one week after Pat Gelsinger's announcement of his plans to reinvent Intel. Arm announced version nine of its architecture and laid out its vision for the next decade. We believe this vision is extremely strong as it combines an end-to-end capability from Edge to Cloud, to the data center, to the home and everything in between. Arms aspirations are ambitious and powerful. Leveraging its business model, ecosystem and software compatibility with previous generations. Hello every one and welcome to this week's Wikibon Cube Insights powered by ETR. And this breaking analysis will explain why we think this announcement is so important and what it means for Intel and the broader technology landscape. We'll also share with you some feedback that we received from the Cube Community on last week's episode and a little inside baseball on how Intel, IBM, Samsung, TSMC and the U.S. government might be thinking about the shifting landscape of semiconductor technology. Now, there were two notable announcements this week that were directly related to Intel's announcement of March 23rd. The Armv9 news and TSMC's plans to invest a $100 billion in chip manufacturing and development over the next three years. That is a big number. It appears to tramp Intel's plan $20 billion investment to launch two new fabs in the U.S. starting in 2024. You may remember back in 2019, Samsung pledged to invest a $116 billion to diversify its production beyond memory trip, memory chips. Why are all these companies getting so aggressive? And won't this cause a glut in chips? Well, first, China looms large and aims to dominate its local markets, which in turn is going to confer advantages globally. The second, there's a huge chip shortage right now. And the belief is that it's going to continue through the decade and possibly beyond. We are seeing a new inflection point in the demand as we discussed last week. Stemming from digital, IOT, cloud, autos in new use cases in the home as so well presented by Sarjeet Johal in our community. As to the glut, these manufacturers believe that demand will outstrip supply indefinitely. And I understand that a lack of manufacturing capacity is actually more deadly than an oversupply. Look, if there's a glut, manufacturers can cut production and take the financial hit. Whereas capacity constraints mean you can miss entire cycles of growth and really miss out on the demand and the cost reductions. So, all these manufacturers are going for it. Now let's talk about Arm, its approach and the announcements that it made this week. Now last week, we talked about how Pat Gelsinger his vision of a system on package was an attempt to leapfrog system on chip SOC, while Arm is taking a similar system approach. But in our view, it's even broader than the vision laid out by Pat at Intel. Arm is targeting a wide variety of use cases that are shown here. Arm's fundamental philosophy is that the future will require highly specialized chips and Intel as you recall from Pat's announcement, would agree. But Arm historically takes an ecosystem approach that is different from Intel's model. Arm is all about enabling the production of specialized chips to really fit a specific application. For example, think about the amount of AI going on iPhones. They move if I remember from fingerprint to face recognition. 
This requires specialized neural processing units, NPUs that are designed by Apple for that particular use case. Arm is facilitating the creation of these specialized chips to be designed and produced by the ecosystem. Intel on the other hand has historically taken a one size fits all approach. Built around the x86. The Intel's design has always been about improving the processor. For example, in terms of speed, density, adding vector processing to accommodate AI, et cetera. And Intel does all the design and the manufacturing in any specialization for the ecosystem is done by Intel. Much of the value, that's added from the ecosystem is frankly been bending metal or adding displays or other features at the margin. But, the advantage is that the x86 architecture is well understood. It's consistent, reliable, and let's face it. Most enterprise software runs on x86. So, but very, very different models historically, which we heard from Gelsinger last week they're going to change with a new trusted foundry strategy. Now let's go through an example that might help explain the power of Arm's model. Let's say, your AWS and you're doing graviton. Designing graviton and graviton2. Or Apple, designing the M1 chip, or Tesla designing its own chip, or any other company in in any one of these use cases that are shown here. Tesla is a really good example. In order to optimize for video processing, Tesla needed to add specialized code firmware in the NPU for it's specific use case within autos. It was happy to take off the shelf CPU or GPU or whatever, and leverage Arm's standards there. And then it added its own value in the NPU. So the advantage of this model is Tesla could go from tape out in less or, or, or or in less than a year versus get the tape out in less than a year versus what would normally take many years. Arm is, think of Arm is like customize a Lego blocks that enable unique value add by the ecosystem with a much faster time to market. So like I say, the Tesla goes from logical tape out if you will, to Samsung and then says, okay run this against your manufacturing process. And it should all work as advertised by Arm. Tesla, interestingly, just as an aside chose the 14 nanometer process to keep its costs down. It didn't need the latest and greatest density. Okay, so you can see big difference in philosophies historically between Arm and Intel. And you can see Intel vectoring toward the Arm model based on what Gelsinger said last week for its foundry business. Essentially it has to. Now, Arm announced a new Arm architecture, Armv9. v9 is backwards compatible with previous generations. Perhaps Arm learned from Intel's failed, Itanium effort for those remember that word. Had no backward compatibility and it really floundered. As well, Arm adds some additional capabilities. And today we're going to focus on the two areas that have highlighted, machine learning piece and security. I'll take note of the call out, 300 billion chips. That's Arm's vision. That's a lot. And we've said, before, Arm's way for volumes are 10X those of x86. Volume, we sound like a broken record. Volume equals cost reduction. We'll come back to that a little bit later. Now let's have a word on AI and machine learning. Arm is betting on AI and ML. Big as are many others. And this chart really shows why, it's a graphic that shows ETR data and spending momentum and pervasiveness in the dataset across all the different sectors that ETR tracks within its taxonomy. 
Note that ML/AI gets the top spot on the vertical axis, which represents net score. That's a measure of spending momentum or spending velocity. The horizontal axis is market share presence in the dataset. And we give this sector four stars to signify it's consistent lead in the data. So pretty reasonable bet by Arm. But the other area that we're going to talk about is security. And its vision day, Arm talked about confidential compute architecture and these things called realms. Note in the left-hand side, showing data traveling all over the different use cases and around the world and the call-out from the CISO below, it's a large public airline CISO that spoke at an ETR Venn round table. And this individual noted that the shifting end points increase the threat vectors. We all know that. Arm said something that really resonated. Specifically, they said today, there's far too much trust on the OS and the hypervisor that are running these applications. And their broad access to data is a weakness. Arm's concept of realms as shown in the right-hand side, underscores the company strategy to remove the assumption that privileged software. Like the hypervisor needs to be able to see the data. So by creating realms, in a virtualized multi-tenant environment, data can be more protected from memory leaks which of course is a major opportunity for hackers that they exploit. So it's a nice concept in a way for the system to isolate attendance data from other users. Okay, we want, we want to share some feedback that we got last week from the community on our analysis of Intel. A tech exec from city pointed out that, Intel really didn't miss a mobile, as we said, it really missed smartphones. In fact, whell, this is a kind of a minor distinction, it's important to recognize we think. Because Intel facilitated WIFI with Centrino, under the direction of Paul Alini. Who by the way, was not an engineer. I think he was the first non-engineer to be the CEO of Intel. He was a marketing person by background. Ironically, Intel's work in wifi connectivity enabled, actually enabled the smartphone revolution. And maybe that makes the smartphone missed by Intel all that more egregious, I don't know. Now the other piece of feedback we received related to our IBM scenario and our three-way joint venture prediction bringing together Intel, IBM, and Samsung in a triumvirate where Intel brings the foundry and it's process manufacturing. IBM brings its dis-aggregated memory technology and Samsung brings its its volume and its knowledge of of volume down the learning curve. Let's start with IBM. Remember we said that IBM with power 10 has the best technology in terms of this notion of dis-aggregating compute from memory and sharing memory in a pool across different processor types. So a few things in this regard, IBM when it restructured its micro electronics business under Ginni Rometty, catalyzed the partnership with global foundries and you know, this picture in the upper right it shows the global foundries facility outside of Albany, New York in Malta. And the partnership included AMD and Samsung. But we believe that global foundries is backed away from some of its contractual commitments with IBM causing a bit of a rift between the companies and leaving a hole in your original strategy. And evidently AMD hasn't really leaned in to move the needle in any way and so the New York foundry, is it a bit of a state of limbo with respect to its original vision. 
Now, well, Arvind Krishna was the face of the Intel announcement. It clearly has deep knowledge of IBM semiconductor strategy. Dario Gill, we think is a key player in the mix. He's the senior vice president director of IBM research. And it is in a position to affect some knowledge sharing and maybe even knowledge transfer with Intel possibly as it relates to disaggregated architecture. His questions remain as to how open IBM will be. And how protected it will be with its IP. It's got, as we said, last week, it's got to have an incentive to do so. Now why would IBM do that? Well, it wants to compete more effectively with VMware who has done a great job leveraging x86 and that's the biggest competitor in threat to open shift. So Arvind needs Intel chips to really execute on IBM's cloud strategy. Because almost all of IBM's customers are running apps on x86. So IBM's cloud and hybrid cloud. Strategy really need to leverage that Intel partnership. Now Intel for its part has great FinFET technology. FinFET is a tactic goes beyond CMOs. You all mainframes might remember when IBM burned the boat on ECL, Emitter-coupled Logic. And then moved to CMOs for its mainframes. Well, this is the next gen beyond, and it could give Intel a leg up on AMD's chiplet intellectual properties. Especially as it relates to latency. And there could be some benefits there for IBM. So maybe there's a quid pro quo going on. Now, where it really gets interesting is New York Senator, Chuck Schumer, is keen on building up an alternative to Silicon Valley in New York now it is Silicon Alley. So it's possible that Intel, who by the way has really good process technology. This is an aside, it really allowed TSMC to run the table with the whole seven nanometers versus 10 minute nanometer narrative. TSMC was at seven nanometer. Intel was at 10 nanometer. And really, we've said in the past that Intel's 10 nanometer tech is pretty close to TSMC seven. So Intel's ahead in that regard, even though in terms of, you know, the intervener thickness density, it's it's not, you know. These are sort of games that the semiconductor companies play, but you know it's possible that Intel with the U.S. government and IBM and Samsung could make a play for that New York foundry as part of Intel's trusted foundry strategy and kind of reshuffle that deck in Albany. Sounds like a "Game of Thrones," doesn't it? By the way, TSMC has been so consumed servicing Apple for five nanometer and eventually four nanometer that it's dropped the ball on some of its other's customers, namely Nvidia. And remember, a long-term competitiveness and cost reductions, they all come down to volume. And we think that Intel can't get to volume without an Arm strategy. Okay, so maybe the JV, the Joint Venture that we talked about, maybe we're out on a limb there and that's a stretch. And perhaps Samsung's not willing to play ball, given it's made huge investments in fabs and infrastructure and other resources, locally, but we think it's still viable scenario because we think Samsung definitely would covet a presence in the United States. No good to do that directly but maybe a partnership makes more sense in terms of gaining ground on TSMC. But anyway, let's say Intel can become a trusted foundry with the help of IBM and the U.S. government. Maybe then it could compete on volume. Well, how would that work? Well, let's say Nvidia, let's say they're not too happy with TSMC. Maybe with entertain Intel as a second source. Would that do it? In and of itself, no. 
But what about AWS and Google and Facebook? Maybe this is a way to placate the U.S. government and call off the antitrust dogs. Hey, we'll give Intel Foundry our business to secure America's semiconductor leadership and future and pay U.S. government. Why don't you chill out, back off a little bit. Microsoft even though, you know, it's not getting as much scrutiny from the U.S. government, it's anti trustee is maybe perhaps are behind it, who knows. But I think Microsoft would be happy to play ball as well. Now, would this give Intel a competitive volume posture? Yes, we think it would, for sure. If it can gain the trust of these companies and the volume we think would be there. But as we've said, currently, this is a very, very long shot because of the, the, the new strategy, the distance the difference in the Foundry business all those challenges that we laid out last week, it's going to take years to play out. But the dots are starting to connect in this scenario and the stakes are exceedingly high hence the importance of the U.S. government. Okay, that's it for now. Thanks to the community for your comments and insights. And thanks again to David Floyer whose analysis around Arm and semiconductors. And this work that he's done for the past decade is of tremendous help. Remember I publish each week on wikibon.com and siliconangle.com. And these episodes are all available as podcasts, just search for braking analysis podcast and you can always connect on Twitter. You can hit the chat right here or this live event or email me at david.vellante@siliconangle.com. Look, I always appreciate the comments on LinkedIn and Clubhouse. You can follow me so you're notified when we start a room and riff on these topics as well as others. And don't forget to check out etr.plus where all the survey data. This is Dave Vellante for the Cube Insights powered by ETR. Be well, and we'll see you next time. (cheerful music) (cheerful music)
SUMMARY :
From the Cube's studios And maybe that makes the
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
IBM | ORGANIZATION | 0.99+ |
Samsung | ORGANIZATION | 0.99+ |
AMD | ORGANIZATION | 0.99+ |
Dario Gill | PERSON | 0.99+ |
David Floyer | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
TSMC | ORGANIZATION | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
Pat | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Paul Alini | PERSON | 0.99+ |
March 23rd | DATE | 0.99+ |
Albany | LOCATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
Tesla | ORGANIZATION | 0.99+ |
Arvind Krishna | PERSON | 0.99+ |
New York | LOCATION | 0.99+ |
$116 billion | QUANTITY | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
Arvind | PERSON | 0.99+ |
2019 | DATE | 0.99+ |
Ginni Rometty | PERSON | 0.99+ |
last week | DATE | 0.99+ |
$100 billion | QUANTITY | 0.99+ |
10 nanometer | QUANTITY | 0.99+ |
Game of Thrones | TITLE | 0.99+ |
ORGANIZATION | 0.99+ | |
Microsoft | ORGANIZATION | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
10 nanometer | QUANTITY | 0.99+ |
iPhones | COMMERCIAL_ITEM | 0.99+ |
less than a year | QUANTITY | 0.99+ |
United States | LOCATION | 0.99+ |
10X | QUANTITY | 0.99+ |
david.vellante@siliconangle.com | OTHER | 0.99+ |
ORGANIZATION | 0.99+ | |
Silicon Valley | LOCATION | 0.99+ |
2024 | DATE | 0.99+ |
seven nanometers | QUANTITY | 0.99+ |
14 nanometer | QUANTITY | 0.99+ |
last week | DATE | 0.99+ |
second | QUANTITY | 0.99+ |
Arm | PERSON | 0.99+ |
this week | DATE | 0.99+ |
Armv9 | COMMERCIAL_ITEM | 0.99+ |
New York | LOCATION | 0.99+ |
Dec 10th Keynote Analysis Dave Vellante & Dave Floyer | AWS re:Invent 2020
>>From around the globe. It's the queue with digital coverage of AWS reinvent 2020 sponsored by Intel, AWS and our community partners. >>Hi, this is Dave Volante. Welcome back to the cubes. Continuous coverage of AWS reinvent 2020, the virtual version of the cube and reinvent. I'm here with David foyer. Who's the CTO Wiki Bon, and we're going to break down today's infrastructure keynote, which was headlined by Peter DeSantis. David. Good to see you. Good to see you. So David, we have a very tight timeframe and I just want to cover a couple of things. Something that I've learned for many, many years, working with you is the statement. It's all about recovery. And that really was the first part of Peter's discussion today. It was, he laid out the operational practices of AWS and he talked a lot about, he actually had some really interesting things up there. You know, you use the there's no compression algorithm for experience, but he talked a lot about availability and he compared AWS's availability philosophy with some of its competitors. >>And he talked about generators being concurrent and maintainable. He got, he took it down to the batteries and the ups and the thing that impressed me, most of the other thing that you've taught me over the years is system thinking. You've got to look at the entire system. That one little component could have Peter does emphasis towards a huge blast radius. So what AWS tries to do is, is constrict that blast radius so he can sleep at night. So non-disruptive replacements of things like batteries. He talked a lot about synchronous versus asynchronous trade-offs and it was like, kind of async versus sync one-on-one synchronous. You got latency asynchronous, you got your data loss to exposure. So a lot of discussions around that, but what was most interesting is he CA he compared and contrasted AWS's philosophy on availability zones, uh, with the competition. And he didn't specifically call out Microsoft and Google, but he showed some screenshots of their websites and the competition uses terms like usually available and generally available this meaning that certain regions and availability zone may not be available. That's not the case with AWS, your thoughts on that. >>They have a very impressive track record, uh, despite the, a beta the other day. Um, but they've got a very impressive track record. I, I think there is a big difference, however, between a general purpose computing and, uh, mission critical computing. And when you've got to bring up, uh, databases and everything else like that, then I think there are other platforms, uh, which, uh, which in the longterm, uh, AWS in my view, should be embracing that do a better job in mission critical areas, uh, in terms of bringing things up and not using data and recovery. So that's, that's an area which I think AWS will need to partner with in the past. >>Yeah. So, um, the other area of the keynote that was critical was, um, he spent a lot of time on custom Silicon and you and I have talked about this a lot, of course, AWS and Intel are huge partners. Uh, but, but we know that Intel owns its own fabs, uh, it's competitors, you know, we'll outsource to the other, other manufacturers. So Intel is motivated to put as much function on the real estate as possible to create general purpose processors and, and get as much out of that real estate as they possibly can. So what AWS has been been doing, and they certainly didn't throw Intel under the bus. 
They were very complimentary and, and friendly, but they also lay it out that they're developing a number of components that are custom Silicon. They talked about the nitro controllers, uh, inferential, which is, you know, specialized chips around, around inference to do things like PI torch, uh, and TensorFlow. >>Uh, they talked about training them, you know, the new training ship for training AI models or ML models. They spent a lot of time on Gravatar, which is 64 bit, like you say, everything's 64 bit these days, but it's the arm processor. And so, you know, they, they didn't specifically mention Moore's law, but they certainly taught, they gave, uh, a microprocessor one Oh one overview, which I really enjoyed. They talked about, they didn't specifically talk about Moore's law, but they talked about the need to put, put on more, more cores, uh, and then running multithreaded apps and the whole new programming models that, that brings out. Um, and, and, and basically laid out the case that these specialized processors that they're developing are more efficient. They talked about all these cores and the overhead that, that those cores bring in the difficulty of keeping those processors, those cores busy. >>Uh, and so they talked about symmetric, uh, uh, a simultaneous multi-threading, uh, and sharing cores, which like, it was like going back to the old days of, of microprocessor development. But the point being that as you add more cores and you have that overhead, you get non-linear, uh, performance improvements. And so, so it defeats the notion of scale out, right? And so what I, what I want to get to is to get your take on this as you've been talking for a long, long time about arm in the data center, and remind me just like object storage. We talked for years about object storage. It never went anywhere until Amazon brought forth simple storage service. And then object storage obviously is, you know, a mainstream mainstream storage. Now I see the same thing happening, happening with, with arm and the data center specifically, of course, alternative processes are taking off, but, but what's your take on all this? You, you listened to the keynote, uh, give us your takeaways. >>Well, let's go back to first principles for a second. Why is this happening? It's happening because of volume, volume, volume, volume is incredibly important, obviously in terms of cost. Um, and if you, if you're, if you look at a volume, uh, arm is, is, was based on the volumes that came from that from the, uh, from the, um, uh, handhelds and all of their, all of the mobile stuff that's been generating. So there's billions of chips being made, uh, on that. >>I can interrupt you for a second, David. So we're showing a slide here, uh, and, and it's, it's, it, it, it relates to volume and somewhat, I mean, we, we talk a lot about the volume that flash for instance gained from the consumer. Uh, and, and, and now we're talking about these emerging workloads. You call them matrix workloads. These are things like AI influencing edge work, and this gray area shows these alternative workloads. And that's really what Amazon is going after. So you show in this chart, you know, basically very small today, 2020, but you show a very large and growing position, uh, by the end of this decade, really eating into traditional, the traditional space. >>That, that that's absolutely correct. And, and that's being led by what's happening in the mobile market. 
If you look at all of the work that's going on, on your, on your, uh, Apple, uh, Apple iPhone, there's a huge amount of, uh, modern, uh, matrix workloads are going there to help you with your photography and everything like that. And that's going to come into the, uh, into the data center within, within two years. Uh, and that's what, what, uh, AWS is focusing on is capabilities of doing this type of new workload in real time. And, and it's hundreds of times, hundreds of times more processing, uh, to do these workloads and it's gotta be done in real time. >>Yeah. So we have a, we have a chart on that this bar chart that you've, you've produced. Uh, I don't know if you can see the bars here. Um, I can't see them, but, but maybe we can, we can editorialize. So on the left-hand side, you basically have traditional workloads, uh, on blue and you have matrix workloads. What you calling these emerging workloads and red you, so you show performance 0.9, five versus 50, then price performance for traditional 3.6. And it's more than 150 times greater for ARM-based workload. >>Yeah. And that's a analysis of the previous generation of arm. And if you take the new ones, the M one, for example, which has come in to the, uh, to the PC area, um, that's going to be even higher. So the arm is producing hybrid computers, uh, multi, uh, uh, uh, heterogeneous computers with multiple different things inside the computer. And that is making life a lot more efficient. And especially in the inference world, they're using NPUs instead of GPU's, they conferred about four times more NPUs that you can GPU's. And, um, uh, it, it's just a, uh, it's a different world and, uh, arm is ahead because it's done all the work in the volume area, and that's now going to go into PCs and, and it's going to, going to go into the data center. >>Okay, great. Now, yeah, if we could, uh, uh, guys bring up the, uh, the, the other chart that's titled workloads moving to ARM-based servers, this one is just amazing to me, David, you'll see that I, for some reason, the slides aren't translating, so, uh, forget that, forget the slides. So, um, but, but basically you have the revenue coming from arm as to be substantially higher, uh, in the out years, uh, or certainly substantially growing more than the traditional, uh, workload revenue. Now that's going to take a decade, but maybe you could explain, you know, why you see that. >>Yeah, the, the, the, the, the reason is that these matrix workloads, uh, and also, uh, the offload of like nitro is doing it's the offload of the storage and the networking from the, the main CPU's, uh, the dis-aggregation of computing, uh, plus the traditional workloads, which can move, uh, over or are moving over and where AWS, uh, and, and Microsoft and the PC and Apple, and the PC where those leaders are leading us is that they are doing the hard work of making sure that their software, uh, and their API APIs can utilize the capabilities of arm. Uh, so, uh, it's, it's the it, and the advantage that AWS has of course, is that enormous economies of scale, across many, many users. Uh, that's going to take longer to go into the, the enterprise data center much longer, but the, the, uh, Microsoft, Google and AWS, they're going to be leading the charge of this movement, all of arm into the data center. Uh, it was amazing some of the people or what some of the arm customers or the AWS customers were seeing today with much faster performance and much lower price. It was, they were, they were affirming. 
Uh, and, and the fundamental reason is that arm are two generations of production. They are in at the moment at five nano meters, whereas, um, Intel is still at 10. Uh, so that's a big, big issue that, uh, Intel have to address. Yeah. And so >>You get, you've been getting this core creep, I'll call it, which brings a lot of overhead. And now you're seeing these very efficient, specialized processes in your premises. We're going to see these explode for these new workloads. And in particular, the edge is such an enormous opportunity. I think you've pointed out that you see a big, uh, uh, market for edge, these edge emergent edge workloads kind of start in the data center and then push out to the edge. Andy Jassy says that the edge, uh, or, or we're going to bring AWS to the edge of the data center is just another edge node. I liked that vision, your thoughts. >>Uh, I, I think that is a, a compelling vision. I think things at the edge, you have many different form factors. So, uh, you, you will need an edge and a car for example, which is cheap enough to fit into a car and it's, but it's gotta be a hundred times more processing than it is in the, in the computers, in the car at the moment, that's a big leap and, and for, to get to automated driving, uh, but that's going to happen. Um, and it's going to happen on ARM-based systems and the amount of work that's going to go out to the edge is enormous. And the amount of data that's generated at the edge is enormous. That's not going to come back to the center, that's going to be processed at the edge, and the edge is going to be the center. If you're like of where computing is done. Uh, it doesn't mean to say that you're not going to have a lot of inference work inside the data center, but a lot of, lot of work in terms of data and processing is move, is going to move into the edge over the next decade. >>Yeah, well, many of, uh, AWS is edge offerings today, you know, assume data is going to be sent back. Although of course you see outpost and then smaller versions of outposts. That's a, to me, that's a clue of what's coming. Uh, basically again, bringing AWS to, to, to the edge. I want to also touch on, uh, Amazon's, uh, comments on renewable. Peter has talked a lot about what they're doing to reduce carbon. Uh, one of the interesting things was they're actually reusing their cooling water that they clean and reuse. I think, I think you said three or multiple times, uh, and then they put it back out and they were able to purify it and reuse it. So, so that's a really great sustainable story. There was much more to it. Uh, but I think, you know, companies like Amazon, especially, you know, large companies really have a responsibility. So it's great to see Amazon stepping up. Uh, anyway, we're out of time, David, thanks so much for coming on and sharing your insights really, really appreciate it. Those, by the way, those slides of Wiki bond.com has a lot of David's work on there. Apologize for some of the data not showing through, but, uh, working in real time here. This is Dave Volante for David foyer. Are you watching the cubes that continuous coverage of AWS reinvent 2020, we'll be right back.
SUMMARY :
It's the queue with digital coverage of Who's the CTO Wiki Bon, and we're going to break down today's infrastructure keynote, That's not the case with AWS, your thoughts on that. a beta the other day. uh, inferential, which is, you know, specialized chips around, around inference to do things like PI Uh, they talked about training them, you know, the new training ship for training AI models or ML models. Uh, and so they talked about symmetric, uh, uh, a simultaneous multi-threading, uh, on that. So you show in this chart, you know, basically very small today, 2020, but you show a very And that's going to come into the, uh, into the data center within, So on the left-hand side, you basically have traditional workloads, And especially in the inference world, they're using NPUs instead of more than the traditional, uh, workload revenue. the main CPU's, uh, the dis-aggregation of computing, in the data center and then push out to the edge. and the edge is going to be the center. Uh, one of the interesting things was they're actually reusing their cooling water
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Dave Volante | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Peter DeSantis | PERSON | 0.99+ |
Dave Floyer | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Andy Jassy | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
Dec 10th | DATE | 0.99+ |
50 | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
2020 | DATE | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
hundreds of times | QUANTITY | 0.99+ |
3.6 | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
0.9 | QUANTITY | 0.99+ |
five nano meters | QUANTITY | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
64 bit | QUANTITY | 0.99+ |
two generations | QUANTITY | 0.98+ |
10 | QUANTITY | 0.98+ |
more than 150 times | QUANTITY | 0.98+ |
five | QUANTITY | 0.97+ |
two years | QUANTITY | 0.95+ |
first part | QUANTITY | 0.95+ |
today | DATE | 0.95+ |
first principles | QUANTITY | 0.94+ |
next decade | DATE | 0.93+ |
one | QUANTITY | 0.93+ |
2020 | TITLE | 0.92+ |
end of this decade | DATE | 0.9+ |
one little component | QUANTITY | 0.9+ |
billions of chips | QUANTITY | 0.88+ |
a decade | QUANTITY | 0.85+ |
Moore | PERSON | 0.81+ |
Wiki bond.com | ORGANIZATION | 0.76+ |
second | QUANTITY | 0.74+ |
hundred times | QUANTITY | 0.71+ |
Invent | EVENT | 0.7+ |
about four times | QUANTITY | 0.69+ |
a second | QUANTITY | 0.68+ |
Full Keynote Hour - DockerCon 2020
(water running) (upbeat music) (electric buzzing) >> Fuel up! (upbeat music) (audience clapping) (upbeat music) >> Announcer: From around the globe. It's the queue with digital coverage of DockerCon live 2020, brought to you by Docker and its ecosystem partners. >> Hello everyone, welcome to DockerCon 2020. I'm John Furrier with theCUBE I'm in our Palo Alto studios with our quarantine crew. We have a great lineup here for DockerCon 2020. Virtual event, normally it was in person face to face. I'll be with you throughout the day from an amazing lineup of content, over 50 different sessions, cube tracks, keynotes, and we've got two great co-hosts here with Docker, Jenny Burcio and Bret Fisher. We'll be with you all day today, taking you through the program, helping you navigate the sessions. I'm so excited. Jenny, this is a virtual event. We talk about this. Can you believe it? Maybe the internet gods be with us today and hope everyone's having-- >> Yes. >> Easy time getting in. Jenny, Bret, thank you for-- >> Hello. >> Being here. >> Hey. >> Hi everyone, so great to see everyone chatting and telling us where they're from. Welcome to the Docker community. We have a great day planned for you. >> Guys great job getting this all together. I know how hard it is. These virtual events are hard to pull off. I'm blown away by the community at Docker. The amount of sessions that are coming in the sponsor support has been amazing. Just the overall excitement around the brand and the opportunities given this tough times where we're in. It's super exciting again, made the internet gods be with us throughout the day, but there's plenty of content. Bret's got an amazing all day marathon group of people coming in and chatting. Jenny, this has been an amazing journey and it's a great opportunity. Tell us about the virtual event. Why DockerCon virtual. Obviously everyone's canceling their events, but this is special to you guys. Talk about DockerCon virtual this year. >> The Docker community shows up at DockerCon every year, and even though we didn't have the opportunity to do an in person event this year, we didn't want to lose the time that we all come together at DockerCon. The conversations, the amazing content and learning opportunities. So we decided back in December to make DockerCon a virtual event. And of course when we did that, there was no quarantine we didn't expect, you know, I certainly didn't expect to be delivering it from my living room, but we were just, I mean we were completely blown away. There's nearly 70,000 people across the globe that have registered for DockerCon today. And when you look at DockerCon of past right live events, really and we're learning are just the tip of the iceberg and so thrilled to be able to deliver a more inclusive global event today. And we have so much planned I think. Bret, you want to tell us some of the things that you have planned? >> Well, I'm sure I'm going to forget something 'cause there's a lot going on. But, we've obviously got interviews all day today on this channel with John and the crew. Jenny has put together an amazing set of all these speakers, and then you have the captain's on deck, which is essentially the YouTube live hangout where we just basically talk shop. It's all engineers, all day long. Captains and special guests. And we're going to be in chat talking to you about answering your questions. Maybe we'll dig into some stuff based on the problems you're having or the questions you have. 
Maybe there'll be some random demos, but it's basically not scripted, it's an all day long unscripted event. So I'm sure it's going to be a lot of fun hanging out in there. >> Well guys, I want to just say it's been amazing how you structured this so everyone has a chance to ask questions, whether it's informal laid back in the captain's channel or in the sessions, where the speakers will be there with their presentations. But Jenny, I want to get your thoughts because we have a site out there that's structured a certain way for the folks watching. If you're on your desktop, there's a main stage hero. There's then tracks and Bret's running the captain's tracks. You can click on that link and jump into his session all day long. He's got an amazing set of line of sleet, leaning back, having a good time. And then each of the tracks, you can jump into those sessions. It's on a clock, it'll be available on demand. All that content is available if you're on your desktop. If you're on your mobile, it's the same thing. Look at the calendar, find the session that you want. If you're interested in it, you could watch it live and chat with the participants in real time or watch it on demand. So there's plenty of content to navigate through. We do have it on a clock and we'll be streaming sessions as they happen. So you're in the moment and that's a great time to chat in real time. But there's more, Jenny, getting more out of this event. You guys try to bring together the stimulation of community. How does the participants get more out of the the event besides just consuming some of the content all day today? >> Yes, so first set up your profile, put your picture next to your chat handle and then chat. John said we have various setups today to help you get the most out of your experience are breakout sessions. The content is prerecorded, so you get quality content and the speakers and chat so you can ask questions the whole time. If you're looking for the hallway track, then definitely check out the captain's on deck channel. And then we have some great interviews all day on the queue. So set up your profile, join the conversation and be kind, right? This is a community event. Code of conduct is linked on every page at the top, and just have a great day. >> And Bret, you guys have an amazing lineup on the captain, so you have a great YouTube channel that you have your stream on. So the folks who were familiar with that can get that either on YouTube or on the site. The chat is integrated in, So you're set up, what do you got going on? Give us the highlights. What are you excited about throughout your day? Take us through your program on the captains. That's going to be probably pretty dynamic in the chat too. >> Yeah, so I'm sure we're going to have lots of, stuff going on in chat. So no cLancaerns there about, having crickets in the chat. But we're going to be basically starting the day with two of my good Docker captain friends, (murmurs) and Laura Taco. And we're going to basically start you out and at the end of this keynote, at the end of this hour and we're going to get you going and then you can maybe jump out and go to take some sessions. Maybe there's some stuff you want to check out and other sessions that you want to chat and talk with the instructors, the speakers there, and then you're going to come back to us, right? Or go over, check out the interviews. So the idea is you're hopping back and forth and throughout the day we're basically changing out every hour. 
We're not just changing out the guests basically, but we're also changing out the topics that we can cover because different guests will have different expertise. We're going to have some special guests in from Microsoft, talk about some of the cool stuff going on there, and basically it's captains all day long. And if you've been on my YouTube live show you've watched that, you've seen a lot of the guests we have on there. I'm lucky to just hang out with all these really awesome people around the world, so it's going to be fun. >> Awesome and the content again has been preserved. You guys had a great session on call for paper sessions. Jenny, this is good stuff. What other things can people do to make it interesting? Obviously we're looking for suggestions. Feel free to chirp on Twitter about ideas that can be new. But you guys got some surprises. There's some selfies, what else? What's going on? Any secret, surprises throughout the day. >> There are secret surprises throughout the day. You'll need to pay attention to the keynotes. Bret will have giveaways. I know our wonderful sponsors have giveaways planned as well in their sessions. Hopefully right you feel conflicted about what you're going to attend. So do know that everything is recorded and will be available on demand afterwards so you can catch anything that you miss. Most of them will be available right after they stream the initial time. >> All right, great stuff, so they've got the Docker selfie. So the Docker selfies, the hashtag is just DockerCon hashtag DockerCon. If you feel like you want to add some of the hashtag no problem, check out the sessions. You can pop in and out of the captains is kind of the cool kids are going to be hanging out with Bret and then all they'll knowledge and learning. Don't miss the keynote, the keynote should be solid. We've got chain Governor from red monk delivering a keynote. I'll be interviewing him live after his keynote. So stay with us. And again, check out the interactive calendar. All you got to do is look at the calendar and click on the session you want. You'll jump right in. Hop around, give us feedback. We're doing our best. Bret, any final thoughts on what you want to share to the community around, what you got going on the virtual event, just random thoughts? >> Yeah, so sorry we can't all be together in the same physical place. But the coolest thing about as business online, is that we actually get to involve everyone, so as long as you have a computer and internet, you can actually attend DockerCon if you've never been to one before. So we're trying to recreate that experience online. Like Jenny said, the code of conduct is important. So, we're all in this together with the chat, so try to be nice in there. These are all real humans that, have feelings just like me. So let's try to keep it cool. And, over in the Catherine's channel we'll be taking your questions and maybe playing some music, playing some games, giving away some free stuff, while you're, in between sessions learning, oh yeah. >> And I got to say props to your rig. You've got an amazing setup there, Bret. I love what your show, you do. It's really bad ass and kick ass. So great stuff. Jenny sponsors ecosystem response to this event has been phenomenal. The attendance 67,000. We're seeing a surge of people hitting the site now. So if you're not getting in, just, Wade's going, we're going to crank through the queue, but the sponsors on the ecosystem really delivered on the content side and also the sport. 
You want to share a few shout outs on the sponsors who really kind of helped make this happen. >> Yeah, so definitely make sure you check out the sponsor pages and you go, each page is the actual content that they will be delivering. So they are delivering great content to you. So you can learn and a huge thank you to our platinum and gold authors. >> Awesome, well I got to say, I'm super impressed. I'm looking forward to the Microsoft Amazon sessions, which are going to be good. And there's a couple of great customer sessions there. I tweeted this out last night and let them get you guys' reaction to this because there's been a lot of talk around the COVID crisis that we're in, but there's also a positive upshot to this is Cambridge and explosion of developers that are going to be building new apps. And I said, you know, apps aren't going to just change the world, they're going to save the world. So a lot of the theme here is the impact that developers are having right now in the current situation. If we get the goodness of compose and all the things going on in Docker and the relationships, this real impact happening with the developer community. And it's pretty evident in the program and some of the talks and some of the examples. how containers and microservices are certainly changing the world and helping save the world, your thoughts. >> Like you said, a number of sessions and interviews in the program today that really dive into that. And even particularly around COVID, Clement Beyondo is sharing his company's experience, from being able to continue operations in Italy when they were completely shut down beginning of March. We have also in theCUBE channel several interviews about from the national Institute of health and precision cancer medicine at the end of the day. And you just can really see how containerization and developers are moving in industry and really humanity forward because of what they're able to build and create, with advances in technology. >> Yeah and the first responders and these days is developers. Bret compose is getting a lot of traction on Twitter. I can see some buzz already building up. There's huge traction with compose, just the ease of use and almost a call for arms for integrating into all the system language libraries, I mean, what's going on with compose? I mean, what's the captain say about this? I mean, it seems to be really tracking in terms of demand and interest. >> I think we're over 700,000 composed files on GitHub. So it's definitely beyond just the standard Docker run commands. It's definitely the next tool that people use to run containers. Just by having that we just buy, and that's not even counting. I mean that's just counting the files that are named Docker compose YAML. So I'm sure a lot of you out there have created a YAML file to manage your local containers or even on a server with Docker compose. And the nice thing is is Docker is doubling down on that. So we've gotten some news recently, from them about what they want to do with opening the spec up, getting more companies involved because compose is already gathered so much interest from the community. You know, AWS has importers, there's Kubernetes importers for it. So there's more stuff coming and we might just see something here in a few minutes. >> All right, well let's get into the keynote guys, jump into the keynote. If you missing anything, come back to the stream, check out the sessions, check out the calendar. Let's go, let's have a great time. 
Have some fun, thanks, and enjoy the rest of the day. We'll see you soon. (upbeat music) >> Okay, what is the name of that whale? >> Molly. >> And what is the name of this whale? >> Moby. >> That's right. Dad's got to go, thanks bud. >> Bye. >> Bye. Hi, I'm Scott Johnston, CEO of Docker, and welcome to DockerCon 2020. This year DockerCon is an all-virtual event, with more than 60,000 members of the Docker community joining from around the world. And with the global shelter-in-place policies, we're excited to offer a unifying, inclusive virtual community event in which anyone and everyone can participate from their home. As a company, Docker has been through a lot of changes since our last DockerCon last year. The most important, starting last November, is our refocusing 100% on developers and development teams. As part of that refocusing, one of the big challenges we've been working on is how to help development teams quickly and efficiently get their app from code to cloud. And wouldn't it be cool if developers could quickly deploy to the cloud right from their local environment, with the commands and workflow they already know? We're excited to give you a sneak preview of what we've been working on, and rather than slides, we thought we'd jump right into the product. Joining me to demonstrate some of these cool new features is Lanca, one of our engineers here at Docker working on Docker Compose. Hello, Lanca. >> Hello. >> We're going to show how an application development team collaborates using Docker Desktop and Docker Hub, and then deploys the app directly from the Docker command line to the cloud in just two commands. A development team would use this to quickly share functional changes of their app with the product management team, with beta testers, or with other development teams. Let's go ahead and take a look at our app. Now, this is a web app that randomly pulls words from the database and assembles them into sentences. You can see it's a pretty typical three-tier application, with each tier implemented in its own container. We have a front-end web service, a middle tier which implements the logic to randomly pull the words from the database and assemble them, and a back-end database. And here you can see the database uses the Postgres official image from Docker Hub. Now let's first run the app locally, using the Docker command line and the Docker engine in Docker Desktop. We'll do a docker-compose up, and you can see that it's pulling the containers from our Docker organization account, Wordsmith Inc. Now that it's up, let's go ahead and look at localhost, and we'll confirm that the application is functioning as desired. So there's one sentence, let's pull another, and you can indeed see that we are pulling random words and assembling them into sentences. Now, you can also see, though, that the look and feel is a bit dated. And so Lanca is going to show us how easy it is to make changes and share them with the rest of the team. Lanca, over to you. >> Thank you. So I have the source code of our application on my machine, and I have updated it with the latest theme from DockerCon 2020. So before committing the code, I'm going to build the application locally and run it, to verify that indeed the changes are good. So I'm going to build the image for the web service with Docker Compose. Now that the image has been built, I'm going to deploy it locally with docker-compose up.
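If you want to mirror the local loop Lanca is narrating, the commands look roughly like this. The service name `web` is this demo's; your compose file may use different names:

```bash
# Build just the web service image defined in docker-compose.yml
docker-compose build web

# Run the full stack locally, then check it in a browser at
# http://localhost (the port depends on your compose file)
docker-compose up -d
docker-compose ps

# Tear it down when you're done experimenting
docker-compose down
```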
We can now check the dashboard in a Docker desktop that indeed our containers are up and running, and we can access, we can open in the web browser, the end point for the web service. So as we can see, we have the latest changes in for our application. So as you can see, the application has been updated successfully. So now, I'm going to push the image that I have just built to my organization's shared repository on Docker hub. So I can do this with Docker compose push web. Now that the image has been updated in the Docker hub repository, or my teammates can access it and check the changes. >> Excellent, well, thank you Lanca. Now of course, in these times, video conferencing is the new normal, and as great as it is, video conferencing does not allow users to actually test the application. And so, to allow us to have our app be accessible by others outside organizations such as beta testers or others, let's go ahead and deploy to the cloud. >> Sure we, can do this by employing a context. A Docker context, is a mechanism that we can use to target different platforms for deploying containers. The context we hold, information as the endpoint for the platform, and also how to authenticate to it. So I'm going to list the context that I have set locally. As you can see, I'm currently using the default context that is pointing to my local Docker engine. So all the commands that I have issued so far, we're targeting my local engine. Now, in order to deploy the application on a cloud. I have an account in the Azure Cloud, where I have no resource running currently, and I have created for this account, dedicated context that will hold the information on how to connect it to it. So now all I need to do, is to switch to this context, with Docker context use, and the name of my cloud context. So all the commands that I'm going to run, from now on, are going to target the cloud platform. So we can also check very, more simpler, in a simpler way we can check the running containers with Docker PS. So as we see no container is running in my cloud account. Now to deploy the application, all I need to do is to run a Docker compose up. And this will trigger the deployment of my application. >> Thanks Lanca. Now notice that Lanca did not have to move the composed file from Docker desktop to Azure. Notice you have to make any changes to the Docker compose file, and nor did she change any of the containers that she and I were using locally in our local environments. So the same composed file, same images, run locally and upon Azure without changes. While the app is deploying to Azure, let's highlight some of the features in Docker hub that helps teams with remote first collaboration. So first, here's our team's account where it (murmurs) and you can see the updated container sentences web that Lanca just pushed a couple of minutes ago. As far as collaboration, we can add members using their Docker ID or their email, and then we can organize them into different teams depending on their role in the application development process. So and then Lancae they're organized into different teams, we can assign them permissions, so that teams can work in parallel without stepping on each other's changes accidentally. For example, we'll give the engineering team full read, write access, whereas the product management team will go ahead and just give read only access. So this role based access controls, is just one of the many features in Docker hub that allows teams to collaboratively and quickly develop applications. 
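For reference, the context switch Lanca walked through a moment ago maps onto commands roughly like these. The context name and the ACI backend flag are assumptions based on the Azure integration being described, not a transcript of her exact session:

```bash
# List the contexts available locally; "default" points at the local Docker engine
docker context ls

# Create a context backed by the Azure integration (name is illustrative),
# then make it the target for subsequent commands
docker login azure
docker context create aci my-azure-context
docker context use my-azure-context

# Nothing is running in the cloud account yet
docker ps

# The same compose file now deploys to the cloud instead of the local engine
# (the newer "docker compose" plugin syntax is assumed for cloud contexts)
docker compose up
```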
Okay Lanca, how's our app doing? >> Our app has been successfully deployed to the cloud. So we can easily check either the Azure portal to verify the containers are running for it, or, simpler, we can run a docker ps again to get the list of the containers that have been deployed. In the output from docker ps, we can see an endpoint that we can use to access our application in the web browser. So we can see the application running in the cloud, really up to date, and now we can take this particular endpoint and share it within our organization, such that anybody can have a look at it. >> That's cool, Lanca. We showed how we can deploy an app to the cloud in minutes and in just two commands, using commands that Docker users already know. Thanks so much. In that sneak preview, you saw a team developing an app collaboratively, with a toolchain that includes Docker Desktop and Docker Hub, and, simply by switching Docker context from their local environment to the cloud, deploying that app to the cloud, to Azure, without leaving the command line, using Docker commands they already know. And in doing so, really simplifying for the development team getting their app from code to cloud. Just as important is what you did not see. You did not see a lot of complexity. You did not see cloud-specific interfaces, user management or security. You did not see us having to provision and configure compute, networking and storage resources in the cloud. And you did not see infrastructure-specific application changes to either the compose file or the Docker images. By simplifying away that complexity, these new features help application DevOps teams quickly iterate and get their ideas, their apps, from code to cloud. And helping development teams build, share and run great applications is what Docker is all about. Docker is able to simplify getting a development team's app from code to cloud quickly as a result of standards, products and ecosystem partners. It starts with open standards for applications and application artifacts, and active open source communities around those standards, to ensure portability and choice. Then, as you saw in the demo, the Docker experience delivered by Docker Desktop and Docker Hub simplifies a team's collaborative development of applications, and together with ecosystem partners provides every stage of an application development toolchain. For example, deploying applications to the cloud in two commands. What you saw in the demo is an extension of our strategic partnership with Microsoft, which we announced yesterday, and you can learn more about our partnership from Amanda Silver from Microsoft later today, right here at DockerCon. Another toolchain stage: the capability to scan applications for security vulnerabilities, as a result of our partnership with Snyk, which we announced last week. You can learn more about that partnership from Peter McKay, CEO of Snyk, again later today, right here at DockerCon. A third example: development teams can automate the build of container images upon a simple git push, as a result of Docker Hub integrations with GitHub and Atlassian Bitbucket. As a final example of Docker and the ecosystem helping teams quickly build applications, together with our ISV partners we offer in Docker Hub over 500 official and verified publisher images of ready-to-run Dockerized application components, such as databases, load balancers, programming languages, and much more. Of course, none of this happens without people.
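The Hub-centric workflow Scott describes, pushing a tagged image to a team repository and letting an automated build fire on a git push, looks roughly like this in practice. The organization and repository names are placeholders, and the automated build itself is linked to the source repo in the Docker Hub UI rather than on the command line:

```bash
# Share an image through a team repository on Docker Hub
# (organization and repository names are placeholders)
docker login
docker tag web:latest myorg/sentences-web:1.0
docker push myorg/sentences-web:1.0

# With a Docker Hub automated build linked to the source repository,
# a plain git push is enough to trigger a fresh image build
git add .
git commit -m "Update web front end theme"
git push origin main
```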
And I would like to take a moment to thank four groups of people in particular. First, the Docker team, past and present. We've had a challenging 12 months including a restructuring and then a global pandemic, and yet their support for each other, and their passion for the product, this community and our customers has never been stronger. We think our community, Docker wouldn't be Docker without you, and whether you're one of the 50 Docker captains, they're almost 400 meetup organizers, the thousands of contributors and maintainers. Every day you show up, you give back, you teach new support. We thank our users, more than six and a half million developers who have built more than 7 million applications and are then sharing those applications through Docker hub at a rate of more than one and a half billion poles per week. Those apps are then run, are more than 44 million Docker engines. And finally, we thank our customers, the over 18,000 docker subscribers, both individual developers and development teams from startups to large organizations, 60% of which are outside the United States. And they spend every industry vertical, from media, to entertainment to manufacturing. healthcare and much more. Thank you. Now looking forward, given these unprecedented times, we would like to offer a challenge. While it would be easy to feel helpless and miss this global pandemic, the challenge is for us as individuals and as a community to instead see and grasp the tremendous opportunities before us to be forces for good. For starters, look no further than the pandemic itself, in the fight against this global disaster, applications and data are playing a critical role, and the Docker Community quickly recognize this and rose to the challenge. There are over 600 COVID-19 related publicly available projects on Docker hub today, from data processing to genome analytics to data visualization folding at home. The distributed computing project for simulating protein dynamics, is also available on Docker hub, and it uses spirit compute capacity to analyze COVID-19 proteins to aid in the design of new therapies. And right here at DockerCon, you can hear how Clemente Biondo and his company engineering in Gagne area Informatica are using Docker in the fight with COVID-19 in Italy every day. Now, in addition to fighting the pandemic directly, as a community, we also have an opportunity to bridge the disruption the pandemic is wreaking. It's impacting us at work and at home in every country around the world and every aspect of our lives. For example, many of you have a student at home, whose world is going to be very different when they returned to school. As employees, all of us have experienced the stresses from working from home as well as many of the benefits and in fact 75% of us say that going forward, we're going to continue to work from home at least occasionally. And of course one of the biggest disruptions has been job losses, over 35 million in the United States alone. And we know that's affected many of you. And yet your skills are in such demand and so important now more than ever. And that's why here at DockerCon, we want to try to do our part to help, and we're promoting this hashtag on Twitter, hashtag DockerCon jobs, where job seekers and those offering jobs can reach out to one another and connect. Now, pandemics disruption is accelerating the shift of more and more of our time, our priorities, our dollars from offline to online to hybrid, and even online only ways of living. 
We need to find new ways to collaborate, new approaches to engage customers, new modes for education, and much more. And what is going to fill the needs created by this acceleration from offline to online? New applications. And it's this need, this demand for all these new applications, that represents a great opportunity for the Docker community of developers. The world needs us, needs you developers, now more than ever. So let's seize this moment. Let us and our teams go build, share and run great new applications. Thank you for joining today, and let's have a great DockerCon. >> Okay, welcome back to the DockerCon studio headquarters with your hosts, Jenny Burcio and myself, John Furrier, @furrier on Twitter. If you want to tweet me anything, @DockerCon as well, share what you're thinking. Great keynote there from Scott, the CEO. Jenny, the demo, DockerCon jobs, some highlights there from Scott. >> Yeah, I love the intro. It's like, okay, I'm about to do the keynote, and the little green room shot comes on. It makes it human. We're all trying to survive-- >> That is the reality of what we are all dealing with right now. I had to ask my kids to leave, though, or they would crash the whole stream. But yes, we have a great community, a large community gathered here today, and we do want to take the opportunity for those that are looking for jobs, or are hiring, to share with the hashtag DockerCon jobs. In addition, we want to support frontline health care workers, and Bret Fisher and the captains will be running an all-day charity stream on the captains channel. Go there and you'll get the link to donate to directrelief.org, a California-based nonprofit delivering aid and supporting health care workers globally in response to the COVID-19 crisis. >> Okay, if you're jumping into the stream, I'm John Furrier with Jenny Burcio, your hosts all day today throughout DockerCon. It's a packed house of great content. You have the main stream, theCUBE, which is the mainstream channel where we'll be promoting a lot of CUBE interviews, but check out the 40-plus sessions underneath in the interactive calendar on the dockercon.com site. Check it out, they're going to be live on the clock. So if you want to participate in real time in the chat, jump into your session on the track of your choice and participate with the folks in there chatting. If you miss it, it's going to go on demand right after, so all content will be immediately available. So make sure you check it out. Docker selfie is a hashtag, take a selfie and share it. Hashtag DockerCon jobs: if you're looking for a job or have openings, please share with the community, and of course give us feedback on what we can do. We've got James Governor, the keynote, coming up next. He's with RedMonk, not afraid to share his opinion on open source, on what companies should be doing, and also on the evolution of this Cambrian explosion of apps that are going to be coming as we come out of this post-pandemic world. A lot of people are thinking about this, the crisis, and following through. So stay with us for more coverage. Jenny, favorite sessions on your mind for people to pay attention to that they should (murmurs)? >> I just want to address a few things that continue to come up in the chat, especially about the breakout sessions: after they play live, with the speakers in chat with you, those go on demand. They are recorded, and you will be able to access them.
Also, if the screen is too small, there is the button to expand full screen, and different quality levels for the video that you can choose on your end. All the breakout sessions also have closed captioning, so please if you would like to read along, turn that on so you can, stay with the sessions. We have some great sessions, kicking off right at 10:00 a.m, getting started with Docker. We have a full track really in the how to enhance on that you should check out devs in action, hear what other people are doing and then of course our sponsors are delivering great content to you all day long. >> Tons of content. It's all available. They'll always be up always on at large scale. Thanks for watching. Now we got James Governor, the keynote. He's with Red Monk, the analyst firm and has been tracking open source for many generations. He's been doing amazing work. Watch his great keynote. I'm going to be interviewing him live right after. So stay with us and enjoy the rest of the day. We'll see you back shortly. (upbeat music) >> Hi, I'm James Governor, one of the co-founders of a company called RedMonk. We're an industry research firm focusing on developer led technology adoption. So that's I guess why Docker invited me to DockerCon 2020 to talk about some trends that we're seeing in the world of work and software development. So Monk Chips, that's who I am. I spent a lot of time on Twitter. It's a great research tool. It's a great way to find out what's going on with keep track of, as I say, there's people that we value so highly software developers, engineers and practitioners. So when I started talking to Docker about this event and it was pre Rhona, should we say, the idea of a crowd wasn't a scary thing, but today you see something like this, it makes you feel uncomfortable. This is not a place that I want to be. I'm pretty sure it's a place you don't want to be. And you know, to that end, I think it's interesting quote by Ellen Powell, she says, "Work from home is now just work" And we're going to see more and more of that. Organizations aren't feeling the same way they did about work before. Who all these people? Who is my cLancaern? So GitHub says has 50 million developers right on its network. Now, one of the things I think is most interesting, it's not that it has 50 million developers. Perhaps that's a proxy for number of developers worldwide. But quite frankly, a lot of those accounts, there's all kinds of people there. They're just Selena's. There are data engineers, there are data scientists, there are product managers, there were tech marketers. It's a big, big community and it goes way beyond just software developers itself. Frankly for me, I'd probably be saying there's more like 20 to 25 million developers worldwide, but GitHub knows a lot about the world of code. So what else do they know? One of the things they know is that world of code software and opensource, is becoming increasingly global. I get so excited about this stuff. The idea that there are these different software communities around the planet where we're seeing massive expansions in terms of things like open source. Great example is Nigeria. So Nigeria more than 200 million people, right? The energy there in terms of events, in terms of learning, in terms of teaching, in terms of the desire to code, the desire to launch businesses, desire to be part of a global software community is just so exciting. 
And you know, this sort of energy is not just in Nigeria, it's in other countries in Africa, it's happening in Egypt, it's happening around the world. This energy is something that's super interesting to me. We need to think about that. We've got global problems that we need to solve, and software is going to be a big part of that. At the moment we can talk about other countries, but what about, frankly, the gender gap, the gender issue? From 1984 onwards, the number of women taking computer science degrees began not to track but to crater in comparison to what men were doing. The tech industry is way too male-focused, it's male-dominated, it's not welcoming, we haven't found ways to build those pathways and, frankly, to drive inclusion. And the women I know in tech have to deal with a massively disproportionate amount of stress, including on online networks. But talking about online networks, and talking about a better way of living, I was really excited by GitHub Satellite recently. There was a fantastic demo by Allison McMillan, and she did a demo of Codespaces. So Codespaces is the new online IDE platform that they've built, and with online IDEs we're never quite sure, you know, plenty of people are still out there just using Emacs. But Visual Studio Code has been a big success, and so this idea of moving to an online IDE has been around for a while. What they did was just make really tight integration, so you're in your GitHub repo and you're able to create a development environment with effectively one click, getting rid of all of the yak shaving, making it super easy. And what I loved was that in the demo, Allison's like, yeah, because this is great, one of my kids is having a nap, I can just start (murmurs) and I don't have to sort out all the rest of it. And to me that was amazing. It was like productivity as inclusion. Here was a senior director at GitHub doing this amazing work and then making this clear statement about being a parent, and I think that was fantastic. Because that's what, to me, working from home, which has been so challenging for so many of us, began to open up: new possibilities, and frankly exciting possibilities. So Allison's also got a podcast, Parent Driven Development, which I think is super important, because this is about men and women all in this together; parenting is a team sport, same as software development. And the idea that we should be thinking about how to be more productive is super important to me. So I want to talk a bit about developer culture and how it led to social media. Because, you know, social media is in this odd stage now. It's TikTok, it's like exercise people doing incredible backflips and stuff like that, doing a bunch of dancing. We've had the world of sharing cat gifs, Facebook. We sort of see social media as, I think, a phenomenon in its own right, whereas to me it's interesting to look at its progenitors: where did it come from? So here's (murmurs). In 1971, one of the features in the emergency management information system that he built, which is topical because it was for tracking medical information and medical emergencies as well, included a bulletin board system, so that it could keep track of what people were doing on a team and make sure that they were collaborating effectively. Boom! That was the start of something big, obviously. Another date I think is worth looking at is 1983: Radia Perlman, spanning tree protocol.
So at DEC, they were very good at distributed systems, and the idea was that you can have a distributed system, and so much of the internetworking that we do today was based on Radia's work. It showed that, basically, you could span out a huge network so that everyone could collaborate. That is incredibly exciting in terms of the trends that I'm talking about. So then let's look at 1988: you've got IRC. IRC, what developer has not used IRC, right? Well, I guess maybe some of the younger ones might not have, and I don't know if we're post-IRC yet, but (murmurs) at a Finnish university really nailed it with IRC as a platform that people could communicate effectively with. And then we go into 1991. So we've had IRC, we've had Finnish universities doing a lot of really fantastic work on collaboration, and I don't think it was necessarily an accident that this is where Linus Torvalds announced Linux. So Linux was a wonderfully packaged idea, in terms of: we're going to take this Unix thing. And when I say packaged, what it packaged was the idea that we could collaborate on software. So it may have just been the work of one person, but clearly what made it important, made it interesting, was finding a social networking pattern for software development so that everybody could work on something at scale. That was really, I think, fundamental and foundational. Now I think it's important, if we're going to talk about Linus, to talk about some things that are not good about software culture, not good about open source culture, not good about hacker culture. And that's where I'm going to talk about codes of conduct. We have not been welcoming to new people. We've got the acronyms, we call people noobs, and that's super unhelpful. We've got to find ways to be more welcoming and more self-sustaining in our communities, because otherwise communities will fail. And I'd like to thank everyone that has a code of conduct and has encouraged others to have codes of conduct. We need to have codes of conduct that are enforced, to ensure that we have better diversity at our events, and so that women, underrepresented minorities, all different kinds of people are well looked after and are in safe and inclusive spaces. And that's for online events, but of course it's also for all of our activities offline. So Linus, as I say, not the most charming of characters at all times, but he has done some amazing technology. So we get to 2005 and the creation of Git. Not necessarily, at the time, the distributed version control system you would have bet would win, but there were some interesting principles there, and they'd come out of the work that he had done in terms of trying to build and sustain the Linux code base. So it was very much based on experience. He had an itch that he needed to scratch, and there was a community that was building this thing. So what was going to be the option? He came up with Git, foundational to another huge wave of social change and, frankly, technologically awesome. April 2008: GitHub, right? GitHub comes up, they've looked at Git, they've packaged it up, they've found a way to make it consumable so that teams could use it and really begin to take advantage of the power of that distributed version control model. Now, ironically enough, of course, they centralized the service in doing so, so we have a single point of failure in GitHub.
But on the other hand, the notion of the poll request, the primitives that they established and made usable by people, that changed everything in terms of software development. I think another one that I'd really like to look at is Slack. So Slack is a huge success used by all different kinds of businesses. But it began specifically as a pivot from a company called Glitch. It was a game company and they still wanted, a tool internally that was better than IRC. So they built out something that later became Slack. So Slack 2014, is established as a company and basically it was this Slack fit software engineering. The focus on automation, the conversational aspects, the asynchronous aspects. It really pulled things together in a way that was interesting to software developers. And I think we've seen this pattern in the world, frankly, of the last few years. Software developers are influences. So Slack first used by the engineering teams, later used by everybody. And arguably you could say the same thing actually happened with Apple. Apple was mainstreamed by developers adopting that platform. Get to 2013, boom again, Solomon Hikes, Docker, right? So Docker was, I mean containers were not new, they were just super hard to use. People found it difficult technology, it was Easter Terek. It wasn't something that they could fully understand. Solomon did an incredible job of understanding how containers could fit into modern developer workflows. So if we think about immutable images, if we think about the ability to have everything required in the package where you are, it really tied into what people were trying to do with CICD, tied into microservices. And certainly the notion of sort of display usability Docker nailed that, and I guess from this conference, at least the rest is history. So I want to talk a little bit about, scratching the itch. And particularly what has become, I call it the developer authentic. So let's go into dark mode now. I've talked about developers laying out these foundations and frameworks that, the mainstream, frankly now my son, he's 14, he (murmurs) at me if I don't have dark mode on in an application. And it's this notion that developers, they have an aesthetic, it does get adopted I mean it's quite often jokey. One of the things we've seen in the really successful platforms like GitHub, Docker, NPM, let's look at GitHub. Let's look at over that Playfulness. I think was really interesting. And that changes the world of work, right? So we've got the world of work which can be buttoned up, which can be somewhat tight. I think both of those companies were really influential, in thinking that software development, which is a profession, it's also something that can and is fun. And I think about how can we make it more fun? How can we develop better applications together? Takes me to, if we think about Docker talking about build, share and run, for me the key word is share, because development has to be a team sport. It needs to be sharing. It needs to be kind and it needs to bring together people to do more effective work. Because that's what it's all about, doing effective work. If you think about zoom, it's a proxy for collaboration in terms of its value. So we've got all of these airlines and frankly, add up that their share that add up their total value. It's currently less than Zoom. So video conferencing has become so much of how we live now on a consumer basis. But certainly from a business to business perspective. I want to talk about how we live now. 
I want to think about like, what will come out all of this traumatic and it is incredibly traumatic time? I'd like to say I'm very privileged. I can work from home. So thank you to all the frontline workers that are out there that they're not in that position. But overall what I'm really thinking about, there's some things that will come out of this that will benefit us as a culture. Looking at cities like Paris, Milan, London, New York, putting a new cycling infrastructure, so that people can social distance and travel outside because they don't feel comfortable on public transport. I think sort of amazing widening pavements or we can't do that. All these cities have done it literally overnight. This sort of changes is exciting. And what does come off that like, oh there are some positive aspects of the current issues that we face. So I've got a conference or I've got a community that may and some of those, I've been working on. So Katie from HashiCorp and Carla from container solutions basically about, look, what will the world look like in developer relations? Can we have developer relations without the air miles? 'Cause developer advocates, they do too much travel ends up, you know, burning them out, develop relations. People don't like to say no. They may have bosses that say, you know, I was like, Oh that corporates went great. Now we're going to roll it out worldwide to 47 cities. That's stuff is terrible. It's terrible from a personal perspective, it's really terrible from an environmental perspective. We need to travel less. Virtual events are crushing it. Microsoft just at build, right? Normally that'd be just over 10,000 people, they had 245,000 plus registrations. 40,000 of them in the last day, right? Red Hat summit, 80,000 people, IBM think 90,000 people, GitHub Crushed it as well. Like this is a more inclusive way people can dip in. They can be from all around the world. I mentioned Nigeria and how fantastic it is. Very often Nigerian developers and advocates find it hard to get visas. Why should they be shut out of events? Events are going to start to become remote first because frankly, look at it, if you're turning in those kinds of numbers, and Microsoft was already doing great online events, but they absolutely nailed it. They're going to have to ask some serious questions about why everybody should get back on a plane again. So if you're going to do remote, you've got to be intentional about it. It's one thing I've learned some exciting about GitLab. GitLab's culture is amazing. Everything is documented, everything is public, everything is transparent. Think that really clear and if you look at their principles, everything, you can't have implicit collaboration models. Everything needs to be documented and explicit, so that anyone can work anywhere and they can still be part of the team. Remote first is where we're at now, Coinbase, Shopify, even Barkley says the not going to go back to having everybody in offices in the way they used to. This is a fundamental shift. And I think it's got significant implications for all industries, but definitely for software development. Here's the thing, the last 20 years were about distributed computing, microservices, the cloud, we've got pretty good at that. The next 20 years will be about distributed work. We can't have everybody living in San Francisco and London and Berlin. The talent is distributed, the talent is elsewhere. So how are we going to build tools? 
Who is going to scratch that itch, to build tools to make them more effective? Who's building the next generation of apps? You are. Thanks.
Dominic Wilde, SnapRoute | CUBEConversation, January 2019
>> Hello everyone, welcome to this CUBE Conversation. I'm John Furrier, your host, here in our Palo Alto studio. I'm here with Dominic Wilde, known as Dom, CEO of SnapRoute, a hot new startup with great venture backers. Dom, welcome to the CUBE Conversation. So, love having startups on. Talk about SnapRoute, the company, because you're doing something interesting that we've been covering pretty aggressively: the convergence between DevOps and networking. We've known you for many, many years. You were formerly at Hewlett Packard, then Hewlett Packard Enterprise, running the networking group over there. You know networking, and you're an operator. SnapRoute is interesting because of the great names behind it, big venture backers, Lightspeed and Norwest, among others. Take a minute to explain SnapRoute. >> So SnapRoute was founded to really address one of the big, big problems we see in infrastructure, which is that, essentially, the network gets in the way of the rapid and agile deployment of applications. In the modern business environment we're in, a highly competitive environment with continuous disruption going on in our industry, every company out there is constantly looking over their shoulder, making sure that they're moving fast enough, that they're innovating fast enough. They don't want to be disrupted, they don't want to be overrun by a new upstart. And in order to do that, the application is actually the work product that you really want to deploy, that you want to roll out, and you want to be able to do that on a continuous basis. You want to be really agile about how you do it. And quite frankly, when it comes to infrastructure, networking has been fifteen years behind the rest of the infrastructure in enabling that. It's a big roadblock. Obviously, some of the innovations and developments in networking have lagged behind other areas, and what we at SnapRoute set out to do was to say, look, if we're going to bring networking forward and we're going to try and solve some of these problems, how do we do that in a way, architecturally, that will enable networking to become not just a part of a cloud native infrastructure, but to actually enable those organizations to drive forward? And so what we did was we took all of our DevOps principles and DevOps tools, and we built a network operating system from the ground up using DevOps principles, DevOps architectures and DevOps tools. What we're delivering is a cloud native network operating system that is built entirely on containers and is delivered as a microservices architecture. One of the big value propositions that we deliver is what we call CI/CD for networking, which is continuous integration and continuous deployment, obviously a big DevOps principle, but doing that for networking: allowing the network to be constantly up, enabling the network to adapt to immutable infrastructure principles. We're just replacing pieces that need to be replaced. Different pieces of the operating system can be replaced if there's a security vulnerability, for instance, or if there's a bug or a new feature needed, so we can innovate quicker.
We can enable the network to be more reliable, more agile, more responsive to the needs of the organization, and all of this fundamentally means that your operations model now becomes a lot more unified, a lot more simple. We now enable the NetOps teams to become a more native part of the conversation with DevOps, reduce the tension there, eliminate any conflicts, and we do that through this innovative OS. >> Classically the infrastructure-as-code ethos. >> Yeah, exactly right. A lot of people have been talking about infrastructure as code for a long, long time. But what we really do, if you deploy our network operating system onto bare metal switching, is enable DevOps to take control and to drive the network in the way they want, using their native toolchains. So, you know, Kubernetes, for instance, is the big, growing DevOps orchestration tool of the moment. In fact, we think it's more than of the moment; I've never seen this kind of momentum in the industry behind an open source initiative like there is behind Kubernetes. And we've taken Kubernetes and baked it natively into the operating system, such that now our network operating system that runs on a physical switch can be a native part of that Kubernetes and DevOps toolchain. >> Dom, I want to get to the marketplace dynamics, kind of what's different, why now. But I think what's interesting about SnapRoute, which you're the chief of, is that it's venture-backed with big names, Lightspeed and Norwest among others. It's a signal of a wave that we've been covering: people are interested in how you make developers deploy faster, with more agility, at scale, on premises and in clouds. But before we get there, I want to talk about the origin story of the company. Why does it exist? How did it come to be? You mentioned operations; a big part of cloud is the operating model as much as the company. This is the big trend, that's the big wave. But how did it all get started? What's the SnapRoute story? >> Yeah, it's an interesting story. Our founders were actually operators at Apple back in the day, and they were responsible for building out some of Apple's biggest data centers for their customer-facing services, like iCloud, iTunes, all those good things. They were tasked with modernizing the operational model in those data centers, and then they, like many other operators, had a sense of community and worked with their peers at other big organizations, even other hyperscale organizations, and wanted to learn from what they did. What they recognized was that companies like Google and Facebook and Microsoft had done some incredible things, some incredible innovations around infrastructure and particularly in networking, that enabled them to drive the infrastructure from a DevOps perspective and make it more native.
But those were, you know, fairly tailored for their organizations. And so what they saw was the opportunity to say, well, there are many other organizations who are delivering infrastructure as a service or SaaS, or who are just very large enterprises acting as these new cloud service providers, and they would have a need to also have tools and capabilities, particularly in the network, to enable the network to be more responsive, more DevOps-like. And so they founded SnapRoute on that principle: here's a problem that we know we can solve. It's been solved to some degree, but it's an architectural problem, and it's not about taking the last twenty-five years of networking knowledge and just incrementally doing a sort of dot upgrade, or trying to say, hey, we'll just add on some APIs and things. You really needed to start from the ground up and rethink this entirely from an architectural perspective, and design the network operating system with DevOps tools and principles. So they started the company; it's been around just since very late 2015, early 2016. >> And how much money have you raised? >> The last round, we're a Series A, we took in twenty-five million. >> And who were the venture backers? >> It was Lightspeed Ventures and Norwest, and we also had some strategic investment from Microsoft Ventures, among others. >> Great name blue chips. What was their interest? What was their thesis? Well, you mentioned the problem: what was the core problem that you're solving that they were attracted to? Why was there such a thirst from such big-name VCs? >> Yeah, I mean, I think it was, as I said, the opportunity to change the operational model. And I think one of the big things that is very different about our company is, you know, we like to say we're building for operators, by operators. Our founders, as I said, were operators at Apple. They have lived and breathed what it is to be woken up at 3 a.m. on Christmas Eve to some outage and have to try and figure that out, and fight your way through a legacy kind of network and figure out what's going on. So they empathize with what that means, and having that DNA in our company is incredibly meaningful in terms of how we build the product and how we engage with customers. We're not just a bunch of vendors coming from previous vendor backgrounds, although I do, you know, bring to the table the ability to deliver a packaged product. And this is cloud scale, but it's enabling a bridge, if you like. If you look at what the hyperscalers have done, what they're achieving and the operational models they have, we're, if you like, a bridge to enable that capability for a much broader set of operators and CSPs and as-a-service companies, and to drive forward an aggressive, agile innovation agenda for companies, >> businesses. You know, we're always discussing this on theCUBE. Everyone who watches theCUBE knows I'm always ranting about how cloud providers make their market share numbers, and a lot of people include SaaS, right?
I think everyone will be in the SAS business, so I kind of look at the SAS numbers on, say, it's really infrastructures service platform to service Amazon, Google, Microsoft and then, you know, Ali Baba in China. Others. Then you got IBM or one of it's kind of in the big kind of cluster there top. That is a whole nother set of business requirements that sass driven this cloud based. Yeah, this seems to be a really growing market. Is that what you're targeting? And the question is, how do you relate Visa? Visa Cooper? Netease trend? Because communities and these abstraction layers, you're starting to hear things like service mesh, policy based state Full application states up. Is that you trying to that trend explain. >> We're very complimentary, Teo. Those trends, we're, you know, we're not looking to replace any of that, really. And and my big philosophy is, if you're not simplifying something, then you're not really adding back here, you know, what you're doing is complicating matters or adding another layer on top. So so yeah, I mean, we are of value to those companies who are looking at hybrid approaches or have some on prime asset. Our operating system will land on a physical, bare metal switch So you know what? What we do is when you look at it, you know, service most is your message measures and all the other, You know, technologies you talked about with very, very complimentary to those approaches because we're delivering the on underlying network infrastructure on network fabric. Whatever you'd like to call it, that can be managed natively with class native tools, squeezing the alliteration there. But but, you know, it means that you don't need toe add overlays. We don't need to sort of say, Hey, look, the network is this static, archaic thing that's really fragile. And And I mean, if we touch it, it's going to break. So let's just leave it alone and let's let's put some kind of overlay over the top of it on do you know, run over the top? What we're saying is you can collapse that down. Now what you can say, what you can do is you can say, Well, let's make the network dynamic responsive. Let's build a network operating system out of micro services so you can replace parts of it. You can, you know, fix bugs. You can fix security vulnerabilities and you can do all that on the fly without having to schedule outage windows, which is, you know, for a cloud native company or a sass or infrastructure service company. I mean, that's your business. You can't take outage windows. Your business depends on being available all the time. And so we were really changing that fundamentals of a principle of networking and saying, You know, networking is now dynamic, you know, in a very, very native way, but it also integrates very closely with Dev ops. Operational model >> is a lot of innovation that network. We're seeing that clearly around the industry. No doubt everyone sees late and see that comes into multi Cloud was saying that the trend moving the data to the compute coyote again that's a network issue network is now an innovation opportunity. So I gotta ask you, where do you guys see that happening? And I want to ask you specifically talking about the cloud architects out in the marketplace in these enterprises who were trying to figure out about the architecture of clowns. So they know on premises there, moving that into a cloud operations. 
We see Amazon, they see Google and Microsoft has clouds that might want to engage with have cloud native presence in a hybrid and multi cloud fashion for those cloud architects. What are the things that you like to see them doing? More of that relates to your value problems. In other words, if they're using containers or they're using micro services, is this good or bad? What? What you should enterprise to be working on that ties into your value proposition. >> So I think about this the other way around, actually, if I can kind of turn that turn that question. But on his head, I think what you know, enterprises, you know, organization C, S. P s. I think what they should be doing is focusing on their business and what their business needs. They shouldn't be looking at their infrastructure architecture and saying, you know, okay, how can we, you know, build all these pieces? And then you know what can the business and do on top of that infrastructure? You wanna look at it the other way around? I need to deploy applications rapidly. I need to innovate those applications. I need to, you know, upgrade, change whatever you need to do with those applications. And I need an infrastructure that can be responsive. I need an infrastructure that can be hybrid. I need infrastructure that can be, you know, orchestrated in the hybrid manner on DH. Therefore, I want to go and look for the building blocks out there of those those architectural and infrastructure building blocks out there that can service that application in the most appropriate way to enable the velocity of my business and the innovation from my business. Because at the end of the day, I mean, you know, when we talk to customers, the most important thing T customers, you know, is the velocity of their business. It is keeping ahead in the highly competitive environment and staying so far ahead that you're not going to be disrupted. And, you know, if any element of your infrastructure is holding you back and even you know, you know the most mild way it's a problem. It's something you should address. And we now have the capability to do that for, you know, for many, many years. In fact, you know, I would claim up to today without snap route that you know, you you do not have the ability to remove the network problem. The network is always going to be a boat anchor on your business. It introduces extra cycles. It introduces big security, of underplaying >> the problems of the network and the consequences that prior to snap her out that you guys saw. >> So I take the security issue right? I mean, everybody is very concerned about security today. One of the biggest attack vectors in the security world world today is the infrastructure. It's it's it's so vulnerable. A lot of infrastructure is is built on sort of proprietary software and operating systems. You know, it's very complex. There's a lot of, you know, operations, operational, moves out and change it. So there's there's a lot of opportunity for mistakes to be made. There's a lot of opportunity for, you know, for vulnerabilities to be exposed. And so what you want to do is you want to reduce the threat surface of, you know, your your infrastructure. So one of the things that we can do it SnapRoute that was never possible before is when you look at a traditional network operating system. Andreas, A traditional. I mean, any operating system is out there, other you know, Other >> than our own. >> It's basically a monolithic Lennox blob. 
It is one blob of code that contains all of the features. And it could be, you know, architect in in a way that it Sze chopped up nicely. But if you're not using certain features, they're still there. And that increases the threat surface with our sat proud plant native network operating system. Because it is a micro services are key picture. If you are not using certain services or features, you can destroy and remove the containers that contain those features and reduce the threat surface of the operating system. And then beyond that, if you do become aware ofthe vulnerability or a threat that you know is somewhere in there, you can replace it in seconds on the fly without taking the infrastructure. Damn, without having to completely replace that whole blob of software causing, you know, an outage window. So that's just one example of, you know, the things we can do. But even when it comes to simple things, like, you know, adding in new services or things because we're containerized service is a ll boot together. It's no, eh? You know it doesn't. It doesn't have a one after the other. It it's all in parallel. So you know this this operating system comes up faster. It's more reliable. It eliminates the risk factors, the security, you know, the issues that you have. It provides native automation capabilities. It natively integrates with, You know, your Dev Ops tool chain. It brings networking into the cloud. Native >> really, really isn't in frustrations. Code is an operating system, so it sounds like your solution is a cloud native operating system. That's correct. That's pretty much the solution. That's it. How do customers engage with you guys? And what do you say? That cloud architect this is Don't tell me what to do. What's the playbook, right? How you guys advice? Because I see this is a new solution. Talk about the solution and your recommendation to architects as they start thinking about building that elastic in that flexible environment. >> Yeah. I mean, I think you know, Ah, big recommendation is, you know, is to embrace, you know, that all the all of the cloud native principles and most of the companies that were talking to, you know, definitely doing that and moving very quickly. But, you know, my recommendation. You know, engaging with us is you should be looking for the network to in naval, your your goals and your you know your applications rather than limiting. I mean, that's that's the big difference that, you know, the people who really see the value in what we do recognize that, you know, the network should be Andi is an asset. It should be enabling new innovation, new capabilities in the business rather than looking at the network as necessary evil where we you know, where we have to get over its limitations or it's holding us back. And so, you know, for any organization that is, you know, is looking at deploying, you know, new switching infrastructure in any way, shape or form. I think, you know, you should be looking at Well, how am I going to integrate this into a dev ops? You know, world, how may going to integrate this into a cloud native world. So as my business moves forward, I'm actually servicing the application in enabling a faster time to service for the application for the business. 
>> You know, we've been seeing and reporting this consistently, and it's even more mainstream now: cloud computing has opened up the aperture of the value and the economics, and also the technical innovation around application developers coding faster and having the kind of resources they need. But it's also creating a renaissance in networking. So the value of networking and application development, that collision, is coming together very quickly, and that's the intersection you guys play in. So I'm sure this will resonate well with customers as they try to figure out the role of the network, because security number one, analytics, all the things they care about, shared data, shared code, it's all kind of coming together. So if someone hears this story, they'll go, okay, I love this SnapRoute story, I've got to dig in. How do they engage you? What do you guys sell to them? What's the pitch? Give the quick plug for the company real quick. >> Engaging with us is simple. Come to www.snaproute.com, and the contacts are up there. We're currently, obviously, a small company. We sell direct. We're engaged with our first customers and deploying our product right now, and it's going very, very well. And as far as what and when to engage us, I would say you can engage us at any stage and add value, whether or not you're architecting a whole new network or deploying a new data center, which is obviously ideal, since it's built from the ground up, but we add value to the-- >> Preexisting data center that wants-- >> To modernize data centers. >> I mean, if I want to modernize my data center, I'm a candidate. >> So one of the biggest challenges in an existing data center, one of the biggest areas of tension, is at the top-of-rack switch, which is where you connect in your application assets. Your servers are connected there; you're connecting into the first leap into the network. One of the challenges there is that DevOps engineers want to deploy containers, they want to deploy virtual machines, they want to move stuff and change stuff, and they need network engineers to help them do that. For a network engineer, the least interesting part of the infrastructure is the top of rack, which is a constant barrage, day in, day out, of requests: hey, can I have a VLAN, can I have an IP address, can we move this? It's not interesting, and it just chews up time. We alleviate that tension. What we enable you to do is this: the network engineer can deploy the network, get it up and running, and then control what needs to be controlled natively from their box, from DevOps tool chains, and allow the DevOps engineers to take control of it as infrastructure. >> So the bottom line is you're taking the stress out of the top of rack, taking the drama out of this. >> Taking the drama out of the network, right. >> So, okay, if I'm a customer, what am I buying? What are you guys offering? Is it a professional services package? Is it software? Is it a SaaS solution? What is the product? >> It is software. We're selling a network operating system. It lands on bare metal, like white box switching. And we offer that as both perpetual licenses or as a subscription. We also offer, um, the value and services around that as well. And right now, that is our approach to market. We may expand that to other services in the future, but that is what we're selling right now: a network operating system.
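(Editor's aside: the top-of-rack hand-off just described, where the network engineer stands the switch up once and the application teams then request VLANs and IP addresses from their own tool chain instead of filing tickets, might look roughly like the sketch below. The endpoint, token, and payload shapes are hypothetical, for illustration only, and are not an actual SnapRoute API.)

```python
# Hypothetical sketch of dev-ops self-service against a top-of-rack switch API.
# The endpoint, token scope, and payload shapes are invented for illustration.
import requests

TOR_API = "https://tor-switch-01.example.net/api/v1"     # hypothetical switch endpoint
HEADERS = {"Authorization": "Bearer <token-scoped-to-app-team>"}

def request_vlan(vlan_id: int, name: str) -> None:
    # An app team creates the VLAN it needs without opening a ticket.
    resp = requests.post(f"{TOR_API}/vlans", headers=HEADERS,
                         json={"id": vlan_id, "name": name}, timeout=5)
    resp.raise_for_status()

def attach_port(port: str, vlan_id: int) -> None:
    # Bind a server-facing port to that VLAN as part of the same pipeline run.
    resp = requests.patch(f"{TOR_API}/ports/{port}", headers=HEADERS,
                          json={"access_vlan": vlan_id}, timeout=5)
    resp.raise_for_status()

if __name__ == "__main__":
    # Called from a CI/CD job when a new service lands on rack 12.
    request_vlan(120, "payments-staging")
    attach_port("Ethernet12", 120)
```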
>> Dom, thanks for coming in and sharing the story of SnapRoute. Final question for you: you've been in this industry a long time. We've had many conversations, and we used to love to talk about gear, speeds and feeds. Now software is eating the world, and we're seeing cloud create massive amounts of opportunity. You're in a big wave, right? What does this wave look like for the next couple of years? How do you see this playing out as cloud continues to go global and you start to see more networking becoming a much more innovative part of the equation, with more developers coming on board, faster, more scale? How do you see it all playing out in the industry? >> Yeah, so I think the next big wave of things is really around the operational side. We've concentrated for many years in the networking industry on speeds and feeds, and then it was all about protocols and how protocol stacks were built. That's all noise. It's really about: how do you engage with the network? How do you operate your network to service your business? Quite frankly, you should not even know the network is there. If we're doing a really good job of networking, you shouldn't even know about it. And that's where we need to get to as an industry. That's my belief, and that's where we can take it. >> Low-latency, programmable networks. Great stuff. Dominic Wilde, industry friend of theCUBE, also a CUBE alumni, CEO of SnapRoute, a hot new startup with some big backers, an interesting signal. Programmable networks, software, cloud, global, all part of the big innovation equation here in Silicon Valley. I'm John Furrier with CUBE Conversations. Thanks for watching
SUMMARY :
You were a former Hewlett Packard than you woodpecker enterprise running the networking group over there. of the big, big problems we see in infrastructure, which is that, you know, I mean, it's you know, a lot of people have been talking about infrastructure But I want you to before we get there, want to talk about the origin story of DH. What they recognised was that, you know, cos like, you know, Google and Facebook and Microsoft is urine We are Siri's, eh? And we and you mentioned the problem. is and, you know, we like to say, you know, we're building for effort. And the question is, how do you relate Visa? some kind of overlay over the top of it on do you know, run over the top? What are the things that you like to see them doing? the most important thing T customers, you know, is the velocity of their business. the threat surface of, you know, your your infrastructure. It eliminates the risk factors, the security, you know, the issues that you have. And what do you say? that's that's the big difference that, you know, the people who really see the value in what we do recognize So the intersection you guys play. And you know, you know contacts are up there. the modernizing data centers. the into the, you know, first leap into the network. Taelon is taking the stress out of the top of racks. Take that arm around the network. So okay, you have the soul from a customer. You know, Andre, right now that is, you know, Playing out as Cloud continues to go global and you start to Seymour And then it was, you know, it's all about protocols and you know how protocol stacks of building stuff. While no one is dominant industry friend of the Cube also keep alumni CEO of Snapper Out.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Amazon | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Microsoft | ORGANIZATION | 0.99+ |
Dominic Wilde | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
China | LOCATION | 0.99+ |
Andre | PERSON | 0.99+ |
Hewlett Packard | ORGANIZATION | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
ORGANIZATION | 0.99+ | |
January 2019 | DATE | 0.99+ |
Microsoft Ventures | ORGANIZATION | 0.99+ |
twenty five million | QUANTITY | 0.99+ |
fifteen years | QUANTITY | 0.99+ |
Dom | PERSON | 0.99+ |
Norwest | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
Lightspeed | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
Snape | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Siri | TITLE | 0.99+ |
Snapper | ORGANIZATION | 0.99+ |
One | QUANTITY | 0.99+ |
SnapRoute | ORGANIZATION | 0.99+ |
Andreas | PERSON | 0.98+ |
first | QUANTITY | 0.98+ |
Dominic | PERSON | 0.98+ |
one example | QUANTITY | 0.98+ |
Lightspeed Ventures | ORGANIZATION | 0.98+ |
one blob | QUANTITY | 0.98+ |
first customers | QUANTITY | 0.98+ |
Mohr | ORGANIZATION | 0.97+ |
one | QUANTITY | 0.97+ |
DH Norwest | ORGANIZATION | 0.97+ |
Teo | PERSON | 0.97+ |
Visa | ORGANIZATION | 0.97+ |
iTunes | TITLE | 0.96+ |
Snap rows | ORGANIZATION | 0.95+ |
Taelon | PERSON | 0.95+ |
Andi | ORGANIZATION | 0.94+ |
C. S. P | ORGANIZATION | 0.93+ |
www | OTHER | 0.91+ |
Lightspeed Norwest | ORGANIZATION | 0.9+ |
dot com | ORGANIZATION | 0.9+ |
two other services | QUANTITY | 0.89+ |
PSE | ORGANIZATION | 0.89+ |
next couple of years | DATE | 0.89+ |
Kiev | LOCATION | 0.88+ |
Ali Baba | PERSON | 0.86+ |
big | EVENT | 0.84+ |
Cuba Netease | ORGANIZATION | 0.84+ |
Christmas Eve | EVENT | 0.82+ |
Angela | PERSON | 0.81+ |
Brown | PERSON | 0.78+ |
Lennox | ORGANIZATION | 0.77+ |
Visa Cooper | ORGANIZATION | 0.76+ |
Arax | ORGANIZATION | 0.75+ |
twenty five years | QUANTITY | 0.75+ |
Gail | PERSON | 0.72+ |
Dr | PERSON | 0.72+ |
three | DATE | 0.71+ |
Cuba | LOCATION | 0.7+ |
two | DATE | 0.7+ |
blue chips | ORGANIZATION | 0.69+ |
CUBE | ORGANIZATION | 0.67+ |
SnapRoute | TITLE | 0.65+ |
big wave of | EVENT | 0.65+ |
CEO | PERSON | 0.64+ |
CUBEConversation | EVENT | 0.63+ |
Pradeep Sindhu, Cofounder and CEO, Fungible | Mayfield50
>> From Sand Hill Road, in the heart of Silicon Valley, it's theCUBE! Presenting the People First Network, insights from entrepreneurs and tech leaders. >> Hello everyone, I'm John Furrier with theCUBE. We are here on Sand Hill Road at Mayfield's Venture Capital Headquarters for the People First Network. I'm here with Pradeep Sindhu, who's the co-founder of Juniper Networks and now the co-founder and CEO of Fungible. Thanks for joining me on this special conversation for the People First Program. >> Thank you, John. >> So I want to talk to you about entrepreneurship. You're doing a new startup, you've been so successful as an entrepreneur over the years, uh you keep building a great company at Juniper Networks, everyone kind of knows the success there, great success. We've interviewed you before on that, but now you got a new startup! >> I do. >> You're building a company I thought startups were for young people. (Pradeep laughs) Come on! We're nine years into our startup, we're still a startup. >> Well, I'm not quite over the hill yet. (John Laughs) One of the reasons I jumped back in to the startup world was I saw an opportunity to solve a very important industry problem and to do it rapidly and so, I took the step. >> Well, we're super excited that you shared your vision with us and folks can check that video out on theCUBE and deep dive on the future of that startup. So, it's exciting, check it out. Entrepreneurship has changed and one of the things that we're talking about here is how things have changed just since the last time you've done a round. I mean, you're now a couple years in, you've been stealth for a while building out this amazing chip, the the Data Processing Unit, the DPU. What's different about building companies now? I mean, are you a unicorn? You have a billion-dollar evaluation yet? I mean, that's the new bar, it's different. What are some of the differences now in building a company? >> You know, one thing, John, that I saw is a clear difference between when I started Juniper and started Fungible, is that the amount of bureaucracy and paperwork that one has to go through is tremendously larger. And this was disappointing because one of the things that the US does very well is to keep it light and keep it fast so that it's easy for people to create new companies. That was one difference. The other difference that I saw was actually reluctance on the part of Venture to take big bets. Because people had gotten used to the idea of a quick turn around with maybe a social media company or something. Now, you know, my tendency to work on problems is I tend to work on fundamental problems that take time to do, but the outcome is potentially large. So, I'm attracted to that kind of problem. And so, the number of VCs that were willing to look at those kinds of problems were far fewer this time around than last time. >> So you got some no's then? >> Of course, I got no's. Even from people that-- >> You're the Founder of Juniper Networks, you've done amazing things, like you created billions of dollars of value, you should be gold-plated. >> What you did 20 years ago only goes so far. I think what what people were reluctant, and remember, I started Fungible in 2015. At that time, silicon was still a dirty word. I think now there are several people who said, no, we're regretting because they see that it's kind of the second coming of silicon and it's for reasons that we have talked about in the other discussion that, you know, Moore's Law is coming to a close. 
And the largesse that it has been distributing over the last 30, 40 years is going away, so what we have to do is innovate on silicon. You know, as we discussed, the world has only seen a few architectures for computing engines on silicon. One of the things that makes me very happy is that now people are going to apply their creativity to painting on this canvas. >> So, silicon's got some new life blood. What's your angle with your silicon strategy? >> So, our silicon strategy is really to focus on one aspect of computations in the data center, and this aspect we call Data Centric Computing. Data Centric Computing is really computing where there's a lot more movement of data and a lot less arithmetic on data. And today, given scaled out architectures, data has to move and be stored and retrieved and so on as much as it has to be computed on. So, existing engines are not very good at doing these data centric computations, so we are building a programmable DPU to actually do those computations much, much better than any engine can today. >> And that's great. And just a reminder, we've got a deep dive on that topic, so check out the video on that. So, I've got to ask you the question: why were people resistant to the silicon trend? Was it trendy? Was it the lack of information? You almost see people less informed on computer architecture these days as people blitzscale SaaS-based businesses. Cloud certainly is great for that, but there's now this renaissance. Why was it, what was the problem? >> I think the problem is very easy to identify. Building silicon is expensive. It takes a very specialized set of skills. It takes a lot of money, and it takes time. Well, anything that takes a long time is risky. And Venture, while it likes risk, tries to minimize it. So, it's completely understandable to me that, you know, people don't want to put money in ventures that might take two, three years. Actually, you know, going back to the Juniper era, there are Venture folks, I won't name them, who said, well, if you could do this thing in six months, we're in, but otherwise no. >> How long did it take? >> 2 1/2 years. >> And then the rest is history. >> Yeah. >> So, there were a lot of naysayers; it's just kind of horses for courses, as they say, that expression. All right, so now with your experience, okay, you got some no's. How did that make you feel? You're like, damn, I've got to get out and do the rounds? >> Actually-- >> You just kind of moved on or? >> I just moved on because, you know, the fact that I did Juniper should not give me any special treatment. It should be the quality of the idea that I've come up with. And so, what I tried to do, my response, was to make the idea more compelling, sharpen it further, and try to convince people that, hey, there was value here. I think that I've not been often wrong about predicting things maybe two, three years out, so on the basis of that people were willing to give me that credibility, and so there were enough people who were interested in investing. >> What did you learn in the process? What was the one thing that you sharpened pretty quickly? Was it the story, was it the architecture message? What was the main thing that you just had to sharpen really fast? >> The thing I had to sharpen really fast was that while the technology we were developing is disruptive, customers really, really care that they don't want to be disrupted.
They actually want the insertion to be smooth. And so, this is the piece that we had to sharpen. Anytime you have a new technology, you have to think about, well, how can I make it easy for people to use? This is very, very important. >> So the impact to the architecture itself, if it was deployed in the use case, and then look at the impact of ripple effect. >> For example, you cannot require people to change their applications. That's a no-no. Nobody's going to rewrite their software. You also probably don't want to ask people to change their network architecture. You don't want to ask people to change their deployment model. So, there are certain things that need to be held constant. So, that was a very quick learning. >> So, one of the other things that we've been talking about with other entrepreneurs is okay, the durability of the company. You're going down, playing the long game, but also innovation and and attracting people and so you've done, built companies before, as with Juniper, and you've worked with a great team of people in your network. How did you attract people for this? Obviously, they probably were attracted on the merit of the idea, but how do you pick people? What's the algorithm? What's the method that you use to choose team members or partners? Because that's also super important. If you got a gestation period where you're building out. You got to have high quality DNA. How do you make that choice? What's the thought process? >> So John, the the only algorithm that I know works is to look for people that are either known to you directly or known to somebody that you trust because in an interview, it's a hit or miss. At least, I'm not so good at interviewing that I can have a 70, 80% success rate. Because people can fake it in an interview, but you cannot fake it once you've worked with somebody, so that's one very important test. The other one was, it was very important for me to have people who were collaborative. It is possible to find lots of people who are very smart but they are not collaborative. And in an endeavor like the one we're doing, collaboration is very important, and of course the base skill set is very important so, you know, almost half of our team is software because we are-- >> It's a programmable chip. >> It's a programmable chip. We're writing our own operating system, very lightweight. So, you need that combination of hardware and software skills which is getting more and more scarce regrettably. >> I had a chat with Andy Bechtolsheim at VMworld and he and I had a great conversation similar to this, he said, you know, hardware is hard, software is easier, (laughs) and that was his point, and he also was saying that with merchant silicon, it's the software that's key. >> It is absolutely the key. Software, you know, software is always important. But software doesn't run on air. We should also remember that. And there are certain problems, for example, switching packets inside a data center where the problem is reasonably well-solved by merchant silicon. But there are other problems for which there is no merchant silicon solution, like the DPU that we're talking about. Eventually, there might be. But today there isn't. So, I think Apple is a great example for me of a company that understands the value of software hardware integration. Everybody thinks of Apple as a software only company. They have thousands of silicon engineers, thousands. If you look at your Apple Watch, there are probably some 20 chips inside it. You look at the iPhone. 
It won't do the magic that it does without the silicon team that they have. They don't talk about it a lot on purpose because-- >> 'Cause they don't want a China chip in there. >> Well, they don't want a China chip, but not only that, they don't know to advertise. It's part of their core value. >> Yeah. >> And so, as long as people keep believing that everything can be done in software, that's good for Apple. >> So, this is the trend, and this is why, Larry also brought this up years ago when he was talking about Oracle. He tried to make the play that Oracle would be the iPhone of the data center. >> Mm-hmm. >> Which people poo-pooed and they're still struggling with that idea, but he was pointing out the benefit of the iPhone, how they are integrating into the hardware and managing what Steve Jobs always wanted which was security number one >> Absolutely. >> for the customer. >> And seamlessness of use. And the reason the iPhone actually works as well as it does is because the hardware and the software are co-designed. And the reason it delivers the value that it does to the company is because of those things. >> So you see, this as a big trend, now you see that hardware and software will work together. You see cloud native heterogeneous almost server-less environments abstracted away with software and other components, fabric and specialized processors? >> Yes. >> And just application developers just programming at will? >> Correct, and edge data centers, so computing, I would say that maybe in a decade we will see roughly half of the computing and storage being done closer to the edge and the remaining half being done in these massively skilled data centers. >> I want to get geeky with you for a second, I want to ask you a question, I want to get your take on something. I've been thinking about and haven't really talked publicly about, kind of said on theCUBE a few times in a couple interviews, but I want to get your thoughts. There's been a big discussion about hybrid cloud, private cloud, multi-cloud, all that stuff going on, and I was talking with Andy Jassy, the CEO of Amazon, and Diane Greene at Google and I'm like okay, I can buy all these definitions, I don't believe any of 'em, but, you know, what the hell does that mean, what I know. I said to Diane Greene, I said, well, if everyone's going cloud operations, if cloud operations and edge is the new paradigm, isn't the data center just a big fat edge? And she looked at me and said, hmm, interesting. So, is the data center ultimately just a device on this network? If the operating model is horizontally scalable, isn't it just a a big fat edge? >> So you can, so here's the thing, right, if we talk about, you know, what is cloud? It's essentially a particular architecture, which is scaled out architecture uh to build a data center and then having this data center be connected by a very fast network. To consumers anytime, anywhere. So, let's take that as the definition of cloud. Well, if that's the definition of cloud, now you're talking about what kind of data centers will be present over time, and I think what we observed was it's really important for many applications to come, and with the advent of 5G, with the advent of things like augmented reality, now, with the advent of self-driving cars, a lot of computing needs to be done close to the edge because it cannot be done, because of laws of physics reasons, it cannot be done far away. 
So, once you have this idea that you also have small scale out data centers close to the edge, all these arguments about whether it's a hybrid cloud or this cloud or that cloud, they kind of vanish because-- >> So, you agree then, it's kind of like an edge? >> It is. >> Because it's an operational philosophy if you're running it that way, then it's just what it is, it's a scale out entity. >> Correct. >> It could be a small sensor network or it could be a data center. >> Correct. So, the key is actually the operational model and the idea of using scaled out design principles, which is don't try to build 50,000 different types of widgets which are then hard to manage. Try to build a small set of things, tinker toys that you can connect together in different ways. Make it easy to manage, manage it using software, which is then centralized by itself. >> That's a great point. You you jumped the gun on me on this one. I was going to ask you that next question. As an entrepreneur who's looking at this new architecture you just mentioned, what advice would you give them? How should they attack this market? 'Cause the old way was you get a PowerPoint, you show a presentations of the VCs, they give you some money, you provision some hardware, you go on next generation, get a prototype, it's up and running, you got some users. Built it then you get some cash, you scale it (laughs). Now with this new architecture, what's the strategy of the eager entrepreneur who wants to create a valuable opportunity with this new architecture. What would you advise them? >> So I, you know, I think it really depends on what is the underlying technology that you have for your startup. There's going to be lots and lots of opportunities. >> Oh don't fight the trend, which is, the headwind would be, don't compete against the scale out. Ride that wave, right? >> Yeah, people who are competing against scale out by building large scale monolithic machines, I think they're going to have difficulty, there's fundamental difficulties there. So, don't fight the trend. There's plenty of opportunities for software. Plenty of opportunities for software. But it's not the vertical software stack that you have to go through five or six different levels before you get to doing the real work. It's more a horizontal stack, it's a more agile stack. So, if it's a software company, you can actually build prototypes very quickly today. Maybe on AWS, maybe on Google Cloud, maybe on Microsoft. >> So, maybe the marketing campaign for your company, or maybe the trend might that's emerging is data first companies. We heard cloud mobile first, cloud first, data first. >> Correct. We think that the world really, the world of infrastructure is going from compute centric to data centric. This is absolutely the case. So, data first companies, yes. >> All right, so final question for you, as someone who's had a lot of experience in building public company, multi-billions of dollars of value, embarking on a big idea that that we like, I love the idea. A lot of people struggle with the entrepreneurial equation of how to leverage their board, how to leverage their investors and advisors and service providers. What would you share to the folks watching that are out there that have struggled? Some think, oh the VCs, they don't add value. Some do, some don't. There's always missed reactions. There's different, different types out there. Some do, some don't. But in general, it's about leveraging the resources and the people involved. 
How should entrepreneurs leverage their advisors, their board, their investors? >> I think it's very important for an entrepreneur to look for complementarity. It's very easy to want to find people that think like you do. If you just find people that think like you do, you're not, they're not going to find weaknesses in your arguments. It's more difficult, but if you look to entrepreneurs to provide complementarity, you look to advisors to provide the complementarity, look to customers to give you feedback, that's how you build value. >> Pradeep, thanks so much for sharing the insight, a lot of opportunities. Thanks for sharing here on-- >> Thank you, John. >> The People Network. I'm John Furrier at Mayfield on Sand Hill Road for theCUBE's coverage of the People First Network series, part of Mayfield's 50th Anniversary. Thanks for watching. (upbeat music)
SUMMARY :
in the heart of Silicon Valley, it's theCUBE! and now the co-founder and CEO of Fungible. So I want to talk to you about entrepreneurship. I thought startups were for young people. One of the reasons I jumped back in to the startup world and deep dive on the future of that startup. is that the amount of bureaucracy and paperwork Even from people that-- You're the Founder of in the other discussion that, you know, So, silicon's got some new life blood. on one aspect of computations in the data center So, I got to ask you the question, So, it's completely understandable to me that, you know, of naysayers, it's just categorical kind of like, you know, I just moved on because, you know, you have to think about, well, So the impact to the architecture itself, So, there are certain things that need to be held constant. on the merit of the idea, but how do you pick people? is to look for people that are either known to you directly So, you need that combination he said, you know, hardware is hard, software is easier, It is absolutely the key. but not only that, they don't know to advertise. And so, as long as people keep believing that everything and this is why, Larry also brought this up years ago is because the hardware and the software are co-designed. So you see, this as a big trend, being done closer to the edge and the remaining half I want to get geeky with you for a second, So, let's take that as the definition of cloud. Because it's an operational philosophy It could be a small sensor network and the idea of using scaled out design principles, 'Cause the old way was you get a PowerPoint, that you have for your startup. Oh don't fight the trend, which is, that you have to go through five or six different levels So, maybe the marketing campaign for your company, This is absolutely the case. and the people involved. look to customers to give you feedback, Pradeep, thanks so much for sharing the insight, I'm John Furrier at Mayfield on Sand Hill Road
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Diane Greene | PERSON | 0.99+ |
Pradeep Sindhu | PERSON | 0.99+ |
Juniper Networks | ORGANIZATION | 0.99+ |
Steve Jobs | PERSON | 0.99+ |
John | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Andy Jassy | PERSON | 0.99+ |
Andy Bechtolsheim | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Larry | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
John Laughs | PERSON | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
2015 | DATE | 0.99+ |
five | QUANTITY | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Pradeep | PERSON | 0.99+ |
People First Network | ORGANIZATION | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
thousands | QUANTITY | 0.99+ |
Sand Hill Road | LOCATION | 0.99+ |
VMworld | ORGANIZATION | 0.99+ |
nine years | QUANTITY | 0.99+ |
Juniper | ORGANIZATION | 0.99+ |
PowerPoint | TITLE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
second | QUANTITY | 0.98+ |
three years | QUANTITY | 0.98+ |
six months | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
billion-dollar | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
The People Network | ORGANIZATION | 0.98+ |
70, 80% | QUANTITY | 0.98+ |
20 years ago | DATE | 0.98+ |
one difference | QUANTITY | 0.97+ |
20 chips | QUANTITY | 0.97+ |
2 1/2 years | QUANTITY | 0.97+ |
one aspect | QUANTITY | 0.97+ |
six different levels | QUANTITY | 0.95+ |
50,000 different types | QUANTITY | 0.95+ |
Mayfield | LOCATION | 0.94+ |
Venture | ORGANIZATION | 0.93+ |
Data Centric | ORGANIZATION | 0.92+ |
billions of dollars | QUANTITY | 0.91+ |
Apple Watch | COMMERCIAL_ITEM | 0.9+ |
one thing | QUANTITY | 0.9+ |
Fungible | ORGANIZATION | 0.9+ |
50th Anniversary | QUANTITY | 0.88+ |
multi-billions of dollars | QUANTITY | 0.87+ |
Data Centric Computing | ORGANIZATION | 0.85+ |
People First Program | ORGANIZATION | 0.83+ |
theCUBE | ORGANIZATION | 0.8+ |
couple interviews | QUANTITY | 0.77+ |
thousands of silicon | QUANTITY | 0.75+ |
team | QUANTITY | 0.72+ |
one very important test | QUANTITY | 0.71+ |
Mayfield | ORGANIZATION | 0.71+ |
Google Cloud | TITLE | 0.71+ |
China | OTHER | 0.71+ |
US | ORGANIZATION | 0.69+ |
years ago | DATE | 0.67+ |
half | QUANTITY | 0.66+ |
Simon Crosby & Chris Sachs, SWIM | CUBE Conversation
>> Hi, I'm Peter Burris and welcome to another Cube Conversation. We're broadcasting from our beautiful Palo Alto studios, and this time we've got a couple of great guests from SWIM. One of them is Chris Sachs, who's the founder and lead architect, and the other one is Simon Crosby, who's the CTO. Welcome to the Cube, guys. >> Great to be here. >> Thank you. >> So let's start. Tell us a little bit about yourselves. Well, Chris, let's start with you. >> So my name's Chris Sachs. I'm a co-founder of SWIM, and my background is embedded and distributed systems, and bringing those two worlds together. I've spent the last three years building software from first principles for edge computing. >> But embedded, very importantly, that's small devices, highly distributed with a high degree of autonomy-- >> Chris: Yes. >> And how they will interact with each other. >> Right. You need both the small footprint and you need to scale down and out, is one thing that we say. People get scaling out in the cloud, scaling up and out. For the edge, you need to scale down and out. There are similarities to how clouds scale, and some very different principles. >> We're going to get into that. So Simon, CTO. >> Sure, my name is Simon Crosby. I came this way courtesy of being an academic, a long time ago, and then doing startups. This is startup number five for me. I was CTO and founder at XenSource. We built the Xen hypervisor. Also at Bromium, where we did micro-virtualization, and I'm privileged to be along for the ride with Chris. >> Excellent. So guys, the SWIM promise is edge AI. I like that, down and out. Tell us a little bit about it, Chris. >> So one of the key observations that we've made over the past half decade is that there's a whole lot of compute cycles being showered on planet Earth. ARM is shipping five billion chips a quarter. There's a tremendous amount of computing generating a tremendous amount of data, and it's trapped at the edge. There are physics problems and economic problems with backhauling it all to the cloud, but there's tremendous value there: you're capturing the functionality of the world on these chips. >> We like to say that if software's going to eat the world, it's going to eat it at the edge. Is that kind of what you mean? >> Yes. >> That's right. >> And when you decide you want to eat the edge, you run into problems very quickly with the traditional way of doing things. So one example is: where does your database live if you live on the edge? Which telephone pole are you going to put your database node in? >> Simon: How big does this need to be? >> There are a number of decisions that are very difficult to make. So SWIM's promise is that you have some advantages as well, in that billions of clock cycles go by on these chips in between network packets. And if you can figure out how to squeeze your software into these slop cycles between network packets, you actually have a super computer, a global super computer, on which you can do machine learning. You can try and predict the future of how physical systems are going to play out-- >> Hence your background in distributed systems, because the goal is to try to ensure that the network packets are as productive as possible. >> Chris: Exactly. >> Here's another way of looking at the problem. If you come at it top down, it's reasonable to think of things in the future, all sorts of things, which have got a computer and maybe some networking in them, presenting to you a digital twin of themselves.
Where does the thing come from? >> Now, describe digital twin. We've done a lot of research on this, but it's still a relatively novel concept. GE talks about it. IBM talks about it. When we say digital twin, we're talking about the simulacrum, the digital representation of an actual thing, right? >> Of an actual thing. There are a couple of ways you can get there. One way is, if you give me the detailed design of a thing and exactly how it works, I can give you all of that detail, and maybe (mumbles) can help use that to find a problem. The other way is to try and construct it automatically, and that's exactly what SWIM does. >> So it takes the thing and builds models around it that are-- >> Well, so what do things do? Things give us data. So the problem then becomes: how can I build a digital twin just given the data? Just given the observations of what this thing is seeing, what its sensors are bleating about, what things near it are saying. How can I build a digital twin which will analyze itself, tell you what its current state is, and predict the future, just from the data? >> All right, so the bottom line is that you're providing a facility to help model real world things that tend to operate in an analog way, and turning them into digital representations that can then be a full member, in fact perhaps even a superior member, in a highly distributed system of how things work together. >> Yes. >> Got that right. >> A few key points: digital twins are in the loop with the real world, and they are in the loop with their neighbors. And you start with digital twins that reflect the physical world, but they don't end there. It's not just physical things; you can have digital twins of concepts as well, and other higher order notions. And from the masses of data that you get from physical devices, you can actually infer the existence of twins where you don't even have a sensor. >> It's making it real. So you could have a digital twin where, if you happen to be tracking all of the buses in downtown San Francisco, you can infer PM10 pollution as a virtual sensor on a bus. And then you can pretty quickly work out something which is of value to somebody who's trying to sell insurance, for example. That's not a real sensor on every bus, but you can compose these things, given that you have these other digital twins which are manifesting themselves. >> So folks talk about the butterfly effect and things like chaos theory, which is a butterfly affecting the weather in China. But what we're talking about is that locality really matters. It matters in real systems, and it matters in computers. If you have something that's generating data, more than likely that thing is going to want its own data, because of locality. But the things near it are also going to want to be able to infer or understand the behavior of that thing, because it's going to have a consequential impact on them. >> Correct, so I'll give you two examples of that. We've been working in an aircraft manufacturing facility. The virtual twin here is some widget which has an RFID tag in it. We don't know what that is. We just know there's a tag, and we can place it in space because it gets seen by multiple sensors and we triangulate. And then, as these tags come together, they make an aircraft sub-assembly. The meaning of an aircraft sub-assembly is kind of another thing, but the nearness, the locality, is what gets you there. So I can say all these tags came together; let's track that as a superior object. There's a containment notion there. And suddenly, we're tracking whole assemblies instead of widgets.
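(Editor's aside: the RFID example is essentially triangulation plus proximity grouping. The toy sketch below, with invented reader positions, range readings, and a made-up grouping radius, is only meant to illustrate how co-located tags could be promoted into a single "assembly" twin; it is not SWIM's actual model.)

```python
# Toy illustration of the aircraft-assembly idea: estimate RFID tag positions
# from several readers, then group tags that sit close together into one
# "assembly" twin. Reader coordinates and ranges are invented.
import math

READERS = {"r1": (0.0, 0.0), "r2": (10.0, 0.0), "r3": (0.0, 10.0)}   # known reader positions (metres)

def locate(tag_ranges):
    # Crude position estimate: a weighted centroid of reader positions, with
    # nearer readers weighted more. Real multilateration would solve least squares.
    wx = wy = wsum = 0.0
    for reader, dist in tag_ranges.items():
        x, y = READERS[reader]
        w = 1.0 / max(dist, 0.1)
        wx, wy, wsum = wx + w * x, wy + w * y, wsum + w
    return (wx / wsum, wy / wsum)

def group_into_assemblies(tag_positions, radius=0.5):
    # Tags within `radius` metres of an existing group join it; otherwise they
    # start a new one. Each resulting group becomes a candidate "assembly" twin.
    assemblies = []
    for tag, pos in tag_positions.items():
        for assembly in assemblies:
            if any(math.dist(pos, tag_positions[other]) < radius for other in assembly):
                assembly.add(tag)
                break
        else:
            assemblies.append({tag})
    return assemblies

observations = {                                       # invented range readings, in metres
    "tag-17": {"r1": 3.2, "r2": 7.1, "r3": 7.0},
    "tag-18": {"r1": 3.4, "r2": 6.9, "r3": 7.2},       # sits next to tag-17
    "tag-99": {"r1": 9.0, "r2": 2.0, "r3": 9.5},
}
positions = {tag: locate(ranges) for tag, ranges in observations.items()}
print(group_into_assemblies(positions))                # e.g. [{'tag-17', 'tag-18'}, {'tag-99'}]
```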
>> And this is where the AI comes in, because now the AI is the basis for recognizing the patterns of these tags and being able to infer, from the characteristics of these patterns, that it's a sub-assembly. Have I got that right? >> Right. There's a unique opportunity that is opened up in AI when you're watching things unfold live, in that you have this great unifying force to learn off of, which is causality. It's what everything has in common; it's the data as it unfolds through time. And what do you do when you have billions of clock cycles to spare between network packets? Well, you can make a guess about what your particular digital twin might see next. So you take a guess based on what your state is and what the sensors around you are saying, and just make a guess. Then you see what actually happens. You measure the error between what you predicted would happen and what actually happened, and you can correct for that. And you can do that ad infinitum, just trillions of times over the course of a year. You make small corrections for how you think your particular system will evolve, whether it's a street of traffic lights trying to predict when it's going to change, when cars are going to show up, when pedestrians are going to push buttons, or it's a machine, a conveyor belt or a motor in a factory, trying to predict when it might break down. You can learn, from these specific systems, very specific models of how they're going to evolve, and you can play reality forward. You learn a simulation, and you can predict your own future. >> And there's a very cool thing that shows up from that. So instead of, say, let's take a city and all of its lights, instead of trying to gather all that data from the city and then go solve a big model, which is the cloud approach to doing this, the big data in the cloud approach, essentially each one of these digital twins is solving its own problem of how do I predict my own future? So instead of solving one big model, you'll have 200 different intersections all predicting their own future, which is totally cool, because it distributes well in this fabric of spare CPU cycles and can be very efficient to compute. >> And a consequence of that is, again, you can get these very rich patterns that these things can then learn more from, each acting autonomously, individually and as groups. >> Even more than that. There's an even cooler thing. Imagine I set you down by an intersection and I said, "Write me a program for how this thing is going to behave." First of all, you wouldn't know how to do it. Second, there aren't enough humans on planet Earth to do this. What we're saying is that we can construct this program from the data, from this thing as it evolves through time. We'll construct the program, and it will be merely a learned model. And then you could ask it how it's going to behave in the future. You could say, "Well, what if I do this? What if a pedestrian pushes this button? What will the response be?" So effectively, you're learning a program. You're learning the digital twin just from the data.
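(Editor's aside: the guess, observe, measure the error, correct, repeat loop described here is essentially online learning running independently inside each digital twin. The sketch below, a one-step predictor for a traffic signal's green-phase duration with an invented learning rate, is illustrative only and not SWIM's actual learning machinery.)

```python
# Minimal sketch of a digital twin that predicts its own next observation,
# measures its error, and nudges its internal model, ad infinitum.
# The "green phase duration" feature and learning rate are illustrative only.

class IntersectionTwin:
    def __init__(self, learning_rate: float = 0.05):
        self.estimate = 30.0        # initial guess: the green phase lasts ~30 s
        self.lr = learning_rate

    def predict(self) -> float:
        # Guess what we will see next, using spare cycles between packets.
        return self.estimate

    def observe(self, actual: float) -> float:
        # Compare the guess with what actually happened and correct a little.
        error = actual - self.estimate
        self.estimate += self.lr * error
        return error

# Each intersection runs its own loop; there is no central model to train.
twin = IntersectionTwin()
for actual_green_seconds in [28.0, 31.5, 29.0, 35.0, 30.5]:   # streaming observations
    guess = twin.predict()
    err = twin.observe(actual_green_seconds)
    print(f"predicted {guess:5.1f}s  actual {actual_green_seconds:5.1f}s  error {err:+5.1f}s")
```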
>> All right, so how does SWIM do this? We know now what it is, and we know that it's stealing cycles from CPUs that are mainly set up to gather, to sense things, and to package data up and send it off somewhere else. But how does it actually work? What does the designer, the developer, the operator do with SWIM that they couldn't do before? >> So SWIM is a tiny, vertically integrated software stack that has all the capabilities you'd find in an open source cloud platform. You have persistence. You have message dispatch. You have peer-to-peer routing. You have analytics and a number of other capabilities. But SWIM hides that and takes care of it, abstracts over it. Rather than thinking about where you place compute, you think: what is my model? What is my digital twin? What am I related to? And SWIM dynamically maps these logical models to physical hardware at run time and dynamically moves these encapsulated agents around as needed, based on the loads and the demand in the network. And in the same way that-- >> And the events? >> Yes, and the events. And in the same way that, if you're using Microsoft Word, you don't really ask what CPU core it's running on, who knows and who cares, it's a solved problem, we look, from the ground up, at the edge as just one big, massively multi-core computer. And there are similar principles to apply in terms of how you maintain consistency and how you efficiently route data, which you can abstract over and eliminate as a problem that you have to be concerned about as a developer or a user who just wants to ingest some data and get insights. >> So I'm going to make sure I got that. So if I look at the edge, which might have 200, might have 10 thousand sensors associated with it, and we can imagine, for example, a level of complexity like what happens on a drilling platform in an oil field, there are probably 10 thousand sensors on that thing, all of these different things, each of those sensors doing something and dispatching information. What you're doing is basically saying we can now look at those sensors, which can each do their own thing, but we can also look at them as a cluster of processing capability. We'll put a little bit of software on there that will provide a degree of coordinated control so that models can-- >> So two things. >> Build up out of that? >> So first off, SWIM itself builds a distributed fabric on whatever compute is available, and you can smear SWIM between an embedded environment and a VM in the cloud. We just don't care. >> But the point is anything you point it at becomes part of this cluster. >> Yes, but the second level of this is when you start to discover the entities in the real world, and you begin to discover the entities from that data. So I'll get all this gray stuff; I don't really know what it means, but I'm going to find these entities and what they're related to, and then, for each entity, instantiate one of these digital twins as an active agent, essentially a stateful microservice, which is then just going to consume its own real world data, do its thing, and then present what it knows via an API or graphical UI components. >> So I'm an operator. I install. What do I do to install? >> You start a process on whatever devices you have available. SWIM is completely self-contained and has no external dependencies. So we can run as the (mumbles) analytics box or even without an operating system. >> So I basically target SWIM at the device and it installs? >> Chris: Correct. >> Once it's installed, how am I then acquiring it through software development?
>> Ultimately, in this edge world, you've asked the key question, which is, how the hell do I get hold of this stuff and how does it run? And I don't think the world knows the answer to all these questions. So, for example, in the traffic use case, the answer is this: we've published an API, it happens to be a (mumbles), but who cares, where people like Uber and Lyft or UPS can show up and ask what this traffic light is going to do in the future. And they just hit that. What they're doing is getting the insights of digital twins in real time, as a service. That's kind of an interesting thing to do, right? But you might find this embedded in a widget, because it's small enough to be able to do that. You might find that a customer installs it in a couple of boxes and it just runs. We don't really care. It will be there, and it's trivial to run. >> So you're going to be moving it into people who are building these embedded fixtures? >> Sure. >> Yes. >> Sure, but the key point here is that, particularly on theCUBE, you're hearing all these wonderful stories about DevOps and (mumbles) and all this guff up in the cloud. Fine, that's where you want those people to be. >> Don't call it guff. (laughs) >> But at the edge, no (mumbles). There aren't enough humans to run this stuff, so it's got to be completely automatic. It's got to just wake up, run, find all the compute, run ceaselessly, distribute load, be resilient, be secure. All these things just have to happen. >> So SWIM becomes a service that is shipped with an embedded system. >> Possibly, or there is a potential outcome where it's delivered as software which runs on a box close to some widget. >> Or rolled out as a software update with some existing manufacturers. >> In this particular case of traffic, we should be on 60 thousand intersections by the end of this year. The traffic infrastructure vendor, the vendor that delivers the traffic management system, just rolls out an upgrade, and suddenly a whole bunch of new intersections appear in a cloud API. And an Uber or a Lyft or whatever is just hitting that thing and finding out what they are. >> Great, and so as developers, am I going into a SWIM environment and doing anything, or is this just the way that the data's being captured? >> Simon: So we take data. >> And the patterns are being identified. >> We take data, turn it into digital twins with intelligent things to say, and expose that as APIs or as UI components. >> So that now the developers can go off and use whatever tools they want and just invoke the service through the API. >> Bingo, that's right. So developers, if they're doing something, just hit digital twins.
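(Editor's aside: "developers just hit digital twins" could look something like the call below. The host name, route, and response fields are hypothetical stand-ins, not the actual published traffic API.)

```python
# Hypothetical consumer of a traffic-intersection twin API, in the spirit of the
# Uber/Lyft example above. The host name, path, and JSON fields are invented.
import requests

API = "https://traffic-twins.example.com/v1"

def next_phase(intersection_id: str) -> dict:
    # Ask the intersection's digital twin what it expects to do next.
    resp = requests.get(f"{API}/intersections/{intersection_id}/prediction", timeout=2)
    resp.raise_for_status()
    return resp.json()   # e.g. {"phase": "green_ns", "seconds_until_change": 12.4}

if __name__ == "__main__":
    prediction = next_phase("palo-alto-university-and-middlefield")
    print(f"light changes in ~{prediction['seconds_until_change']:.0f}s "
          f"(current phase: {prediction['phase']})")
```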
>> All right, so we've talked about a couple of examples. We've talked a little bit about the traffic example and mentioned being in an oil field. What are some of the other big impacts? As this thing gets rolling, what kind of problems is this going to allow us to solve? Not just one, because there's definitely going to be a network effect here, right? >> Sure, so the interesting thing about the edge world is that it's massively diverse. Even one cookie factory is different from another cookie factory: they might have the same equipment, but they're in different places on planet Earth, they may have different operators and everything else, so the data will be different and everything else. So the challenge in general with the edge environment has been that we've been very professional services centric; people bring in (mumbles) people and try to solve a local problem, and it's very expensive. SWIM has this opportunity to basically just show up, consume this gray data, and tell you real stuff without enormous amounts of semantic knowledge a priori. So we have this ability to conquer this diversity problem, which is characteristic of the edge, and also to come up with highly realistic and highly accurate models for this particular thing. I want to be very clear: the widget in chocolate factory A is exactly the same as the widget in chocolate factory B, but the models will be 100% different and totally (mumbles) at either place, because if the pipes go bang at 6 a.m. here, it's in the model. >> And SWIM has the opportunity to reach the 99.9% of data that currently is generated and immediately forgotten, because it's too expensive to store, it's too expensive to transport, and it's too expensive to build applications to use. >> We should talk about cost, because that's a great one. So if you wanted to solve the problem of predicting what the lights in Palo Alto are going to do for the next five minutes, that's heading towards 10 thousand dollars a month in AWS. SWIM will solve that problem for a tiny fraction of that, like less than a 100th, just on stranded CPU cycles lying around at the edge. And you save on bandwidth and a whole bunch of other things. >> Yeah, and that's a very important point, because the edge has been around for a while: operational technology. People have been doing this for a while, but not in a way that's naturally, easily programmable. You're bringing the technology that makes it easy to self-discover, simply by utilizing whatever cycles and whatever data's there and putting in persistence, making it really simple for that to be accessed through an API. And ultimately, it creates a lot of options on what you can do with your devices in the future. It makes existing assets more valuable, because you have options in what you can do with them. >> If you look at the traffic example, the AWS scenario is $50 per month per intersection. No one's going to do that. But if it's like a buck, I'm in. And you can do things, 'cause then it's worthwhile for Uber to hit that API. >> All right, so we've got to wrap this up. So, one way of thinking about it, and there are so many metaphors one could invoke, is that this is kind of like the teeth that are going to eat the real world. The software teeth that are going to eat the real world at the edge. >> So if I can leave with one thought: SWIM loosely stems from software in motion. And the idea is that, at the edge, you need to move the software to where the data is. You can't move the data to where the software is. The data is huge. It's immobile. And the quantities of data are staggering; you essentially have a world of spam bots out there. It's intractable. But if you move the software to where the data is, then the world's yours. >> One thing to note is that software's still data. It just happens to be extremely well organized data. So the choice is: do you move all the not-particularly-well-organized data somewhere where it can be operated on, or do you move the really well organized and compact thing? And information theory says move the most structured thing you possibly can, and that's the application, the software itself. All right: Chris Sachs, founder and lead architect of SWIM. Simon Crosby, CTO of SWIM.
Thank you very much for being on the Cube. Great conversation. >> Thanks for having us. >> Good luck. >> Enjoy. >> And once again, I'm Peter Burris. And thank you for participating in another Cube conversation with SWIM. Talk to you again soon.
SUMMARY :
And the other one is Simon Crosby, who's the CTO. So let's start. And I've spent the last three years building software You need both the small footprint and you need We're going to get into that. and I'm privileged to be along for the ride with Chris. So guys, the SWIM promise is edge AI. So one of the key observations that we've made Is that kind of what you mean? And you start running into, And if you can figure out how to squeeze your software because the goal is to try to ensure presenting to you a digital twin of themselves. the digital representation of an actual thing, right? There are a couple of ways you can get there. and predict the future, just from the data? All right, so the bottom line is that you've got, And from the masses of data that you get And then you can pretty quickly work out But also, the things near it are also going to want to be able it's the locality that gets you there. because now, the AI is the basis And what do you do when you have billions of clock cycles So instead of say, let's take a city and all of its lights. And a consequence of that is, again, And then you could ask it the operator do with SWIM that they couldn't do before? And in the same way that-- And in the same way that you, So if I look at the edge, which might have 200, And you can smear SWIM But the point is anything you pointed at And you begin to discover the entities from that data. What do I do to install? on whatever devices you have available. the answer to all these questions. Sure, but the key point here is that But at the edge, no (mumbles). that is shipped with an embedded system. which runs on a box close to some widget. with some existing manufacturers. by the end of this year. This is just the way that the data's being captured. and expose that as APIs or as UI components. So that now the developers can go off So developers, if they're doing something, What are some of the other big impacts? So the challenge in general with the edge environment And SWIM has the opportunity to reach the 99.9% of data And you have say, bandwidth and a whole bunch of things. on what you can do with your devices in the future. And you can do things, that are going to eat the real world. You can't move the data to where the software is. So the choice is do you move Talk to you again soon.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Chris | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Simon Crosby | PERSON | 0.99+ |
Chris Sachs | PERSON | 0.99+ |
SWIM | ORGANIZATION | 0.99+ |
XenSource | ORGANIZATION | 0.99+ |
Simon | PERSON | 0.99+ |
China | LOCATION | 0.99+ |
100% | QUANTITY | 0.99+ |
Uber | ORGANIZATION | 0.99+ |
99.9% | QUANTITY | 0.99+ |
60 thousand insections | QUANTITY | 0.99+ |
UPS | ORGANIZATION | 0.99+ |
200 | QUANTITY | 0.99+ |
Lyft | ORGANIZATION | 0.99+ |
Each | QUANTITY | 0.99+ |
Xen | ORGANIZATION | 0.99+ |
6 a.m. | DATE | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Second | QUANTITY | 0.99+ |
two examples | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
10 thousand sensors | QUANTITY | 0.99+ |
ARM | ORGANIZATION | 0.99+ |
SWIM | TITLE | 0.99+ |
two things | QUANTITY | 0.99+ |
GE | ORGANIZATION | 0.99+ |
one thing | QUANTITY | 0.99+ |
trillions of times | QUANTITY | 0.99+ |
second level | QUANTITY | 0.99+ |
one thought | QUANTITY | 0.98+ |
UBER | ORGANIZATION | 0.98+ |
one | QUANTITY | 0.98+ |
three ways | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
One way | QUANTITY | 0.98+ |
one example | QUANTITY | 0.98+ |
end of this year | DATE | 0.98+ |
One | QUANTITY | 0.97+ |
200 different insections | QUANTITY | 0.97+ |
less than a 100th | QUANTITY | 0.97+ |
first | QUANTITY | 0.96+ |
two worlds | QUANTITY | 0.96+ |
First | QUANTITY | 0.96+ |
each entity | QUANTITY | 0.95+ |
each one | QUANTITY | 0.94+ |
each | QUANTITY | 0.94+ |
one way | QUANTITY | 0.91+ |
DevOps | TITLE | 0.9+ |
Microsoft | ORGANIZATION | 0.89+ |
10 thousand dollars a month | QUANTITY | 0.89+ |
Earth | LOCATION | 0.89+ |
PM10 | OTHER | 0.87+ |
IVM | ORGANIZATION | 0.86+ |
past half decade | DATE | 0.85+ |
billions of clock | QUANTITY | 0.85+ |
Bromium | ORGANIZATION | 0.85+ |
five billion chips a quarter | QUANTITY | 0.84+ |
Cube | ORGANIZATION | 0.84+ |
first principles | QUANTITY | 0.83+ |
a year | QUANTITY | 0.83+ |
one cookie factory | QUANTITY | 0.82+ |
San Francisco | LOCATION | 0.8+ |
Bingo | TITLE | 0.78+ |
one big model | QUANTITY | 0.78+ |
billions of clock cycles | QUANTITY | 0.76+ |