Breaking Analysis: Arm Lays Down the Gauntlet at Intel's Feet
>> Announcer: From the Cube's studios in Palo Alto and Boston, bringing you data-driven insights from the Cube and ETR. This is "Breaking Analysis" with Dave Vellante.

>> Exactly one week after Pat Gelsinger announced his plans to reinvent Intel, Arm announced version nine of its architecture and laid out its vision for the next decade. We believe this vision is extremely strong, as it combines an end-to-end capability from edge to cloud, to the data center, to the home and everything in between. Arm's aspirations are ambitious and powerful, leveraging its business model, its ecosystem and software compatibility with previous generations.

Hello everyone, and welcome to this week's Wikibon Cube Insights powered by ETR. In this Breaking Analysis we'll explain why we think this announcement is so important and what it means for Intel and the broader technology landscape. We'll also share some feedback that we received from the Cube community on last week's episode, and a little inside baseball on how Intel, IBM, Samsung, TSMC and the U.S. government might be thinking about the shifting landscape of semiconductor technology.

Now, there were two notable announcements this week that were directly related to Intel's announcement of March 23rd: the Armv9 news, and TSMC's plans to invest $100 billion in chip manufacturing and development over the next three years. That is a big number. It appears to trump Intel's planned $20 billion investment to launch two new fabs in the U.S. starting in 2024. You may remember, back in 2019 Samsung pledged to invest $116 billion to diversify its production beyond memory chips.

Why are all these companies getting so aggressive, and won't this cause a glut in chips? Well, first, China looms large and aims to dominate its local markets, which in turn is going to confer advantages globally. Second, there's a huge chip shortage right now, and the belief is that it's going to continue through the decade and possibly beyond. We're seeing a new inflection point in demand, as we discussed last week, stemming from digital, IoT, cloud, autos and new use cases in the home, as so well presented by Sarjeet Johal in our community. As to the glut, these manufacturers believe that demand will outstrip supply indefinitely, and they understand that a lack of manufacturing capacity is actually more deadly than an oversupply. Look, if there's a glut, manufacturers can cut production and take the financial hit, whereas capacity constraints mean you can miss entire cycles of growth and really miss out on the demand and the cost reductions. So all these manufacturers are going for it.

Now let's talk about Arm, its approach, and the announcements that it made this week. Last week we talked about how Pat Gelsinger's vision of a system on package was an attempt to leapfrog system on chip, or SoC. Arm is taking a similar system approach, but in our view it's even broader than the vision laid out by Pat at Intel. Arm is targeting the wide variety of use cases shown here. Arm's fundamental philosophy is that the future will require highly specialized chips, and Intel, as you recall from Pat's announcement, would agree. But Arm historically takes an ecosystem approach that is different from Intel's model. Arm is all about enabling the production of specialized chips that really fit a specific application. For example, think about the amount of AI going on in iPhones. They moved, if you remember, from fingerprint to face recognition. This requires specialized neural processing units, NPUs, that are designed by Apple for that particular use case. Arm is facilitating the creation of these specialized chips to be designed and produced by the ecosystem.

Intel, on the other hand, has historically taken a one-size-fits-all approach built around x86. Intel's design has always been about improving the processor, for example in terms of speed, density, adding vector processing to accommodate AI, et cetera. Intel does all the design and the manufacturing, and any specialization for the ecosystem is done by Intel. Much of the value that's added by the ecosystem has frankly been bending metal, or adding displays, or other features at the margin. But the advantage is that the x86 architecture is well understood. It's consistent, reliable, and let's face it, most enterprise software runs on x86. So, very different models historically, which, as we heard from Gelsinger last week, are going to change with the new trusted foundry strategy.

Now let's go through an example that might help explain the power of Arm's model. Let's say you're AWS and you're designing Graviton and Graviton2, or Apple designing the M1 chip, or Tesla designing its own chip, or any other company in any one of the use cases shown here. Tesla is a really good example. In order to optimize for video processing, Tesla needed to add specialized firmware code in the NPU for its specific use case within autos. It was happy to take an off-the-shelf CPU or GPU design and leverage Arm's standards there, and then it added its own value in the NPU. The advantage of this model is that Tesla could get to tape out in less than a year, versus what would normally take many years. Think of Arm as customizable Lego blocks that enable unique value-add by the ecosystem, with a much faster time to market. So like I say, Tesla goes from logical tape out, if you will, to Samsung and says, okay, run this against your manufacturing process, and it should all work as advertised by Arm. Tesla, interestingly, just as an aside, chose the 14 nanometer process to keep its costs down. It didn't need the latest and greatest density.

Okay, so you can see the big difference in philosophies historically between Arm and Intel, and you can see Intel vectoring toward the Arm model, based on what Gelsinger said last week, for its foundry business. Essentially, it has to. Now, Arm announced a new architecture, Armv9. v9 is backwards compatible with previous generations. Perhaps Arm learned from Intel's failed Itanium effort, for those who remember that one; it had no backward compatibility and it really floundered. As well, Arm adds some additional capabilities, and today we're going to focus on the two areas we've highlighted: the machine learning piece and security. Also take note of the callout: 300 billion chips. That's Arm's vision. That's a lot. And as we've said before, Arm's wafer volumes are 10X those of x86. We sound like a broken record: volume equals cost reduction. We'll come back to that theme a little bit later, but the simple math below shows why we keep repeating it.
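To put a number on why volume matters so much, here's a small sketch of the classic learning-curve, or Wright's Law, arithmetic that sits behind the "volume equals cost reduction" point. The 20% cost decline per cumulative doubling and the volumes below are illustrative assumptions, not figures from Arm, Intel or the foundries.

```python
# Wright's Law sketch: unit cost falls by a fixed percentage each time
# cumulative volume doubles. The 20% learning rate and the volumes are
# assumptions for illustration only.
import math

def unit_cost(cumulative_units, first_unit_cost, learning_rate=0.20):
    # progress exponent: log2 of the fraction of cost retained per doubling
    b = math.log2(1.0 - learning_rate)
    return first_unit_cost * cumulative_units ** b

base = unit_cost(1e9, 100.0)    # cost at 1 billion cumulative units
tenx = unit_cost(1e10, 100.0)   # cost at 10x that cumulative volume
print(f"relative unit cost at 10x volume: {tenx / base:.2f}")  # ~0.48
```

In other words, on a 20% curve a player with 10X the cumulative volume ends up at roughly half the unit cost, which is the economic logic behind our broken-record point.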
Now let's have a word on AI and machine learning. Arm is betting big on AI and ML, as are many others, and this chart really shows why. It's a graphic that shows ETR data on spending momentum and pervasiveness in the dataset, across all the different sectors that ETR tracks within its taxonomy. Note that ML/AI gets the top spot on the vertical axis, which represents net score; that's a measure of spending momentum, or spending velocity. The horizontal axis is market share, presence in the dataset. And we give this sector four stars to signify its consistent lead in the data. So, a pretty reasonable bet by Arm.
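For those who ask how net score is actually calculated, it's essentially the percentage of customers in the survey spending more on a platform minus the percentage spending less. Here's a quick sketch of that arithmetic; the survey percentages are made up for illustration and are not actual ETR data.

```python
# Net score, roughly as ETR describes it: percent of respondents spending more
# (new adoptions plus increases) minus percent spending less (decreases plus
# replacements). The survey percentages below are made up for illustration.
def net_score(adopting, increasing, flat, decreasing, replacing):
    total = adopting + increasing + flat + decreasing + replacing
    spending_more = (adopting + increasing) / total
    spending_less = (decreasing + replacing) / total
    return round((spending_more - spending_less) * 100, 1)

# e.g. 15% new adoptions, 40% increasing, 35% flat, 7% decreasing, 3% replacing
print(net_score(15, 40, 35, 7, 3))  # 45.0
```

A reading in the mid-forties, like the made-up example above, would be a very elevated net score.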
But the other area that we're going to talk about is security. At its vision day, Arm talked about its confidential compute architecture and these things called realms. Note on the left-hand side the data traveling across all the different use cases and around the world, and the callout from the CISO below; it's from a large public airline CISO who spoke at an ETR VENN roundtable. This individual noted that the shifting endpoints increase the threat vectors. We all know that. But Arm said something that really resonated. Specifically, they said that today there's far too much trust placed in the OS and the hypervisor that are running these applications, and their broad access to data is a weakness. Arm's concept of realms, shown on the right-hand side, underscores the company's strategy to remove the assumption that privileged software, like the hypervisor, needs to be able to see the data. By creating realms in a virtualized, multi-tenant environment, data can be better protected from memory leaks, which of course are a major opportunity that hackers exploit. So it's a nice concept: a way for the system to isolate a tenant's data from other users.

Okay, we want to share some feedback that we got last week from the community on our analysis of Intel. A tech exec from Citi pointed out that Intel really didn't miss mobile, as we said; it really missed smartphones. While this is kind of a minor distinction, it's important to recognize, we think, because Intel facilitated WiFi with Centrino under the direction of Paul Otellini, who, by the way, was not an engineer. I think he was the first non-engineer to be CEO of Intel; he was a marketing person by background. Ironically, Intel's work in WiFi connectivity actually enabled the smartphone revolution, and maybe that makes the smartphone miss by Intel all the more egregious, I don't know.

Now, the other piece of feedback we received related to our IBM scenario and our three-way joint venture prediction, bringing together Intel, IBM and Samsung in a triumvirate, where Intel brings the foundry and its process manufacturing, IBM brings its disaggregated memory technology, and Samsung brings its volume and its knowledge of driving volume down the learning curve. Let's start with IBM. Remember, we said that IBM, with POWER10, has the best technology in terms of this notion of disaggregating compute from memory and sharing memory in a pool across different processor types. So, a few things in this regard. IBM, when it restructured its microelectronics business under Ginni Rometty, catalyzed the partnership with GlobalFoundries, and the picture in the upper right shows the GlobalFoundries facility in Malta, New York, outside of Albany. The partnership included AMD and Samsung. But we believe that GlobalFoundries has backed away from some of its contractual commitments to IBM, causing a bit of a rift between the companies and leaving a hole in the original strategy. And evidently AMD hasn't really leaned in to move the needle in any way, and so the New York foundry is in a bit of a state of limbo with respect to its original vision.
Now, while Arvind Krishna was the face of the Intel announcement, and he clearly has deep knowledge of IBM's semiconductor strategy, Dario Gil, we think, is a key player in the mix. He's the senior vice president and director of IBM Research, and he is in a position to effect some knowledge sharing, and maybe even knowledge transfer, with Intel, possibly as it relates to disaggregated architectures. But questions remain as to how open IBM will be, and how protective it will be of its IP. As we said last week, it's got to have an incentive to do so.

Now, why would IBM do that? Well, it wants to compete more effectively with VMware, which has done a great job leveraging x86 and is the biggest competitive threat to OpenShift. So Arvind needs Intel chips to really execute on IBM's cloud strategy, because almost all of IBM's customers are running apps on x86. IBM's cloud and hybrid cloud strategy really needs to leverage that Intel partnership. Now, Intel, for its part, has great FinFET technology. FinFET is a technique that goes beyond CMOS. You mainframe folks might remember when IBM burned the boats on ECL, emitter-coupled logic, and then moved to CMOS for its mainframes. Well, this is the next generation beyond that, and it could give Intel a leg up on AMD's chiplet intellectual property, especially as it relates to latency. And there could be some benefits there for IBM. So maybe there's a quid pro quo going on.

Now, where it really gets interesting is that New York Senator Chuck Schumer is keen on building up an alternative to Silicon Valley in New York; call it Silicon Alley. So it's possible Intel gets involved here, and by the way, Intel has really good process technology. This is an aside, but Intel really allowed TSMC to run the table with the whole seven nanometer versus 10 nanometer narrative. TSMC was at seven nanometer, Intel was at 10 nanometer, and we've said in the past that Intel's 10 nanometer tech is pretty close to TSMC's seven. So Intel's ahead in that regard, even though in terms of, you know, the actual thickness and density metrics, it's not. These are sort of games that the semiconductor companies play.
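As a quick gut check on that aside: publicly cited density estimates, which you should treat as rough approximations rather than anything we've measured, put Intel's 10 nanometer node at around 100 million transistors per square millimeter and TSMC's N7 at a bit over 90, which is why the node names alone don't settle the density question.

```python
# Rough check on the "Intel 10nm is close to TSMC 7nm" point. The density
# figures (million transistors per mm^2) are commonly cited public estimates,
# not our own measurements.
density_mtr_per_mm2 = {
    "Intel 10nm": 100.8,
    "TSMC N7": 91.2,
}
ratio = density_mtr_per_mm2["Intel 10nm"] / density_mtr_per_mm2["TSMC N7"]
print(f"Intel 10nm vs TSMC N7 density: {ratio:.2f}x")  # ~1.11x
```

Again, these are estimates, and the vendors count differently, which is exactly the game we're describing.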
But it's possible that Intel, with the U.S. government and IBM and Samsung, could make a play for that New York foundry as part of Intel's trusted foundry strategy, and kind of reshuffle that deck in Albany. Sounds like a "Game of Thrones," doesn't it? By the way, TSMC has been so consumed servicing Apple for five nanometer, and eventually four nanometer, that it's dropped the ball on some of its other customers, namely Nvidia. And remember, long-term competitiveness and cost reductions all come down to volume, and we think that Intel can't get to volume without an Arm strategy.

Okay, so maybe the JV, the joint venture that we talked about, maybe we're out on a limb there and that's a stretch. And perhaps Samsung's not willing to play ball, given it has made huge investments in fabs, infrastructure and other resources locally. But we think it's still a viable scenario, because we think Samsung definitely would covet a presence in the United States. Now, it could do that directly, but maybe a partnership makes more sense in terms of gaining ground on TSMC. But anyway, let's say Intel can become a trusted foundry with the help of IBM and the U.S. government. Maybe then it could compete on volume. Well, how would that work? Let's say Nvidia, let's say they're not too happy with TSMC. Maybe they'd entertain Intel as a second source. Would that do it? In and of itself, no.

But what about AWS and Google and Facebook? Maybe this is a way to placate the U.S. government and call off the antitrust dogs. Hey, we'll give Intel's foundry our business to secure America's semiconductor leadership and future, and hey, U.S. government, why don't you chill out and back off a little bit? Microsoft, even though, you know, it's not getting as much scrutiny from the U.S. government, its antitrust days maybe perhaps are behind it, who knows, but I think Microsoft would be happy to play ball as well. Now, would this give Intel a competitive volume posture? Yes, we think it would, for sure, if it can gain the trust of these companies; the volume, we think, would be there. But as we've said, currently this is a very, very long shot because of the new strategy, the differences in the foundry business, all those challenges that we laid out last week. It's going to take years to play out. But the dots are starting to connect in this scenario, and the stakes are exceedingly high, hence the importance of the U.S. government.

Okay, that's it for now. Thanks to the community for your comments and insights, and thanks again to David Floyer, whose analysis around Arm and semiconductors, and the work he's done for the past decade, has been of tremendous help. Remember, I publish each week on wikibon.com and siliconangle.com, and these episodes are all available as podcasts; just search for Breaking Analysis podcast. You can always connect on Twitter, you can hit the chat right here in this live event, or email me at david.vellante@siliconangle.com. I always appreciate the comments on LinkedIn and Clubhouse; you can follow me so you're notified when we start a room and riff on these topics, as well as others. And don't forget to check out etr.plus, where all the survey data is. This is Dave Vellante for the Cube Insights powered by ETR. Be well, and we'll see you next time. (cheerful music)
Robin Goldstone, Lawrence Livermore National Laboratory | Red Hat Summit 2019
>> Announcer: Live from Boston, Massachusetts, it's the Cube, covering Red Hat Summit 2019. Brought to you by Red Hat.

>> Welcome back to the Cube's coverage of Red Hat Summit 2019. We're here at the convention center, along with Stu Miniman. I'm John Walls, and we're now joined by Robin Goldstone, who's an HPC solution architect at Lawrence Livermore National Laboratory. Hello, Robin.

>> Hi there. Good to see you.

>> I saw you on the keynote stage this morning. Fascinating presentation, I thought. First off, for the viewers at home who might not be too familiar with the laboratory, could you please just give us the thirty-thousand-foot level of just what kind of national security work you're involved with?

>> Sure. So yes, indeed, we are a national security lab, and you know, first and foremost our mission is assuring the safety, security and reliability of our nuclear weapons stockpile. There's a lot to that mission, but we also have a broader national security mission. We work on counterterrorism and nonproliferation, a lot of cybersecurity kinds of things, and even just general science. We're doing things with precision medicine and just all sorts of interesting technology.

>> Fascinating. So Robin, you know, so much in IT, the buzzword the past months and years has been scale, and we talk about what public cloud people are doing. But labs like yours have been challenged with scale in many other ways, and especially performance is usually at the forefront of where things are. You talked about it in the keynote this morning: Sierra is the latest generation supercomputer, the number two supercomputer. I don't know how many people understand the petaflops, the 125 petaflops and the like, but tell us a little bit about, you know, kind of the why and the what of that.

>> Right, so Sierra's a supercomputer, and what's unique about these systems is the problems we're solving. There's lots of systems that are networked together, maybe with a bigger number of servers than us, but we're doing scientific simulation, and that kind of computing requires a level of parallelism that's very tightly coupled. All the servers are running a piece of the problem, and they all have to sort of operate together. If any one of them is running slow, it makes the whole thing go slow. So it's really this tightly coupled nature of supercomputers that makes things really challenging. You know, we talked about performance: if one server is running slow for some reason, everything else is going to be affected by that. So we really do care about performance, and we really do care about just every little piece of the hardware performing as it should.

>> So I think national security, nuclear stockpiles, I mean, there is nothing more important, obviously, than the safety and security of the American people, and you're at the center of that, right? And you're open source, right? How does that work? Because as much trust and faith and confidence as we have in the open source community, this is an extremely important responsibility that's being consigned, more or less, to this open source community.

>> Sure. You know, at first people do have that feeling that we should be running some secret sauce. I mean, our applications themselves are secret, but when it comes to the system software and all the software around the applications, open source makes perfect sense. We started out running really closed source solutions. In some cases the hardware itself was really proprietary, and of course the vendors who made the hardware proprietary wanted their software to be proprietary. But I think most people can resonate with this: you buy a piece of software and the vendor tells you it's great, it's going to do everything you need it to do, and trust us, right? Okay, but at our scale it often doesn't work the way it's supposed to work. They've never tested it at our scale, and when it breaks, now they have to fix it; they're the only ones that can fix it. And in some cases we found the vendor decided, you know what, no one else has one quite like yours, and it's a lot of work to make it work for you, so we're just not going to fix it. And you can't wait, right? And so open source is just the opposite of that. We have all that visibility into the software. If it doesn't work for our needs, we can make it work for our needs, and then we can give it back to the community. Because even though not many people are doing things at the scale that we are today, a lot of the things that we're doing really do trickle down and can be used by a lot of other people.

>> It's really important because, as you said, it used to be, okay, the Cray supercomputer is what we know; let's use proprietary interfaces because I need the highest speed, and therefore it's not the general purpose stuff. You've moved to x86. Linux is something that's been in supercomputers, but as a finely tuned version, you know, duct tape and baling wire, and don't breathe on it once you get it running. You're running RHEL today; talk a little bit about the journey with RHEL, now on the supercomputers.

>> Right. So again, there's always been this sort of proprietary, really high-end supercomputing, but in the late 1990s and early 2000s we started building these commodity clusters. At the time, I think Beowulf was the terminology for that, but basically we were looking at how we could take these basic off-the-shelf servers and make them work for our applications, and trying to take advantage of as much commodity technology as we can, because we didn't want to reinvent anything; we wanted to use as much as possible. And so we've really ridden that curve. Initially it was just Red Hat Linux; there was no RHEL at that time. But then when we started getting into the newer architectures, going from x86 to, you know, x86-64 and Itanium, the support just wasn't there in basic Red Hat. And again, even though it's open source and we could do everything ourselves, we don't want to do everything ourselves. Having an organization, having this enterprise edition of Red Hat, having a company stand behind it, helps. The software is still open source; we can look at the source code, we can modify it if we want. But you know what, at the end of the day we're happy to hand over some of our challenges to Red Hat and let them do what they do best. They have great reach into the kernel community. They can get things done that we can't necessarily get done. So it's a great relationship.

>> Yes. So that last mile, getting it on Sierra there, is that the first time on one of the big showcase supercomputers?

>> Sure, and part of the reason for that is because those big computers themselves are basically now mostly commodity. Again, you talked about a Cray, some really exotic architecture; Sierra is a collection of Linux servers. Now, in this case they're running the POWER architecture instead of x86, so Red Hat did a lot of work with IBM to make sure that POWER was fully supported in the RHEL stack. But again, the servers themselves are somewhat commodity. We're running NVIDIA GPUs; those are widely used everywhere, obviously a big deal for machine learning and stuff. The biggest proprietary component we're still dealing with is the interconnect. I mentioned these clusters have to be really tightly coupled. That performance has to be really superior, and most importantly the latency, right? They have to be super low latency, and Ethernet just doesn't cut it.

>> So you run InfiniBand today, I'm assuming?

>> We're running Mellanox InfiniBand on Sierra and on some of our commodity clusters. We run Mellanox on some, and on other ones we run Intel Omni-Path, which is just another flavor of InfiniBand. You know, if we could use Ethernet, we would, because again we would get all the benefit and the leverage of what everybody else is doing, but it just hasn't quite been able to meet our needs in that area.

>> Now, I recall the history lesson we got a bit of this morning. The laboratory has been around since the early fifties, born of the Cold War, and so obviously open source wasn't how things started. What about your evolution to open source? This has taken hold now; there had to be a tipping point at some point that converted and made the laboratory believers. Can you go back to that process? Was it a big moment for you, or was it just kind of a steady migration?

>> Well, it's interesting. If you go way back, we actually wrote the operating systems for those early Cray computers. We wrote those operating systems in-house because there really was no operating system that would work for us. So we've been software developers for a long time, system software developers, but at that time it was all proprietary and closed source. So we knew how to do that stuff. What really happened was when these commodity clusters came along, when we showed that we could build a cluster that could perform well for our applications on that commodity hardware. We started with Red Hat, but we had to add some things on top. We had to add the software that made a bunch of individual servers function as a cluster: all the system management stuff, the resource manager, the thing that lets us schedule batch jobs. We wrote that software, and the parallel file system. Those things did not exist in open source, and we helped to write them, and they took on lives of their own. So Lustre is a parallel file system that we helped develop. Slurm, which anyone outside of HPC probably hasn't heard of, is a resource manager that again is very widely popular. So the lab really saw that we got a lot of visibility by contributing this stuff to the community, and I think everybody has embraced it. And we develop open source software at all different layers.

>> So Robin, I'm curious how you look at public cloud. When I look at the public cloud, they do a lot with government agencies; they've got GovCloud. I've talked to companies that said, I could have built a supercomputer, here's how long it would take, but I could spin it up in minutes and get what I need. Is that a possibility for something of yours? I understand maybe not the super high performance, but where does it fit in?

>> Sure, yeah. Certainly for a company that has no experience or no infrastructure it makes sense. But we have invested a huge amount in our data center; we have a ton of power and cooling and floor space. We have already made that investment, so trying to outsource that to the cloud doesn't make sense. There are definitely things cloud is great for. We are using GovCloud for things like prototyping, or when someone wants a server of some architecture that we don't have, the ability to just spin it up. If we had to go and buy it, it would take six months, because, you know, we are the government. So being able to just spin that stuff up is really great for what we do. We use it for open source, for build and test. We use it at conferences, when we want to run a tutorial and spin up a bunch of instances of Linux and run a tutorial. But the biggest thing is, at the end of the day, our most important workloads are in a classified environment, and we don't have the ability to run those workloads in the cloud. And so to do it on the open side and not be able to leverage it on the closed side really takes away some of the value, because we really want to make the two environments look as similar as possible, to leverage our staff and everything like that. So that's where cloud just doesn't quite fit in for us.

>> You were talking about the speed of Sierra, and then also mentioning El Capitan, which is the next generation, your next unbelievably fast supercomputer, to the extent of 10X the current speed, within the next four to five years.

>> Right, that's the goal.

>> I mean, what do those numbers mean? Because you put a pretty impressive array up there.

>> So Sierra's about 125 petaflops, and the big Holy Grail for high performance computing is exascale, an exaflop of performance. And so El Capitan is targeted to be one point two, maybe one point five exaflops, or even more. Again, that's peak performance; it doesn't necessarily translate into what our applications can get out of the platform. But people sometimes think, isn't it enough, isn't 125 petaflops enough? It's never enough, because any time we get another platform, people figure out how to do things with it that they've never done before. Either they're solving problems faster than they could before, and so now they're able to explore a solution space much faster, or they want to look at these simulations of three-dimensional space at a more fine-grained level. So again, with every computer we get, we can either push a workload through ten times faster, or we can look at a simulation that's ten times more resolved than the one we could do before.

>> So do this for me and for the folks at home: take the work that you do and translate why that exponential increase in speed will make you better at what you do, in terms of decision making and processing of information.

>> Right. So the thing is, these nuclear weapons systems are very complicated. There's multi-physics, there's lots of different interactions going on, and we have to really understand them at the lowest level. One of the reasons that's so important now is that we're maintaining a stockpile that is well beyond the lifespan it was designed for. These nuclear weapons, some of them were built in the fifties, the sixties and the seventies. They weren't designed to last this long, right? And so now they're sort of out of their design regime, and we really have to understand their behavior and their properties as they age. So it opens up a whole other area that we have to be able to explore, and some of that physics has never been explored before. The problems get more challenging the farther we get away from the design basis of these weapons. But we're also really starting to do new things, like AI and machine learning, things that weren't part of our workflow before. We're starting to incorporate machine learning in with simulation, again to help explore a very large problem space and be able to find interesting areas within a simulation to focus in on. And so that's a really exciting area, and that is also an area where GPUs and such have just exploded the performance levels that people are seeing on these machines.

>> Well, we thank you for your work. It is critically important, as we all realize, and wonderfully fascinating at the same time. So thanks for the insights here and for your time. We appreciate that.

>> All right, thanks.

>> Robin Goldstone joining us. Back with more here on the Cube; you're watching our coverage live from Boston of Red Hat Summit 2019.