Kim Leyenaar, Broadcom | SuperComputing 22
(Intro music) >> Welcome back. We're live here from SuperComputing 22 in Dallas. I'm Paul Gillin, for SiliconANGLE and theCUBE, with my guest host Dave Nicholson. And our guest for this segment is Kim Leyenaar, who is a storage performance architect at Broadcom. The topic of this conversation is networking, it's connectivity. How does that relate to the work of a storage performance architect? >> Well, that's a really good question. So yeah, I have been focused on storage performance for about 22 years. But even if we're talking about just storage, all the components have a really big impact on ultimately how quickly you can access your data: the switches, the memory bandwidth, the expanders, the different protocols that you're using. And a big part of it is actually Ethernet, because, as you know, data's not siloed anymore. You have to be able to access it from anywhere in the world. >> Dave: So wait, you're telling me that we're just not living in a CPU-centric world now? >> Ha ha ha. >> Because it is sort of interesting. When we talk about supercomputing and high performance computing, we're always talking about clustering systems. So how do you connect those systems? Isn't that kind of your wheelhouse? >> Kim: It really is. >> Dave: At Broadcom. >> It is Broadcom's wheelhouse. We are all about interconnectivity, and we own the interconnectivity. Years ago it was, 'Hey, buy this new server because we've added more cores or we've got better memory.' But now you've got all this siloed data, and we've got this software-defined kind of environment now, these composable environments where, hey, if you need more networking, just plug this in, or just go here and allocate yourself more. So what we're seeing is these silos of 'here's our compute, here's your networking, here's your storage.' And so how do you put those all together? The answer is interconnectivity. That's really what we specialize in. I'm really happy to be here to talk about some of the things that we do to enable high performance computing. >> Paul: Now we're seeing a new breed of AI computers being built with multiple GPUs and very large amounts of data being transferred between them. And the interconnect really has become a bottleneck. Is that something that Broadcom is working on alleviating? >> Kim: Absolutely. There are a lot of different standards that we work with to define, so that we can make sure that we work everywhere. So whether you're just a dentist's office that's deploying one server, or we're talking about these hyperscalers that have thousands or tens of thousands of servers, we're working on making sure that the next generation is able to outperform the previous generation. Not only that, but we found that with these siloed things, if you add more storage but that means we're going to eat up six cores using it, it's not really as useful. So Broadcom's really been focused on trying to offload the CPU. So we're offloading things like data security and data protection; we do packet sniffing ourselves, and things like that.
So no longer do we rely on the CPU to do that kind of processing for us; we've become very smart devices all on our own, so that we work very well in these kinds of environments. >> Dave: So give us an example. I know a lot of the discussion here has been around using Ethernet as the connectivity layer. >> Yes. >> In the past, people would think about supercomputing as exclusively being InfiniBand based. >> Ha ha ha. >> But give us an idea of what Broadcom is doing in the Ethernet space. What are the advantages of using Ethernet? >> Kim: So we've made two really big announcements. The first one is our Tomahawk 5 Ethernet switch. It's a 400 gig Ethernet switch. And the other thing we announced was our Thor. These are our network controllers, and they also support up to 400 gig each. Those two alone, it's amazing to me how much data we're able to transfer with them. But not only that, they're super intelligent controllers too. And then we realized, hey, we're managing all this data, let's go ahead and offload the CPU. So we actually adopted the RoCE standard. That's one of the things that puts us above InfiniBand: Ethernet is ubiquitous, it's everywhere, while InfiniBand is primarily owned by just one or two companies, and it's also a lot more expensive. So Ethernet is everywhere, and now with the RoCE standard, it does what you're talking about much better than its predecessors. >> Tell us about the RoCE standard. I'm not familiar with it, and I'm sure some of our listeners are not. What is RoCE? >> Kim: Ha ha ha. So it's RDMA over Converged Ethernet. I'm not a RoCE expert myself, but I am an expert on how to offload the CPU. And one of the things it does is, instead of using the CPU to transfer the data from user space over to the next server, we actually do it ourselves. We handle it ourselves: we take it, we move it across the wire, and we put it in that remote computer. And we don't have to ask the CPU to get involved in that at all. So it's a big savings. >> Yeah, in a nutshell, because there are parts of the InfiniBand protocol that are essentially embedded in RDMA over Converged Ethernet. So... >> Right. >> So if you can leverage kind of the best of both worlds, but have it in an Ethernet environment, which is already ubiquitous, it seems like it's kind of democratizing supercomputing and HPC. And I know you guys are big partners with Dell as an example, and you work with all sorts of other people. >> Kim: Yeah. >> But let's say somebody is going to be doing Ethernet for connectivity, you also offer switches? >> Kim: We do, actually. >> So that's another piece of the puzzle. >> That's a big piece of the puzzle. So we just released our Atlas 2 switch. It is a PCIe Gen 5 switch. And... >> Dave: What does that mean? What does Gen 5 mean? >> Oh, PCIe Gen 5, it's the magic connectivity right now. We talk about the Sapphire Rapids release as well as the Genoa release; I know those have been talked about a lot here, I've been walking around and everybody's talking about them. Well, those enable the Gen 5 PCIe interfaces.
So we've been able to double the bandwidth from Gen 4 up to Gen 5. And in order to support that, we now have our Atlas 2 PCIe Gen 5 switch. It allows you to connect, especially around here where we're talking about artificial intelligence and machine learning, a lot of these workloads rely on the GPUs and DPUs that you see a lot of people talking about enabling. So by putting these switches in the servers, you can connect multitudes of not only NVMe devices but also these GPUs and these CPUs. Besides that, we also have the storage component of it too. To support that, we just recently released our 9500 series HBAs, which support 24G SAS. And this is kind of a big deal for some of our hyperscalers that say, hey, look, in our next generation we're putting a hundred hard drives in. A lot of that is maybe for cold storage, but giving them that 24 gig bandwidth, and having these 24G SAS expanders, allows these hyperscalers to build up their systems. >> Paul: And how are you supporting the HPC community at large? And what are you doing that's exclusively for supercomputing? >> Kim: Exclusively for? So we're really doing the interconnectivity for them. You can have as much compute power as you want, but these are very data-hungry applications, and a lot of that data is not sitting right in the box. A lot of that data is sitting in some other country, or in some other city, or just in the box next door. So you have to be able to move that data around. There's a concept where they say, do the compute where the data is; the other way is to move the data around, which is sometimes a lot easier. So we're enabling you to move that data around. For that, we have our Tomahawk switches, we've got our Thor NICs, and of course we've got the really wide pipe. Our new 9500 series HBA and RAID controllers are doing 28 gigabytes a second through one controller, and that's on protected data. So we can actually have the high-availability protected data of RAID 5, RAID 6, or RAID 10 in the box at 27 gigabytes a second. The latency we're seeing off of this is unheard of too: we have a write cache latency that is sub-8 microseconds, which is lower than most of the NVMe drives that are available today. So we're able to support these applications that require really low latency as well as data protection. >> Dave: So often when we talk about the underlying hardware, it's a game of whack-a-mole, chase the bottleneck. And you've mentioned PCIe Gen 5; a lot of folks who will be implementing Gen 5 PCIe are coming off of Gen 3, not even Gen 4. >> Kim: I know. >> So they're not just getting a last-generation-to-this-generation bump, they're getting a two-generation bump. >> Kim: They are. >> Is it the case that it would never make sense to use a next-gen or current-gen card in an older-generation bus because of the mismatch in performance? Are these things all designed to work together? >> Uh... that's a really tough question. I want to say no, it doesn't make sense; it really makes sense just to move things forward and buy a card that's made for the bus it's in.
However, that's not always the case. For instance, our 9500 controller is Gen 4 PCIe, but what we did is double the PCIe width, so it's a x16. Even though it's Gen 4, it's a x16, so we're getting really, really good bandwidth out of it. As I said before, we're getting 27.8, almost 28 gigabytes a second of bandwidth out of that by doubling the PCIe bus. >> Dave: But they work together, it all works together? >> It all works together. You can put our Gen 4 in a Gen 5 all day long and they work beautifully. Yeah, we do work to validate that. >> We're almost out of time, but I want to ask you a more nuts-and-bolts question about storage. We've heard for years that the areal density of hard disks has been reached and there's really nowhere left to go, no way to make the disk any denser. What does the future of the hard disk look like as a storage medium? >> Kim: Multi-actuator, actually; we're seeing a lot of multi-actuator. I was surprised to see it come across my desk, because our 9500 actually does support multi-actuator. And it was really neat, because I've been working with hard drives for 22 years, and I remember when they could do 30 megabytes a second and that was amazing. That was like, wow, 30 megabytes a second. Then, about 15 years ago, they hit around 200 to 250 megabytes a second, and they stayed there. They haven't gone anywhere. What they have done is increase the density so that you can have more storage. So you can easily go out and buy a 15 to 30 terabyte drive, but you're not going to get any more performance. So what they've done is add multiple actuators. Each one of these can do its own streaming, and each one can actually do its own seeking. So you can get two and four, and I've even seen talk about eight actuators per disk. I think that's still theory, but they could implement those. So that's one of the things that we're seeing. >> Paul: Old technology somehow finds a way to remain current. >> It does. >> Even in the face of new alternatives. Kim Leyenaar, Storage Performance Architect at Broadcom, thanks so much for being here with us today. >> Thank you so much for having me. >> This is Paul Gillin with Dave Nicholson here at SuperComputing 22. We'll be right back. (Outro music)
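A quick back-of-the-envelope check on the PCIe numbers in that conversation — a sketch of theoretical per-direction link rates only, not Broadcom's own figures — shows why a Gen 4 x16 controller tops out just above the roughly 28 GB/s Kim quotes, and why Gen 5 doubles that ceiling:

```c
/*
 * Illustrative PCIe bandwidth math for the Gen 4 -> Gen 5 jump Kim describes.
 * These are theoretical per-direction link rates; real controllers land a
 * little below them once protocol overhead is paid.
 */
#include <stdio.h>

int main(void) {
    /* Per-lane raw signaling rate in GT/s and 128b/130b line-code efficiency. */
    struct { const char *gen; double gts; double encoding; } gens[] = {
        { "PCIe Gen 3", 8.0,  128.0 / 130.0 },
        { "PCIe Gen 4", 16.0, 128.0 / 130.0 },
        { "PCIe Gen 5", 32.0, 128.0 / 130.0 },
    };
    int lanes[] = { 8, 16 };

    for (size_t g = 0; g < sizeof gens / sizeof gens[0]; g++) {
        for (size_t l = 0; l < sizeof lanes / sizeof lanes[0]; l++) {
            /* GT/s times encoding gives usable Gbit/s per lane; divide by 8 for GB/s. */
            double gbps_lane = gens[g].gts * gens[g].encoding;
            double gbytes    = gbps_lane * lanes[l] / 8.0;
            printf("%s x%-2d : ~%5.1f GB/s per direction\n",
                   gens[g].gen, lanes[l], gbytes);
        }
    }
    return 0;
}
```

Built with any C compiler, this prints roughly 31.5 GB/s for Gen 4 x16 and 63 GB/s for Gen 5 x16, which is consistent with the 27.8 GB/s Kim reports from a Gen 4 x16 controller after overhead.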
Peter Del Vecchio, Broadcom and Armando Acosta, Dell Technologies | SuperComputing 22
(upbeat music) (logo swooshing) >> Good morning and welcome back to Dallas, ladies and gentlemen, we are here with theCUBE live from Supercomputing 2022. David, my cohost, how are you doing? Exciting, day two, feeling good? >> Very exciting. Ready to start off the day. >> Very excited. We have two fascinating guests joining us to kick us off. Please welcome Pete and Armando. Gentlemen, thank you for being here with us. >> Thank you for having us. >> Thank you for having us. >> I'm excited that you're starting off the day, because we've been hearing a lot of rumors about Ethernet as the fabric for HPC, but we really haven't done a deep dive yet during the show. You all seem all in on Ethernet. Tell us about that. Armando, why don't you start? >> Yeah, when you look at Ethernet, customers are asking for flexibility and choice. So when you look at HPC, InfiniBand's always been around, right? But where Ethernet's coming in, it's really the commercial and enterprise customers. Not everybody wants to be in the top 500; what they want to do is improve their job time and improve their latency over the network. And when you look at Ethernet, you kind of look at the sweet spot between 8, 12, 16, 32 nodes, that's a perfect fit for Ethernet in that space and those types of jobs. >> I love that. Pete, you want to elaborate? >> Yeah, sure. I think one of the biggest things you find with Ethernet for HPC is that, if you look at where the different technologies have gone over time, you've had old technologies like ATM, SONET, FDDI, and pretty much everything has now kind of converged toward Ethernet. There are still some technologies such as InfiniBand and Omni-Path that are out there, but basically they're single source at this point. So what you see is that there is a huge ecosystem behind Ethernet. And you also see that the fact that Ethernet is used in the rest of the enterprise, is used in the cloud data centers, makes it very easy to integrate HPC-based systems into those systems. So as you move HPC out of academia into enterprise, into cloud service providers, it's much easier to integrate it with the same technology you're already using in those data centers, in those networks. >> So what's the state of the art for Ethernet right now? What's the leading edge? What's shipping now and what's in the near future? You're with Broadcom, you guys designed this stuff. >> Pete: Yeah. >> Savannah: Right. >> Yeah, so leading edge right now, I've got a couple of things-- >> Savannah: We love a good stage prop here on theCUBE. >> Yeah, so this is Tomahawk 4. This is what is in production, it's shipping in large data centers worldwide. We started sampling this in 2019, started going into data centers in 2020. And this is 25.6 terabits per second. >> David: Okay. >> Which matches any other technology out there. Like if you look at, say, InfiniBand, the highest they have right now that's just starting to get into production is 25.6 T. So state of the art right now is what we introduced, we announced this in August: this is Tomahawk 5, so this is 51.2 terabits per second. So double the bandwidth of any other technology that's out there. And the important thing about networking technology is when you double the bandwidth, you don't just double the efficiency, it actually winds up being a factor of six in efficiency. >> Savannah: Wow. >> 'Cause if you want, I can go into that, but... >> Why not?
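A rough sketch of where that better-than-double efficiency can come from, assuming a simple non-blocking two-tier leaf-spine built from 400GbE ports — this is illustrative topology math only, not Broadcom's sizing, and the factor-of-six figure Pete cites folds in optics, power, and hop-count effects that this toy model ignores:

```c
/*
 * At a fixed port speed, doubling a switch ASIC's bandwidth doubles its
 * radix R, and a non-blocking two-tier leaf-spine built from radix-R
 * switches scales as R^2/2 hosts on 3R/2 switches.
 */
#include <stdio.h>

static void two_tier(const char *asic, int radix) {
    int leaves   = radix;             /* each spine port feeds one leaf      */
    int spines   = radix / 2;         /* each leaf splits ports half up/down */
    int hosts    = radix * radix / 2; /* leaves * downlinks per leaf         */
    int switches = leaves + spines;
    printf("%-20s radix %3d : %5d hosts, %3d switches (%.1f hosts/switch)\n",
           asic, radix, hosts, switches, (double)hosts / switches);
}

int main(void) {
    /* Radix at 400GbE: 25.6 Tb/s -> 64 ports, 51.2 Tb/s -> 128 ports. */
    two_tier("Tomahawk 4 (25.6T)", 64);
    two_tier("Tomahawk 5 (51.2T)", 128);
    return 0;
}
```

Doubling the radix quadruples the hosts a two-tier fabric can reach while only doubling the switch count, so the per-host share of switches, hops, optics, and cables drops sharply.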
>> Well, what I want to know, please tell me that in your labs, you have a poster on the wall that says T5, with some Terminator kind of character. (all laugh) 'Cause that would be cool. If it's not true, just don't say anything. I'll just... >> Pete: This can actually shift into a Terminator. >> Well, so this is from a switching perspective. >> Yeah. >> When we talk about the end nodes, when we talk about creating a fabric, what's the latest in terms of the NICs that are going in there, what speed are we talking about today? >> So as far as SerDes speeds, it tends to be 50 gigabits per second. >> David: Okay. >> Moving to a hundred gig PAM-4. >> David: Okay. >> And we do see a lot of NICs in the 200 gig Ethernet port speed. So that would be four lanes of 50 gig. But we do see that advancing to 400 gig fairly soon, 800 gig in the future. But state of the art right now, what we're seeing for the end node tends to be 200 GbE based on 50 gig PAM-4. >> Wow. >> Yeah, that's crazy. >> Yeah, that is great. My mind is actively blown. I want to circle back to something that you brought up a second ago, which I think is really astute. When you talked about HPC moving from academia into enterprise, you're both seeing this happen. Where do you think we are on the adoption curve and sort of in that cycle? Armando, do you want to go? >> Yeah, well, if you look at the market research, they're actually telling you it's 50/50 now. So Ethernet is at the level of 50%, InfiniBand's at 50%, right? >> Savannah: Interesting. >> Yeah, and so what's interesting to us, customers are coming to us and saying, hey, we want to see flexibility and choice; hey, let's look at Ethernet and let's look at InfiniBand. But what is interesting about this is that we're working with Broadcom, we have their chips in our lab, we have their switches in our lab. And really what we're trying to do is make it simple to configure the network for, essentially, MPI. And so the goal here with our validated designs is really to simplify this. So if you have a customer that says, hey, I've been on InfiniBand but now I want to go Ethernet, there are going to be some learning curves there. And so what we want to do is really simplify that, so that we can make it easy to install, get the cluster up and running, and they can actually get some value out of the cluster. >> Yeah, Pete, talk about that partnership. What does that look like? I mean, are you working with Dell before the T6 comes out? Or do you just say, what would be cool is we'll put this in the T6? >> No, we've had a very long partnership, both on the hardware and the software side. Dell's been an early adopter of our silicon. We've worked very closely on SAI and SONiC on the operating system side, and they provide very valuable feedback for us on our roadmap. So before we put out a new chip, and we actually have three different product lines within the switching group within Broadcom, we've gotten very valuable feedback on the hardware and on the APIs, on the operating system that goes on top of those chips. That way, when it comes to market, Dell can take it and deliver the exact features that they have in the current generation to their customers, to have that continuity. And they also give us feedback on the next-gen features they'd like to see, again in both the hardware and the software. >> So I'm fascinated by... I always like to know like what, yeah, exactly.
Look, you start talking about the largest, most powerful supercomputers that exist today, and you start looking at the specs, and there might be 2 million CPU cores, an exaflop of performance. What are the outward limits of T5 in switches, building out a fabric? What does that look like? What are the increments in terms of how many... And I know it's a depends answer, but how many nodes can you support in a scale-out cluster before you need another switch? What does that increment of scale look like today? >> Yeah, so this is 51.2 terabits per second. The most common implementation we see based on this would be with 400 gig Ethernet ports. >> David: Okay. >> So that would be 128 400 GbE ports connected to one chip. Now, if you went to 200 gig, which is kind of the state of the art for the NICs, you can have double that. So in a single hop, you can have 256 end nodes connected through one switch. >> Okay, so this T5, that thing right there, (all laughing) inside a sheet metal box, obviously you've got a bunch of ports coming out of that. So what does the form factor look like for where that T5 sits? Is there just one in a chassis, or... what does that look like? >> It tends to be pizza boxes these days. What you've seen overall is that the industry's moved away from chassis for these high-end systems, more towards pizza boxes. And you can have composable systems where, in the past, you would have line cards and the fabric cards that the line cards are plugged into or interfaced to. These days what tends to happen is you'd have a pizza box, and if you wanted to build up like a virtual chassis, you would use one of those pizza boxes as the fabric card and one of them as the line card. >> David: Okay. >> So the most common form factor we see for this, I'd say for North America, would be a 2RU with 64 OSFP ports. And often each of those OSFPs, which is an 800 GbE or 800 gig port, we've broken out into two 400 gig ports. So yeah, in 2RU, and this is all air cooled, in 2RU you've got 51.2 T. We do see some cases where customers would like to have different optics, and they'll actually deploy 4RU just so they have the faceplate density, so they can plug in 128, say, QSFP112. But it really depends on which optics, whether you want to have DAC connectivity combined with optics. Those are the two most common form factors. >> And Armando, Ethernet isn't necessarily Ethernet in the sense that many protocols can be run over it. >> Right. >> I think I have a projector at home that's actually using Ethernet physical connections. So what are we talking about here in terms of the actual protocol that's running over this? Is this exactly the same as what you think of as data center Ethernet, or is this RDMA over Converged Ethernet? What are we talking about? >> Yeah, so RDMA, right? When you look at running essentially HPC workloads, you have the MPI protocol, the message passing interface, right? And so what you need to do is make sure that that MPI message passing interface runs efficiently on Ethernet. And this is why we want to test and validate all these different things, to make sure that that protocol runs really, really fast on Ethernet.
If you look at MPI, officially it was designed to run on InfiniBand, but now, with the great work Broadcom is doing, we can make that work on Ethernet and get the same performance, and that's huge for customers. >> Both of you get to see a lot of different types of customers. I kind of feel like you're a little bit of a looking-into-the-crystal-ball type, because you essentially get to see the future, knowing what people are trying to achieve moving forward. Talk to us about the future of Ethernet in HPC in terms of AI and ML. Where do you think we're going to be next year, or 10 years from now? >> You want to go first or you want me to go first? >> I can start, yeah. >> Savannah: Pete feels ready. >> So what I see with Ethernet, starting off on the switch side, is that we've consistently doubled the bandwidth every 18 to 24 months. >> That's impressive. >> Pete: Yeah. >> Nicely done, casual humble brag there. That was great, I love that. I'm here for you. >> I think that's one of the benefits of Ethernet: the ecosystem, the trajectory, the roadmap we've had. You don't see that in any other networking technology. >> David: Moore who? (all laughing) >> So I see that trajectory continuing as far as the switches doubling in bandwidth. I think the protocols are evolving too; especially as you're moving away from academia into the enterprise, into cloud data centers, you need to have a combination of protocols. So you'll probably still focus on RDMA for the supercomputing and AI/ML workloads. But we do see that as you have a mix of applications running on these end nodes, maybe they're interfacing to the CPUs for some processing, you might use a different mix of protocols. So I'd say it's going to be a doubling of bandwidth over time and an evolution of the protocols. I expect that RoCE is probably going to evolve over time depending on the AI/ML and HPC workloads. I think there's also a big change coming as far as the physical connectivity within the data center. One thing we've been focusing on is co-packaged optics. So right now, this chip, all the balls on the back here, those are electrical connections. >> How many are there, by the way? 9,000 plus on the back of that-- >> 9,352. >> I love how specific it is. It's brilliant. >> Yeah, so right now all the SerDes, all the signals, are coming out electrically, but we've actually shown, we actually have a version of Tomahawk 4 at 25.6 T that has co-packaged optics. So instead of having electrical output, you actually have optics directly out of the package. And we'll have a version of Tomahawk 5. >> Nice. >> Where it's actually an even smaller form factor than this, where instead of having the electrical output from the bottom, you actually have fibers that plug directly into the sides. >> Wow. Cool. >> So I see the bandwidth, the radix increasing, the protocols, different physical connectivity. So I think there's a lot of things throughout, and the protocol stack's also evolving. So a lot of excitement, a lot of new technology coming to bear. >> Okay, you just threw a carrot down the rabbit hole. I'm only going to chase this one, okay? >> Peter: All right. >> So I think of individual discrete physical connections to the back of those balls. >> Yeah. >> So if there's 9,000, fill in the blank, that's how many connections there are.
How do you do that many optical connections? What's the mapping there? What does that look like? >> So what we've announced for Tomahawk 5 is that it would have FR4 optics coming out. So you'd actually have 512 fiber pairs coming out. Basically on all four sides, you'd have these fiber ribbons that come in and connect; there are actually fibers coming out of the sides there. Actually, I think in this case we would have 512 channels, and it would wind up being on 128 actual fiber pairs because-- >> It's miraculous, essentially. >> Savannah: I know. >> Yeah. So a lot of people are going to be looking at this and thinking in terms of InfiniBand versus Ethernet. I think you've highlighted some of the benefits of specifically running Ethernet moving forward, as HPC, which sort of trails slightly behind supercomputing as we define it, becomes more pervasive with AI/ML. What are some of the other things that maybe people might not immediately think about when they think about the advantages of running Ethernet in that environment? Is it about connecting the HPC part of their business into the rest of it? What are the advantages? >> Yeah, that's a big thing. One of the biggest things that Ethernet has, again, is that the data centers, the networks within enterprises, within clouds right now, are run on Ethernet. So now, if you want to add services for your customers, the easiest thing for you to do is to drop in clusters that are connected with the same networking technology. If you look at what's happening with some of the other proprietary technologies, in some cases they'll have two different types of networking technologies before they interface to Ethernet. So now you've got to train your technicians and your sysadmins on two different network technologies. You need to have all the debug technology, all the interconnect for that. Here, the easiest thing is you can use Ethernet; it's going to give you the same performance, and actually, in some cases, we've seen better performance than we've seen with Omni-Path, better than InfiniBand. >> That's awesome. Armando, we didn't get to you, so I want to make sure we get your future hot take. Where do you see the future of Ethernet here in HPC? >> Well, Pete hit on a big thing, which is bandwidth, right? When you go and train a model in AI, you need to have a lot of data in order to train that model, right? Essentially, you build a model, you choose whatever neural network you want to utilize, but if you don't have a good data set that's trained over that model, you can't essentially train the model. So if you have bandwidth, you want big pipes, because you have to move that data set from the storage to the CPU. And if you're going to do it maybe on CPU only, fine, but if you do it on accelerators, well, guess what? You need a big pipe in order to get all that data through. And here's the deal: the bigger the pipe you have, the more data, the faster you can train that model. And the faster you can train that model, guess what? The faster you get to some new insight. Maybe it's a new competitive advantage, maybe it's some new way you design a product, but that's a benefit of speed. You want faster, faster, faster. >> It's all about making it faster and easier for the users. >> Armando: It is. >> I love that.
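To put Armando's bigger-pipe point in rough numbers, here's an illustrative calculation of how long it takes to stage a training data set over a single saturated link; the 10 TB data-set size and the single-link assumption are made up for the example, and protocol overhead is ignored:

```c
/*
 * Time to move a training data set from storage to the accelerators at
 * different Ethernet link speeds. Purely illustrative arithmetic.
 */
#include <stdio.h>

int main(void) {
    double dataset_tb  = 10.0;                        /* hypothetical 10 TB corpus */
    double link_gbps[] = { 100.0, 200.0, 400.0, 800.0 };

    for (size_t i = 0; i < sizeof link_gbps / sizeof link_gbps[0]; i++) {
        /* TB -> bits, then divide by line rate in bits per second. */
        double seconds = dataset_tb * 8e12 / (link_gbps[i] * 1e9);
        printf("%5.0f GbE : %6.1f s to move %.0f TB (~%.1f min)\n",
               link_gbps[i], seconds, dataset_tb, seconds / 60.0);
    }
    return 0;
}
```

At 100 GbE that hypothetical 10 TB stage takes about 13 minutes; at 800 GbE it drops under 2, which is the speed-to-insight argument in concrete terms.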
Last question for you, Pete, just because you've said Tomahawk seven times, and I'm thinking we're in Texas, steaks, there's a lot going on with that. >> Making me hungry. >> I know, exactly. I'm sitting out here thinking, man, I did not have a big enough breakfast. How did you come up with the name Tomahawk? >> So Tomahawk, I think it just came from a list. We have a Trident product line. >> Savannah: Ah, yes. >> Which is a missile product line. And Tomahawk is kind of like the bigger and badder missile, so. >> Savannah: Love this. Yeah, I mean-- >> So do you like your engineers? You get to name it. >> Had to ask. >> It's collaborative. >> Okay. >> We want to make sure everyone's in sync with it. >> So it's not the Aquaman trident. >> Right. >> It's the steak Tomahawk. I think we're good now. >> Now that we've cleared that-- >> Now we've cleared that up. >> Armando, Pete, it was really nice to have you both. Thank you for teaching us about the future of Ethernet and HPC. David Nicholson, always a pleasure to share the stage with you. And thank you all for tuning in to theCUBE live from Dallas. We're here talking all things HPC and supercomputing all day long. We hope you'll continue to tune in. My name's Savannah Peterson, thanks for joining us. (soft music)
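For readers who want to reproduce the kind of MPI-over-Ethernet check Armando describes, below is a minimal MPI ping-pong bandwidth sketch — a generic micro-benchmark, not Dell's or Broadcom's validation suite. It assumes any standard MPI installation; run it with two ranks on two nodes (for example, mpirun -np 2 --host nodeA,nodeB ./pingpong) and let the MPI library's usual transport settings select RoCE, InfiniBand, or plain TCP for comparison.

```c
/* Minimal MPI ping-pong latency/bandwidth sketch. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2) {
        if (rank == 0) fprintf(stderr, "run with exactly 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    const int iters = 100;
    /* Sweep message sizes from 1 byte up to 4 MiB. */
    for (long bytes = 1; bytes <= (1L << 22); bytes <<= 2) {
        char *buf = malloc(bytes);
        if (!buf) MPI_Abort(MPI_COMM_WORLD, 2);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, (int)bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, (int)bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else {
                MPI_Recv(buf, (int)bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, (int)bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double dt = MPI_Wtime() - t0;

        if (rank == 0) {
            /* Each iteration moves the message in both directions. */
            double gbit_s = 2.0 * bytes * iters * 8.0 / dt / 1e9;
            printf("%8ld bytes : %9.2f us/round trip, %7.2f Gbit/s\n",
                   bytes, dt / iters * 1e6, gbit_s);
        }
        free(buf);
    }

    MPI_Finalize();
    return 0;
}
```

The small-message lines expose end-to-end latency, while the large-message lines should approach the NIC's line rate if the fabric and MPI transport are tuned well.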