Michael Fagan, Village Roadshow | Palo Alto Networks Ignite22
>>theCUBE presents Ignite 22, brought to you by Palo Alto Networks. >>Welcome back to Vegas, guys and girls, it's great to have you with us. theCUBE is live, finishing our second day of coverage of Palo Alto Ignite '22 from the MGM Grand in Las Vegas. Lisa Martin here with Dave Vellante. Dave, cybersecurity is one of my favorite topics to talk about because it is so interesting, so dynamic. My other favorite thing is to hear the voice of our vendors' customers, and we get to >>Do that. I always love to have the customer on, you get right to the heart of the matter. Yeah. Really understand. You know, what I like to do is, when I listen to the keynotes, try to see how well they align with what the customers are actually doing. Yeah. So let's >>Do it. We're gonna unpack that now. Michael Fagan joins us, the Chief Transformation Officer at Village Roadshow. Welcome, Michael. It's great to have you. >>Thank you. It's a pleasure to be here. >>So this is a really interesting entertainment company. I find the name interesting, but talk to us a little bit about Village Roadshow so the audience gets an understanding of all of the things that you guys do, because theme parks is part of >>This. Yeah, so Village Roadshow is Australia's largest cinema exhibitor, in conjunction with our partners at Event. We also own and operate Australia's largest theme parks. We have Warner Bros. Movie World, Wet'n'Wild, Sea World. Topgolf in Australia is operated by us, plus more. We also own movie studios, so Aquaman and Pirates of the Caribbean were filmed in part at our studios, and Elvis last year. And we also distribute and produce movies and TV shows. Quite a diverse group. >>Yeah, you guys have won a lot of awards. Academy Awards, Golden Globes, all that stuff, so congratulations. Yeah. >>Thank you. >>Cool stuff. Before we dig into the use case here, talk to us about the role of a chief transformation officer. How long have you been in that role? What does it encompass, and what do you get to drive from a transformation perspective? Yeah, >>So the nature and pace of disruption is accelerating on one side, and on the other side, running business as usual is becoming increasingly complex and more difficult to do. So running both simultaneously and at pace can put organizations at risk, both financially and in other ways. So in my role as chief transformation officer, I support the rest of the executive team by giving them additional capacity and also bringing capability to the team that wasn't there before. So I do a lot of strategic and thought leadership, there's some executive coaching in there, a lot of financial modeling and analysis. And I believe that when a transformation role, particularly a chief transformation role, is done correctly, it's a very hands-on role. So there are certain things where I dive right down and I'm hands-on leading teams or leading pieces of work. So I might be leading particular projects. I try to drive revenue and profitability across the divisions, and if there are any multi- or cross-divisional opportunities or initiatives, then I will lead those. >>The transformation, you know, a while ago was cloud, right? Okay, hey, cloud, and transformation officers, whether or not they had that title, would tell you, look, you gotta change the operating model. You can't just lift and shift into the cloud.
That's pennies. We want big bucks. That's the operating model. Now my question is, did the pandemic just accelerate your transformation, or was it deeper than that? >>Yeah, so in my role I have both digital and business transformation, and some of it has been organizational. I think the pandemic has had a significant and long-lasting effect on society, not just on business. Think about how work used to be a place you went to, and how it was done before COVID versus now. Previously, within the enterprise you had all of the users, all of the applications, all of the data, all of the people. And then since March 2020, almost overnight, that kind of inverted, and you had people working from home, and a person working from home is a branch office of one. So we ended up with another thousand branches literally overnight. A lot of the applications that we use are now SaaS or cloud-based, whether that's timekeeping with Kronos or employee communication with WorkJam. So they're not sitting within our data center, they're not sitting within our enterprise. It's all external. >>So from a security perspective, you obviously had to respond to that, and we heard a lot about endpoint and cloud security and refactoring the network and identity. These guys aren't really in identity, they partner for that, but still a lot of change in focus that the CISO had to deal with. How did you guys respond to that? And you had a rush to do it. Yeah. And as you sit back now, where do you go from here? >>Well, we had two major triggers for our network and security transformation. The first being COVID itself, and the second being a major MPLS telco renewal that came up. So that gave us an opportunity to look at what we were doing, and essentially our network was designed for an era that no longer exists, like I said, for before people were working from home, when all the applications were inside. And we had aging infrastructure, our firewalls were end of life. So initially we started off at the SD-WAN layer with an SD-WAN implementation, but when we investigated and saw the security capabilities that are available now, we moved to a full SASE implementation. >>Why Palo Alto Networks? Because you said you had an aging infrastructure designed for an era that doesn't exist anymore, but you also had a number of tools. We've been talking about consolidation a lot the last couple of days. Yeah. What did you consolidate, and why with Palo Alto? >>So we had a great partner in Australia, incidentally also called Cube, Cube Networks. Yeah. That we worked with. Great >>Names. Yeah, right. >>So we worked with Cube. We ran a form of tender process, and Palo Alto, with Prisma Access and GlobalProtect, was the only solution that gave us everything that we needed in terms of network modernization and the agility that we required. So for example, in our theme parks we want to send out a hot dog cart or an ice cream cart, and all of a sudden you've got a new branch. I want to spin up that branch in 10 minutes and then I want to spin it back down again.
So from an agility perspective, from a flexibility perspective, and for the security that we wanted from a zero trust perspective, they were the only vendor, certainly from a zero trust perspective they're probably the only vendor that exists, that actually provided all of those capabilities. >>And did you consolidate tools, or are you in the process of consolidating tools now? >>Yeah, we actually consolidated down to a single vendor. And in my previous role I had implemented SD-WAN before, and interoperability is a major issue in the IT industry. It's probably the only industry I can think of where we ship products that aren't ready. They don't have all the features that they should have; they're planned, and the vendors release patches and additional features every couple of months. If Ford sold you a car and said, hey, we're going to give you the back seats in a couple of months, there'd be uproar. But we do that all the time in IT. So when I previously implemented an SD-WAN transformation, I had products from two tier-one vendors that just didn't talk to one another. And when I went and spoke to those vendors, they just went, well, it's not me, it's clearly those guys. So there's a lot to be said for having a champion team rather than a team of champions, and Palo Alto have got that full stack, fully integrated. That was exactly what we were looking for. >>They've been talking a lot the last couple of days about integration, and I've talked with some of their executives and some analysts as well, including Dave, about that. It seems to be a differentiator for them because they really focus on it. Their M&A strategy seems to be very clear, and there's purpose in that back-end integration instead of leaving it to the customer, like Village Roadshow, to do it. They also talked a lot about consolidation. I'm just curious, Michael, in terms of what you've heard at the show in the last couple of days. >>Yeah, I mean, I've been hearing the same message, but actually we've lived it. >>You're living it. That's what I wanted to >>Know. So, you know, we had a choice of, do you try and purchase so-called best-of-breed products and then put a lot of effort into integrating them and trying to get them to work, which is not really what we want to spend time doing. I don't want to be famous for integration and great infrastructure. I want Village to be famous for delivering great experiences to our customers, memories that last a lifetime. When kids grow up in Australia, everybody remembers going to the theme parks. That's what I want our team to be doing, delivering those great experiences, not trying to plug together bits of software that may or may not work, with vendors pointing at one another while we are left carrying the can and holding the >>Baby. So what was the before and after? Can you give us a sense as to how life changed, pre that consolidation versus post?
>>Yeah, so our infrastructure was designed for the old ways of working, where we had routers that were not designed for modern traffic, including cloud-destined traffic, and an old MPLS network. We used to backhaul all the traffic from our branches to a central location where we've got firewalls and a DMZ, and we could run advanced inspection services on it. So if you had a branch that wanted to access a website that was hosted next door, even if it was across the country, we would pull that all the way back to Melbourne, apply advanced inspection services to it, send it up to the cloud and back across the country, and the traffic would come back down to us and back out to the branch. So you're talking about crossing the country four times, even if the website is situated next door. Now, with our SASE SD-WAN transformation, it just pops out to the cloud straight away, and the difference in performance for our team and for our customers is phenomenal. So we talk about saving minutes on a logon and seconds on an average transaction, and seconds don't sound like a lot, but it's every click, they're saving a second, and it adds up. You're talking about thousands of man-hours every month that we've saved. >>If Nir Zuk were sitting right here and said, what could we do better, what do you need from us that we're not delivering today that would change your life? Yeah, >>There's two things. One of which I think they're already doing, but I actually haven't experienced myself. It's around autonomous digital experience management. I've now got a thousand users who are sitting at home, and when they've got a problem, I don't know, is it my problem or is it their problem? So I know that Palo Alto are working on a digital experience solution which can actually tell you, well, you're sitting in your kitchen and your router is in your front room, maybe you should move closer to the router. So that's one thing. And the second thing is using AI to tell me things that I wouldn't be able to figure out without a human spending a lot of time sifting through data. So things like where I've potentially overcompensated and overdelivered on the network and security side, or potentially underdelivered on the security side. So having AI to assess all of those millions and probably billions of transactions and packets that are moving around our network and say, hey, you could optimize it more if you dial this down or dial this up. >>So you said earlier this industry has a habit of shipping products before they're ready. So based on your experience, and it sounds like you've got at least a decent technical background as well, when do you expect to have that capability? Realistically, when can we expect that as an industry? >>I think, like I said, the rate and nature of change is accelerating. The half-life of a degree is short. I think when I left university, what I learned in first year was obsolete within five years. I'd say now, what you learn in first year is probably obsolete by the time you finish your degree. >>Six months.
Yeah, it's true. So I think the rate of change, and the partnership that I see Palo Alto building with the likes of AWS and Google, and how they're coming together to jointly solve these problems, I think we will see this within 12 months. >>Who are your clouds? You've got multiple clouds? >>We've got multiple clouds. Mostly AWS, but there are certain things that we run that run in Azure as well. We don't really have much in GCP or some of the others. >>Azure for collaboration and Teams, stuff like that? >>Ah, we run SAP that's hosted in Azure, and our cinema ticketing system was run in Azure; it was only available in Azure at the time. We're mostly an AWS >>Shop. And what do you do with AWS? I mean, pretty much everything else is >>Pretty much everything else. Anything that's customer facing, our websites. They give us great stability, great availability, great performance, and it handles the variability as well. Our pattern of selling movie tickets is typically fairly flat, except when there's a launch of a new movie. So all of a sudden, at 9:00 AM when Spider-Man went on sale last year, I think we sold 100 times the usual amount of tickets in the first 10 minutes. And our website didn't just cope, it scaled beautifully, took in all of that extra traffic, scaled up without any intervention and then scaled back down. >>Taylor Swift needs that. She does need that. So yeah. And so is your vision to have Palo Alto Networks security infrastructure be a common sort of layer across those clouds, and maybe even some on-prem? Are you working toward that? Yeah, >>We'd love to have that. Our end customers don't really care about the infrastructure that we run. They won't be >>Able to, unless it breaks. >>Unless it breaks. Yeah. They want to be able to go see a movie, they want to be able to get on a rollercoaster, they want to be able to go play a round at Topgolf. So having that convergence and that seamless integration of working across cloud, network and security, most of our team don't know and don't need to know. In fact, I frankly don't want them thinking about networks and clouds. I want them thinking about how do we sell more cinema tickets, how do we give a great experience to our guests, how do we give long-lasting, lifetime memories to the people who come visit our parks? >>That's what they want. They want that experience. Right. I'd love to get your final thoughts. You gave a great overview of the role that you play as chief transformation officer; you own digital transformation and business transformation. What advice would you give to other chief transformation officers, CISOs, CSOs and CEOs about partnering? What's the right partner to really improve your security posture? >>I think there's two things. One is, if you haven't looked at this in the last two years and made some changes, you're out of date. Yeah. Because the world has changed. I've heard somebody say it was two decades' worth of change; I actually think it's probably 50 years' worth of change in Australia in terms of working habits. So one, you need to do something. You need to have a look at this. The second thing, I think, is to try and partner with someone that has similar values to your organization. Village is a wonderful, innovative company, very agile. Like the concept of Gold Class cinema, big recliner seats, waiter service, an elevated food concept, that was invented by Village in 1997. Thank you. And it finally came to the States about a decade later. We would have had the CEO of every major cinema chain in the world come to Melbourne, have a look at what Village is doing, and go, yeah, we're going to take that back around the world. It's probably one of Australia's unknown exports. So we've got a great innovation history and we'd like to think of ourselves as pretty agile. So working with partners who have a similar thought process, and who manage to an outcome and not to a contract, yeah, is important for us. >>It's all about outcomes, and you've had some great outcomes. Michael, thank you for joining us on the program and walking us through Village Roadshow, the challenges that you had and how you tackled them. Next time I'm in a movie theater in a reclining chair, I'm going to think about you and Village. So thank you, we appreciate your insights and your time. Thank you. Thanks, Michael. For Michael Fagan and Dave Vellante, I'm Lisa Martin. You've been watching theCUBE. Our live coverage of Palo Alto Networks Ignite comes to an end. We thank you so much for watching. We appreciate you. You're watching theCUBE, the leader in live enterprise and emerging tech coverage. See you next year. >>Yeah.
SUMMARY :
Lisa Martin and Dave Vellante close out theCUBE's coverage of Palo Alto Networks Ignite '22 in Las Vegas with Michael Fagan, Chief Transformation Officer at Village Roadshow. Fagan describes how the pandemic turned every employee working from home into a branch office of one and pushed key applications out to SaaS, which, combined with an MPLS contract renewal and end-of-life firewalls, triggered a move from a backhauled MPLS network to a SASE implementation built on Palo Alto Networks Prisma Access and GlobalProtect, selected with local partner Cube Networks. He credits the consolidated, integrated platform with faster logons and transactions that add up to thousands of man-hours saved each month, asks next for autonomous digital experience management and AI-driven network optimization, and advises peers to partner with vendors that manage to outcomes rather than contracts.
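Fagan's remark in the interview that saving a second per click adds up to thousands of man-hours a month is easy to sanity-check. In the rough calculation below, only the one second saved per interaction and the roughly one thousand home workers come from the conversation; the clicks per day and working days per month are illustrative assumptions, not figures he gave.

```python
# Rough sanity check of the "a second per click adds up to thousands of
# man-hours" point. Only the ~1 second per click and the ~1,000 home-based
# users come from the interview; the other inputs are assumptions.

seconds_saved_per_click = 1.0       # cited: roughly a second saved per interaction
users = 1000                        # cited: "another thousand branches" of one person each
clicks_per_user_per_day = 240       # assumption: interactions with SaaS apps per workday
working_days_per_month = 21         # assumption

seconds_saved = (seconds_saved_per_click * users *
                 clicks_per_user_per_day * working_days_per_month)
hours_saved = seconds_saved / 3600

print(f"~{hours_saved:,.0f} hours saved per month")   # ~1,400 hours with these inputs
```

With these assumptions the saving lands around fourteen hundred hours a month, the same order of magnitude Fagan describes; the point is simply that per-click latency compounds across a large, distributed workforce.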
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa Martin | PERSON | 0.99+ |
Dave Valante | PERSON | 0.99+ |
1997 | DATE | 0.99+ |
Michael | PERSON | 0.99+ |
Ford | ORGANIZATION | 0.99+ |
Australia | LOCATION | 0.99+ |
Dave | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
AWS | ORGANIZATION | 0.99+ |
March, 2020 | DATE | 0.99+ |
Michael Fagan | PERSON | 0.99+ |
Melbourne | LOCATION | 0.99+ |
Six months | QUANTITY | 0.99+ |
10 minutes | QUANTITY | 0.99+ |
Palo Alto | ORGANIZATION | 0.99+ |
two decades | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
Taylor Swift | PERSON | 0.99+ |
100 times | QUANTITY | 0.99+ |
Cube | ORGANIZATION | 0.99+ |
second day | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
The Cube | TITLE | 0.99+ |
Palo Alto Networks | ORGANIZATION | 0.99+ |
One | QUANTITY | 0.99+ |
Palo Alto Networks | ORGANIZATION | 0.99+ |
five 50 years | QUANTITY | 0.99+ |
first year | QUANTITY | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
billions | QUANTITY | 0.99+ |
millions | QUANTITY | 0.99+ |
Global Global Protect | ORGANIZATION | 0.99+ |
five years | QUANTITY | 0.99+ |
decade later | DATE | 0.98+ |
next year | DATE | 0.98+ |
second thing | QUANTITY | 0.98+ |
Caribbean | LOCATION | 0.98+ |
one | QUANTITY | 0.98+ |
9:00 AM | DATE | 0.98+ |
Vegas | LOCATION | 0.98+ |
12 months | QUANTITY | 0.98+ |
Azure | TITLE | 0.98+ |
Cube Networks | ORGANIZATION | 0.98+ |
today | DATE | 0.98+ |
Warner Brothers | ORGANIZATION | 0.97+ |
both | QUANTITY | 0.97+ |
one thing | QUANTITY | 0.96+ |
Village | ORGANIZATION | 0.96+ |
first | QUANTITY | 0.96+ |
pandemic | EVENT | 0.95+ |
Kronos | ORGANIZATION | 0.94+ |
Village Roadshow | ORGANIZATION | 0.94+ |
Prisma access | ORGANIZATION | 0.92+ |
one side | QUANTITY | 0.92+ |
second beam | QUANTITY | 0.9+ |
Sdwan | ORGANIZATION | 0.9+ |
golden Globe | TITLE | 0.9+ |
zero trust | QUANTITY | 0.88+ |
MGM Grand | LOCATION | 0.86+ |
Village Road show | ORGANIZATION | 0.86+ |
thousands of man hours | QUANTITY | 0.86+ |
second zone | QUANTITY | 0.85+ |
Village Roadshow | TITLE | 0.85+ |
CISO | ORGANIZATION | 0.85+ |
Peter Del Vecchio, Broadcom and Armando Acosta, Dell Technologies | SuperComputing 22
(upbeat music) (logo swooshing) >> Good morning and welcome back to Dallas, ladies and gentlemen, we are here with theCUBE Live from Supercomputing 2022. David, my cohost, how are you doing? Exciting, day two, feeling good? >> Very exciting. Ready to start off the day. >> Very excited. We have two fascinating guests joining us to kick us off. Please welcome Pete and Armando. Gentlemen, thank you for being here with us. >> Thank you for having us. >> Thank you for having us. >> I'm excited that you're starting off the day, because we've been hearing a lot of rumors about Ethernet as the fabric for HPC, but we really haven't done a deep dive yet during the show. You both seem all in on Ethernet. Tell us about that. Armando, why don't you start? >> Yeah, when you look at Ethernet, customers are asking for flexibility and choice. So when you look at HPC, InfiniBand's always been around, right? But where Ethernet's coming in is really our commercial and enterprise customers. Not everybody wants to be in the Top 500; what they want to do is improve their job time and improve their latency over the network. And when you look at Ethernet, the sweet spot is between 8, 12, 16, 32 nodes; that's a perfect fit for Ethernet in that space and those types of jobs. >> I love that. Pete, you want to elaborate? >> Yeah, sure. I think one of the biggest things you find with Ethernet for HPC is that, if you look at where the different technologies have gone over time, you've had old technologies like ATM, SONET, FDDI, and pretty much everything has now converged toward Ethernet. There are still some technologies such as InfiniBand and Omni-Path that are out there, but basically they're single source at this point. So what you see is that there is a huge ecosystem behind Ethernet. And the fact that Ethernet is used in the rest of the enterprise and in the cloud data centers means it is very easy to integrate HPC-based systems into those environments. So as you move HPC out of academia into enterprise, into cloud service providers, it's much easier to integrate it with the same technology you're already using in those data centers, in those networks. >> So what's the state of the art for Ethernet right now? What's the leading edge, what's shipping now and what's in the near future? You're with Broadcom, you guys design this stuff. >> Pete: Yeah. >> Savannah: Right. >> Yeah, so leading edge right now, I've got a couple things-- >> Savannah: We love a good stage prop here on theCUBE. >> Yeah, so this is Tomahawk 4. This is what is in production, it's shipping in large data centers worldwide. We started sampling this in 2019 and it started going into data centers in 2020. And this is 25.6 terabits per second. >> David: Okay. >> Which matches any other technology out there. If you look at, say, InfiniBand, the highest they have right now that's just starting to get into production is 25.6 T. So state of the art right now is what we introduced, we announced this in August. This is Tomahawk 5, and this is 51.2 terabits per second. So double the bandwidth of any other technology that's out there. And the important thing about networking technology is that when you double the bandwidth, you don't just double the efficiency; it actually winds up being a factor of six in efficiency. >> Savannah: Wow. >> 'Cause if you want, I can go into that, but... >> Why not? >> Well, what I want to know, please tell me that in your labs you have a poster on the wall that says T5 with some Terminator kind of character. (all laugh) 'Cause that would be cool. If it's not true, just don't say anything. I'll just... >> Pete: This can actually shift into a Terminator. >> Well, so this is from a switching perspective. >> Yeah. >> When we talk about the end nodes, when we talk about creating a fabric, what's the latest in terms of the NICs that are going in there? What speed are we talking about today? >> So as far as NIC speeds, it tends to be 50 gigabits per second, >> David: Okay. >> moving to a hundred gig PAM-4. >> David: Okay. >> And we do see a lot of NICs at the 200 gig Ethernet port speed, so that would be four lanes of 50 gig. But we do see that advancing to 400 gig fairly soon, and 800 gig in the future. But state of the art right now, what we're seeing for the end node tends to be 200 gig E based on 50 gig PAM-4. >> Wow. >> Yeah, that's crazy. >> Yeah, that is great. My mind is actively blown. I want to circle back to something that you brought up a second ago, which I think is really astute. When you talked about HPC moving from academia into enterprise, you're both seeing this happen. Where do you think we are on the adoption curve and in that cycle? Armando, do you want to go? >> Yeah, well, if you look at the market research, it's actually telling you it's 50/50 now. So Ethernet is at the level of 50%, InfiniBand's at 50%, right? >> Savannah: Interesting. >> Yeah, and what's interesting to us is customers are coming to us and saying, hey, we want to see flexibility and choice, so let's look at Ethernet and let's look at InfiniBand. But what is interesting about this is that we're working with Broadcom; we have their chips in our lab, we have their switches in our lab. And really what we're trying to do is make it simple to configure the network for, essentially, MPI. And so the goal here with our validated designs is really to simplify this. So if you have a customer that says, hey, I've been on InfiniBand but now I want to go Ethernet, there's going to be some learning curve there, and what we want to do is really simplify that, so that it's easy to install, they can get the cluster up and running, and they can actually get some value out of the cluster. >> Yeah, Pete, talk about that partnership. What does that look like? I mean, are you working with Dell before the T6 comes out? Or do you just say, what would be cool is we'll put this in the T6? >> No, we've had a very long partnership, both on the hardware and the software side. Dell has been an early adopter of our silicon. We've worked very closely on SAI and SONiC on the operating system side, and they provide very valuable feedback for us on our roadmap. So before we put out a new chip, and we actually have three different product lines within the switching group within Broadcom, we've gotten very valuable feedback on the hardware and on the APIs, on the operating system that goes on top of those chips. So that way, when it comes to market, Dell can take it and deliver the exact features that they have in the current generation to their customers to have that continuity. And they also give us feedback on the next-gen features they'd like to see, again in both the hardware and the software. >> So I'm fascinated by, I always like to know, yeah, exactly.
Look, you start talking about the largest supercomputers, the most powerful supercomputers that exist today, and you start looking at the specs, and there might be two million CPU cores, an exaflop of performance. What are the outward limits of T5 in switches, building out a fabric? What does that look like? What are the increments in terms of how many... And I know it's a "depends" answer, but how many nodes can you support in a scale-out cluster before you need another switch? What does that increment of scale look like today? >> Yeah, so this is 51.2 terabits per second. Where we see the most common implementation based on this would be with 400 gig Ethernet ports. >> David: Okay. >> So that would be 128 400 gig E ports connected to one chip. Now, if you went to 200 gig, which is kind of the state of the art for the NICs, you can have double that. So in a single hop, you can have 256 end nodes connected through one switch. >> Okay, so this T5, that thing right there, (all laughing) inside a sheet metal box, obviously you've got a bunch of ports coming out of that. So what does the form factor look like for where that T5 sits? Is there just one in a chassis, or... What does that look like? >> It tends to be pizza boxes these days. What you've seen overall is that the industry has moved away from chassis for these high-end systems, more towards pizza boxes. And you can have composable systems where, in the past, you would have line cards and the fabric cards that the line cards plug into or interface to. These days, what tends to happen is you have pizza boxes, and if you want to build up something like a virtual chassis, you use some of those pizza boxes as the fabric cards and some of them as the line cards. >> David: Okay. >> So the most common form factor for this, I'd say for North America, would be a 2RU box with 64 OSFP ports. And often each of those OSFPs, which is an 800 gig E or 800 gig port, is broken out into two 400 gig ports. >> So yeah, in 2RU, and this is all air cooled, you've got 51.2 T. We do see some cases where customers would like to have different optics, and they'll actually deploy a 4RU just so they have the faceplate density, so they can plug in 128, say, QSFP112. But it really depends on which optics, and whether you want to have DAC connectivity combined with optics. But those are the two most common form factors. >> And Armando, Ethernet isn't necessarily Ethernet, in the sense that many protocols can be run over it. >> Right. >> I think I have a projector at home that's actually using Ethernet physical connections. So what are we talking about here in terms of the actual protocol that's running over this? Is this exactly the same as what you think of as data center Ethernet, or is this RDMA over Converged Ethernet? What are we talking about? >> Yeah, so RDMA, right? So when you look at running, essentially, HPC workloads, you have the MPI protocol, the message passing interface, right? And what you need to do is make sure that that message passing interface runs efficiently on Ethernet. And so this is why we want to test and validate all these different things, to make sure that that protocol runs really, really fast on Ethernet.
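Acosta's point is that the application layer does not change: an MPI program is written against the MPI API, and whether its messages travel over InfiniBand or over Ethernet with RoCE is decided by the MPI library and fabric configuration underneath it, not by the application code. The minimal sketch below, using the mpi4py binding, is only an illustration of that portability; it is not code from Dell's validated designs.

```python
# Minimal MPI example (mpi4py): every rank contributes a value and all ranks
# receive the sum. The same program runs unchanged whether the MPI library
# underneath uses InfiniBand verbs or RoCE over an Ethernet fabric; the
# transport choice lives in the MPI runtime configuration, not in this code.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

local_value = rank + 1                       # stand-in for a piece of real work
total = comm.allreduce(local_value, op=MPI.SUM)

if rank == 0:
    print(f"{size} ranks, allreduce total = {total}")
```

Launched with something like `mpirun -np 8 python allreduce.py`, the interconnect the ranks actually use is selected by the MPI runtime's transport settings rather than by the script, which is what makes a validated Ethernet fabric a drop-in alternative from the application's point of view.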
If you look at MPI, officially it was designed to run on InfiniBand, but now, with the great work Broadcom is doing, we can make that work on Ethernet and get the same performance, so that's huge for customers. >> Both of you get to see a lot of different types of customers. I kind of feel like you're a little bit of a looking-into-the-crystal-ball type, because you essentially get to see the future, knowing what people are trying to achieve moving forward. Talk to us about the future of Ethernet in HPC in terms of AI and ML. Where do you think we're going to be next year, or 10 years from now? >> You want to go first, or you want me to go first? >> I can start, yeah. >> Savannah: Pete feels ready. >> So what I see with Ethernet, starting on the switch side, is that we've consistently doubled the bandwidth every 18 to 24 months. >> That's impressive. >> Pete: Yeah. >> Nicely done, casual humble brag there. That was great, I love that. I'm here for you. >> I mean, I think that's one of the benefits of Ethernet: the ecosystem, the trajectory, the roadmap we've had. You don't see that in any other networking technology. >> David: More who? (all laughing) >> So I see that trajectory continuing, the switches doubling in bandwidth. I think the protocols are evolving, especially, again, as you're moving away from academia into the enterprise and into cloud data centers; you need to have a combination of protocols. So you'll probably focus still on RDMA for the supercomputing and AI/ML workloads, but we do see that as you have a mix of applications running on these end nodes, maybe interfacing to the CPUs for some processing, you might use a different mix of protocols. So I'd say it's going to be a doubling of bandwidth over time and an evolution of the protocols. I expect that RoCE is probably going to evolve over time, depending on the AI/ML and HPC workloads. I think there's also a big change coming as far as the physical connectivity within the data center. One thing we've been focusing on is co-packaged optics. So right now, on this chip, all the balls on the back here are electrical connections. >> How many are there, by the way? 9,000 plus on the back of that-- >> 9,352. >> I love how specific it is. It's brilliant. >> Yeah, so right now all the SerDes, all the signals, come out electrically, but we've actually shown, we actually have a version of Tomahawk 4 at 25.6 T that has co-packaged optics. So instead of having electrical output, you actually have optics directly out of the package. And if you look at, we'll have a version of Tomahawk 5, >> Nice. >> where it's actually an even smaller form factor than this, where instead of having the electrical output from the bottom, you have fibers that plug directly into the sides. >> Wow. Cool. >> So I see the bandwidth, the radix increasing, the protocols, different physical connectivity. So I think there are a lot of things throughout, and the protocol stack is also evolving. So a lot of excitement, a lot of new technology coming to bear. >> Okay, you just threw a carrot down the rabbit hole. I'm only going to chase this one, okay? >> Peter: All right. >> So I think of individual discrete physical connections to the back of those balls. >> Yeah. >> So if there's 9,000, fill in the blank, that's how many connections there are.
How do you do that many optical connections? What's the mapping there? What does that look like? >> So what we've announced for Tomahawk 5 is that it would have FR4 optics coming out, so you'd actually have 512 channels coming out. Basically, on all four sides you'd have these fiber ribbons that come in and connect; there are actually fibers coming out of the sides there. We wind up having, I think in this case, 512 channels, and it winds up being on 128 actual fiber pairs, because-- >> It's miraculous, essentially. >> Savannah: I know. >> Yeah. So a lot of people are going to be looking at this and thinking in terms of InfiniBand versus Ethernet. I think you've highlighted some of the benefits of specifically running Ethernet moving forward, as HPC, which sort of trails slightly behind supercomputing as we define it, becomes more pervasive with AI/ML. What are some of the other things that maybe people might not immediately think about when they think about the advantages of running Ethernet in that environment? Is it about connecting the HPC part of their business into the rest of it? What are the advantages? >> Yeah, that's a big thing. One of the biggest things that Ethernet has, again, is that the data centers and the networks within enterprises and clouds right now run on Ethernet. So if you want to add services for your customers, the easiest thing for you to do is drop in clusters that are connected with the same networking technology. If you look at what's happening with some of the other proprietary technologies, in some cases they'll have two different types of networking technologies before they interface to Ethernet. So now you've got to train your technicians and your sysadmins on two different network technologies, and you need all the debug tooling and all the interconnect for that. Here, the easiest thing is you can use Ethernet; it's going to give you the same performance, and actually, in some cases, we've seen better performance than we've seen with Omni-Path, better than InfiniBand. >> That's awesome. Armando, we didn't get to you, so I want to make sure we get your future hot take. Where do you see the future of Ethernet here in HPC? >> Well, Pete hit on a big thing, which is bandwidth, right? So when you go and train a model in AI, you need to have a lot of data in order to train that model. What you do is essentially build a model and choose whatever neural network you want to utilize, but if you don't have a good data set to train the model on, you can't really train the model. So if you have bandwidth, you want big pipes, because you have to move that data set from the storage to the CPU. And maybe you do it on CPU only, but if you do it on accelerators, well, guess what? You need a big pipe in order to get all that data through. And here's the deal: the bigger the pipe you have, the more data, and the faster you can train that model. The faster you can train that model, guess what? The faster you get to some new insight. Maybe it's a new competitive advantage, maybe it's some new way you design a product, but that's the benefit of speed. You want faster, faster, faster. >> It's all about making it faster and easier for the users. >> Armando: It is. >> I love that. Last question for you, Pete, just because you've said Tomahawk seven times, and I'm thinking, we're in Texas, steaks, there's a lot going on with that. >> Making me hungry. >> I know, exactly. I'm sitting out here thinking, man, I did not have a big enough breakfast. How did you come up with the name Tomahawk? >> So Tomahawk, I think it just came from a list. We have a Trident product line, >> Savannah: Ah, yes. >> which is a missile product line, and Tomahawk is kind of the bigger and better missile, so. >> Savannah: Love this. Yeah, I mean-- >> So you let your engineers, they get to name it. >> Had to ask. >> It's collaborative. >> Okay. >> We want to make sure everyone's in sync with it. >> So it's not the Aquaman trident. >> Right. >> It's the steak Tomahawk. I think we're good now. >> Now that we've cleared that up-- >> Now we've cleared that up. >> Armando, Pete, it was really nice to have you both. Thank you for teaching us about the future of Ethernet in HPC. David Nicholson, always a pleasure to share the stage with you. And thank you all for tuning in to theCUBE, live from Dallas. We're here talking all things HPC and supercomputing all day long. We hope you'll continue to tune in. My name's Savannah Peterson, thanks for joining us. (soft music)
SUMMARY :
Savannah Peterson and David Nicholson open day two of theCUBE's Supercomputing 2022 coverage in Dallas with Peter Del Vecchio of Broadcom and Armando Acosta of Dell Technologies, discussing Ethernet as a fabric for HPC. Acosta says market research now puts Ethernet and InfiniBand at roughly 50/50, with commercial and enterprise clusters of 8 to 32 nodes as Ethernet's sweet spot, and describes Dell's validated designs with Broadcom aimed at making MPI over Ethernet simple to deploy. Del Vecchio shows the Tomahawk 4 (25.6 Tb/s) and Tomahawk 5 (51.2 Tb/s) switch silicon, notes that doubling switch bandwidth yields roughly a factor-of-six gain in fabric efficiency, and points to 200 GbE end nodes, evolving RDMA/RoCE protocols, and co-packaged optics as what comes next, while Acosta closes on bandwidth as the key to faster AI model training.
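Del Vecchio's "factor of six" remark is not spelled out in the conversation, but one common way to arrive at it, using only the port counts he mentions, is to compare a single 51.2 Tb/s, 128 x 400GbE switch against a non-blocking two-tier fabric of 25.6 Tb/s, 64 x 400GbE switches that exposes the same 128 end-facing ports. The sketch below is that back-of-the-envelope arithmetic under those assumptions, not a statement from Broadcom.

```python
# Why doubling switch bandwidth can mean ~6x fewer switches: build a
# non-blocking 128-port 400GbE fabric two ways.

ports_needed = 128            # end-facing 400GbE ports required

# Option 1: one 51.2 Tb/s switch (Tomahawk 5 class), 128 x 400GbE on a single chip.
th5_switches = 1

# Option 2: 25.6 Tb/s switches (Tomahawk 4 class), 64 x 400GbE each, arranged
# as a two-tier leaf/spine so the fabric stays non-blocking.
leaf_down = 32                # half of each leaf's 64 ports face the end nodes
leaf_up = 32                  # the other half face the spine layer
leaves = ports_needed // leaf_down       # 4 leaf switches
spines = (leaves * leaf_up) // 64        # 2 spine switches terminate the uplinks
th4_switches = leaves + spines           # 6 switches in total

print(f"{th4_switches} x 25.6T switches vs {th5_switches} x 51.2T switch")
# -> 6 x 25.6T switches vs 1 x 51.2T switch; the inter-switch optics and the
#    extra hop disappear along with the five extra boxes.
```

Six smaller switches collapse into one, and the leaf-to-spine optics and the additional hop go away with them, which is where the claimed efficiency gain beyond a simple 2x comes from.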
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
2019 | DATE | 0.99+ |
David Nicholson | PERSON | 0.99+ |
2020 | DATE | 0.99+ |
Pete | PERSON | 0.99+ |
Texas | LOCATION | 0.99+ |
August | DATE | 0.99+ |
Peter | PERSON | 0.99+ |
Savannah | PERSON | 0.99+ |
30 speeds | QUANTITY | 0.99+ |
200 gig | QUANTITY | 0.99+ |
Savannah Peterson | PERSON | 0.99+ |
50 gig | QUANTITY | 0.99+ |
Armando | PERSON | 0.99+ |
128 | QUANTITY | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
9,000 | QUANTITY | 0.99+ |
400 gig | QUANTITY | 0.99+ |
Broadcom | ORGANIZATION | 0.99+ |
50% | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
128, 400 gig | QUANTITY | 0.99+ |
800 gig | QUANTITY | 0.99+ |
Dallas | LOCATION | 0.99+ |
512 channels | QUANTITY | 0.99+ |
9,352 | QUANTITY | 0.99+ |
24 months | QUANTITY | 0.99+ |
one chip | QUANTITY | 0.99+ |
Tomahawk 4 | COMMERCIAL_ITEM | 0.99+ |
both | QUANTITY | 0.99+ |
North America | LOCATION | 0.99+ |
next year | DATE | 0.99+ |
one | QUANTITY | 0.98+ |
512 fiber | QUANTITY | 0.98+ |
seven times | QUANTITY | 0.98+ |
Tomahawk 5 | COMMERCIAL_ITEM | 0.98+ |
four lanes | QUANTITY | 0.98+ |
9,000 plus | QUANTITY | 0.98+ |
Dell Technologies | ORGANIZATION | 0.98+ |
today | DATE | 0.97+ |
Aquaman | PERSON | 0.97+ |
Both | QUANTITY | 0.97+ |
InfiniBand | ORGANIZATION | 0.97+ |
QSFP 112 | OTHER | 0.96+ |
hundred gig | QUANTITY | 0.96+ |
Peter Del Vecchio | PERSON | 0.96+ |
25.6 terabytes per second | QUANTITY | 0.96+ |
two fascinating guests | QUANTITY | 0.96+ |
single source | QUANTITY | 0.96+ |
64 OSFP | QUANTITY | 0.95+ |
Rocky | ORGANIZATION | 0.95+ |
two million CPUs | QUANTITY | 0.95+ |
25.6 T. | QUANTITY | 0.95+ |
Peter Del Vecchio, Broadcom and Armando Acosta, Dell Technologies | SuperComputing 22
>>You can put this in a conference. >>Good morning and welcome back to Dallas. Ladies and gentlemen, we are here with the cube Live from, from Supercomputing 2022. David, my cohost, how you doing? Exciting. Day two. Feeling good. >>Very exciting. Ready to start off the >>Day. Very excited. We have two fascinating guests joining us to kick us off. Please welcome Pete and Armando. Gentlemen, thank you for being here with us. >>Having us, >>For having us. I'm excited that you're starting off the day because we've been hearing a lot of rumors about ethernet as the fabric for hpc, but we really haven't done a deep dive yet during the show. Y'all seem all in on ethernet. Tell us about that. Armando, why don't you start? >>Yeah. I mean, when you look at ethernet, customers are asking for flexibility and choice. So when you look at HPC and you know, infinite band's always been around, right? But when you look at where Ethernet's coming in, it's really our commercial and their enterprise customers. And not everybody wants to be in the top 500. What they want to do is improve their job time and improve their latency over the network. And when you look at ethernet, you kinda look at the sweet spot between 8, 12, 16, 32 nodes. That's a perfect fit for ethernet and that space and, and those types of jobs. >>I love that. Pete, you wanna elaborate? Yeah, yeah, >>Yeah, sure. I mean, I think, you know, one of the biggest things you find with internet for HPC is that, you know, if you look at where the different technologies have gone over time, you know, you've had old technologies like, you know, atm, Sonic, fitty, you know, and pretty much everything is now kind of converged toward ethernet. I mean, there's still some technologies such as, you know, InfiniBand, omnipath that are out there. Yeah. But basically there's single source at this point. So, you know, what you see is that there is a huge ecosystem behind ethernet. And you see that also, the fact that ethernet is used in the rest of the enterprise is using the cloud data centers that is very easy to integrate HPC based systems into those systems. So as you move HPC out of academia, you know, into, you know, into enterprise, into cloud service providers is much easier to integrate it with the same technology you're already using in those data centers, in those networks. >>So, so what's this, what is, what's the state of the art for ethernet right now? What, you know, what's, what's the leading edge, what's shipping now and what and what's in the near future? You, you were with Broadcom, you guys design this stuff. >>Yeah, yeah. Right. Yeah. So leading edge right now, I got a couple, you know, Wes stage >>Trough here on the cube. Yeah. >>So this is Tomahawk four. So this is what is in production is shipping in large data centers worldwide. We started sampling this in 2019, started going into data centers in 2020. And this is 25.6 tets per second. Okay. Which matches any other technology out there. Like if you look at say, infin band, highest they have right now that's just starting to get into production is 25 point sixt. So state of the art right now is what we introduced. We announced this in August. This is Tomahawk five. So this is 51.2 terabytes per second. So double the bandwidth have, you know, any other technology that's out there. And the important thing about networking technology is when you double the bandwidth, you don't just double the efficiency, it's actually winds up being a factor of six efficiency. Wow. 
Cause if you want, I can go into that, but why >>Not? Well, I, what I wanna know, please tell me that in your labs you have a poster on the wall that says T five with, with some like Terminator kind of character. Cause that would be cool if it's not true. Don't just don't say anything. I just want, I can actually shift visual >>It into a terminator. So. >>Well, but so what, what are the, what are the, so this is, this is from a switching perspective. Yeah. When we talk about the end nodes, when we talk about creating a fabric, what, what's, what's the latest in terms of, well, the kns that are, that are going in there, what's, what speed are we talking about today? >>So as far as 30 speeds, it tends to be 50 gigabits per second. Okay. Moving to a hundred gig pan four. Okay. And we do see a lot of Knicks in the 200 gig ethernet port speed. So that would be, you know, four lanes, 50 gig. But we do see that advancing to 400 gig fairly soon. 800 gig in the future. But say state of the art right now, we're seeing for the end nodes tends to be 200 gig E based on 50 gig pan four. Wow. >>Yeah. That's crazy. Yeah, >>That is, that is great. My mind is act actively blown. I wanna circle back to something that you brought up a second ago, which I think is really astute. When you talked about HPC moving from academia into enterprise, you're both seeing this happen. Where do you think we are on the adoption curve and sort of in that cycle? Armand, do you wanna go? >>Yeah, yeah. Well, if you look at the market research, they're actually telling it's 50 50 now. So ethernet is at the level of 50%. InfiniBand is at 50%. Right. Interesting. Yeah. And so what's interesting to us, customers are coming to us and say, Hey, we want to see, you know, flexibility and choice and hey, let's look at ethernet and let's look at InfiniBand. But what is interesting about this is that we're working with Broadcom, we have their chips in our lab, we have our switches in our lab. And really what we're trying to do is make it easy to simple and configure the network for essentially mpi. And so the goal here with our validated designs is really to simplify this. So if you have a customer that, Hey, I've been in fbe, but now I want to go ethernet, you know, there's gonna be some learning curves there. And so what we wanna do is really simplify that so that we can make it easy to install, get the cluster up and running, and they can actually get some value out of the cluster. >>Yeah. Peter, what, talk about that partnership. What, what, what does that look like? Is it, is it, I mean, are you, you working with Dell before the, you know, before the T six comes out? Or you just say, you know, what would be cool, what would be cool is we'll put this in the T six? >>No, we've had a very long partnership both on the hardware and the software side. You know, Dell has been an early adopter of our silicon. We've worked very closely on SI and Sonic on the operating system, you know, and they provide very valuable feedback for us on our roadmap. So before we put out a new chip, and we have actually three different product lines within the switching group within Broadcom, we've then gotten, you know, very valuable feedback on the hardware and on the APIs, on the operating system that goes on top of those chips. So that way when it comes to market, you know, Dell can take it and, you know, deliver the exact features that they have in the current generation to their customers to have that continuity. 
And also they give us feedback on the next gen features they'd like to see again in both the hardware and the software. >>So, so I, I'm, I'm just, I'm fascinated by, I I, I always like to know kind like what Yeah, exactly. Exactly right. Look, you, you start talking about the largest super supercomputers, most powerful supercomputers that exist today, and you start looking at the specs and there might be 2 million CPUs, 2 million CPU cores, yeah. Ex alop of, of, of, of performance. What are the, what are the outward limits of T five in switches, building out a fabric, what does that look like? What are the, what are the increments in terms of how many, and I know it, I know it's a depends answer, but, but, but how many nodes can you support in a, in a, in a scale out cluster before you need another switch? What does that increment of scale look like today? >>Yeah, so I think, so this is 51.2 terras per second. What we see the most common implementation based on this would be with 400 gig ethernet ports. Okay. So that would be 128, you know, 400 giggi ports connected to, to one chip. Okay. Now, if you went to 200 gig, which is kind of the state of the art for the Nicks, you can have double that. Okay. So, you know, in a single hop you can have 256 end nodes connected through one switch. >>So, okay, so this T five, that thing right there inside a sheet metal box, obviously you've got a bunch of ports coming out of that. So what is, what does that, what's the form factor look like for that, for where that T five sits? Is there just one in a chassis or you have, what does that look >>Like? It tends to be pizza boxes these days. Okay. What you've seen overall is that the industry's moved away from chassis for these high end systems more towards pizza, pizza boxes. And you can have composable systems where, you know, in the past you would have line cards, either the fabric cards that the line cards are plugged into or interface to these days, what tends to happen is you'd have a pizza box, and if you wanted to build up like a virtual chassis, what you would do is use one of those pizza boxes as the fabric card, one of them as the, the line card. >>Okay. >>So what we see, the most common form factor for this is they tend to be two, I'd say for North America, most common would be a two R U with 64 OSF P ports. And often each of those OSF p, which is an 800 gig e or 800 gig port, we've broken out into two 400 gig quarts. Okay. So yeah, in two r u you've got, and this is all air cooled, you know, in two re you've got 51.2 T. We do see some cases where customers would like to have different optics, and they'll actually deploy a four U just so that way they have the face place density, so they can plug in 128, say qsf P one 12. But yeah, it really depends on which optics, if you wanna have DAK connectivity combined with, with optics. But those are the two most common form factors. >>And, and Armando ethernet isn't, ethernet isn't necessarily ethernet in the sense that many protocols can be run over it. Right. I think I have a projector at home that's actually using ethernet physical connections. But what, so what are we talking about here in terms of the actual protocol that's running over this? Is this exactly the same as what you think of as data center ethernet, or, or is this, you know, RDMA over converged ethernet? What, what are >>We talking about? Yeah, so our rdma, right? 
So when you look at, you know, running, you know, essentially HPC workloads, you have the NPI protocol, so message passing interface, right? And so what you need to do is you may need to make sure that that NPI message passing interface runs efficiently on ethernet. And so this is why we want to test and validate all these different things to make sure that that protocol runs really, really fast on ethernet, if you look at NPI is officially, you know, built to, Hey, it was designed to run on InfiniBand, but now what you see with Broadcom and the great work they're doing now, we can make that work on ethernet and get, you know, it's same performance. So that's huge for customers. >>Both of you get to see a lot of different types of customers. I kind of feel like you're a little bit of a, a looking into the crystal ball type because you essentially get to see the future knowing what people are trying to achieve moving forward. Talk to us about the future of ethernet in hpc in terms of AI and ml. Where, where do you think we're gonna be next year or 10 years from now? >>You wanna go first or you want me to go first? I can start. >>Yeah. Pete feels ready. >>So I mean, what I see, I mean, ethernet, I mean, is what we've seen is that as far as on the starting off of the switch side, is that we've consistently doubled the bandwidth every 18 to 24 months. That's >>Impressive. >>Yeah. So nicely >>Done, casual, humble brag there. That was great. That was great. I love that. >>I'm here for you. I mean, I think that's one of the benefits of, of Ethan is like, is the ecosystem, is the trajectory, the roadmap we've had, I mean, you don't see that in any other networking technology >>More who, >>So, you know, I see that, you know, that trajectory is gonna continue as far as the switches, you know, doubling in bandwidth. I think that, you know, they're evolving protocols. You know, especially again, as you're moving away from academia into the enterprise, into cloud data centers, you need to have a combination of protocols. So you'll probably focus still on rdma, you know, for the supercomputing, the a AIML workloads. But we do see that, you know, as you have, you know, a mix of the applications running on these end nodes, maybe they're interfacing to the, the CPUs for some processing, you might use a different mix of protocols. So I'd say it's gonna be doubling a bandwidth over time evolution of the protocols. I mean, I expect that Rocky is probably gonna evolve over time depending on the a AIML and the HPC workloads. I think also there's a big change coming as far as the physical connectivity within the data center. Like one thing we've been focusing on is co-pack optics. So, you know, right now this chip is all, all the balls in the back here, there's electrical connections. How >>Many are there, by the way? 9,000 plus on the back of that >>352. >>I love how specific it is. It's brilliant. >>Yeah. So we get, so right now, you know, all the thirties, all the signals are coming out electrically based, but we've actually shown, we have this, actually, we have a version of Hawk four at 25 point sixt that has co-pack optics. So instead of having electrical output, you actually have optics directly out of the package. And if you look at, we'll have a version of Tomahawk five Nice. Where it's actually even a smaller form factor than this, where instead of having the electrical output from the bottom, you actually have fibers that plug directly into the sides. Wow. Cool. 
>>Both of you get to see a lot of different types of customers. I kind of feel like you're a little bit of a looking-into-the-crystal-ball type, because you essentially get to see the future, knowing what people are trying to achieve moving forward. Talk to us about the future of Ethernet in HPC, in terms of AI and ML. Where do you think we're gonna be next year, or 10 years from now? >>You wanna go first, or you want me to go first? I can start. >>Yeah, Pete feels ready. >>So what I see, I mean, with Ethernet, what we've seen is that on the switch side we've consistently doubled the bandwidth every 18 to 24 months. That's >>Impressive. >>Yeah. So nicely >>Done, casual humble brag there. That was great. I love that. >>I'm here for you. I mean, I think that's one of the benefits of Ethernet: it's the ecosystem, it's the trajectory, the roadmap we've had. I mean, you don't see that in any other networking technology. So, you know, I see that trajectory continuing as far as the switches doubling in bandwidth. I think the protocols are also evolving, especially as you're moving away from academia into the enterprise, into cloud data centers, where you need to have a combination of protocols. So you'll probably still focus on RDMA for the supercomputing and AI/ML workloads, but we do see that as you have a mix of applications running on these end nodes, maybe interfacing to the CPUs for some processing, you might use a different mix of protocols. So I'd say it's going to be a doubling of bandwidth over time and an evolution of the protocols. I expect that RoCE is probably going to evolve over time, depending on the AI/ML and HPC workloads. I think there's also a big change coming as far as the physical connectivity within the data center. One thing we've been focusing on is co-packaged optics. So right now this chip is all electrical connections, all the balls on the back here. How >>Many are there, by the way? 9,000 plus on the back of that? >>9,352. >>I love how specific it is. It's brilliant. >>Yeah. So right now all the signals are coming out electrically, but we've actually shown, we have a version of Tomahawk 4 at 25.6T that has co-packaged optics. So instead of having electrical output, you actually have optics directly out of the package. And we'll have a version of Tomahawk 5, Nice, where it's actually an even smaller form factor than this, where instead of having the electrical output from the bottom, you actually have fibers that plug directly into the sides. Wow. Cool. So I see there's the bandwidth, there's radix increasing, protocols, different physical connectivity. So I think there's a lot of things throughout, and the protocol stack is also evolving. So a lot of excitement, a lot of new technology coming to bear. >>Okay. You just threw a carrot down the rabbit hole. I'm only gonna chase this one. Okay. >>All right. >>So I think of individual discrete physical connections to the back of those balls. Yeah. So if there's 9,000, fill in the blank, that's how many connections there are. How do you do that in that many optical connections? What's the mapping there? What does that look like? >>So what we've announced for Tomahawk 5 is that it would have FR4 optics coming out. So you'd actually have, you know, 512 fiber pairs coming out; basically on all four sides you'd have these fiber ribbons that come in and connect, so there are actually fibers coming out of the sides there. We wind up having, actually, I think in this case, 512 channels, and it winds up being on 128 actual fiber pairs, because >>It's miraculous, essentially. I know. Yeah. So, you know, a lot of people are gonna be looking at this and thinking in terms of InfiniBand versus Ethernet. I think you've highlighted some of the benefits of specifically running Ethernet moving forward as HPC, which sort of just trails slightly behind supercomputing as we define it, becomes more pervasive, AI, ML. What are some of the other things that people might not immediately think about when they think about the advantages of running Ethernet in that environment? Is it about connecting the HPC part of their business into the rest of it? What are the advantages? >>Yeah, I mean, that's a big thing. I think one of the biggest things that Ethernet has, again, is that the data centers, the networks within enterprises, within clouds, right now run on Ethernet. So now, if you want to add services for your customers, the easiest thing for you to do is drop in clusters that are connected with the same networking technology. So one of the biggest things there is that if you look at what's happening with some of the other proprietary technologies, in some cases they'll have two different types of networking technologies before they interface to Ethernet. So now you've got to train your technicians, train your sysadmins on two different network technologies; you need to have all the debug technology, all the interconnect for that. Here, the easiest thing is you can use Ethernet, and it's gonna give you the same performance. And actually, in some cases we've seen better performance than we've seen with Omni-Path, better than InfiniBand.
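As a quick check on the 512-channel / 128-fiber-pair figure Pete gives above, here is a small sketch. The four-wavelengths-per-fiber-pair factor is an assumption about how FR4-style optics are normally multiplexed, not a number stated in the interview.

```python
# Co-packaged optics mapping sketch. Assumes an FR4-style scheme where four
# wavelengths share one duplex fiber pair, which is consistent with the
# 512-channel / 128-fiber-pair figures quoted above.

TOTAL_CHANNELS = 512        # channels leaving the switch package
WAVELENGTHS_PER_PAIR = 4    # assumed: 4 wavelengths multiplexed per fiber pair

fiber_pairs = TOTAL_CHANNELS // WAVELENGTHS_PER_PAIR
print(f"{TOTAL_CHANNELS} channels over {fiber_pairs} fiber pairs")
# -> 512 channels over 128 fiber pairs
```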
>>That's awesome. Armando, we didn't get to you, so I wanna make sure we get your future hot take. Where do you see the future of Ethernet here in HPC? >>Well, Pete hit on a big thing, which is bandwidth, right? So when you look at training a model, okay, when you go and train a model in AI, you need to have a lot of data in order to train that model, right? So what you do is essentially you build a model, you choose whatever neural network you wanna utilize, but if you don't have a good data set to train that model over, you can't really train the model. So if you have bandwidth, you want big pipes, because you have to move that data set from the storage to the CPU. And essentially, maybe you're gonna do it on CPU only, but if you do it on accelerators, well, guess what? You need a big pipe in order to get all that data through. And here's the deal: the bigger the pipe you have, the more data, the faster you can train that model. And the faster you can train that model, guess what? The faster you get to some new insight. Maybe it's a new competitive advantage, maybe it's some new way you design a product, but that's the benefit of speed: you want faster, faster, faster. >>It's all about making it faster and easier for the users. I love that. Last question for you, Pete, just because you've said Tomahawk seven times, and I'm thinking we're in Texas, steaks, there's a lot going on with that, making >>Me hungry. >>I know, exactly. I'm sitting up here thinking, man, I did not have a big enough breakfast. How do you come up with the name Tomahawk? >>So Tomahawk, I think, just came from a list. So we have a Trident product line. Ah, a missile product line. And Tomahawk is kind of, you know, the bigger and badder missile, so. Oh, okay. >>Love this. Yeah, well, I >>Mean, so do you let your engineers name it, or do you get to name it? >>Had to ask. It's >>Collaborative. Oh good. I wanna make sure everyone's in sync with it. >>So just so we're clear, it's not the Aquaman trident. Right, >>Right. >>The steak Tomahawk. I >>Think we're good now. Now that we've cleared that up. Now we've cleared >>That up. >>Armando, Pete, it was really nice to have you both. Thank you for teaching us about the future of Ethernet in HPC. David Nicholson, always a pleasure to share the stage with you. And thank you all for tuning in to the Cube Live from Dallas. We're here talking all things HPC and supercomputing all day long. We hope you'll continue to tune in. My name's Savannah Peterson, thanks for joining us.
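To put Armando's "bigger pipe" point above in rough numbers, here is a small closing sketch. The 10 TB data-set size and the link speeds are illustrative assumptions, not figures from the conversation.

```python
# Rough data-movement arithmetic behind the "bigger pipe" argument above.
# The data-set size and link speeds are illustrative assumptions only.

DATASET_TB = 10  # assumed training data set size, in terabytes

def transfer_minutes(dataset_tb: float, link_gbps: float) -> float:
    """Minutes to move the data set once over a link, ignoring protocol overhead."""
    dataset_bits = dataset_tb * 8 * 1e12
    return dataset_bits / (link_gbps * 1e9) / 60

for link_gbps in (100, 200, 400):
    print(f"{link_gbps} Gb/s link: ~{transfer_minutes(DATASET_TB, link_gbps):.1f} min to move {DATASET_TB} TB")
# Doubling the link speed halves the time the accelerators spend waiting on data.
```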