Armando Acosta, Dell Technologies and Matt Leininger, Lawrence Livermore National Laboratory
(upbeat music) >> We are back, approaching the finish line here at Supercomputing 22, our last interview of the day, our last interview of the show. And I have to say, Dave Nicholson, my co-host (my name is Paul Gillin), I've been attending trade shows for 40 years, Dave, and I've never been to one like this. The type of people who are here, the type of problems they're solving, what they talk about, the trade shows are typically so speeds and feeds. They're so financial, they're so ROI, they all sound the same after a while. This is truly a different event. Do you get that sense? >> A hundred percent. Now, I've been attending trade shows for 10 years since I was 19, in other words, so I don't necessarily have your depth. No, but seriously, Paul, totally, completely, completely different than any other conference. First of all, there's the absolute allure of looking at the latest and greatest, coolest stuff. I mean, when you have NASA lecturing on things, when you have Lawrence Livermore Labs, who we're going to be talking to here in a second, it's a completely different story. You have all of the academics, you have students who are in competition and also interviewing with organizations. It's phenomenal. I've had chills a lot this week. >> And I guess our last two guests sort of represent that cross section. Armando Acosta, director of HPC Solutions, High Performance Computing Solutions, at Dell. And Matt Leininger, who is the HPC Strategist at Lawrence Livermore National Laboratory. Now, there is perhaps, I don't know, you can correct me on this, but perhaps no institution in the world that uses more computing cycles than Lawrence Livermore National Laboratory, and it is always on the leading edge of what's going on in supercomputing. And so we want to talk to both of you about that. Thank you. Thank you for joining us today. >> Sure, glad to be here. >> Thanks for having us. >> Let's start with you, Armando. Well, let's talk about the juxtaposition of the two of you. I would not have thought of LLNL as being a Dell reference account in the past. Tell us about the background of your relationship and what you're providing to the laboratory. >> Yeah, so we're really excited to be working with Lawrence Livermore, working with Matt. But actually this process started about two years ago. So we started looking at essentially what was coming down the pipeline. You know, what were the customer requirements, what did we need in order to make Matt successful. And so the beauty of this project is that we've been talking about this for two years, and now it's finally coming to fruition. And now we're actually delivering systems and delivering racks of systems. But what I really appreciate is Matt coming to us, us working together for two years and really trying to understand what are the requirements, what's the schedule, what do we need to hit in order to make them successful. >> At Lawrence Livermore, what drives your computing requirements, I guess? You're working on some very, very big problems, but a lot of very complex problems. How do you decide what you need to procure to address them? >> Well, that's a difficult challenge. I mean, our mission is a national security mission, dealing with making sure that we do our part to provide the high performance computing capabilities to the US Department of Energy's National Nuclear Security Administration. We do that through the Advanced Simulation and Computing program.
Its goal is to provide that computing power to make sure that the US nuclear stockpile is safe, secure, and effective. So how do we go about doing that? There's a lot of work involved. We have multiple platform lines that we accomplish that goal with. One of them is the Advanced Technology Systems. Those are the ones you've heard about a lot, they're pushing towards exascale, with the GPU technologies incorporated into those. We also have a second line, a platform line, called the Commodity Technology Systems. That's where right now we're partnering with Dell on the latest generation of those. Those systems are a little more conservative, they're right now CPU-only driven, but they're also intended to be the everyday workhorses. So those are the first systems our users get on. It's very easy for them to get their applications up and running. They're the first things they use, usually on a day to day basis. They run a lot of the small to medium size jobs that you need to do to figure out how to most effectively use them, and which workloads you need to move to the even larger systems to accomplish our mission goals. >> The workhorses. >> Yeah. >> What have you seen here these last few days of the show, what excites you? What are the most interesting things you've seen? >> There's all kinds of things that are interesting. Probably the most interesting ones I can't talk about in public, unfortunately, 'cause of NDA agreements, of course. But it's always exciting to be here at Supercomputing. It's always exciting to see the products that we've been working with industry on, and co-designing with them, for, you know, several years before the public actually sees them. That's always an exciting part of the conference as well. Specifically with CTS-2, it's exciting. As was mentioned before, I've been working with Dell for nearly two years on this, but the systems first started being delivered this past August. And so we're just taking the initial deliveries of those. We've deployed, you know, roughly about 1600 nodes now, but that'll ramp up to over 6,000 nodes over the next three or four months. >> So how does this work intersect with Sandia and Los Alamos? Explain to us the relationship there. >> Right, so those three laboratories are the laboratories under the National Nuclear Security Administration. We partner together on CTS. So the architectures, as you were asking, how do we define these things, it's the labs coming together. Those three laboratories together define what we need for that architecture. We have a joint procurement that is run out of Livermore, but then the systems are deployed at all three laboratories. And then they serve the programs that I mentioned for each laboratory as well. >> I've worked in this space for a very long time, you know. I've worked with agencies where the closest I got to anything they were actually doing was the sort of guest suite outside the secure area. And sometimes there are challenges when you're communicating. It's like you have a partner like Dell who has all of these things to offer, all of these ideas. You have requirements, but maybe you can't share 100% of what you need to do. How do you navigate that? Who makes the decision about what can be revealed in these conversations? You talk about NDA in terms of what's been shared with you; you may be limited in terms of what you can share with vendors. Does that cause inefficiency? >> To some degree.
I mean, we do a good job within the NNSA of understanding what our applications need and then mapping that to technical requirements that we can talk about with vendors. We also have kind of an in-between, and we've done this for many years. A recent example is of course with the exascale computing program, and one of the things it's doing is creating proxy apps, or mini apps, that are smaller versions of the application areas that are important to us: hydrodynamics, materials science, things like that. And so we can collaborate with vendors on those proxy apps to co-design systems and tweak the architectures. In fact, we've done a little bit of that with CTS-2, not as much in CTS as maybe in the ATS platforms, but that kind of general idea of how we collaborate through these proxy applications is something we've used across platforms. >> Now is Dell one of your co-design partners? >> In CTS-2, absolutely, yep. >> And how, what aspects of CTS-2 are you working on with Dell? >> Well, the architecture itself was the first thing we worked with them on. We had a procurement come out, you know, and they bid an architecture on that. We had worked with them previously, you know, on our requirements, understanding what our requirements are. But that architecture today is based on the fourth generation Intel Xeon that you've heard a lot about at the conference. We are one of the first customers to get those systems in. All the systems are interconnected together with the Cornelis Networks Omni-Path network that we've used before and are very excited about as well. And we build up from there. The systems get integrated in by the operations teams at the laboratory. They get integrated into our production computing environment. Dell is really responsible, you know, for designing these systems and delivering them to the laboratories. The laboratories then work with Dell. We have a software stack that we provide on top of that called TOSS, for Tri-Lab Operating System Stack. It's based on Red Hat Enterprise Linux. But the goal there is that it allows us a common user environment, a common simulation environment, across not only CTS-2, but maybe older systems we have and even the larger systems that we'll be deploying as well. So from a user perspective they see a common user interface, a common environment, across all the different platforms that they use at Livermore and the other laboratories. >> And Armando, what does Dell get out of the co-design arrangement with the lab? >> Well, we get to make sure that they're successful. But the other big thing that we want to do is, typically when you think about Dell and HPC, a lot of people don't make that connection together. And so what we're trying to do is make sure that, you know, they know that, hey, whether you're a workgroup customer at the smallest end or a supercomputer customer at the highest end, Dell wants to make sure that we have the right portfolio to match any needs across this. But what we were really excited about is this, this is kind of our, you know, big CTS-2, the first big thing we've done together. And so, you know, hopefully this has been successful. We've made Matt happy, and we look forward to the future and what we can do with bigger and bigger things. >> So will the labs be okay with Dell coming up with a marketing campaign that said something like, "We can't confirm that alien technology is being reverse engineered." >> Yeah, that would fly. >> I mean that would be right, right?
And I have to ask you the question directly, and the way you can answer it is by smiling like you're thinking, what a stupid question. Are you reverse engineering alien technology at the labs? >> Yeah, you'd have to ask the PR office. >> Okay, okay. (all laughing) >> Good answer. >> No, but it is fascinating, because to a degree it's like you could say, yeah, we're working together, but if you really want to dig into it, it's like, "Well, I kind of can't tell you exactly how some of this stuff is done." Do you consider anything that you do from a technology perspective, not what you're doing with it, but the actual stack, do you try to design proprietary things into the stack, or do you say, "No, no, no, we're going to go with standards, and then what we do with it is proprietary and secret."? >> Yeah, it's more the latter. >> It's the latter? Yeah, yeah, yeah. So you're not going to try to reverse engineer the industry? >> No, no. We want the solutions that we develop to enhance the industry, to be able to apply to a broader market, so that we can, you know, gain from the volume of that market, the lower cost that they would enable, right? If we go off and develop more and more customized solutions, that can be extraordinarily expensive. And so we're really looking to leverage the wider market, but do what we can to influence that, to develop key technologies that we and others need that can enable us in the high performance computing space. >> We were talking with Satish Iyer from Dell earlier about validated designs, Dell's reference designs for pharma and for manufacturing in HPC. Armando, are you seeing HPC, traditionally more of an academic research discipline, beginning to come together with commercial applications? And are these two markets beginning to blend? >> Yeah, I mean, so here's what's happening: you have this convergence of HPC, AI and data analytics. And so when you have that combination of those three workloads, they're applicable across many vertical markets, right? Whether it's financial services, whether it's life science, government and research. But what's interesting, and Matt won't brag about it, but a lot of stuff that happens in the DOE labs trickles down to the enterprise space, trickles down to the commercial space, because these guys know how to do it at scale, they know how to do it efficiently and they know how to hit the mark. And so a lot of customers say, "Hey, we want what CTS-2 does," right? And so it's very interesting. What I love is their process, the way they do the RFP process. Matt talked about the benchmarks and helping us understand, hey, here's kind of the mark you have to hit. And then at the same time, you know, if we make them successful then obviously it's better for all of us, right? You know, I want a secure nuclear stockpile, so I hope everybody else does as well. >> The software stack you mentioned, I think Tia? >> TOSS. >> TOSS. >> Yeah. >> How did that come about? Why did you feel the need to develop your own software stack? >> It originated back, you know, even 20 years ago, when we first started building Linux clusters, when that was a crazy idea. Livermore and other laboratories were really the first to start doing that and then push them to larger and larger scales. And it was key to have Linux running on that at the time. And so we had the... >> So 20 years ago you knew you wanted to run on Linux? >> It was 20 years ago, yeah, yeah.
And we started doing that, but we needed a way to have a version of Linux that we could partner with someone on that would do, you know, the support, you know, just like you get from an OS vendor, right? Security support and other things. But then layer on top of that all the HPC stuff you need either to run the system, to set up the system, to support our user base. And that evolved into TOSS, which is the Tri-Lab Operating System Stack. Now it's based on the latest version of Red Hat Enterprise Linux, as I mentioned before, with all the other HPC magic, so to speak, and all that HPC magic is open source. It may be things that we develop, but it's nothing closed source. So all of that's there, and we run it across all these different environments, as I mentioned before. And it really originated back in the early days of, you know, Beowulf clusters, Linux clusters, as just needing something that we could use to run on multiple systems and start creating that common environment at Livermore and then eventually the other laboratories. >> How is a company like Dell able to benefit from the open source work that's coming out of the labs? >> Well, when you look at the open source, I mean, open source is good for everybody, right? Because if you make an open source tool available, then people start essentially using that tool. And so if we can make that open source tool more robust and get more people using it, it gets more enterprise ready. And so with that, you know, we're all about open source, we're all about standards, and really about raising all boats, 'cause that's what open source is all about. >> And with that, we are out of time. This is our 28th interview of SC22 and you're taking us out on a high note. Armando Acosta, director of HPC Solutions at Dell. Matt Leininger, HPC Strategist, Lawrence Livermore National Laboratory. Great discussion. Hopefully it was a good show for you. Fascinating show for us, and thanks for being with us today. >> Thank you very much. >> Thank you for having us. >> Dave, it's been a pleasure. >> Absolutely. >> Hope we'll be back next year. >> Can't believe it went by so fast. Absolutely, at SC23. >> We hope you'll be back next year. This is Paul Gillin. That's a wrap, with Dave Nicholson, for theCUBE. See you next time. (soft upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Matt Leininger | PERSON | 0.99+ |
Dave Nicholson | PERSON | 0.99+ |
Dave Nicholson | PERSON | 0.99+ |
Paul Gillin | PERSON | 0.99+ |
National Nuclear Security Administration | ORGANIZATION | 0.99+ |
Armando Acosta | PERSON | 0.99+ |
Cornell Network | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Matt | PERSON | 0.99+ |
CTS-2 | TITLE | 0.99+ |
US Department of Energy | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
10 years | QUANTITY | 0.99+ |
40 years | QUANTITY | 0.99+ |
two years | QUANTITY | 0.99+ |
next year | DATE | 0.99+ |
Lawrence Livermore | ORGANIZATION | 0.99+ |
100% | QUANTITY | 0.99+ |
CTS | TITLE | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
Paul | PERSON | 0.99+ |
Linux | TITLE | 0.99+ |
NASA | ORGANIZATION | 0.99+ |
HPC Solutions | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
Lawrence Livermore Labs | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
Los Alamos | ORGANIZATION | 0.99+ |
One | QUANTITY | 0.99+ |
Lawrence Livermore National Laboratory | ORGANIZATION | 0.99+ |
Armando | ORGANIZATION | 0.99+ |
each laboratory | QUANTITY | 0.99+ |
second line | QUANTITY | 0.99+ |
over 6,000 nodes | QUANTITY | 0.99+ |
20 years ago | DATE | 0.98+ |
three laboratories | QUANTITY | 0.98+ |
28th interview | QUANTITY | 0.98+ |
Lawrence Livermore National Laboratories | ORGANIZATION | 0.98+ |
three | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
Tri-Lab | ORGANIZATION | 0.98+ |
Sandia | ORGANIZATION | 0.97+ |
one | QUANTITY | 0.97+ |
First | QUANTITY | 0.97+ |
two markets | QUANTITY | 0.97+ |
Supercomputing | ORGANIZATION | 0.96+ |
first systems | QUANTITY | 0.96+ |
fourth generation | QUANTITY | 0.96+ |
this week | DATE | 0.96+ |
Livermore | ORGANIZATION | 0.96+ |
Omni-Path Network | ORGANIZATION | 0.95+ |
about 1600 nodes | QUANTITY | 0.95+ |
Lawrence Livermore National Laboratory | ORGANIZATION | 0.94+ |
LLNL | ORGANIZATION | 0.93+ |
NDA | ORGANIZATION | 0.93+ |
Peter Del Vecchio, Broadcom and Armando Acosta, Dell Technologies | SuperComputing 22
(upbeat music) (logo swooshing) >> Good morning and welcome back to Dallas, ladies and gentlemen, we are here with theCUBE Live from Supercomputing 2022. David, my cohost, how are you doing? Exciting, day two, feeling good? >> Very exciting. Ready to start off the day. >> Very excited. We have two fascinating guests joining us to kick us off. Please welcome Pete and Armando. Gentlemen, thank you for being here with us. >> Thank you for having us. >> Thank you for having us. >> I'm excited that you're starting off the day because we've been hearing a lot of rumors about Ethernet as the fabric for HPC, but we really haven't done a deep dive yet during the show. You all seem all in on Ethernet. Tell us about that. Armando, why don't you start? >> Yeah, I mean, when you look at Ethernet, customers are asking for flexibility and choice. So when you look at HPC, InfiniBand's always been around, right? But when you look at where Ethernet's coming in, it's really our commercial and enterprise customers. And not everybody wants to be in the top 500; what they want to do is improve their job time and improve their latency over the network. And when you look at Ethernet, you kind of look at the sweet spot between 8, 12, 16, 32 nodes, that's a perfect fit for Ethernet in that space and those types of jobs. >> I love that. Pete, you want to elaborate? >> Yeah, sure. I mean, I think one of the biggest things you find with Ethernet for HPC is that, if you look at where the different technologies have gone over time, you've had old technologies like ATM, SONET, FDDI, and pretty much everything has now kind of converged toward Ethernet. I mean, there are still some technologies such as InfiniBand, Omni-Path, that are out there. But basically, they're single source at this point. So what you see is that there is a huge ecosystem behind Ethernet. And you see also that the fact that Ethernet is used in the rest of the enterprise, is used in the cloud data centers, means it is very easy to integrate HPC-based systems into those systems. So as you move HPC out of academia into enterprise, into cloud service providers, it's much easier to integrate it with the same technology you're already using in those data centers, in those networks. >> So what's the state of the art for Ethernet right now? What's the leading edge? What's shipping now and what's in the near future? You're with Broadcom, you guys designed this stuff. >> Pete: Yeah. >> Savannah: Right. >> Yeah, so leading edge right now, got a couple things-- >> Savannah: We love a good stage prop here on theCUBE. >> Yeah, so this is Tomahawk 4. So this is what is in production, it's shipping in large data centers worldwide. We started sampling this in 2019, started going into data centers in 2020. And this is 25.6 terabits per second. >> David: Okay. >> Which matches any other technology out there. Like if you look at, say, InfiniBand, the highest they have right now, which is just starting to get into production, is 25.6 T. So state of the art right now is what we introduced. We announced this in August. This is Tomahawk 5, so this is 51.2 terabits per second. So double the bandwidth of any other technology that's out there. And the important thing about networking technology is when you double the bandwidth, you don't just double the efficiency, it actually winds up being a factor of six in efficiency. >> Savannah: Wow. >> 'Cause if you want, I can go into that, but... >> Why not?
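The factor-of-six claim is worth a quick back-of-envelope sketch. Assuming a non-blocking two-tier leaf-spine build whenever a single chip can't offer enough ports (the topology and helper function here are illustrative assumptions, not Broadcom's sizing), the chip capacities and port speeds quoted above work out as follows:

```python
# Rough sketch: chips needed to present `ports` front-panel ports of `port_gbps`
# from an ASIC with `chip_tbps` of total switching capacity.  Falls back to a
# non-blocking folded-Clos (leaf/spine) build when one chip isn't enough.
import math

def chips_needed(ports: int, port_gbps: int, chip_tbps: float) -> int:
    radix = int(chip_tbps * 1000 // port_gbps)   # ports a single chip can serve
    if ports <= radix:
        return 1                                 # flat, single-hop network
    leaves = math.ceil(ports / (radix // 2))     # half of each leaf faces hosts
    spines = math.ceil(leaves * (radix // 2) / radix)
    return leaves + spines

print(chips_needed(256, 200, 51.2))  # 1 chip at 51.2 Tb/s (single hop)
print(chips_needed(256, 200, 25.6))  # 6 chips at 25.6 Tb/s (4 leaves + 2 spines)
```

That is the usual reading of the "factor of six": the same 256-node, 200 GbE fabric that one 51.2 Tb/s chip handles in a single hop takes six chips of half the capacity, four leaves plus two spines, plus the extra hop, optics, and power that come with them.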
>> Well, what I want to know, please tell me that in your labs, you have a poster on the wall that says T five, with some like Terminator kind of character. (all laughing) 'Cause that would be cool. If it's not true, just don't say anything. I'll just... >> Pete: This can actually shift into a terminator. >> Well, so this is from a switching perspective. >> Yeah. >> When we talk about the end nodes, when we talk about creating a fabric, what's the latest in terms of, well, the NICs that are going in there, what speed are we talking about today? >> So as far as SerDes speeds, it tends to be 50 gigabits per second. >> David: Okay. >> Moving to a hundred gig PAM-4. >> David: Okay. >> And we do see a lot of NICs at the 200 gig Ethernet port speed. So that would be four lanes, 50 gig. But we do see that advancing to 400 gig fairly soon, 800 gig in the future. But say state of the art right now, what we're seeing for the end node tends to be 200 gig E based on 50 gig PAM-4. >> Wow. >> Yeah, that's crazy. >> Yeah, that is great. My mind is actively blown. I want to circle back to something that you brought up a second ago, which I think is really astute. When you talked about HPC moving from academia into enterprise, you're both seeing this happen; where do you think we are on the adoption curve and sort of in that cycle? Armando, do you want to go? >> Yeah, well, if you look at the market research, they're actually telling you it's 50/50 now. So Ethernet is at the level of 50%, InfiniBand's at 50%, right? >> Savannah: Interesting. >> Yeah, and so what's interesting to us, customers are coming to us and saying, hey, we want to see flexibility and choice and, hey, let's look at Ethernet and let's look at InfiniBand. But what is interesting about this is that we're working with Broadcom, we have their chips in our lab, we have their switches in our lab. And really what we're trying to do is make it easy and simple to configure the network for essentially MPI. And so the goal here with our validated designs is really to simplify this. So if you have a customer that says, hey, I've been on InfiniBand but now I want to go Ethernet, there's going to be some learning curves there. And so what we want to do is really simplify that so that we can make it easy to install, get the cluster up and running, and they can actually get some value out of the cluster. >> Yeah, Pete, talk about that partnership. What does that look like? I mean, are you working with Dell before the T six comes out? Or do you just say, what would be cool is we'll put this in the T six? >> No, we've had a very long partnership both on the hardware and the software side. Dell's been an early adopter of our silicon. We've worked very closely on SAI and SONiC on the operating system, and they provide very valuable feedback for us on our roadmap. So before we put out a new chip, and we have actually three different product lines within the switching group within Broadcom, we've gotten very valuable feedback on the hardware and on the APIs, on the operating system that goes on top of those chips. So that way when it comes to market, Dell can take it and deliver the exact features that they have in the current generation to their customers to have that continuity. And also they give us feedback on the next gen features they'd like to see, again, in both the hardware and the software. >> So I'm fascinated by... I always like to know like what, yeah, exactly.
Look, you start talking about the largest supercomputers, most powerful supercomputers that exist today, and you start looking at the specs, and there might be two million CPUs, 2 million CPU cores, an exaflop of performance. What are the outward limits of T five in switches, building out a fabric, what does that look like? What are the increments in terms of how many... And I know it's a depends answer, but how many nodes can you support in a scale out cluster before you need another switch? Or what does that increment of scale look like today? >> Yeah, so this is 51.2 terabits per second. Where we see the most common implementation based on this would be with 400 gig Ethernet ports. >> David: Okay. >> So that would be 128 400 gig E ports connected to one chip. Now, if you went to 200 gig, which is kind of the state of the art for the NICs, you can have double that. So in a single hop, you can have 256 end nodes connected through one switch. >> Okay, so this T five, that thing right there, (all laughing) inside a sheet metal box, obviously you've got a bunch of ports coming out of that. So what's the form factor look like for where that T five sits? Is there just one in a chassis, or do you have... What does that look like? >> It tends to be pizza boxes these days. What you've seen overall is that the industry's moved away from chassis for these high end systems, more towards pizza boxes. And you can have composable systems where, in the past you would have line cards, either the fabric cards that the line cards are plugged into or interface to. These days what tends to happen is you'd have a pizza box, and if you wanted to build up like a virtual chassis, what you would do is use one of those pizza boxes as the fabric card, one of them as the line card. >> David: Okay. >> So what we see, the most common form factor for this is they tend to be two, I'd say for North America, most common would be a 2RU, with 64 OSFP ports. And often each of those OSFP, which is an 800 gig E or 800 gig port, we've broken out into two 400 gig ports. >> So yeah, in 2RU, and this is all air cooled, in 2RU, you've got 51.2 T. We do see some cases where customers would like to have different optics, and they'll actually deploy 4RU, just so that way they have the faceplate density. So they can plug in 128, say, QSFP112. But yeah, it really depends on which optics, if you want to have DAC connectivity combined with optics. But those are the two most common form factors. >> And Armando, Ethernet isn't necessarily Ethernet in the sense that many protocols can be run over it. >> Right. >> I think I have a projector at home that's actually using Ethernet physical connections. But, so what are we talking about here in terms of the actual protocol that's running over this? Is this exactly the same as what you think of as data center Ethernet, or is this RDMA over converged Ethernet? What are we talking about? >> Yeah, so RDMA, right? So when you look at running, essentially, HPC workloads, you have the MPI protocol, the message passing interface, right? And so what you need to do is you may need to make sure that that MPI message passing interface runs efficiently on Ethernet. And so this is why we want to test and validate all these different things to make sure that that protocol runs really, really fast on Ethernet.
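For readers who have not touched MPI: the point being made here is that MPI is an API, not a wire protocol, so the same application code runs whether its messages travel over InfiniBand or over RoCE on an Ethernet fabric; the transport is chosen underneath, by the MPI library and its network layer. A minimal sketch using mpi4py for brevity (production HPC codes are more commonly C, C++, or Fortran MPI); the script is illustrative and is not part of Dell's or Livermore's validated configurations.

```python
# ping.py -- a two-rank MPI round trip; nothing in the code names the interconnect.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    comm.send("hello", dest=1, tag=11)      # rank 0 sends a message
    reply = comm.recv(source=1, tag=22)     # ...and waits for the echo
    print("rank 0 received:", reply)
elif rank == 1:
    msg = comm.recv(source=0, tag=11)       # rank 1 receives
    comm.send(msg, dest=0, tag=22)          # ...and echoes it back
```

Launched with something like `mpirun -np 2 python ping.py`, the identical program runs on either fabric; the tuning being described, making MPI "run really fast on Ethernet", happens in the MPI library and the RoCE/NIC layers below this API.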
If you look at MPI, officially it was built to, hey, it was designed to run on InfiniBand, but now what you see with Broadcom, with the great work they're doing, now we can make that work on Ethernet and get the same performance, so that's huge for customers. >> Both of you get to see a lot of different types of customers. I kind of feel like you're a little bit of a looking-into-the-crystal-ball type, because you essentially get to see the future, knowing what people are trying to achieve moving forward. Talk to us about the future of Ethernet in HPC in terms of AI and ML, where do you think we're going to be next year or 10 years from now? >> You want to go first or you want me to go first? >> I can start, yeah. >> Savannah: Pete feels ready. >> So I mean, what I see, I mean, Ethernet, what we've seen is that, starting off on the switch side, is that we've consistently doubled the bandwidth every 18 to 24 months. >> That's impressive. >> Pete: Yeah. >> Nicely done, casual, humble brag there. That was great, I love that. I'm here for you. >> I mean, I think that's one of the benefits of Ethernet, is the ecosystem, is the trajectory, the roadmap we've had, I mean, you don't see that in any other networking technology. >> David: Moore who? (all laughing) >> So I see that, that trajectory is going to continue as far as the switches doubling in bandwidth. I think that there are evolving protocols, especially again, as you're moving away from academia into the enterprise, into cloud data centers, you need to have a combination of protocols. So you'll probably focus still on RDMA for the supercomputing, the AI/ML workloads. But we do see that as you have a mix of the applications running on these end nodes, maybe they're interfacing to the CPUs for some processing, you might use a different mix of protocols. So I'd say it's going to be doubling of bandwidth over time, evolution of the protocols. I mean, I expect that RoCE is probably going to evolve over time, depending on the AI/ML and the HPC workloads. I think also there's a big change coming as far as the physical connectivity within the data center. Like one thing we've been focusing on is co-packaged optics. So right now, this chip is, all the balls in the back here, there's electrical connections. >> How many are there, by the way? 9,000 plus on the back of that-- >> 9,352. >> I love how specific it is. It's brilliant. >> Yeah, so right now, all the SerDes, all the signals are coming out electrically, but we've actually shown, we actually have a version of Tomahawk 4 at 25.6 T that has co-packaged optics. So instead of having electrical output, you actually have optics directly out of the package. And if you look at, we'll have a version of Tomahawk 5. >> Nice. >> Where it's actually even a smaller form factor than this, where instead of having the electrical output from the bottom, you actually have fibers that plug directly into the sides. >> Wow. Cool. >> So I see there's the bandwidth, the radix is increasing, protocols, different physical connectivity. So I think there's a lot of things throughout, and the protocol stack's also evolving. So a lot of excitement, a lot of new technology coming to bear. >> Okay, you just threw a carrot down the rabbit hole. I'm only going to chase this one, okay? >> Peter: All right. >> So I think of individual discrete physical connections to the back of those balls. >> Yeah. >> So if there's 9,000, fill in the blank, that's how many connections there are.
How do you do that many optical connections? What's the mapping there? What does that look like? >> So what we've announced for Tomahawk 5 is it would have FR4 optics coming out. So you'd actually have 512 fiber pairs coming out. So basically on all four sides, you'd have these fiber ribbons that come in and connect. There's actually fibers coming out of the sides there. We wind up having, actually, I think in this case, we would actually have 512 channels and it would wind up being on 128 actual fiber pairs because-- >> It's miraculous, essentially. >> Savannah: I know. >> Yeah. So a lot of people are going to be looking at this and thinking in terms of InfiniBand versus Ethernet. I think you've highlighted some of the benefits of specifically running Ethernet moving forward, as HPC, which sort of just trails slightly behind supercomputing as we define it, becomes more pervasive with AI/ML. What are some of the other things that maybe people might not immediately think about when they think about the advantages of running Ethernet in that environment? Is it about connecting the HPC part of their business into the rest of it? What are the advantages? >> Yeah, I mean, that's a big thing. I think, and one of the biggest things that Ethernet has again, is that the data centers, the networks within enterprises, within clouds right now are run on Ethernet. So now, if you want to add services for your customers, the easiest thing for you to do is to drop in clusters that are connected with the same networking technology. So I think one of the biggest things there is that if you look at what's happening with some of the other proprietary technologies, I mean, in some cases they'll have two different types of networking technologies before they interface to Ethernet. So now you've got to train your technicians, you train your sysadmins on two different network technologies. You need to have all the debug technology, all the interconnect for that. So here, the easiest thing is you can use Ethernet, it's going to give you the same performance, and actually, in some cases, we've seen better performance than we've seen with Omni-Path, better than with InfiniBand. >> That's awesome. Armando, we didn't get to you, so I want to make sure we get your future hot take. Where do you see the future of Ethernet here in HPC? >> Well, Pete hit on a big thing, which is bandwidth, right? So when you look at training a model, okay? So when you go and train a model in AI, you need to have a lot of data in order to train that model, right? So what you do is essentially, you build a model, you choose whatever neural network you want to utilize. But if you don't have a good data set that's trained over that model, you can't essentially train the model. So if you have bandwidth, you want big pipes, because you have to move that data set from the storage to the CPU. And essentially, if you're going to do it maybe on CPU only, but if you do it on accelerators, well, guess what? You need a big pipe in order to get all that data through. And here's the deal, the bigger the pipe you have, the more data, the faster you can train that model. So the faster you can train that model, guess what? The faster you get to some new insight, maybe it's a new competitive advantage, maybe it's some new way you design a product, but that's a benefit of speed, you want faster, faster, faster. >> It's all about making it faster and easier-- for the users. >> Armando: It is. >> I love that.
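Armando's "bigger pipe" argument is easy to put rough numbers on. A trivial illustration, with a hypothetical 100 TB training set and a single saturated link; real pipelines are usually gated by storage and software well before the wire, so treat this as an upper bound on what the network alone can buy you.

```python
# Time to move a training set across one saturated link, ignoring everything else.
def transfer_minutes(dataset_tb: float, link_gbps: float) -> float:
    bits = dataset_tb * 8e12                 # terabytes -> bits
    return bits / (link_gbps * 1e9) / 60     # seconds -> minutes

for gbps in (100, 200, 400):
    mins = transfer_minutes(100, gbps)
    print(f"{gbps:>3} GbE link: {mins:6.1f} minutes for a 100 TB data set")
```

Each doubling of the pipe halves the time the accelerators spend waiting on data, which is the "faster to insight" point in its simplest form.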
Last question for you, Pete, just because you've said Tomahawk seven times, and I'm thinking we're in Texas, steaks, there's a lot going on with that. >> Making me hungry. >> I know, exactly. I'm sitting out here thinking, man, I did not have a big enough breakfast. How did you come up with the name Tomahawk? >> So Tomahawk, I think it just came from a list. So we have a Trident product line. >> Savannah: Ah, yes. >> Which is a missile product line. And Tomahawk is kind of like the bigger and badder missile, so. >> Savannah: Love this. Yeah, I mean-- >> So you let your engineers, you get to name it? >> Had to ask. >> It's collaborative. >> Okay. >> We want to make sure everyone's in sync with it. >> So just so we're clear, it's not the Aquaman trident. >> Right. >> It's the steak Tomahawk. I think we're good now. >> Now that we've cleared that-- >> Now we've cleared that up. >> Armando, Pete, it was really nice to have you both. Thank you for teaching us about the future of Ethernet in HPC. David Nicholson, always a pleasure to share the stage with you. And thank you all for tuning in to theCUBE, live from Dallas. We're here talking all things HPC and supercomputing all day long. We hope you'll continue to tune in. My name's Savannah Peterson, thanks for joining us. (soft music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
2019 | DATE | 0.99+ |
David Nicholson | PERSON | 0.99+ |
2020 | DATE | 0.99+ |
Pete | PERSON | 0.99+ |
Texas | LOCATION | 0.99+ |
August | DATE | 0.99+ |
Peter | PERSON | 0.99+ |
Savannah | PERSON | 0.99+ |
30 speeds | QUANTITY | 0.99+ |
200 gig | QUANTITY | 0.99+ |
Savannah Peterson | PERSON | 0.99+ |
50 gig | QUANTITY | 0.99+ |
Armando | PERSON | 0.99+ |
128 | QUANTITY | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
9,000 | QUANTITY | 0.99+ |
400 gig | QUANTITY | 0.99+ |
Broadcom | ORGANIZATION | 0.99+ |
50% | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
128, 400 gig | QUANTITY | 0.99+ |
800 gig | QUANTITY | 0.99+ |
Dallas | LOCATION | 0.99+ |
512 channels | QUANTITY | 0.99+ |
9,352 | QUANTITY | 0.99+ |
24 months | QUANTITY | 0.99+ |
one chip | QUANTITY | 0.99+ |
Tomahawk 4 | COMMERCIAL_ITEM | 0.99+ |
both | QUANTITY | 0.99+ |
North America | LOCATION | 0.99+ |
next year | DATE | 0.99+ |
one | QUANTITY | 0.98+ |
512 fiber | QUANTITY | 0.98+ |
seven times | QUANTITY | 0.98+ |
Tomahawk 5 | COMMERCIAL_ITEM | 0.98+ |
four lanes | QUANTITY | 0.98+ |
9,000 plus | QUANTITY | 0.98+ |
Dell Technologies | ORGANIZATION | 0.98+ |
today | DATE | 0.97+ |
Aquaman | PERSON | 0.97+ |
Both | QUANTITY | 0.97+ |
InfiniBand | ORGANIZATION | 0.97+ |
QSFP 112 | OTHER | 0.96+ |
hundred gig | QUANTITY | 0.96+ |
Peter Del Vecchio | PERSON | 0.96+ |
25.6 terabytes per second | QUANTITY | 0.96+ |
two fascinating guests | QUANTITY | 0.96+ |
single source | QUANTITY | 0.96+ |
64 OSFP | QUANTITY | 0.95+ |
Rocky | ORGANIZATION | 0.95+ |
two million CPUs | QUANTITY | 0.95+ |
25.6 T. | QUANTITY | 0.95+ |
Peter Del Vecchio, Broadcom and Armando Acosta, Dell Technologies | SuperComputing 22
>>You can put this in a conference. >>Good morning and welcome back to Dallas. Ladies and gentlemen, we are here with the cube Live from, from Supercomputing 2022. David, my cohost, how you doing? Exciting. Day two. Feeling good. >>Very exciting. Ready to start off the >>Day. Very excited. We have two fascinating guests joining us to kick us off. Please welcome Pete and Armando. Gentlemen, thank you for being here with us. >>Having us, >>For having us. I'm excited that you're starting off the day because we've been hearing a lot of rumors about ethernet as the fabric for hpc, but we really haven't done a deep dive yet during the show. Y'all seem all in on ethernet. Tell us about that. Armando, why don't you start? >>Yeah. I mean, when you look at ethernet, customers are asking for flexibility and choice. So when you look at HPC and you know, infinite band's always been around, right? But when you look at where Ethernet's coming in, it's really our commercial and their enterprise customers. And not everybody wants to be in the top 500. What they want to do is improve their job time and improve their latency over the network. And when you look at ethernet, you kinda look at the sweet spot between 8, 12, 16, 32 nodes. That's a perfect fit for ethernet and that space and, and those types of jobs. >>I love that. Pete, you wanna elaborate? Yeah, yeah, >>Yeah, sure. I mean, I think, you know, one of the biggest things you find with internet for HPC is that, you know, if you look at where the different technologies have gone over time, you know, you've had old technologies like, you know, atm, Sonic, fitty, you know, and pretty much everything is now kind of converged toward ethernet. I mean, there's still some technologies such as, you know, InfiniBand, omnipath that are out there. Yeah. But basically there's single source at this point. So, you know, what you see is that there is a huge ecosystem behind ethernet. And you see that also, the fact that ethernet is used in the rest of the enterprise is using the cloud data centers that is very easy to integrate HPC based systems into those systems. So as you move HPC out of academia, you know, into, you know, into enterprise, into cloud service providers is much easier to integrate it with the same technology you're already using in those data centers, in those networks. >>So, so what's this, what is, what's the state of the art for ethernet right now? What, you know, what's, what's the leading edge, what's shipping now and what and what's in the near future? You, you were with Broadcom, you guys design this stuff. >>Yeah, yeah. Right. Yeah. So leading edge right now, I got a couple, you know, Wes stage >>Trough here on the cube. Yeah. >>So this is Tomahawk four. So this is what is in production is shipping in large data centers worldwide. We started sampling this in 2019, started going into data centers in 2020. And this is 25.6 tets per second. Okay. Which matches any other technology out there. Like if you look at say, infin band, highest they have right now that's just starting to get into production is 25 point sixt. So state of the art right now is what we introduced. We announced this in August. This is Tomahawk five. So this is 51.2 terabytes per second. So double the bandwidth have, you know, any other technology that's out there. And the important thing about networking technology is when you double the bandwidth, you don't just double the efficiency, it's actually winds up being a factor of six efficiency. Wow. 
Cause if you want, I can go into that, but why >>Not? Well, I, what I wanna know, please tell me that in your labs you have a poster on the wall that says T five with, with some like Terminator kind of character. Cause that would be cool if it's not true. Don't just don't say anything. I just want, I can actually shift visual >>It into a terminator. So. >>Well, but so what, what are the, what are the, so this is, this is from a switching perspective. Yeah. When we talk about the end nodes, when we talk about creating a fabric, what, what's, what's the latest in terms of, well, the kns that are, that are going in there, what's, what speed are we talking about today? >>So as far as 30 speeds, it tends to be 50 gigabits per second. Okay. Moving to a hundred gig pan four. Okay. And we do see a lot of Knicks in the 200 gig ethernet port speed. So that would be, you know, four lanes, 50 gig. But we do see that advancing to 400 gig fairly soon. 800 gig in the future. But say state of the art right now, we're seeing for the end nodes tends to be 200 gig E based on 50 gig pan four. Wow. >>Yeah. That's crazy. Yeah, >>That is, that is great. My mind is act actively blown. I wanna circle back to something that you brought up a second ago, which I think is really astute. When you talked about HPC moving from academia into enterprise, you're both seeing this happen. Where do you think we are on the adoption curve and sort of in that cycle? Armand, do you wanna go? >>Yeah, yeah. Well, if you look at the market research, they're actually telling it's 50 50 now. So ethernet is at the level of 50%. InfiniBand is at 50%. Right. Interesting. Yeah. And so what's interesting to us, customers are coming to us and say, Hey, we want to see, you know, flexibility and choice and hey, let's look at ethernet and let's look at InfiniBand. But what is interesting about this is that we're working with Broadcom, we have their chips in our lab, we have our switches in our lab. And really what we're trying to do is make it easy to simple and configure the network for essentially mpi. And so the goal here with our validated designs is really to simplify this. So if you have a customer that, Hey, I've been in fbe, but now I want to go ethernet, you know, there's gonna be some learning curves there. And so what we wanna do is really simplify that so that we can make it easy to install, get the cluster up and running, and they can actually get some value out of the cluster. >>Yeah. Peter, what, talk about that partnership. What, what, what does that look like? Is it, is it, I mean, are you, you working with Dell before the, you know, before the T six comes out? Or you just say, you know, what would be cool, what would be cool is we'll put this in the T six? >>No, we've had a very long partnership both on the hardware and the software side. You know, Dell has been an early adopter of our silicon. We've worked very closely on SI and Sonic on the operating system, you know, and they provide very valuable feedback for us on our roadmap. So before we put out a new chip, and we have actually three different product lines within the switching group within Broadcom, we've then gotten, you know, very valuable feedback on the hardware and on the APIs, on the operating system that goes on top of those chips. So that way when it comes to market, you know, Dell can take it and, you know, deliver the exact features that they have in the current generation to their customers to have that continuity. 
And also they give us feedback on the next gen features they'd like to see again in both the hardware and the software. >>So, so I, I'm, I'm just, I'm fascinated by, I I, I always like to know kind like what Yeah, exactly. Exactly right. Look, you, you start talking about the largest super supercomputers, most powerful supercomputers that exist today, and you start looking at the specs and there might be 2 million CPUs, 2 million CPU cores, yeah. Ex alop of, of, of, of performance. What are the, what are the outward limits of T five in switches, building out a fabric, what does that look like? What are the, what are the increments in terms of how many, and I know it, I know it's a depends answer, but, but, but how many nodes can you support in a, in a, in a scale out cluster before you need another switch? What does that increment of scale look like today? >>Yeah, so I think, so this is 51.2 terras per second. What we see the most common implementation based on this would be with 400 gig ethernet ports. Okay. So that would be 128, you know, 400 giggi ports connected to, to one chip. Okay. Now, if you went to 200 gig, which is kind of the state of the art for the Nicks, you can have double that. Okay. So, you know, in a single hop you can have 256 end nodes connected through one switch. >>So, okay, so this T five, that thing right there inside a sheet metal box, obviously you've got a bunch of ports coming out of that. So what is, what does that, what's the form factor look like for that, for where that T five sits? Is there just one in a chassis or you have, what does that look >>Like? It tends to be pizza boxes these days. Okay. What you've seen overall is that the industry's moved away from chassis for these high end systems more towards pizza, pizza boxes. And you can have composable systems where, you know, in the past you would have line cards, either the fabric cards that the line cards are plugged into or interface to these days, what tends to happen is you'd have a pizza box, and if you wanted to build up like a virtual chassis, what you would do is use one of those pizza boxes as the fabric card, one of them as the, the line card. >>Okay. >>So what we see, the most common form factor for this is they tend to be two, I'd say for North America, most common would be a two R U with 64 OSF P ports. And often each of those OSF p, which is an 800 gig e or 800 gig port, we've broken out into two 400 gig quarts. Okay. So yeah, in two r u you've got, and this is all air cooled, you know, in two re you've got 51.2 T. We do see some cases where customers would like to have different optics, and they'll actually deploy a four U just so that way they have the face place density, so they can plug in 128, say qsf P one 12. But yeah, it really depends on which optics, if you wanna have DAK connectivity combined with, with optics. But those are the two most common form factors. >>And, and Armando ethernet isn't, ethernet isn't necessarily ethernet in the sense that many protocols can be run over it. Right. I think I have a projector at home that's actually using ethernet physical connections. But what, so what are we talking about here in terms of the actual protocol that's running over this? Is this exactly the same as what you think of as data center ethernet, or, or is this, you know, RDMA over converged ethernet? What, what are >>We talking about? Yeah, so our rdma, right? 
So when you look at, you know, running, you know, essentially HPC workloads, you have the NPI protocol, so message passing interface, right? And so what you need to do is you may need to make sure that that NPI message passing interface runs efficiently on ethernet. And so this is why we want to test and validate all these different things to make sure that that protocol runs really, really fast on ethernet, if you look at NPI is officially, you know, built to, Hey, it was designed to run on InfiniBand, but now what you see with Broadcom and the great work they're doing now, we can make that work on ethernet and get, you know, it's same performance. So that's huge for customers. >>Both of you get to see a lot of different types of customers. I kind of feel like you're a little bit of a, a looking into the crystal ball type because you essentially get to see the future knowing what people are trying to achieve moving forward. Talk to us about the future of ethernet in hpc in terms of AI and ml. Where, where do you think we're gonna be next year or 10 years from now? >>You wanna go first or you want me to go first? I can start. >>Yeah. Pete feels ready. >>So I mean, what I see, I mean, ethernet, I mean, is what we've seen is that as far as on the starting off of the switch side, is that we've consistently doubled the bandwidth every 18 to 24 months. That's >>Impressive. >>Yeah. So nicely >>Done, casual, humble brag there. That was great. That was great. I love that. >>I'm here for you. I mean, I think that's one of the benefits of, of Ethan is like, is the ecosystem, is the trajectory, the roadmap we've had, I mean, you don't see that in any other networking technology >>More who, >>So, you know, I see that, you know, that trajectory is gonna continue as far as the switches, you know, doubling in bandwidth. I think that, you know, they're evolving protocols. You know, especially again, as you're moving away from academia into the enterprise, into cloud data centers, you need to have a combination of protocols. So you'll probably focus still on rdma, you know, for the supercomputing, the a AIML workloads. But we do see that, you know, as you have, you know, a mix of the applications running on these end nodes, maybe they're interfacing to the, the CPUs for some processing, you might use a different mix of protocols. So I'd say it's gonna be doubling a bandwidth over time evolution of the protocols. I mean, I expect that Rocky is probably gonna evolve over time depending on the a AIML and the HPC workloads. I think also there's a big change coming as far as the physical connectivity within the data center. Like one thing we've been focusing on is co-pack optics. So, you know, right now this chip is all, all the balls in the back here, there's electrical connections. How >>Many are there, by the way? 9,000 plus on the back of that >>352. >>I love how specific it is. It's brilliant. >>Yeah. So we get, so right now, you know, all the thirties, all the signals are coming out electrically based, but we've actually shown, we have this, actually, we have a version of Hawk four at 25 point sixt that has co-pack optics. So instead of having electrical output, you actually have optics directly out of the package. And if you look at, we'll have a version of Tomahawk five Nice. Where it's actually even a smaller form factor than this, where instead of having the electrical output from the bottom, you actually have fibers that plug directly into the sides. Wow. Cool. 
So I see, you know, there's, you know, the bandwidth, there's radis increasing protocols, different physical connectivity. So I think there's, you know, a lot of things throughout, and the protocol stack's also evolving. So, you know, a lot of excitement, a lot of new technology coming to bear. >>Okay. You just threw a carrot down the rabbit hole. I'm only gonna chase this one. Okay. >>All right. >>So I think of, I think of individual discreet physical connections to the back of those balls. Yeah. So if there's 9,000, fill in the blank, that's how many connections there are. How do you do that in many optical connections? What's, what's, what's the mapping there? What does that, what does that look like? >>So what we've announced for TAMA five is it would have fr four optics coming out. So you'd actually have, you know, 512 fiber pairs coming out. So you'd have, you know, basically on all four sides, you'd have these fiber ribbons that come in and connect. There's actually fibers coming out of the, the sides there. We wind up having, actually, I think in this case, we would actually have 512 channels and it would wind up being on 128 actual fiber pairs because >>It's, it's miraculous, essentially. It's, I know. Yeah, yeah, yeah, yeah. Yeah. So, so, you know, a lot of people are gonna be looking at this and thinking in terms of InfiniBand versus versus ethernet. I think you've highlighted some of the benefits of specifically running ethernet moving forward as, as hpc, you know, which is sort of just trails slightly behind supercomputing as we define it, becomes more pervasive AI ml. What, what are some of the other things that maybe people might not immediately think about when they think about the advantages of running ethernet in that environment? Is it, is it connecting, is it about connecting the HPC part of their business into the rest of it? What, or what, what are the advantages? >>Yeah, I mean, that's a big thing. I think, and one of the biggest things that ethernet has again, is that, you know, the data centers, you know, the networks within enterprises within, you know, clouds right now are run on ethernet. So now if you want to add services for your customers, the easiest thing for you to do is, you know, the drop in clusters that are connected with the same networking technology, you know, so I think what, you know, one of the biggest things there is that if you look at what's happening with some of the other proprietary technologies, I mean, in some cases they'll have two different types of networking technologies before they interface to ethernet. So now you've got to train your technicians, you train your, your assist admins on two different network technologies. You need to have all the, the debug technology, all the interconnect for that. So here, the easiest thing is you can use ethernet, it's gonna give you the same performance. And actually in some cases we seen better performance than we've seen with omnipath than, you know, better than in InfiniBand. >>That's awesome. Armando, we didn't get to you, so I wanna make sure we get your future hot take. Where do you see the future of ethernet here in hpc? >>Well, Pete hit on a big thing is bandwidth, right? So when you look at train a model, okay, so when you go and train a model in ai, you need to have a lot of data in order to train that model, right? 
So what you do is essentially you build a model, you choose whatever neural network you wanna utilize, but if you don't have a good data set to train that model on, you can't really train the model. So if you have bandwidth, you want big pipes, because you have to move that data set from the storage to the CPU. And if you're gonna do it maybe on CPU only, fine, but if you do it on accelerators, well, guess what? You need a big pipe in order to get all that data through. And here's the deal: the bigger the pipe you have, the more data, the faster you can train that model. And the faster you can train that model, guess what? The faster you get to some new insight. Maybe it's a new competitive advantage, maybe it's some new way you design a product, but that's the benefit of speed. You want faster, faster, faster. >>It's all about making it faster and easier for the users. I love that. Last question for you, Pete, just because you've said Tomahawk seven times, and I'm thinking we're in Texas, steaks, there's a lot going on with that. >>Making me hungry. >>I know, exactly. I'm sitting up here thinking, man, I did not have a big enough breakfast. How do you come up with the name Tomahawk? >>So Tomahawk, I think, just came from a list. So we have a Trident product line. Ah, a missile product line. And Tomahawk is kinda like, you know, the bigger and badder missile, so... >>Oh, okay. Love this. So do you let your engineers name it? Had to ask. >>It's collaborative. >>Oh good. I wanna make sure everyone's in sync with it. So just so we're clear, it's not the Aquaman trident. >>Right. >>The steak tomahawk. >>I think we're good now. Now that we've cleared that up. >>Now we've cleared that up. >>Armando, Pete, it was really nice to have you both. Thank you for teaching us about the future of Ethernet in HPC. David Nicholson, always a pleasure to share the stage with you. And thank you all for tuning in to theCUBE, live from Dallas. We're here talking all things HPC and supercomputing all day long. We hope you'll continue to tune in. My name's Savannah Peterson, thanks for joining us.
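Two of the numbers in this segment are easy to sanity-check with back-of-the-envelope arithmetic: Pete's 512 channels landing on 128 fiber pairs for a 51.2 Tb/s co-packaged-optics part (consistent with FR4-style optics carrying four wavelengths per fiber pair), and Armando's bigger-pipe-trains-faster argument. The sketch below is the editor's illustration only; it uses the figures quoted in the conversation plus a hypothetical 10 TB data set, and it ignores encoding and protocol overhead.

```python
# Back-of-the-envelope checks for the bandwidth figures discussed above.

# (a) 51.2 Tb/s switch with co-packaged optics: 512 channels, FR4-style
#     optics carrying 4 wavelengths per fiber pair.
switch_tbps = 51.2
channels = 512
gbps_per_channel = switch_tbps * 1000 / channels   # -> 100 Gb/s per channel
fiber_pairs = channels / 4                         # -> 128 fiber pairs
print(f"{gbps_per_channel:.0f} Gb/s per channel on {fiber_pairs:.0f} fiber pairs")

# (b) Time to move a training data set from storage to the accelerators.
def transfer_seconds(dataset_tb: float, link_gbps: float) -> float:
    """Seconds to move dataset_tb terabytes over a link_gbps pipe."""
    return (dataset_tb * 8e12) / (link_gbps * 1e9)

for link_gbps in (100, 200, 400, 800):
    t = transfer_seconds(10, link_gbps)            # hypothetical 10 TB data set
    print(f"10 TB over {link_gbps} GbE: ~{t:,.0f} s")
```

The second loop is the whole "bigger pipe" argument in four lines: every doubling of the link halves the time the accelerators spend waiting on data.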
Armando Lambert, Bayview Asset Mgt. & Ahmed Zaidi, Accelirate | UiPath FORWARD III 2019
>>Live from Las Vegas. It's theCUBE, covering UiPath FORWARD Americas 2019, brought to you by UiPath. >>Welcome back to Las Vegas. Everybody, you're watching theCUBE, the leader in live tech coverage. We go out to the events, we extract the signal from the noise. Ahmed Zaidi is here, he's the chief automation officer at Accelirate, a specialist service provider in this area of RPA, and Armando Lambert is the vice president of enterprise optimization, governance and risk, guys, oh sorry, at Bayview Asset Management in Miami. Welcome to theCUBE, gentlemen. Thank you for that. So Bayview, you've got a good view of the Bay in Miami, is that kinda where the name comes from? A beautiful place to work. It happens that UiPath FORWARD II was in Miami at the Fontainebleau; now we're back here in Vegas. But um, so let's get into it. Ahmed, chief automation officer. That's kind of a cool title. I don't see that a lot. What does that entail? And tell us about Accelirate. >>So Accelirate, we're one of the largest niche providers; the only thing that we do is process automation, we're a process automation and AI company. And our sole focus has been process automation since our inception. In our past lives we were generalists. We did well and wanted to do it again. Uh, so when we started Accelirate, we wanted to make sure that we focused on a very specific vertical niche, and process automation was just starting its uptick around mid-2016. >>So there are gonna be some interesting conversations around process automation. We had an analyst on yesterday who predicted RPA is dead, you know, process automation lives. It's kind of a tongue-in-cheek thing. So maybe we can talk about that a little bit, but Armando, tell us about your role and a little bit about Bayview. >>So Bayview is an asset management company, primarily whole loans, mortgage-backed securities, mortgage servicing rights. We offer servicing advisory as well as investment vehicles. My role basically is to strategize, innovate, look at new technologies, new ways of streamlining the business. Um, and you know, about in 2016, you know, we were faced with a challenge, and the challenge was we have a lot of rote, swivel-chair type work, back office operational work. Um, and I went in there just trying to look at people, processes and systems, and trying to figure out a way to make things more efficient. And you know, RPA is one of those vehicles. >>Okay. So smart, you started with people and process, you didn't start with the technology. >>Yeah, absolutely. >>Right. So what did you learn? I mean, take us back to 2016 when you started to do the investigation, you started to unpack the processes and the people. What did you see, and then what led you to RPA? >>Yeah, I mean, I think inherently there are a lot of business processes that are just handed down through years of being kind of entrepreneurial and doing a lot of business. So a lot of these processes, early on we felt like we could just go in and automate, and we realized they just needed a level of process optimization first. Um, so in doing that, it just kind of directs the vehicle right into what type of automation you need to do. It's not always RPA. RPA is a big component for us. Um, it works for us. Early on we wanted to put a strong governance structure in place. I strongly believe in that, you know, and it's worked out so far for us. >>So you brought in Accelirate, you brought in an outside firm to help you with that process automation, is that right? >>Absolutely.
>>So tell us more about how that all went down. >>So that was an interesting time, right? These products were coming up, nobody really knew how well they worked. And so we went in and we actually did a proof of value, right? We said, hey, this is all well and good, let's do a proof of concept, or rather a proof of value. At that time proof of concepts really were a thing; I don't think we should do them anymore, we should only do proof of values. But we went in, looked at the various systems they had, tried it out so he could demonstrate to his management that this thing works. And as soon as that was over, and I'll give it to you here, Armando, we went all in, right? We said, all right, let's look at the highest value things, let's deliver this. Um, let's figure out a governance model. Let's not hold it back like we have done in the past with IT projects spinning up. So let's get the infrastructure up and running very quickly. Let's get a few automations out there. Some of the business sees the value right away, right? Crawl, walk, run. We can do this. You know, what are we going to automate, what do we need from IT, and how are we going to govern this? Those are the three pillars that I suggest everybody look at. And we did that in parallel streams, all three of them. And within a few months he was able to return significant value back to the business, which has led to adoption. I think that has been a very big reason why he's been able to scale: because he was able to show early value back to the business very quickly, focusing on value rather than the technology or the underlying solution. >>Right. It's, um, a lot of times we see folks going into RPA saying, what can RPA do for me? I think that is the wrong question. Um, the question really is, what do you do? Let's classify what you do into manual mechanical work, intelligent work and wasteful work, right? And then look at your toolbox. I have RPA, I have AI, I have other technologies that folks within an enterprise are working on, and then apply those to it. RPA becomes the glue for most of these things. You have APIs, SDKs. You have AI technologies, be it cloud or on prem. RPA becomes the glue, and it becomes easy to deploy once you've figured out what all the different pieces are. But it's important to look at the process first and say, what do you do? So when the business comes back and says, what can you do for me with RPA, I say, I don't know. Tell me what you do, and then I'll tell you what the solution is. >>So Armando, given that you started with the value, did that ease some of the potential friction that you sometimes see with change management, or change in general? Or did you still see that resistance? And I'm interested in where you started, what were some of those high value areas that you attacked, but the cultural piece first, if you will. >>Yeah, I mean, a lot of marketing, you know, is really what it comes down to, trying to prove to the C-suite and managing director areas, like, this is a value proposition. You know, early on we did a lot of presenting, roundtables, lunch and learns with the business, you know, because there is some resistance early on. I think everybody has a misconception that it's going to take their jobs, where I believe it's gonna create a lot more jobs in the future. Um, for me it was always a scalability play.
You know, how can our business do more for less? And that's really what we wanted to get to throughout that journey. We realized there's a lot of benefit, especially for companies that have heavy back office operations. Um, and we just started, like I mentioned, we started slow. I didn't want to boil the ocean. I knew I needed to prove to leadership that this works. And I think about three years ago we all kind of felt, is this going to stick? You know, we've seen technology, I've been in technology for over 20 years, and you know, some things fly, some things don't, right? So we wanted to prove that it worked. >>And you know, the industry just kind of rallied around that. And look where we are now. I think everybody's putting a lot of money in their budgets for, you know, intelligent automation, not just RPA. >>So the initiative was kind of middle up to the C-suite and then top down. Is that how it went? >>Absolutely. I'm a firm believer the tone needs to come from the top. It has to come from the top. And you know, luckily for me, I have great leaders in our company. Um, they understood the vision, they understand what it could potentially mean for their business. They just needed someone to help execute it. >>So what kinds of things did you start with? >>There was a lot of sort of manual form filling, or some of that, uh, you know, data extraction from PDFs utilizing OCR. You know, RPA's great to gather and collect data so that they can put it in their models and make more informed decisions. Uh, claims processes, you know, dealing with different agencies. So, you know, early on in adopting UiPath there were some limitations. We worked around that. Now it's pretty much limitless, and they can touch any system, any technology, any process. So yeah, it's growing tremendously. >>And in terms of just ensuring governance and compliance as you scale, you have robots doing that. Um, how do you handle that? >>We're working with them more and more. I mean, I think regulators now realize, okay, you're removing the human element, right? So, you know, that's a big value as well. For sampling, now you're not limited to what you can sample. You can sample 100%, you know, so those are big values, and when you speak to regulators, they really understand that. I would say five years ago, I'm not so sure. Um, but now they welcome it. And I think a lot of the government agencies now are adopting RPA. >>Uh, so it's a good story. Well, automation kills sampling, is that right? >>I think it is, absolutely right. That's actually an interesting point that you made, right? Uh, the regulators or the auditors, or for that matter the security and the compliance guys inside the enterprise, have this, so this term of the bot, right, has this connotation of Terminator, and I keep telling them, no, this is that thing you buy at Target that does this: I press the right button, it goes right, press the left button, it goes left. It just doesn't think on its own. And I think that conversation is very important, right? Once you have that conversation with the security and the compliance guys, to say this is a bot, it only does what you ask it to do. You could put a social security number in front of this guy all day long, in front of this user ID all day long, it just doesn't know what to do with it, won't ever read it. And once they realize that, the conversation changes, um, you know, especially when it comes to compliance and audit, right? Uh, the compliance officers would love this.
Once you tell them there's a lot of decision making that happened in people's heads or Excel spreadsheets, that never made it to systems and was never logged. So you'd get something in, you massaged it, you did that, and you put that in the system. That decision making is now auditable. So you can go back and say, here was the input, here was the massaging of it, here's what went into the system of record after it came in. So I think those conversations early on really helped this scale. >>An age-old problem, and tribal knowledge. >>Exactly. You know, Joe has his spreadsheet and Fred knows that Joe has the spreadsheet. So when Joe leaves, he has to get the spreadsheet back. And that's kind of this perpetual thing. >>How much of what you guys did, Armando, was process re-engineering versus just applying automation to some low hanging fruit? >>Um, I think looking back now it's about a 50/50 split. Um, you know, there are some areas that have robust processes, and that makes our life easier. We can just kind of go in, map it out, look at the automation future state, and develop and deploy. Uh, you know, some areas, you know, they inherit processes, and they're just so busy doing their day jobs that they don't always realize there's room for efficiency in their process. So, you know, early on when we priced out how much this would cost, how much development it would be, we didn't always factor in that it would be a 50/50 split, with a lot more process improvement in the beginning. Um, we've now accounted for that. So absolutely, it's about a 50/50 split. >>Craig LeClaire this morning said something that, you know, I was an analyst, and he said a very analyst sort of thing: you've got to stop worrying about the ROI, focus on the more strategic stuff. Every analyst sort of says that. But yeah, there aren't a lot of CFOs in that camp; they're like, where's the ROI? So you know, you're in the services business, you have to have ROI, dollars matter. >>Absolutely. >>So you obviously measure ROI. How do you look at it? You know, what you said earlier, you're not cutting jobs, right? So what do you track? How do you measure kind of the value, the ROI? >>I mean, you know, giving the end user a little more to think about, right? Giving them the opportunity to, you know, do more, be more thoughtful in what their day to day job is, rather than doing the swivel chair type work. So, you know, on the measurement, the beauty around RPA is it's very quantifiable. You know, unlike some traditional IT systems, where the data doesn't always kick back, all our bots, all our processes kick back data that we can quantify, um, metrics on volume versus man hours. This is all information you capture early on. You need to do this at the discovery stage, and we train on it. We have a robust training program for our business analysts and program managers and developers, and that's the question they ask every time. It's not just, what is your process, your current state, future state; it's, let me look at your historical trending, what do your volumes look like? You know, our business is very cyclical. It goes up and down, and when I mentioned I want them to be scalable and have more capacity, that's really the play for me.
It may come from the Csuite, but like I said, the tone from the top has been solid. Their vision is more about, Hey, when it's cyclical and it goes up and down, we need to be able to do more. We need to be able to scale. Have you been able to measure productivity improvement? Absolutely. Absolutely we have. If you had a Mulligan, what would you do differently? A good question. I mean, I think we factored early on, I mentioned this early on how much process improvement was needed. I think we undervalued that. And um, you know, every business faces the same challenges, right? They, you know, everyone feels like they're doing the right thing. These processes are inherited. You know, regulations change, investors change. There's new business rules every day, you know, and you kind of need to sit back as a business user every now and then and refresh that. >>And um, you know, we didn't account for that early on. We're helping the business do that. Our business is fantastic. They bought into the program and it's like having additional workforce working on your side. You know, Daniel Dienes in his keynote last night, basically sending them pick up on something you guys said is, is, um, he really appreciates those customers who took a chance early on. He goes, because frankly, our product wasn't, you know, fully, fully baked out. And I was like, wow, what an honest statement from a CEO. You don't usually hear that. My sense is that they got it right. You path. And I'd love your comments. In the sense that they attack, they went after simplicity and said, okay, make it easy to adopt and then we'll figure it out. And then, you know, bringing in the functionality is that, is that kind of what happened or picking up on Daniel? >>And by the way it was, it's amazing. Humility really comes through, right? So I saw him 2016 standing on stage and when my partner came to us for the idea of saying, Hey, we're going to do, we should do this RPA thing. Now I'm giving away my age. But 1998, my first job, I was sitting in front of the computer and Prudential and they put this software in front of me. It was called SQA robot. It was a test automation tool. It was called SQL robot. Uh, why that relates to Daniel is he's had a, came on the stage in the IRPA conference in 2016 if remember, I love this presentation just to blues black thing and few words on it. He goes, let's not kid ourselves. We have this very traditional, you know, QA automation technology that we think can do something really super. >>And I have built a product on top of that, but there's, there's not a lot of magic in here yet. Right. So that's, but, but I think the, the, the great thing about you I've had has been the vision, right? The vision has been, and if you saw yesterday they started with the core and unlike some of the other vendors, they said, we're just going to do RPA really well. We're not going to go into the OCR market. We're not going to try to build AI things. Let's make sure that our core RPA, so you know, you want to go, you're an enterprise, you want to do OCR, you're not going to buy it from an RPA company. You want to buy it from somebody who's been doing it for 30 years or we just has that sole focus. I think you'll have had had that sole focus. >>But as I've seen in the past three, four years, they've just done a great job with the, with the full vision, right. 
Starting from, they started with the middle of the core of the product and they said, okay, let's go towards the business and see what the business needs with, you know, planning of their, um, of their automations on and so forth and going further to the right to say, let us enable the technology guys who actually implement this to give them the tools and the integrations they need to, to actually make this routed to full product. Um, I think it's a very good question when people say, what can you do with RPA for me? So I said that answer was very different three years ago than it is today. Right? Some of the things are coming out of the box with these. So I, I, I predict that in the next few years, document understanding and natural language and all of that will just be built in today's still very sort of clunky in terms of how you do it. >>But I think those things are coming, coming together. So looking at processes that way is really important. It's a lot of runway for this. Margaret, Armando, I'll give you the last word. Where do you see are RPA or intelligent automation going in, in your organization? Is it still early days you had a lot more adoption or you're pretty much, you know, settled? No, definitely not settled. Um, I think it's, you know, RPA is just one of the tools in the spectrum of intelligent automation. So more integration, more API APIs, a lot of machine learning, uh, eventually some AI. Um, so yeah, we are not slowing down. There's a lot of opportunity. My mandate as I mentioned before, is just scale, scale, scale. So you know, the process is working. We have a good program in place. We'll continue marching forward. Great guys, thanks so much for coming. Thank you for sharing your story. Thank you for watching. From right back with the cube. Live from UI path forward three in Las Vegas. Right back.
Armando Ortiz, IBM | IBM Think 2019
>> Live from San Francisco, it's theCUBE! Covering IBM Think 2019, brought to you by IBM. >> Welcome back to intermittently sunny San Francisco, this is theCUBE, the leader in live tech coverage. We're here at day four of IBM Think. My name is Dave Vellante. I am here with Stu Miniman; John Furrier is also here. Wall to wall coverage, Stu. The second Think, first big show really of the year at Moscone, the new Moscone. Armando Ortiz is here. He is vice president and partner, Mobile & Extended Reality Leader at IBM iX, an interesting part of IBM that you may not know about. Armando, welcome to theCUBE, thanks for coming on. >> Thanks for having me. >> So tell us a little bit about iX. >> So IBM iX is a part of IBM Services. We focus on user experiences, whether it's a consumer experience or an employee experience. And the way we look at it, user experience is really what ties things together and allows you to unlock the value of all the technology investments that companies are making. >> So, you guys are not making headsets, or are you? >> No, we don't make hardware, we just put hardware to work. >> So talk a little bit about the state of augmented reality, or extended reality. Lay out the terminology for us if you would. >> Sure, sure. As part of the role I have, I lead our mobile practice as well as the extended reality practice, and these all kind of relate together. We use the term extended reality to encompass all of the different technologies along that spectrum, from augmented reality to mixed reality to virtual reality. Of course there are a lot of technologies, whether it's the glasses on your face, like the wearables, or it's in your hand, as a lot of mobile platforms today like Apple's ARKit and Google's ARCore allow you to have AR experiences within your mobile apps. >> Yeah, I wonder if you can expand a little bit on that? We're all ready for the rollout of 5G, and that holds the promise, at least, of a lot more bandwidth and a lot more applications, and that's one of the linchpins, we understand, that kind of makes your world more of a reality. When do we see that roll out? What devices are going to happen? You got a preview of the next iPhone for us? >> I certainly don't have a preview of the next iPhone, even though I do lead the Apple partnership for us in North America, the Apple IBM partnership. When you look at 5G, obviously some of the use cases for extended reality in enterprise are around field services, and 5G will have an amazing impact on that capability, not only because of the bandwidth but also the low latency that you have with 5G. So we're excited to see that roll out in the different markets around the world, and, you know, the pilots and things that are starting this year. There are going to be a lot of great devices, I think, from handsets all the way to the wearables. It'll really allow us to put more use cases on these devices. >> Can you walk us through some of those use cases? Any specific customer examples you have that may help our audience understand a little bit more what's really available today. >> Sure, I mean in the XR space, or the extended reality space, there's a lot that we learned through what we've done in mobile for years, I mean, even our Apple partnership for the past five years and things we've done across the 16 industries we work in.
But the initial, sort of wave one, use cases that we're really seeing today follow along these categories: work related use cases, like in field services; training related use cases that go all the way from virtual reality immersive training, like teaching someone how to do something in a dangerous situation where you want to simulate that, all the way to on the job training and step-by-step guidance that you can get with AR: step one, attach the cable here; step two, check this over here. Those kinds of use cases. And then into use cases related to shopping and retail. If you look at what augmented reality is going to do for shopping and retail, it allows people to assess the fit and purpose of something they want to buy. Does it fit in my home? Does it fit in my life? And then also, even in the stores, as people in retail navigate a store, they can use AR to help understand, to add all that metadata to the in-store experience that we've gotten used to in our online experiences. And the last broad category we sort of call share ideas, or sharing of ideas, which kind of expands the game from collaboration to even having AR brochures and augmented reality tools to help people understand a product or a service that you're offering. Imagine that we can just kind of expand a piece of equipment here on the table, walk through it, and help understand how that piece of equipment is going to help your business. >> You're giving me flashbacks. I remember IBM had a huge initiative in, like, Second Life, and it was like, come build an island and we're going to do recruiting and things like that. So, tell us why this generation is going to be better for business and not have everyone put some money in and have it stolen by, you know... >> Not as goofy. >> It's funny you should ask that, the Second Life topic actually came up with someone I was speaking to yesterday. It's come up before. I think there is a significant difference between what Second Life was trying to be and what extended reality is going to be, and already is. I mean, when you look at extended reality today, one important thing to think about is this is not future tech, this is not some sort of Ready Player One type of dream. It's looking at real enterprise use cases that are already driving value: time savings on inspections, productivity enhancements for people assembling, consistency and increased safety. All the key performance indicators and value drivers we have for mobile. So there's a real path to business value, and the uses are much clearer than it might have been in the days of Second Life. >> Less mistakes, less rework. Armando, what kind of infrastructure would a consumer need? You gave the example of retail, for instance, what kind of infrastructure would I need? Is it just my mobile phone? Am I going to wear headsets, what does that look like? >> So when we talk about extended reality, we tend to keep one foot in today and one foot in the future, because it's changing so fast. When you talk about retail, there is a sales associate side of things. They might be helping you decide in automotive, maybe you're looking at configuring a car right in front of you, or in a retail store maybe you're looking at a piece of furniture or something that's not on the showroom floor. Now those experiences can start today with tablets and iPhones and other devices.
But we also see devices that people will be wearing, wearables that are available today, and that trend of moving that glass from your hand to your face is really going to accelerate. >> So, this is maybe how a piece of clothing will fit, or what a couch might look like in a particular room, is that right? >> Yeah. >> And you would envision that people will purchase this infrastructure for a variety of uses. Not only to see how things look, but maybe there's gaming. So it's a multi-use kind of environment, or not necessarily? Is it more specialized? >> No, absolutely, and it's a good thing that you brought up gaming as well, because obviously we all know that gaming has been kind of at the forefront for virtual reality, but when you look at gaming and entertainment, those are also going to include many use cases. When we look at the enterprise side, we're kind of focused on those other wave one use cases. But I also expect, in the sort of share ideas category I spoke of, marketing and sales activities will also include AR experiences to help people understand the product or service that you're positioning. >> What's the state of adoption? We always joke about Google Glass. Remember the movie The Jerk, with the Opti-Grab, and the guy was cross-eyed? So that didn't take off, but what's the state of hardware and hardware adoption today? >> So I think what's unique about this technology and what's happening now is that the technology we already all have in our hands, on our mobile phones, is already there, and that's where you're going to see it happen first. I think the numbers by next year are like 3.4 billion phones will have an AR capability, so the technology is already with us. The next sort of technology set that we're talking about is getting to the wearables, and of course we see things today in the VR space that are much more available on the consumer side, things like the Oculus Go. In the enterprise space you also have headsets from many manufacturers that maybe grew up doing things in the military that are now more commercially available, for things like someone trying to repair something who needs to be hands free. We're seeing those technologies readily available in the enterprise. >> Tell us about how AI fits into this new world. >> That's a great question. If you think about it, it's really a great combination. You take XR, extended reality, whether it's AR or VR, and you add AI to it, and you can give AI the ability to enter the 3D space. So as you think about AI solutions that we had in the mobile world, where you might be using AI to solve a problem, diagnose a problem, visual diagnostics, acoustic detection, AI can give sort of superpowers to an employee. At the same time, we see that the experiences we have in the extended reality space get really enhanced, because you now have the ability to democratize expertise with AI. You take all of the expertise of your organization, and that one technician who's only been there for 10 days now has the power of your entire collective knowledge. >> What about privacy? Anytime you hear some of these, and I think about, you can have wearables out there, there is concern that, you know, with facial recognition everywhere, my privacy is going to be invaded. What's IBM's position? Where does that fit in this whole environment? >> Of course we take privacy very seriously. When we talk about our AI and Watson, you know, your data is your data.
If you look at some of the things, I mean, you'll make decisions, enterprises will make decisions, in the same way they do with mobile devices. Is it okay to have a camera in this environment? And if I do have a camera in this environment, what's my cloud strategy and where am I going to host this data to make sure that I have not just privacy but also IP concerns considered? All of the same things we've learned in the mobile world are going to apply to this, and it'll get even a little more important as you think of the different types of sensors that are required to make these experiences happen. >> I wonder if you could help us understand the prerequisites to do things like a technician actually troubleshooting a problem. Many of us have seen it: you put on the glasses, you walk around a show floor, you look at a new system or something, and it's really very cool. You can look inside and inspect the different layers. What has to be done? I'm inferring from what you're saying that a technician would be able to inspect a device live, in real time, and identify problems on that device. So what has to be done? It has to be instrumented? It has to have cameras installed? What does the infrastructure build-out look like? >> Sure, let's take the technician scenario for a moment and unpack that. When you look at that, there are a couple of things that are already happening. A lot of major pieces of equipment are instrumented, so you have the internet of things data, the data streams coming off of that. How do you make that available to the technician in the moment, sort of the vital signs of that piece of equipment that you might be operating on? So having all that information, like temperature and all the things from an IoT perspective, that's one angle of it. The other side of it really is, when you think of failure of equipment, usually at some point there's a situation that technician may not have encountered before, but maybe someone else has. Maybe you've already had a bunch of closed tickets on that three years ago. So having all that information available, and using cognitive processing to navigate that unstructured data, that will let you navigate that. Voice will be part of this interface as well. I think voice is an important part, because you're going to be hands free and you're going to be having a dialogue with Watson, let's say, to help diagnose a problem. >> How about healthcare? It's not something we've really talked about a lot. Just in terms of applications, whether it's for the operating room of the future, remote guidance from a doctor, training. Do you see those kinds of use cases emerging? >> Yeah, absolutely, all the way from training through execution of surgery and other things. This is where the 5G topic really comes into play, because low latency is really required if you're talking about surgery and things like that. >> Give me a few minutes.
Given that, not withstanding that can you give us a sense of expectations for how it will evolve and the adoption levels that you expect over the next two to five years. >> Five years is a long horizon for this technology. >> Too long, too long perhaps so what's more fair, 18 months? >> Lets talk more immediate. I think when you look at, there may be some uncertainty in terms of which use cases will drive the most value but there are already many use cases that companies are probably sharing information out. Like some companies, especially inspection use cases, you know there is a company that published 96% savings on time because really you are using AR to document. Okay inspect this point, this point, this point, this point. Assembly use cases, diagnostics with AI and AR are working together. All of these are already happening, so what I think is going to happen is enterprises are going to be able to more and more easily justify the spend to make these investments because the RY is rapid. Just like the RY in mobile was rapid for enterprise, the RY in XR will be extremely rapid. >> Armando for people who didn't come to IBM Think, give them a little taste of what they missed from an iX stand point. Some of the conversations that you've been having. >> Yeah, when we look at, I mean iX across the IBM Think we've had a lot of conversations and a lot of sessions around how experience is really driving the business value and also around marketing technologies and marketing services and all of the things that relate to experience on the consumer side and the employee side. We're really enjoyed some great show casing of our client stories and the works we've done. Everything from mobile to commerce to marketing platforms to sales floors across everything we do in the IBM services part that we're in. >> How long has this been around? >> IBM iX? >> Yeah. >> IBM iX has been a part of IBM originally since the 96 Olympics in Atlanta. I've been with IBM about 25 years and this space is kind of like really evolved in terms of the position of user experience and design. IBM has become really a design focused company and you look at enterprise design thinking in everything we do so this is really a part of our business that's really become focal point as companies start thinking more about design. >> Wow, it's been a long time but it's certainly not mature but it's a revenue generating business obviously. >> Yeah and a very high growth part of the company. >> Awesome, well Armando thanks so much for sharing this part of IBM that's not well known. Really exciting futures and I really appreciate you coming on theCUBE. >> Thank you very much, I appreciate being here. >> Alright, keep it right there everyone. Stu and I will be back. Day four, IBM Think, we're at Moscone. Stop by, we're at Moscone North. I'm Dave Vellante, Stu Miniman and John Furrier is here. We'll be right back, you're watching theCUBE. (techno music)
SUMMARY :
Covering IBM Think 2019, brought to you by IBM. An interesting part of IBM that you may not know about. And the we look at user experience it really kind of sticks Lay out the terminology for us if you would. all of the different technologies along that spectrum of the lynch pins we understand kind of make markets around the world and you know the pilots and step-by-step guidance that you can get with AR. put some money in and have it stolen by you know. I mean when you look at extended reality today, You gave the example of retail for instance, of you or in a retail store maybe you're looking to look And you would envision that people will purchase But I also expect in the sort of share ideas category and the guy was cross-eyed? In the enterprise space you also have headsets from the mobile world where you might be using AI to solve Anytime you hear some of these and I think about you can All of the same things we've learned in the mobile world the pre-requisites to do things like technician of that piece of equipment that you might be operating on? room of the future, remote guidance from doctor, training. Yeah absolutely, all the way from training through I think when you think about the VR side of things First of all od the a fair statement? and more easily justify the spend to make Some of the conversations that you've been having. services and all of the things that relate to experience is kind of like really evolved in terms of the position Wow, it's been a long time but it's certainly not mature appreciate you coming on theCUBE. Stu and I will be back.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
10 days | QUANTITY | 0.99+ |
Armando | PERSON | 0.99+ |
Stu | PERSON | 0.99+ |
96% | QUANTITY | 0.99+ |
Armando Ortiz | PERSON | 0.99+ |
San Francisco | LOCATION | 0.99+ |
iPhones | COMMERCIAL_ITEM | 0.99+ |
one foot | QUANTITY | 0.99+ |
18 months | QUANTITY | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
16 industries | QUANTITY | 0.99+ |
Atlanta | LOCATION | 0.99+ |
The Jerk with the Opti-Grab | TITLE | 0.99+ |
Five years | QUANTITY | 0.99+ |
North America | LOCATION | 0.99+ |
Moscone | LOCATION | 0.99+ |
yesterday | DATE | 0.99+ |
ARKit | TITLE | 0.98+ |
today | DATE | 0.98+ |
3.4 billion phones | QUANTITY | 0.98+ |
wave | EVENT | 0.97+ |
next year | DATE | 0.97+ |
one | QUANTITY | 0.97+ |
oculus | ORGANIZATION | 0.97+ |
this year | DATE | 0.97+ |
ARCore | TITLE | 0.97+ |
Moscone North | LOCATION | 0.97+ |
One expert | QUANTITY | 0.97+ |
First | QUANTITY | 0.97+ |
2019 | DATE | 0.97+ |
about 25 years | QUANTITY | 0.96+ |
ORGANIZATION | 0.96+ | |
Watson | PERSON | 0.96+ |
Day four | QUANTITY | 0.95+ |
three years ago | DATE | 0.95+ |
one angle | QUANTITY | 0.94+ |
five years | QUANTITY | 0.94+ |
iX | TITLE | 0.92+ |
iX | COMMERCIAL_ITEM | 0.92+ |
Armando | ORGANIZATION | 0.91+ |
Step two | QUANTITY | 0.9+ |
first big show | QUANTITY | 0.9+ |
one technician | QUANTITY | 0.89+ |
iX. | TITLE | 0.88+ |
first | QUANTITY | 0.86+ |
96 Olympics | EVENT | 0.85+ |
day four | QUANTITY | 0.85+ |
two | QUANTITY | 0.85+ |
Step one | QUANTITY | 0.84+ |
5G | QUANTITY | 0.84+ |
Second Life | TITLE | 0.82+ |
IBM Think | ORGANIZATION | 0.82+ |